Sample records for normal hearing sensitivity

  1. Cortical processing of speech in individuals with auditory neuropathy spectrum disorder.

    PubMed

    Apeksha, Kumari; Kumar, U Ajith

    2018-06-01

    Auditory neuropathy spectrum disorder (ANSD) is a condition in which cochlear amplification (involving the outer hair cells) is normal but neural conduction in the auditory pathway is disordered. This study investigated the cortical representation of speech in individuals with ANSD and compared it with that in individuals with normal hearing. Forty-five participants (21 individuals with ANSD and 24 with normal hearing) took part in the study. Individuals with ANSD had hearing thresholds ranging from normal hearing to moderate hearing loss. Auditory cortical evoked potentials were recorded in an oddball paradigm, using 64 scalp electrodes, for the /ba/-/da/ contrast. Onset cortical responses were also recorded in a repetitive paradigm using /da/ stimuli. Sensitivity and reaction time for identifying the oddball stimuli were also obtained. Behavioural results indicated that individuals in the ANSD group had significantly lower sensitivity and longer reaction times than individuals with normal hearing sensitivity. A reliable P300 could be elicited in both groups. However, a significant difference in scalp topographies was observed between the two groups in both the repetitive and oddball paradigms. Source localization using local autoregressive analysis revealed that activations were more diffuse in individuals with ANSD than in individuals with normal hearing sensitivity. The results indicate that the brain networks and regions activated in individuals with ANSD during detection and discrimination of speech sounds differ from those in normal-hearing individuals. In general, normal-hearing individuals showed more focal activations, whereas activations in individuals with ANSD were more diffuse.

  2. Effects of Age and Hearing Loss on Gap Detection and the Precedence Effect: Broadband Stimuli

    ERIC Educational Resources Information Center

    Roberts, Richard A.; Lister, Jennifer J.

    2004-01-01

    Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection…

  3. Perception of a Sung Vowel as a Function of Frequency-Modulation Rate and Excursion in Listeners with Normal Hearing and Hearing Impairment

    ERIC Educational Resources Information Center

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten

    2014-01-01

    Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…

  4. Decision strategies of hearing-impaired listeners in spectral shape discrimination

    NASA Astrophysics Data System (ADS)

    Lentz, Jennifer J.; Leek, Marjorie R.

    2002-03-01

    The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
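
    The weighting analysis described in this abstract lends itself to a compact illustration. Below is a minimal sketch, assuming a simulated five-component 2AFC task and a toy decision rule (none of which come from Lentz and Leek's data): per-component spectral weights are estimated by correlating the between-interval level differences with the listener's responses.

      import numpy as np

      # Hedged sketch of the correlational weighting analysis; the stimulus
      # statistics and the simulated listener below are illustrative only.
      rng = np.random.default_rng(0)
      n_trials, n_components = 1000, 5

      # Per-component level perturbations (dB) in the two intervals of each trial.
      levels_1 = rng.normal(0.0, 2.0, (n_trials, n_components))
      levels_2 = rng.normal(0.0, 2.0, (n_trials, n_components))

      # Toy listener: weights the middle component most heavily, plus internal noise.
      true_w = np.array([0.2, 0.5, 1.0, 0.5, 0.2])
      decision = (levels_2 - levels_1) @ true_w + rng.normal(0.0, 1.0, n_trials)
      response = np.where(decision > 0, 2, 1)          # interval chosen on each trial

      # Estimated weight = correlation of the per-component level difference
      # with the response, normalized to unit summed magnitude.
      diff = levels_2 - levels_1
      weights = np.array([np.corrcoef(diff[:, k], response)[0, 1]
                          for k in range(n_components)])
      weights /= np.abs(weights).sum()
      print("estimated relative weights:", np.round(weights, 2))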

  5. Frequency-Shift Hearing Aid

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1994-01-01

    Proposed hearing aid maps spectrum of speech into band of lower frequencies at which ear remains sensitive. By redirecting normal speech frequencies into frequency band from 100 to 1,500 Hz, hearing aid allows people to understand normal conversation, including telephone calls. Principle of operation of hearing aid can be adapted to other uses, such as clearing up noisy telephone or radio communication. In addition, loudspeakers more easily understood in presence of high background noise.
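
    The frequency-remapping idea can be sketched in a few lines. The following is a minimal, hedged illustration (the frame length, sampling rate, and linear bin-remapping rule are assumptions, not details from the NTRS report): the magnitude spectrum of a short frame is squeezed into the 100-1500 Hz band before resynthesis.

      import numpy as np

      def remap_frame(frame, fs, lo=100.0, hi=1500.0, max_in=8000.0):
          """Map the 0..max_in Hz content of one frame into the lo..hi Hz band."""
          spec = np.fft.rfft(frame)
          freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
          out = np.zeros_like(spec)
          band = (freqs >= lo) & (freqs <= hi)
          # For each output bin in the target band, read the input magnitude
          # at the linearly remapped source frequency (phase is discarded here).
          src = (freqs[band] - lo) / (hi - lo) * max_in
          out[band] = np.interp(src, freqs, np.abs(spec))
          return np.fft.irfft(out, n=len(frame))

      fs = 16000
      t = np.arange(0, 0.032, 1.0 / fs)
      frame = np.sin(2 * np.pi * 3000 * t)              # a 3 kHz tone...
      shifted = remap_frame(frame, fs)                  # ...lands inside 100-1500 Hz
      freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
      print("dominant frequency after remapping: %.0f Hz"
            % freqs[np.argmax(np.abs(np.fft.rfft(shifted)))])

    A practical device would also need to preserve the temporal envelope and phase far more carefully; the point of the sketch is only the band mapping.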

  6. The effect of tinnitus on some psychoacoustical abilities in individuals with normal hearing sensitivity.

    PubMed

    Jain, Chandni; Sahoo, Jitesh Prasad

    Tinnitus is the perception of a sound without an external source. It can affect auditory perception abilities in individuals with normal hearing sensitivity. The aim of the study was to determine the effect of tinnitus on psychoacoustic abilities in individuals with normal hearing sensitivity. The study was conducted on twenty subjects with tinnitus and twenty subjects without tinnitus. The tinnitus group was further divided into mild and moderate tinnitus based on the Tinnitus Handicap Inventory. Differential limen of intensity, differential limen of frequency, gap detection, and modulation detection thresholds were measured using the mlp toolbox in MATLAB, and speech perception in noise was assessed with the Quick SIN in Kannada. Results of the study showed that the clinical group performed poorly on all the tests except the differential limen of intensity. Tinnitus affects aspects of auditory perception such as temporal resolution, speech perception in noise, and frequency discrimination in individuals with normal hearing. This could be due to subtle changes in the central auditory system that are not reflected in the pure-tone audiogram.

  7. Consonant-recognition patterns and self-assessment of hearing handicap.

    PubMed

    Hustedde, C G; Wiley, T L

    1991-12-01

    Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency, sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory-Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap for the two groups of hearing-impaired listeners; however, the inventory was sensitive to perceived differences in hearing abilities between listeners who did and did not have a hearing loss. Experiment 2 was aimed at evaluating the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance always demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability for normal-hearing and hearing-impaired listeners.

  8. Processing of phonological variation in children with hearing loss: compensation for English place assimilation in connected speech.

    PubMed

    Skoruppa, Katrin; Rosen, Stuart

    2014-06-01

    In this study, the authors explored phonological processing in connected speech in children with hearing loss. Specifically, the authors investigated these children's sensitivity to English place assimilation, by which alveolar consonants like t and n can adapt to following sounds (e.g., the word ten can be realized as tem in the phrase ten pounds). Twenty-seven 4- to 8-year-old children with moderate to profound hearing impairments, using hearing aids (n = 10) or cochlear implants (n = 17), and 19 children with normal hearing participated. They were asked to choose between pictures of familiar (e.g., pen) and unfamiliar objects (e.g., astrolabe) after hearing t- and n-final words in sentences. Standard pronunciations (Can you find the pen dear?) and assimilated forms in correct (… pem please?) and incorrect contexts (… pem dear?) were presented. As expected, the children with normal hearing chose the familiar object more often for standard forms and correct assimilations than for incorrect assimilations. Thus, they are sensitive to word-final place changes and compensate for assimilation. However, the children with hearing impairment demonstrated reduced sensitivity to word-final place changes, and no compensation for assimilation. Restricted analyses revealed that children with hearing aids who showed good perceptual skills compensated for assimilation in plosives only.

  9. Excitatory, inhibitory and facilitatory frequency response areas in the inferior colliculus of hearing impaired mice.

    PubMed

    Felix, Richard A; Portfors, Christine V

    2007-06-01

    Individuals with age-related hearing loss often have difficulty understanding complex sounds such as basic speech. The C57BL/6 mouse suffers from progressive sensorineural hearing loss and thus is an effective tool for dissecting the neural mechanisms underlying changes in complex sound processing observed in humans. Neural mechanisms important for processing complex sounds include multiple tuning and combination sensitivity, and these responses are common in the inferior colliculus (IC) of normal hearing mice. We examined neural responses in the IC of C57Bl/6 mice to single and combinations of tones to examine the extent of spectral integration in the IC after age-related high frequency hearing loss. Ten percent of the neurons were tuned to multiple frequency bands and an additional 10% displayed non-linear facilitation to the combination of two different tones (combination sensitivity). No combination-sensitive inhibition was observed. By comparing these findings to spectral integration properties in the IC of normal hearing CBA/CaJ mice, we suggest that high frequency hearing loss affects some of the neural mechanisms in the IC that underlie the processing of complex sounds. The loss of spectral integration properties in the IC during aging likely impairs the central auditory system's ability to process complex sounds such as speech.

  10. Development of a test of suprathreshold acuity in noise in Brazilian Portuguese: a new method for hearing screening and surveillance.

    PubMed

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: "a hearing check is recommended" (red light), "a hearing check would be advisable" (yellow light), and "no hearing difficulties" (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention.
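
    For readers unfamiliar with the screening metrics quoted here, the small worked example below shows how they are computed against a reference test; the ear counts are hypothetical and merely chosen to reproduce the 84%/76% figures reported for the comparison with pure-tone screening.

      # Hypothetical counts (not from the study) against a pure-tone reference.
      tp, fn = 42, 8     # truly impaired: flagged vs. missed by the SUN test
      tn, fp = 38, 12    # truly normal: passed vs. falsely flagged

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print("sensitivity = %.0f%%, specificity = %.0f%%"
            % (100 * sensitivity, 100 * specificity))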

  11. Sensitive high frequency hearing in earless and partially eared harlequin frogs (Atelopus).

    PubMed

    Womack, Molly C; Christensen-Dalsgaard, Jakob; Coloma, Luis A; Hoke, Kim L

    2018-04-19

    Harlequin frogs, genus Atelopus, communicate at high frequencies despite most species lacking the complete tympanic middle ear that facilitates high-frequency hearing in most anurans and other tetrapods. Here we test whether Atelopus are better at sensing high-frequency acoustic sound compared to other eared and earless species in the family Bufonidae, determine whether middle ear variation within Atelopus affects hearing sensitivity, and test potential hearing mechanisms in Atelopus. We determine that at high frequencies (2000-4000 Hz) Atelopus are 10-34 dB more sensitive than other earless bufonids but are relatively insensitive to mid-range frequencies (900-1500 Hz) compared to eared bufonids. Hearing among Atelopus species is fairly consistent, evidence that the partial middle ears present in a subset of Atelopus species do not convey a substantial hearing advantage. We further demonstrate that Atelopus hearing is not likely facilitated by vibration of the skin overlying the normal tympanic membrane region or of the body wall over the lungs, leaving the extratympanic hearing pathways in Atelopus enigmatic. Together these results show Atelopus have sensitive high-frequency hearing without the aid of a tympanic middle ear and prompt further study of extratympanic hearing mechanisms in anurans.

  12. Army Hearing Program Talking Points Calendar Year 2015

    DTIC Science & Technology

    2016-12-14

    Extraction-damaged excerpt; recoverable fragments: Soldiers with hearing thresholds outside the range of normal hearing sensitivity (greater than 25 dB), CY15 data; Soldiers who had a DD2215 or...; Soldiers with a hearing loss that required a fit-for-duty (Readiness) evaluation: an H-3 Hearing Profile. Data source: Defense Occupational and Environmental Health Readiness System-Hearing Conservation (DOEHRS-HC) Data Repository, CY15 Army Profile...

  13. Speech Recognition in Fluctuating and Continuous Maskers: Effects of Hearing Loss and Presentation Level.

    ERIC Educational Resources Information Center

    Summers, Van; Molis, Michelle R.

    2004-01-01

    Listeners with normal-hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from…

  14. Development of a Test of Suprathreshold Acuity in Noise in Brazilian Portuguese: A New Method for Hearing Screening and Surveillance

    PubMed Central

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: “a hearing check is recommended” (red light), “a hearing check would be advisable” (yellow light), and “no hearing difficulties” (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention. PMID:25247181

  15. Individual differences in selective attention predict speech identification at a cocktail party.

    PubMed

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  16. Hyperventilation-induced nystagmus in vestibular schwannoma and unilateral sensorineural hearing loss.

    PubMed

    Mandalà, Marco; Giannuzzi, Annalisa; Astore, Serena; Trabalzini, Franco; Nuti, Daniele

    2013-07-01

    We evaluated the incidence and characteristics of hyperventilation-induced nystagmus (HVN) in 49 patients with gadolinium-enhanced magnetic resonance imaging evidence of vestibular schwannoma and 53 patients with idiopathic unilateral sensorineural hearing loss and normal radiological findings. The sensitivity and specificity of the hyperventilation test were compared with other audio-vestibular diagnostic tests (bedside examination of eye movements, caloric test, auditory brainstem responses) in the two groups of patients. The hyperventilation test scored the highest diagnostic efficiency (sensitivity 65.3 %; specificity 98.1 %) of the four tests in the differential diagnosis of vestibular schwannoma and idiopathic unilateral sensorineural hearing loss. Small tumors with a normal caloric response or caloric paresis were associated with ipsilateral HVN and larger tumors and severe caloric deficits with contralateral HVN. These results confirm that the hyperventilation test is a useful diagnostic test for predicting vestibular schwannoma in patients with unilateral sensorineural hearing loss.

  17. Auditory brainstem responses of CBA/J mice with neonatal conductive hearing losses and treatment with GM1 ganglioside.

    PubMed

    Money, M K; Pippin, G W; Weaver, K E; Kirsch, J P; Webster, D B

    1995-07-01

    Exogenous administration of GM1 ganglioside to CBA/J mice with a neonatal conductive hearing loss ameliorates the atrophy of spiral ganglion neurons, ventral cochlear nucleus neurons, and ventral cochlear nucleus volume. The present investigation demonstrates the extent of a conductive loss caused by atresia and tests the hypothesis that GM1 ganglioside treatment will ameliorate the conductive hearing loss. Auditory brainstem responses were recorded from four groups of seven mice each: two groups received daily subcutaneous injections of saline (one group had normal hearing; the other had a conductive hearing loss); the other two groups received daily subcutaneous injections of GM1 ganglioside (one group had normal hearing; the other had a conductive hearing loss). In mice with a conductive loss, decreases in hearing sensitivity were greatest at high frequencies. The decreases were determined by comparing mean ABR thresholds of the conductive loss mice with those of normal hearing mice. The conductive hearing loss induced in the mice in this study was similar to that seen in humans with congenital aural atresias. GM1 ganglioside treatment had no significant effect on ABR wave I thresholds or latencies in either group.

  18. Intelligibility of Telephone Speech for the Hearing Impaired When Various Microphones Are Used for Acoustic Coupling.

    ERIC Educational Resources Information Center

    Janota, Claus P.; Janota, Jeanette Olach

    1991-01-01

    Various candidate microphones were evaluated for acoustic coupling of hearing aids to a telephone receiver. Results from testing by 9 hearing-impaired adults found comparable listening performance with a pressure gradient microphone at a 10 decibel higher level of interfering noise than with a normal pressure-sensitive microphone. (Author/PB)

  19. Individual Sensitivity to Spectral and Temporal Cues in Listeners with Hearing Impairment

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose: The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method: Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal…

  20. Validation of questionnaire-reported hearing with medical records: A report from the Swiss Childhood Cancer Survivor Study

    PubMed Central

    Scheinemann, Katrin; Grotzer, Michael; Kompis, Martin; Kuehni, Claudia E.

    2017-01-01

    Background: Hearing loss is a potential late effect after childhood cancer. Questionnaires are often used to assess hearing in large cohorts of childhood cancer survivors, and it is important to know if they can provide valid measures of hearing loss. We therefore assessed agreement and validity of questionnaire-reported hearing in childhood cancer survivors using medical records as reference. Procedure: In this validation study, we studied 361 survivors of childhood cancer from the Swiss Childhood Cancer Survivor Study (SCCSS) who had been diagnosed after 1989 and had been exposed to ototoxic cancer treatment. Questionnaire-reported hearing was compared to the information in medical records. Hearing loss was defined as ≥ grade 1 according to the SIOP Boston Ototoxicity Scale. We assessed agreement and validity of questionnaire-reported hearing overall and stratified by questionnaire respondents (survivor or parent), sociodemographic characteristics, time between follow-up and questionnaire, and severity of hearing loss. Results: Questionnaire reports agreed with medical records in 85% of respondents (kappa 0.62); normal hearing was correctly assessed in 92% of those with normal hearing (n = 249), and hearing loss was correctly assessed in 69% of those with hearing loss (n = 112). Sensitivity of the questionnaires was 92%, 74%, and 39% for assessment of severe, moderate, and mild bilateral hearing loss, and 50%, 33%, and 10% for severe, moderate, and mild unilateral hearing loss, respectively. Results did not differ by sociodemographic characteristics of the respondents, and survivor- and parent-reports were equally valid. Conclusions: Questionnaires are a useful tool to assess hearing in large cohorts of childhood cancer survivors, but underestimate mild and unilateral hearing loss. Further research should investigate whether the addition of questions with higher sensitivity for mild degrees of hearing loss could improve the results. PMID:28333999
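
    The agreement figures in this abstract can be reconstructed approximately from the numbers it reports (112 survivors with hearing loss, of whom 69% were correctly identified, and 249 with normal hearing, of whom 92% were correctly identified). The sketch below recomputes percent agreement and Cohen's kappa from those rounded counts; it lands at roughly 85% agreement and kappa of about 0.63, close to the reported 0.62.

      def cohens_kappa(a, b, c, d):
          """a: both report loss, b: questionnaire-only loss, c: record-only loss, d: both normal."""
          n = a + b + c + d
          p_obs = (a + d) / n
          p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
          return p_obs, (p_obs - p_exp) / (1 - p_exp)

      # Counts rounded from the percentages quoted above (77 ~ 0.69 * 112, 229 ~ 0.92 * 249).
      p_obs, kappa = cohens_kappa(a=77, b=20, c=35, d=229)
      print("agreement = %.0f%%, kappa = %.2f" % (100 * p_obs, kappa))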

  1. Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing.

    PubMed

    Gordon-Salant, Sandra; Cole, Stacey Samuels

    2016-01-01

    This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
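
    The adaptive procedure mentioned here (tracking the signal-to-noise ratio that yields 50% correct) can be illustrated with a simple one-up/one-down rule, which converges on the 50% point of the psychometric function. The simulated listener, step size, and stopping rule below are assumptions for illustration, not the authors' protocol.

      import math
      import random

      def p_correct(snr_db, midpoint=-4.0, slope=1.0):
          """Toy psychometric function: chance of a correct response at a given SNR."""
          return 1.0 / (1.0 + math.exp(-(snr_db - midpoint) / slope))

      random.seed(1)
      snr, step, reversals, last_direction = 10.0, 2.0, [], None
      while len(reversals) < 12:
          correct = random.random() < p_correct(snr)
          direction = -1 if correct else +1        # harder after a hit, easier after a miss
          if last_direction is not None and direction != last_direction:
              reversals.append(snr)
          last_direction = direction
          snr += direction * step

      estimate = sum(reversals[-8:]) / 8.0          # average the last 8 reversal points
      print("estimated 50%%-correct SNR: %.1f dB" % estimate)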

  2. Evidence of hearing loss in a “normally-hearing” college-student population

    PubMed Central

    Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.

    2011-01-01

    We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
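
    As a concrete illustration of the threshold and pure-tone-average (PTA) criteria described above, the sketch below flags individual thresholds at or above 25 dB HL and computes three-frequency averages. The thresholds and the exact frequency groupings are assumptions, since the abstract does not list which frequencies entered each average.

      # Hypothetical audiogram for one ear (frequency in Hz: threshold in dB HL).
      thresholds = {250: 10, 500: 15, 1000: 10, 2000: 20, 3000: 25, 4000: 30, 6000: 25, 8000: 20}

      def pta(freqs):
          """Three-frequency pure-tone average in dB HL."""
          return sum(thresholds[f] for f in freqs) / len(freqs)

      low_pta = pta([500, 1000, 2000])      # assumed low-frequency grouping
      high_pta = pta([3000, 4000, 6000])    # assumed high-frequency grouping
      flagged = sorted(f for f, t in thresholds.items() if t >= 25)
      print("low PTA = %.1f dB HL, high PTA = %.1f dB HL" % (low_pta, high_pta))
      print("frequencies at or above 25 dB HL:", flagged)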

  3. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  4. Identifying hearing loss by means of iridology.

    PubMed

    Stearn, Natalie; Swanepoel, De Wet

    2006-11-13

    Isolated reports of hearing loss presenting as markings on the iris exist, but to date the effectiveness of iridology in identifying hearing loss has not been investigated. This study therefore aimed to determine the efficacy of iridological analysis in the identification of moderate to profound sensorineural hearing loss in adolescents. A controlled trial was conducted with an iridologist, blind to the actual hearing status of participants, analyzing the irises of participants with and without hearing loss. Fifty hearing-impaired and fifty normal-hearing subjects, between the ages of 15 and 19 years, controlled for gender, participated in the study. An experienced iridologist analyzed the randomised set of participants' irises. A 70% correct identification of hearing status was obtained by iridological analysis, with a false negative rate of 41% compared to a 19% false positive rate. The respective sensitivity and specificity rates therefore came to 59% and 81%. Iridological analysis of hearing status indicated a statistically significant relationship to actual hearing status (P < 0.05). Although statistically significant, the sensitivity and specificity rates for identifying hearing loss by iridology were not comparable to those of traditional audiological screening procedures.
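
    The reported sensitivity and specificity follow directly from the error rates quoted above: sensitivity is the complement of the false-negative rate and specificity the complement of the false-positive rate, as the two-line check below confirms.

      false_negative_rate = 0.41
      false_positive_rate = 0.19
      print("sensitivity = %.0f%%" % (100 * (1 - false_negative_rate)))   # 59%
      print("specificity = %.0f%%" % (100 * (1 - false_positive_rate)))   # 81%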

  5. Individual differences in selective attention predict speech identification at a cocktail party

    PubMed Central

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272

  6. Effect of occlusion, directionality and age on horizontal localization

    NASA Astrophysics Data System (ADS)

    Alworth, Lynzee Nicole

    Localization acuity of a given listener depends upon the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low-frequency information, whereas interaural level differences are encoded by high-frequency information. Much research has examined the effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 years hearing impaired). Each normal-hearing participant completed one localization experiment (unaided/unamplified) in which they listened to the stimulus "Baseball" and selected the point of origin. Hearing-impaired listeners were fit with the same two receiver-in-the-ear hearing aids and same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing-impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Results also initially revealed a significant difference in performance between dome types; however, this difference did not hold upon further examination, so results regarding type of dome should be viewed with caution. Results examining microphone configuration and microphone configuration by dome type were not significant. Moreover, results evaluating performance relative to the unaided (unamplified) condition were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade horizontal localization acuity for a given listener relative to their older-aged normal-hearing counterparts in quiet environments.

  7. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective: Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method: The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results: FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability and, for CI users, with the intelligibility of speech in noise. Conclusions: Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  8. Perspectives on the Pure-Tone Audiogram.

    PubMed

    Musiek, Frank E; Shinn, Jennifer; Chermak, Gail D; Bamiou, Doris-Eva

    The pure-tone audiogram, though fundamental to audiology, presents limitations, especially in the case of central auditory involvement. Advances in auditory neuroscience underscore the considerably larger role of the central auditory nervous system (CANS) in hearing and related disorders. Given the availability of behavioral audiological tests and electrophysiological procedures that can provide better insights as to the function of the various components of the auditory system, this perspective piece reviews the limitations of the pure-tone audiogram and notes some of the advantages of other tests and procedures used in tandem with the pure-tone threshold measurement. To review and synthesize the literature regarding the utility and limitations of the pure-tone audiogram in determining dysfunction of peripheral sensory and neural systems, as well as the CANS, and to identify other tests and procedures that can supplement pure-tone thresholds and provide enhanced diagnostic insight, especially regarding problems of the central auditory system. A systematic review and synthesis of the literature. The authors independently searched and reviewed literature (journal articles, book chapters) pertaining to the limitations of the pure-tone audiogram. The pure-tone audiogram provides information as to hearing sensitivity across a selected frequency range. Normal or near-normal pure-tone thresholds sometimes are observed despite cochlear damage. There are a surprising number of patients with acoustic neuromas who have essentially normal pure-tone thresholds. In cases of central deafness, depressed pure-tone thresholds may not accurately reflect the status of the peripheral auditory system. Listening difficulties are seen in the presence of normal pure-tone thresholds. Suprathreshold procedures and a variety of other tests can provide information regarding other and often more central functions of the auditory system. The audiogram is a primary tool for determining type, degree, and configuration of hearing loss; however, it provides the clinician with information regarding only hearing sensitivity, and no information about central auditory processing or the auditory processing of real-world signals (i.e., speech, music). The pure-tone audiogram offers limited insight into functional hearing and should be viewed only as a test of hearing sensitivity. Given the limitations of the pure-tone audiogram, a brief overview is provided of available behavioral tests and electrophysiological procedures that are sensitive to the function and integrity of the central auditory system, which provide better diagnostic and rehabilitative information to the clinician and patient. American Academy of Audiology

  9. Binaural hearing with electrical stimulation

    PubMed Central

    Kan, Alan; Litovsky, Ruth Y.

    2014-01-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. PMID:25193553

  10. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

    Early hearing deprivation could affect the development of auditory, language, and vision ability. Insufficient or no stimulation of the auditory cortex during the sensitive periods of plasticity could affect the function of hearing, language, and vision development. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age and sex matched normal hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision related cortex, which suggest that hearing and language function were impaired and vision function was enhanced due to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deaf duration in infants with CSSHL. ALFF of right BA39 was positively correlated with deaf duration in infants with CSSHL. In conclusion, ALFF and ReHo can reflect the abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function has been affected by congenital severe sensorineural hearing loss before 4 years of age.

  11. Validity of the Hum Test, a Simple and Reliable Alternative to the Weber Test.

    PubMed

    Ahmed, Omar H; Gallant, Sara C; Ruiz, Ryan; Wang, Binhuan; Shapiro, William H; Voigt, Erich P

    2018-06-01

    To compare the diagnostic performance of the Hum Test against the Weber Test using pure tone audiometry (PTA) as the "gold standard" comparator. Twenty-nine participants aged 18 to 35 with normal hearing and no history of hearing abnormalities or otologic conditions were enrolled. Subjects underwent three tests (Hum Test, Weber Test, and PTA) across two conditions: with an ear plug in one ear (side randomized) and without ear plugs. When examining the ability of the Hum Test to detect simulated conductive hearing loss (CHL), the test had a sensitivity of 89.7% and specificity of 100% with high-pitched humming and 93.1% and 100%, respectively, with low-pitched humming. The Weber Test had a sensitivity and specificity of 96.6% and 100%, respectively. McNemar's test demonstrated agreement between the Hum Test, performed with either high-pitched (P = .32) or low-pitched (P = .56) humming, and the Weber Test. Receiver operating characteristic (ROC) curves for the Hum Test (both high- and low-pitched) and the Weber Test were compared and demonstrated no statistically significant difference. The Hum Test is comparable to the Weber Test with regard to its sensitivity, specificity, and diagnostic accuracy in assessing new-onset unilateral CHL in previously normal-hearing subjects.
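
    McNemar's test, used above to compare the paired Hum and Weber outcomes, depends only on the discordant pairs (subjects the two tests classify differently). The sketch below implements the exact two-sided version; the discordant counts are hypothetical, not taken from the study.

      from math import comb

      def mcnemar_exact(b, c):
          """Two-sided exact p-value from discordant pairs b and c (binomial with p = 0.5)."""
          n, k = b + c, min(b, c)
          tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
          return min(1.0, 2.0 * tail)

      # b: Hum correct / Weber wrong; c: Hum wrong / Weber correct (hypothetical counts).
      print("McNemar exact p = %.2f" % mcnemar_exact(b=1, c=3))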

  12. Temporal modulation transfer functions for listeners with real and simulated hearing loss

    PubMed Central

    Desloge, Joseph G.; Reed, Charlotte M.; Braida, Louis D.; Perez, Zachary D.; Delhorne, Lorraine A.

    2011-01-01

    A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal modulation transfer functions (TMTFs) for nine listeners with mild to profound sensorineural hearing loss. Each hearing loss was simulated in a group of three age-matched normal-hearing listeners through spectrally shaped masking noise or a combination of masking noise and multiband expansion. TMTFs were measured for both groups of listeners using a broadband noise carrier as a function of modulation rate in the range 2 to 1024 Hz. The TMTFs were fit with a lowpass filter function that provided estimates of overall modulation-depth sensitivity and modulation cutoff frequency. Although the simulations were capable of accurately reproducing the threshold elevations of the hearing-impaired listeners, they were not successful in reproducing the TMTFs. On average, the simulations resulted in lower sensitivity and higher cutoff frequency than were observed in the TMTFs of the hearing-impaired listeners. Discrepancies in performance between listeners with real and simulated hearing loss are possibly related to inaccuracies in the simulation of recruitment. PMID:21682411
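
    The lowpass-filter fit described here can be sketched with a first-order (single-pole) form whose two free parameters are the quantities discussed in the abstract: a peak modulation-depth sensitivity and a 3 dB cutoff frequency. The data points below are invented for illustration; only the fitting approach is the point.

      import numpy as np
      from scipy.optimize import curve_fit

      rates = np.array([2, 4, 8, 16, 32, 64, 128, 256, 512], float)                 # Hz
      thresholds = np.array([-22, -22, -22, -21.5, -21, -18.5, -14.5, -9, -3.5])    # 20*log10(m), dB

      def lowpass(f, peak_sens, fc):
          """Flat at low rates, rising by 3 dB at fc and ~20 dB/decade beyond it."""
          return peak_sens + 10.0 * np.log10(1.0 + (f / fc) ** 2)

      params, _ = curve_fit(lowpass, rates, thresholds, p0=(-20.0, 50.0))
      print("peak sensitivity = %.1f dB, cutoff = %.0f Hz" % tuple(params))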

  13. Speak on time! Effects of a musical rhythmic training on children with hearing loss.

    PubMed

    Hidalgo, Céline; Falk, Simone; Schön, Daniele

    2017-08-01

    This study investigates temporal adaptation in speech interaction in children with normal hearing and in children with cochlear implants (CIs) and/or hearing aids (HAs). We also address the question of whether musical rhythmic training can improve these skills in children with hearing loss (HL). Children named pictures presented on the screen in alternation with a virtual partner. Alternation rate (fast or slow) and the temporal predictability (match vs mismatch of stress occurrences) were manipulated. One group of children with normal hearing (NH) and one with HL were tested. The latter group was tested twice: once after 30 min of speech therapy and once after 30 min of musical rhythmic training. Both groups of children (NH and with HL) can adjust their speech production to the rate of alternation of the virtual partner. Moreover, while children with normal hearing benefit from the temporal regularity of stress occurrences, children with HL become sensitive to this manipulation only after rhythmic training. Rhythmic training may help children with HL to structure the temporal flow of their verbal interactions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Audibility of reverse alarms under hearing protectors for normal and hearing-impaired listeners.

    PubMed

    Robinson, G S; Casali, J G

    1995-11-01

    The question of whether or not an individual suffering from a hearing loss is capable of hearing an auditory alarm or warning is an extremely important industrial safety issue. The ISO Standard that addresses auditory warnings for workplaces requires that any auditory alarm or warning be audible to all individuals in the workplace including those suffering from a hearing loss and/or wearing hearing protection devices (HPDs). Research was undertaken to determine how the ability to detect an alarm or warning signal changed for individuals with normal hearing and two levels of hearing loss as the levels of masking noise and alarm were manipulated. Pink noise was used as the masker and a heavy-equipment reverse alarm was used as the signal. The rating method paradigm of signal detection theory was used as the experimental procedure to separate the subjects' absolute sensitivities to the alarm from their individual criteria for deciding to respond in an affirmative manner. Results indicated that even at a fairly low signal-to-noise ratio (0 dB), subjects with a substantial hearing loss [a pure-tone average (PTA) hearing level of 45-50 dBHL in both ears] were capable of hearing the reverse alarm while wearing a high-attenuation earmuff in the pink noise used in the study.

  15. Individual Differences Reveal Correlates of Hidden Hearing Deficits

    PubMed Central

    Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.

    2015-01-01

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371

  16. The HEAR-QL: Quality of Life Questionnaire for Children with Hearing Loss

    PubMed Central

    Umansky, Amy M.; Jeffe, Donna B.; Lieu, Judith E.C.

    2012-01-01

    Background: Few quality of life (QOL) assessment tools are available for children with specific chronic conditions, and none have been designed specifically for children with hearing loss (HL). A validated hearing-related QOL questionnaire could help clinicians determine whether an intervention is beneficial and whether one intervention is better than another. Purpose: To examine QOL in children with HL and assess the validity, reliability, and factor structure of a new measure, the Hearing Environments and Reflection on Quality of Life (HEAR-QL) questionnaire. Research Design: A descriptive and correlational study of a convenience sample of children. Study Sample: Participants included 35 children with unilateral HL, 45 with bilateral HL, and 35 siblings with normal hearing. Data Collection and Analysis: Children 7-12 years old were recruited by mail from a tertiary-care pediatric otolaryngology practice and the local county's Special School District. With parent consent, children completed the validated Pediatric Quality of Life Inventory™ (PedsQL™) 4.0 and a 35-item HEAR-QL questionnaire. The factor structure of the HEAR-QL was determined through principal components analysis (PCA), and mean scores were computed for each subscale and the total HEAR-QL. Three weeks following return of the initial questionnaires, a second HEAR-QL questionnaire was sent to participants to assess test-retest reliability. Both PedsQL and HEAR-QL scores were compared between children with and without HL, between children with unilateral and bilateral HL, and between children who used and did not use a hearing device using analysis of variance. Sensitivity and specificity were calculated for both the HEAR-QL and PedsQL. A multivariable, hierarchical linear regression analysis was conducted with independent variables associated with HEAR-QL in unadjusted tests. Results: Using exploratory PCA, the 35-item HEAR-QL was reduced to 26 items (Cronbach's α=0.97; sensitivity 91% and specificity 92% at a cut-off score of 93.5) loading on 3 factors: difficulty hearing in certain environments/situations (Environments α=0.97), impact of HL on social/sports activities (Activities α=0.92), and impact of HL on child's feelings (Feelings α=0.88). Sensitivity of 78.8% and specificity of 30.9% at a cut-off score of 69.6 on the PedsQL (at risk for impaired QOL) were lower than for the HEAR-QL. Participants with HL reported significantly lower mean total HEAR-QL scores (71 [SD 18] versus 98 [SD 5]; p < 0.001), but not mean total PedsQL scores (77 [SD 14] versus 83 [SD 15]; p = 0.47), than participants with normal hearing. Among children with bilateral HL, children who used a hearing device reported lower mean total HEAR-QL scores (p = 0.01), but not mean total PedsQL scores (p = 0.55), than children who did not use a hearing device. The intraclass correlation (ICC) for test-retest reliability for the 26-item HEAR-QL total score was .83. Hearing status and use of a device were independently associated with the HEAR-QL, and the variables in the model accounted for 46% of the HEAR-QL total score variance. Conclusion: The HEAR-QL appears to be a valid, reliable, and sensitive questionnaire for children with HL. The HEAR-QL was better able than the PedsQL to distinguish between children with and without HL and can help evaluate interventions for children with HL. PMID:22212764

  17. The HEAR-QL: quality of life questionnaire for children with hearing loss.

    PubMed

    Umansky, Amy M; Jeffe, Donna B; Lieu, Judith E C

    2011-01-01

    Few quality of life (QOL) assessment tools are available for children with specific chronic conditions, and none have been designed specifically for children with hearing loss (HL). A validated hearing-related QOL questionnaire could help clinicians determine whether an intervention is beneficial and whether one intervention is better than another. To examine QOL in children with HL and assess the validity, reliability, and factor structure of a new measure, the Hearing Environments and Reflection on Quality of Life (HEAR-QL) questionnaire. A descriptive and correlational study of a convenience sample of children. Participants included 35 children with unilateral HL, 45 with bilateral HL, and 35 siblings with normal hearing. Children 7-12 yr old were recruited by mail from a tertiary-care pediatric otolaryngology practice and the local county's Special School District. With parent consent, children completed the validated Pediatric Quality of Life Inventory™ (PedsQL) 4.0 and a 35-item HEAR-QL questionnaire. The factor structure of the HEAR-QL was determined through principal components analysis (PCA), and mean scores were computed for each subscale and the total HEAR-QL. Three weeks following the return of the initial questionnaires, a second HEAR-QL questionnaire was sent to participants to assess test-retest reliability. Both PedsQL and HEAR-QL scores were compared between children with and without HL, between children with unilateral and bilateral HL, and between children who used and did not use a hearing device using analysis of variance. Sensitivity and specificity were calculated for both the HEAR-QL and the PedsQL. A multivariable, hierarchical linear regression analysis was conducted with independent variables associated with the HEAR-QL in unadjusted tests. Using exploratory PCA, the 35-item HEAR-QL was reduced to 26 items (Cronbach's α = 0.97, sensitivity of 91% and specificity of 92% at a cutoff score of 93.5) loading on three factors: difficulty hearing in certain environments/situations (Environments α = 0.97), impact of HL on social/sports activities (Activities α = 0.92), and impact of HL on child's feelings (Feelings α = 0.88). Sensitivity of 78.8% and specificity of 30.9% at a cutoff score of 69.6 on the PedsQL (at risk for impaired QOL) were lower than for the HEAR-QL. Participants with HL reported significantly lower mean total HEAR-QL scores (71 [SD 18] vs. 98 [SD 5], p < .001), but not mean total PedsQL scores (77 [SD 14] vs. 83 [SD 15], p = .47), than participants with normal hearing. Among children with bilateral HL, children who used a hearing device reported lower mean total HEAR-QL scores (p = .01), but not mean total PedsQL scores (p = .55), than children who did not use a hearing device. The intraclass correlation coefficient for test-retest reliability for the 26-item HEAR-QL total score was 0.83. Hearing status and use of a device were independently associated with the HEAR-QL, and the variables in the model accounted for 46% of the HEAR-QL total score variance. The HEAR-QL appears to be a valid, reliable, and sensitive questionnaire for children with HL. The HEAR-QL was better able than the PedsQL to distinguish between children with and without HL and can help evaluate interventions for children with HL. American Academy of Audiology.
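
    Cronbach's alpha, the internal-consistency statistic reported for the HEAR-QL and its subscales, is straightforward to compute from an item-by-respondent score matrix. The sketch below uses randomly generated, correlated item scores purely as a stand-in; only the formula mirrors the analysis above.

      import numpy as np

      def cronbach_alpha(scores):
          """scores: respondents x items matrix of item scores."""
          scores = np.asarray(scores, float)
          k = scores.shape[1]
          item_vars = scores.var(axis=0, ddof=1)
          total_var = scores.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      rng = np.random.default_rng(42)
      trait = rng.normal(size=(80, 1))                       # latent trait per child
      items = trait + rng.normal(scale=0.6, size=(80, 26))   # 26 correlated items
      print("Cronbach's alpha = %.2f" % cronbach_alpha(items))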

  18. Estimating when and how words are acquired: A natural experiment on the development of the mental lexicon

    PubMed Central

    Auer, Edward T.; Bernstein, Lynne E.

    2009-01-01

    Purpose Sensitivity of subjective estimates of Age of Acquisition (AOA) and Acquisition Channel (AC) (printed, spoken, signed) to differences in word exposure within and between populations that differ dramatically in perceptual experience was examined. Methods 50 participants with early-onset deafness and 50 with normal hearing rated 175 words in terms of subjective age-of-acquisition and acquisition channel. Additional data were collected using a standardized test of reading and vocabulary. Results Deaf participants rated words as learned later (M = 10 years) than did participants with normal hearing (M = 8.5 years) (F(1,99) = 28.59; p < .01). Group-averaged item ratings of AOA were highly correlated across the groups (r = .971), and with normative order of acquisition (deaf: r = .950 and hearing: r = .946). The groups differed in their ratings of acquisition channel: Hearing: printed = 30%, spoken = 70%, signed = 0%; Deaf: printed = 45%, spoken = 38%, signed = 17%. Conclusions Subjective AOA and AC measures are sensitive to between- and within-group differences in word experience. The results demonstrate that these subjective measures can be applied as reliable proxies for direct measures of lexical development in studies of lexical knowledge in adults with prelingual onset deafness. PMID:18506048

  19. Auditory adaptation testing as a tool for investigating tinnitus origin: two patients with vestibular schwannoma.

    PubMed

    Silverman, Carol A; Silman, Shlomo; Emmer, Michele B

    2017-06-01

    To enhance the understanding of tinnitus origin by disseminating two case studies of vestibular schwannoma (VS) involving behavioural auditory adaptation testing (AAT). Retrospective case study. Two adults who presented with unilateral, non-pulsatile subjective tinnitus and bilateral normal-hearing sensitivity. At the initial evaluation, the otolaryngologic and audiologic findings were unremarkable, bilaterally. Upon retest, years later, VS was identified. At retest, the tinnitus disappeared in one patient and was slightly attenuated in the other patient. In the former, the results of AAT were positive for left retrocochlear pathology; in the latter, the results were negative for the left ear although a moderate degree of auditory adaptation was present despite bilateral normal-hearing sensitivity. Imaging revealed a small VS in both patients, confirmed surgically. Behavioural AAT in patients with tinnitus furnishes a useful tool for exploring tinnitus origin. Decrease or disappearance of tinnitus in patients with auditory adaptation suggests that the tinnitus generator is the cochlea or the cochlear nerve adjacent to the cochlea. Patients with unilateral tinnitus and bilateral, symmetric, normal-hearing thresholds, absent other audiovestibular symptoms, should be routinely monitored through otolaryngologic and audiologic re-evaluations. Tinnitus decrease or disappearance may constitute a red flag for retrocochlear pathology.

  20. Fitting and verification of frequency modulation systems on children with normal hearing.

    PubMed

    Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa

    2014-06-01

    Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5 signal-to-noise ratio and loudness ratings at a +5 signal-to-noise ratio were measured in four conditions: (1) no FM system, (2) FM receiver on the right ear, (3) FM receiver on the left ear, and (4) bilateral FM system. The results of this study suggested that the slightly modified AAA real-ear measurement procedures resulted in a valid fitting of one FM system on children with normal hearing. On average, prescriptive targets were met for 1000, 2000, 3000, and 4000 Hz within 3 dB, and maximum output of the FM system never exceeded and was significantly lower than predicted uncomfortable loudness levels for the children. There was a minimal change in the real-ear unaided response when the open-ear FM receiver was placed into the ear. Use of the FM system on one or both ears resulted in significantly better speech recognition in noise relative to a no-FM condition, and the unilateral and bilateral FM receivers resulted in a comfortably loud signal when listening in background noise. Real-ear measures are critical for obtaining an appropriate fit of an FM system on children with normal hearing. American Academy of Audiology.
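
    As a concrete illustration of the verification step described above (checking that measured real-ear output meets prescribed-gain targets within a tolerance across 1000-4000 Hz), here is a minimal sketch; the target and measured values are hypothetical, and the 3 dB tolerance mirrors the criterion reported in the abstract rather than a value taken from the AAA guideline text.

    ```python
    # Minimal sketch (hypothetical data): flag frequencies where measured real-ear
    # output misses the prescriptive target by more than a tolerance.
    TOLERANCE_DB = 3.0

    def check_fit(targets_db: dict, measured_db: dict, tol: float = TOLERANCE_DB):
        """Return {frequency_Hz: deviation_dB} for frequencies outside tolerance."""
        misses = {}
        for freq, target in targets_db.items():
            deviation = measured_db[freq] - target
            if abs(deviation) > tol:
                misses[freq] = round(deviation, 1)
        return misses

    # Hypothetical prescriptive targets and measured output (dB SPL) for one child.
    targets = {1000: 62.0, 2000: 65.0, 3000: 63.0, 4000: 60.0}
    measured = {1000: 63.5, 2000: 61.0, 3000: 64.0, 4000: 59.0}
    print(check_fit(targets, measured) or "all targets met within tolerance")
    ```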

  1. Individual differences reveal correlates of hidden hearing deficits.

    PubMed

    Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G

    2015-02-04

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing." Copyright © 2015 the authors 0270-6474/15/352161-12$15.00/0.

  2. The relationship between distortion product otoacoustic emissions and extended high-frequency audiometry in tinnitus patients. Part 1: normally hearing patients with unilateral tinnitus.

    PubMed

    Fabijańska, Anna; Smurzyński, Jacek; Hatzopoulos, Stavros; Kochanek, Krzysztof; Bartnik, Grażyna; Raj-Koziak, Danuta; Mazzoli, Manuela; Skarżyński, Piotr H; Jędrzejczak, Wieslaw W; Szkiełkowska, Agata; Skarżyński, Henryk

    2012-12-01

    The aim of this study was to evaluate distortion product otoacoustic emissions (DPOAEs) and extended high-frequency (EHF) thresholds in a control group and in patients with normal hearing sensitivity in the conventional frequency range and reporting unilateral tinnitus. Seventy patients were enrolled in the study: 47 patients with tinnitus in the left ear (Group 1) and 23 patients with tinnitus in the right ear (Group 2). The control group included 60 otologically normal subjects with no history of pathological tinnitus. Pure-tone thresholds were measured at all standard frequencies from 0.25 to 8 kHz, and at 10, 12.5, 14, and 16 kHz. The DPOAEs were measured in the frequency range from approximately 0.5 to 9 kHz using the primary tones presented at 65/55 dB SPL. The left ears of patients in Group 1 had higher median hearing thresholds than those in the control subjects at all 4 EHFs, and lower mean DPOAE levels than those in the controls for almost all primary frequencies, but significantly lower only in the 2-kHz region. Median hearing thresholds in the right ears of patients in Group 2 were higher than those in the right ears of the control subjects in the EHF range at 12.5, 14, and 16 kHz. The mean DPOAE levels in the right ears were lower in patients from Group 2 than those in the controls for the majority of primary frequencies, but only reached statistical significance in the 8-kHz region. Hearing thresholds in tinnitus ears with normal hearing sensitivity in the conventional range were higher in the EHF region than those in non-tinnitus control subjects, implying that cochlear damage in the basal region may result in the perception of tinnitus. In general, DPOAE levels in tinnitus ears were lower than those in ears of non-tinnitus subjects, suggesting that subclinical cochlear impairment in limited areas, which can be revealed by DPOAEs but not by conventional audiometry, may exist in tinnitus ears. For patients with tinnitus, DPOAE measures combined with behavioral EHF hearing thresholds may provide additional clinical information about the status of the peripheral hearing.

  3. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition

    PubMed Central

    Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.

    2015-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric sensitivity. PMID:25628563

  4. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    PubMed

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
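
    The abstract above notes that signal detection analyses point to changes in both perceptual sensitivity and bias. Below is a minimal sketch of the standard equal-variance Gaussian measures (d′ and criterion c) computed from hit and false-alarm counts; the counts are hypothetical, and the log-linear correction is one common convention, not necessarily the one used in the study.

    ```python
    from statistics import NormalDist

    def dprime_criterion(hits, misses, false_alarms, correct_rejections):
        """Equal-variance signal detection measures.

        d' = z(hit rate) - z(false-alarm rate)            (sensitivity)
        c  = -0.5 * (z(hit rate) + z(false-alarm rate))   (response bias)
        A log-linear correction avoids infinite z-scores at proportions of 0 or 1.
        """
        z = NormalDist().inv_cdf
        hr = (hits + 0.5) / (hits + misses + 1.0)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return z(hr) - z(far), -0.5 * (z(hr) + z(far))

    # Hypothetical counts from a flash-illusion style detection task.
    d, c = dprime_criterion(hits=42, misses=8, false_alarms=12, correct_rejections=38)
    print(f"d' = {d:.2f}, criterion c = {c:.2f}")
    ```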

  5. How to quantify binaural hearing in patients with unilateral hearing using hearing implants.

    PubMed

    Snik, Ad; Agterberg, Martijn; Bosman, Arjan

    2015-01-01

    Application of bilateral hearing devices in bilateral hearing loss and unilateral application in unilateral hearing loss (second ear with normal hearing) does not a priori lead to binaural hearing. An overview is presented on several measures of binaural benefits that have been used in patients with unilateral or bilateral deafness using one or two cochlear implants, respectively, and in patients with unilateral or bilateral conductive/mixed hearing loss using one or two percutaneous bone conduction implants (BCDs), respectively. Overall, according to this overview, the most significant and sensitive measure is the benefit in directional hearing. Measures using speech (viz. binaural summation, binaural squelch or use of the head shadow effect) showed minor benefits, except for patients with bilateral conductive/mixed hearing loss using two BCDs. Although less feasible in daily practise, the binaural masking level difference test seems to be a promising option in the assessment of binaural function. © 2015 S. Karger AG, Basel.

  6. Validity, Discriminative Ability and Reliability of the Hearing-Related Quality of Life (HEAR-QL) Questionnaire for Adolescents

    PubMed Central

    Rachakonda, Tara; Jeffe, Donna B.; Shin, Jennifer J.; Mankarious, Leila; Fanning, Robert J.; Lesperance, Marci M.; Lieu, Judith E.C.

    2014-01-01

    Objectives The prevalence of hearing loss (HL) in adolescents has grown over the past decade, but hearing-related quality of life (QOL) has not been well-measured. We sought to develop a reliable, valid measure of hearing-related QOL for adolescents, the Hearing Environments And Reflection on Quality of Life (HEAR-QL). Study Design Multi-site observational study. Methods Adolescents with HL and siblings without HL were recruited from five centers. Participants completed the HEAR-QL and validated questionnaires measuring generic pediatric QOL (PedsQL), depression and anxiety (RCADS-25), and hearing-related QOL for adults (HHIA) to determine construct and discriminant validity. Participants completed the HEAR-QL two weeks later for test-retest reliability. We used exploratory principal components analysis to determine the HEAR-QL factor structure and measured reliability. Sensitivity and specificity of the HEAR-QL, PedsQL, HHIA and RCADS-25 were assessed. We compared scores on all surveys between those with normal hearing, unilateral and bilateral HL. Results 233 adolescents (13–18 years old) participated—179 with HL, 54 without HL. The original 45-item HEAR-QL was shortened to 28 items after determining factor structure. The resulting HEAR-QL-28 demonstrated excellent reliability (Cronbach’s alpha= 0.95) and construct validity (HHIA: r =.845, PedsQL: r =.587; RCADS-25: r =.433). The HEAR-QL-28 displayed excellent discriminant validity, with higher area under the curve (0.932) than the PedsQL (0.597) or RCADS-25 (0.529). Teens with bilateral HL using hearing devices reported worse QOL on the HEAR-QL and HHIA than peers with HL not using devices. Conclusions The HEAR-QL is a sensitive, reliable and valid measure of hearing-related QOL for adolescents. PMID:23900836

  7. Validity, discriminative ability, and reliability of the hearing-related quality of life questionnaire for adolescents.

    PubMed

    Rachakonda, Tara; Jeffe, Donna B; Shin, Jennifer J; Mankarious, Leila; Fanning, Robert J; Lesperance, Marci M; Lieu, Judith E C

    2014-02-01

    The prevalence of hearing loss (HL) in adolescents has grown over the past decade, but hearing-related quality of life (QOL) has not been well-measured. We sought to develop a reliable, valid measure of hearing-related QOL for adolescents, the Hearing Environments And Reflection on Quality of Life (HEAR-QL). Multisite observational study. Adolescents with HL and siblings without HL were recruited from five centers. Participants completed the HEAR-QL and validated questionnaires measuring generic pediatric QOL (PedsQL), depression and anxiety (RCADS-25), and hearing-related QOL for adults (HHIA) to determine construct and discriminant validity. Participants completed the HEAR-QL 2 weeks later for test-retest reliability. We used exploratory principal components analysis to determine the HEAR-QL factor structure and measured reliability. Sensitivity and specificity of the HEAR-QL, PedsQL, HHIA, and RCADS-25 were assessed. We compared scores on all surveys between those with normal hearing, unilateral, and bilateral HL. A total of 233 adolescents (13-18 years old) participated: 179 with HL, 54 without HL. The original 45-item HEAR-QL was shortened to 28 items after determining factor structure. The resulting HEAR-QL-28 demonstrated excellent reliability (Cronbach's alpha = 0.95) and construct validity (HHIA: r = .845, PedsQL: r = .587; RCADS-25: r = .433). The HEAR-QL-28 displayed excellent discriminant validity, with higher area under the curve (0.932) than the PedsQL (0.597) or RCADS-25 (0.529). Teens with bilateral HL using hearing devices reported worse QOL on the HEAR-QL and HHIA than peers with HL not using devices. The HEAR-QL is a sensitive, reliable, and valid measure of hearing-related QOL for adolescents. Level of evidence: 2b. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.
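
    The discriminant-validity figures above are areas under the ROC curve (AUC). The sketch below shows the rank-based (Mann-Whitney) AUC computation on invented scores, oriented so that lower questionnaire scores indicate worse hearing-related QOL; it illustrates the metric only and is not the study's analysis.

    ```python
    def auc_lower_score_indicates_condition(scores_condition, scores_controls):
        """Area under the ROC curve via the rank (Mann-Whitney) formulation.

        The score is oriented so that *lower* values indicate worse hearing-related
        QOL, so AUC is the probability that a randomly chosen participant with
        hearing loss scores lower than a randomly chosen control (ties count half).
        """
        pairs = 0.0
        for s_hl in scores_condition:
            for s_nh in scores_controls:
                if s_hl < s_nh:
                    pairs += 1.0
                elif s_hl == s_nh:
                    pairs += 0.5
        return pairs / (len(scores_condition) * len(scores_controls))

    # Hypothetical total scores for adolescents with and without hearing loss.
    hl_scores = [55, 68, 72, 80, 61]
    nh_scores = [90, 85, 96, 78, 93]
    print(round(auc_lower_score_indicates_condition(hl_scores, nh_scores), 3))
    ```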

  8. Binaural hearing with electrical stimulation.

    PubMed

    Kan, Alan; Litovsky, Ruth Y

    2015-04-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. This article is part of a Special Issue entitled . Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Diagnostic utility of the acoustic reflex in predicting hearing in paediatric populations.

    PubMed

    Pérez-Villa, Yolanda E; Mena-Ramírez, María E; Aguirre, Laura E Chamlati; Mora-Magaña, Ignacio; Gutiérrez-Farfán, Ileana S

    2014-01-01

    Prediction of hearing sensitivity from the acoustic reflex, used to estimate the level of hearing loss, is especially useful in paediatric populations. The prediction is based on the difference between the pure-tone stapedius reflex threshold and the contralateral white-noise reflex threshold; the white-noise threshold was 60 dB and the pure-tone threshold was 80 dB. Our objective was to determine the diagnostic sensitivity of acoustic reflex prediction. We studied children aged <10 years, from October 2011 to May 2012, by measuring the acoustic reflex with white noise and pure tones. Group contrasts were tested with the chi-square test and Student's t-test, and concordance was measured with the kappa statistic. Results were considered significant at P≤.05. Our protocol was approved by the Institutional Ethics Committee, and informed consent was obtained from the parents in all cases. Prediction of normal hearing was 0.84 for the right ear and 0.78 for the left ear, while for hearing loss of an unspecified grade it was 0.98 for the right ear and 0.96 for the left ear. The kappa value was 0.7 for the right ear and 0.6 for the left ear. The acoustic reflex is of little diagnostic utility in predicting the degree of hearing loss, but it correctly predicts normal hearing in more than 80% of cases. The clinical utility of the reflex is nevertheless indisputable: it is an objective method, simple and rapid to use, that can be performed from birth, and its results are independent of the cooperation and willingness of the subject. It is proposed as an obligatory part of hearing screening. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
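
    Since the record above summarizes agreement with the kappa statistic, the following sketch shows Cohen's kappa computed from a small, hypothetical 2x2 agreement table (reflex prediction versus audiometric result); the counts are invented and are not the study's data.

    ```python
    def cohens_kappa(confusion):
        """Cohen's kappa for a square agreement table.

        confusion[i][j] = number of ears placed in category i by the acoustic reflex
        prediction and category j by the gold-standard audiogram.
        kappa = (p_observed - p_expected) / (1 - p_expected)
        """
        n = sum(sum(row) for row in confusion)
        k = len(confusion)
        p_obs = sum(confusion[i][i] for i in range(k)) / n
        row_tot = [sum(confusion[i]) for i in range(k)]
        col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
        p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical 2x2 table: rows = reflex prediction (normal / loss),
    # columns = audiometric result (normal / loss).
    table = [[40, 6],
             [4, 30]]
    print(round(cohens_kappa(table), 2))  # ~0.75 for these invented counts
    ```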

  10. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    PubMed Central

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  11. Musicians change their tune: how hearing loss alters the neural code.

    PubMed

    Parbery-Clark, Alexandra; Anderson, Samira; Kraus, Nina

    2013-08-01

    Individuals with sensorineural hearing loss have difficulty understanding speech, especially in background noise. This deficit remains even when audibility is restored through amplification, suggesting that mechanisms beyond a reduction in peripheral sensitivity contribute to the perceptual difficulties associated with hearing loss. Given that normal-hearing musicians have enhanced auditory perceptual skills, including speech-in-noise perception, coupled with heightened subcortical responses to speech, we aimed to determine whether similar advantages could be observed in middle-aged adults with hearing loss. Results indicate that musicians with hearing loss, despite self-perceptions of average performance for understanding speech in noise, have a greater ability to hear in noise relative to nonmusicians. This is accompanied by more robust subcortical encoding of sound (e.g., stimulus-to-response correlations and response consistency) as well as more resilient neural responses to speech in the presence of background noise (e.g., neural timing). Musicians with hearing loss also demonstrate unique neural signatures of spectral encoding relative to nonmusicians: enhanced neural encoding of the speech-sound's fundamental frequency but not of its upper harmonics. This stands in contrast to previous outcomes in normal-hearing musicians, who have enhanced encoding of the harmonics but not the fundamental frequency. Taken together, our data suggest that although hearing loss modifies a musician's spectral encoding of speech, the musician advantage for perceiving speech in noise persists in a hearing-impaired population by adaptively strengthening underlying neural mechanisms for speech-in-noise perception. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Temporal resolution in individuals with neurological disorders

    PubMed Central

    Rabelo, Camila Maia; Weihing, Jeffrey A; Schochat, Eliane

    2015-01-01

    OBJECTIVE: Temporal processing refers to the ability of the central auditory nervous system to encode and detect subtle changes in acoustic signals. This study aims to investigate the temporal resolution ability of individuals with mesial temporal sclerosis and to determine the sensitivity and specificity of the gaps-in-noise test in identifying this type of lesion. METHOD: This prospective study investigated differences in temporal resolution between 30 individuals with normal hearing and without neurological lesions (G1) and 16 individuals with both normal hearing and mesial temporal sclerosis (G2). Test performances were compared, and the sensitivity and specificity were calculated. RESULTS: There was no statistically significant difference in gap detection thresholds between the two groups, although G1 showed better average thresholds than G2. The sensitivity and specificity of the gaps-in-noise test for neurological lesions were 68% and 98%, respectively. CONCLUSIONS: Temporal resolution ability is compromised in individuals with neurological lesions caused by mesial temporal sclerosis. The gaps-in-noise test was shown to be a sensitive and specific measure of central auditory dysfunction in these patients. PMID:26375561

  13. Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging.

    PubMed

    Moberly, Aaron C; Harris, Michael S; Boyce, Lauren; Nittrouer, Susan

    2017-04-14

    Models of speech recognition suggest that "top-down" linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users.

  14. Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging

    PubMed Central

    Harris, Michael S.; Boyce, Lauren; Nittrouer, Susan

    2017-01-01

    Purpose Models of speech recognition suggest that “top-down” linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Method Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Results Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Conclusion Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users. PMID:28384805

  15. Individual Sensitivity to Spectral and Temporal Cues in Listeners With Hearing Impairment

    PubMed Central

    Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal (amplitude envelope). Spectral and/or temporal dimensions of synthetic speech were systematically manipulated along a continuum, and recognition was measured using the manipulated stimuli. Level was controlled to ensure cue audibility. Discriminant function analysis was used to determine to what degree spectral and temporal information contributed to the identification of each stimulus. Results Listeners with normal hearing were influenced to a greater extent by spectral cues for all stimuli. Listeners with hearing impairment generally utilized spectral cues when the information was static (spectral shape) but used temporal cues when the information was dynamic (formant transition). The relative use of spectral and temporal dimensions varied among individuals, especially among listeners with hearing loss. Conclusion Information about spectral and temporal cue use may aid in identifying listeners who rely to a greater extent on particular acoustic cues and applying that information toward therapeutic interventions. PMID:25629388

  16. Increased medial olivocochlear reflex strength in normal-hearing, noise-exposed humans

    PubMed Central

    2017-01-01

    Research suggests that college-aged adults are vulnerable to tinnitus and hearing loss due to exposure to traumatic levels of noise on a regular basis. Recent human studies have associated exposure to high noise exposure background (NEB, i.e., routine noise exposure) with reduced cochlear output and impaired speech processing ability in subjects with clinically normal hearing sensitivity. While the relationship between NEB and the function of auditory afferent neurons has been studied in the literature, little is known about the effects of NEB on the functioning of the auditory efferent system. The objective of the present study was to investigate the relationship between medial olivocochlear reflex (MOCR) strength and NEB in subjects with clinically normal hearing sensitivity. It was hypothesized that subjects with high NEB would exhibit reduced afferent input to the MOCR circuit, which would subsequently lead to reduced strength of the MOCR. In normal-hearing listeners, the study examined (1) the association between NEB and baseline click-evoked otoacoustic emissions (CEOAEs) and (2) the association between NEB and MOCR strength. The MOCR was measured using CEOAEs evoked by 60 dB pSPL linear clicks in a contralateral acoustic stimulation (CAS)-off and CAS-on (a broadband noise at 60 dB SPL) condition. Participants with at least 6 dB signal-to-noise ratio (SNR) in the CAS-off and CAS-on conditions were included for analysis. A normalized CEOAE inhibition index was calculated to express MOCR strength as a percentage. NEB was estimated using a validated questionnaire. The results showed that NEB was not associated with the baseline CEOAE amplitude (r = -0.112, p = 0.586). Contrary to the hypothesis, MOCR strength was positively correlated with NEB (r = 0.557, p = 0.003). NEB remained a significant predictor of MOCR strength (β = 2.98, t(19) = 3.474, p = 0.003) after the unstandardized coefficient was adjusted to control for effects of smoking, sound level tolerance (SLT) and tinnitus. These data provide evidence that MOCR strength is associated with NEB. The functional significance of increased MOCR strength is discussed. PMID:28886123
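
    The abstract above reports a normalized CEOAE inhibition index expressed as a percentage but does not give the exact formula. The sketch below uses one common convention, assumed here purely for illustration: convert the CAS-off and CAS-on CEOAE levels to linear amplitude and report the percentage reduction produced by contralateral noise.

    ```python
    def mocr_inhibition_percent(ceoae_off_db_spl: float, ceoae_on_db_spl: float) -> float:
        """Normalized CEOAE inhibition index, expressed as a percentage.

        Assumed convention (the abstract does not state the exact formula):
        convert CEOAE levels from dB SPL to linear amplitude and report the
        percentage reduction caused by contralateral acoustic stimulation.
        """
        amp_off = 10 ** (ceoae_off_db_spl / 20.0)
        amp_on = 10 ** (ceoae_on_db_spl / 20.0)
        return 100.0 * (1.0 - amp_on / amp_off)

    # Hypothetical CEOAE levels for one ear without and with contralateral noise.
    print(round(mocr_inhibition_percent(12.0, 10.5), 1))  # ~15.9% inhibition
    ```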

  17. How do albino fish hear?

    PubMed Central

    Lechner, W; Ladich, F

    2011-01-01

    Pigmentation disorders such as albinism are occasionally associated with hearing impairments in mammals. Therefore, we wanted to investigate whether such a phenomenon also exists in non-mammalian vertebrates. We measured the hearing abilities of normally pigmented and albinotic specimens of two catfish species, the European wels Silurus glanis (Siluridae) and the South American bronze catfish Corydoras aeneus (Callichthyidae). The non-invasive auditory evoked potential (AEP) recording technique was utilized to determine hearing thresholds at 10 frequencies from 0.05 to 5 kHz. Neither auditory sensitivity nor shape of AEP waveforms differed between normally pigmented and albinotic specimens at any frequency tested in both species. Silurus glanis and C. aeneus showed the best hearing between 0.3 and 1 kHz; the lowest thresholds were 78.4 dB at 0.5 kHz in S. glanis (pigmented), 75 dB at 1 kHz in S. glanis (albinotic), 77.6 dB at 0.5 kHz in C. aeneus (pigmented) and 76.9 dB at 1 kHz in C. aeneus (albinotic). This study indicates no association between albinism and hearing ability. Perhaps because of the lack of melanin in the fish inner ear, hearing in fishes is less likely to be affected by albinism than in mammals. PMID:21552308

  18. How do albino fish hear?

    PubMed

    Lechner, W; Ladich, F

    2011-03-01

    Pigmentation disorders such as albinism are occasionally associated with hearing impairments in mammals. Therefore, we wanted to investigate whether such a phenomenon also exists in non-mammalian vertebrates. We measured the hearing abilities of normally pigmented and albinotic specimens of two catfish species, the European wels Silurus glanis (Siluridae) and the South American bronze catfish Corydoras aeneus (Callichthyidae). The non-invasive auditory evoked potential (AEP) recording technique was utilized to determine hearing thresholds at 10 frequencies from 0.05 to 5 kHz. Neither auditory sensitivity nor shape of AEP waveforms differed between normally pigmented and albinotic specimens at any frequency tested in both species. Silurus glanis and C. aeneus showed the best hearing between 0.3 and 1 kHz; the lowest thresholds were 78.4 dB at 0.5 kHz in S. glanis (pigmented), 75 dB at 1 kHz in S. glanis (albinotic), 77.6 dB at 0.5 kHz in C. aeneus (pigmented) and 76.9 dB at 1 kHz in C. aeneus (albinotic). This study indicates no association between albinism and hearing ability. Perhaps because of the lack of melanin in the fish inner ear, hearing in fishes is less likely to be affected by albinism than in mammals.

  19. The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex.

    PubMed

    Nuttall, Helen E; Kennedy-Higgins, Daniel; Devlin, Joseph T; Adank, Patti

    2017-01-08

    Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent or dependent on the form of distortion in the speech signal. It is also unknown if speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test if lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and if lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, which were significantly less severe noise levels than used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, which indicated that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not quality, of degradation in the speech signal. Data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role for the motor cortex in compensating for differences in hearing ability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear

    PubMed Central

    Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.

    2014-01-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490

  1. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear.

    PubMed

    Gifford, René H; Grantham, D Wesley; Sheffield, Sterling W; Davis, Timothy J; Dwyer, Robert; Dorman, Michael F

    2014-06-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from -90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100-900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. Copyright © 2014. Published by Elsevier B.V.
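
    The localization results above are summarized as overall rms error. Here is a minimal sketch of that computation, using invented target and response azimuths spanning roughly the -90 to +90 degree range of the loudspeaker array described; it only illustrates the metric.

    ```python
    import math

    def rms_localization_error(targets_deg, responses_deg):
        """Overall root-mean-square localization error in degrees:
        sqrt(mean of squared differences between response and target azimuth)."""
        squared = [(r - t) ** 2 for t, r in zip(targets_deg, responses_deg)]
        return math.sqrt(sum(squared) / len(squared))

    # Hypothetical loudspeaker targets and listener responses (degrees azimuth,
    # negative = left of midline).
    targets = [-90, -45, 0, 45, 90]
    responses = [-50, -60, 10, 80, 30]
    print(round(rms_localization_error(targets, responses), 1))
    ```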

  2. Perceptions & Attitudes of Male Homosexuals from Differing Socio-Cultural & Audiological Backgrounds.

    ERIC Educational Resources Information Center

    Swartz, Daniel B.

    This study examined four male homosexual, sociocultural groups: normal-hearing homosexuals with normal-hearing parents, deaf homosexuals with normal-hearing parents, deaf homosexuals with hearing-impaired parents, and hard-of-hearing homosexuals with normal-hearing parents. Differences with regard to self-perception, identity, and attitudes were…

  3. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.

    PubMed

    Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V

    2012-01-01

    Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. Consequences of cochlear hearing loss on speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). Participants of all the groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.

  4. OVERLAP OF HEARING AND VOICING RANGES IN SINGING

    PubMed Central

    Hunter, Eric J.; Titze, Ingo R.

    2008-01-01

    Frequency and intensity ranges in voice production by trained and untrained singers were superimposed onto the average normal human hearing range. The vocal output for all subjects was shown both in Voice Range Profiles and Spectral Level Profiles. Trained singers took greater advantage of the dynamic range of the auditory system with harmonic energy (45% of the hearing range compared to 38% for untrained vocalists). This difference seemed to come from the trained singers’ ability to exploit the most sensitive part of the hearing range (around 3 to 4 kHz) through the use of the singer’s formant. The trained vocalists’ average maximum third-octave spectral band level was 95 dB SPL, compared to 80 dB SPL for untrained. PMID:19844607

  5. OVERLAP OF HEARING AND VOICING RANGES IN SINGING.

    PubMed

    Hunter, Eric J; Titze, Ingo R

    2005-04-01

    Frequency and intensity ranges in voice production by trained and untrained singers were superimposed onto the average normal human hearing range. The vocal output for all subjects was shown both in Voice Range Profiles and Spectral Level Profiles. Trained singers took greater advantage of the dynamic range of the auditory system with harmonic energy (45% of the hearing range compared to 38% for untrained vocalists). This difference seemed to come from the trained singers' ability to exploit the most sensitive part of the hearing range (around 3 to 4 kHz) through the use of the singer's formant. The trained vocalists' average maximum third-octave spectral band level was 95 dB SPL, compared to 80 dB SPL for untrained.

  6. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.

    PubMed

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I

    2015-02-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.

  7. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words

    PubMed Central

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.

    2015-01-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021

  8. Weighting of Acoustic Cues to a Manner Distinction by Children With and Without Hearing Loss

    PubMed Central

    Lowenstein, Joanna H.

    2015-01-01

    Purpose Children must develop optimal perceptual weighting strategies for processing speech in their first language. Hearing loss can interfere with that development, especially if cochlear implants are required. The three goals of this study were to measure, for children with and without hearing loss: (a) cue weighting for a manner distinction, (b) sensitivity to those cues, and (c) real-world communication functions. Method One hundred and seven children (43 with normal hearing [NH], 17 with hearing aids [HAs], and 47 with cochlear implants [CIs]) performed several tasks: labeling of stimuli from /bɑ/-to-/wɑ/ continua varying in formant and amplitude rise time (FRT and ART), discrimination of ART, word recognition, and phonemic awareness. Results Children with hearing loss were less attentive overall to acoustic structure than children with NH. Children with CIs, but not those with HAs, weighted FRT less and ART more than children with NH. Sensitivity could not explain cue weighting. FRT cue weighting explained significant amounts of variability in word recognition and phonemic awareness; ART cue weighting did not. Conclusion Signal degradation inhibits access to spectral structure for children with CIs, but cannot explain their delayed development of optimal weighting strategies. Auditory training could strengthen the weighting of spectral cues for children with CIs, thus aiding spoken language acquisition. PMID:25813201

  9. Development and Validation of a Portable Hearing Self-Testing System Based on a Notebook Personal Computer.

    PubMed

    Liu, Yan; Yang, Dong; Xiong, Fen; Yu, Lan; Ji, Fei; Wang, Qiu-Ju

    2015-09-01

    Hearing loss affects more than 27 million people in mainland China. It would be helpful to develop a portable and self-testing audiometer for the timely detection of hearing loss so that the optimal clinical therapeutic schedule can be determined. The objective of this study was to develop a software-based hearing self-testing system. The software-based self-testing system consisted of a notebook computer, an external sound card, and a pair of 10-Ω insert earphones. The system could be used to test the hearing thresholds by individuals themselves in an interactive manner using software. The reliability and validity of the system at octave frequencies of 0.25 to 8.0 kHz were analyzed in three series of experiments. Thirty-seven normal-hearing participants (74 ears) were enrolled in experiment 1. Forty individuals (80 ears) with sensorineural hearing loss (SNHL) participated in experiment 2. Thirteen normal-hearing participants (26 ears) and 37 participants (74 ears) with SNHL were enrolled in experiment 3. Each participant was enrolled in only one of the three experiments. In all experiments, pure-tone audiometry in a sound insulation room (standard test) was regarded as the gold standard. SPSS for Windows, version 17.0, was used for statistical analysis. The paired t-test was used to compare the hearing thresholds between the standard test and software-based self-testing (self-test) in experiments 1 and 2. In experiment 3 (main study), one-way analysis of variance and post hoc comparisons were used to compare the hearing thresholds among the standard test and two rounds of the self-test. Linear correlation analysis was carried out for the self-tests performed twice. The concordance was analyzed between the standard test and the self-test using the kappa method. p < 0.05 was considered statistically significant. Experiments 1 and 2: The hearing thresholds determined by the two methods were not significantly different at frequencies of 250, 500, or 8000 Hz (p > 0.05) but were significantly different at frequencies of 1000, 2000, and 4000 Hz (p < 0.05), except for 1000 Hz in the right ear in experiment 2. Experiment 3: The hearing thresholds determined by the standard test and self-tests repeated twice were not significantly different at any frequency (p > 0.05). The overall sensitivity of the self-test method was 97.6%, and the specificity was 98.3%. The sensitivity was 97.6% and the specificity was 97% for the patients with SNHL. The self-test had significant concordance with the standard test (kappa value = 0.848, p < 0.001). This portable hearing self-testing system based on a notebook personal computer is a reliable and sensitive method for hearing threshold assessment and monitoring. American Academy of Audiology.
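
    The comparison above between the standard booth test and the self-test relies on paired t-tests of thresholds at each frequency. The sketch below runs that comparison on hypothetical paired thresholds using scipy.stats.ttest_rel; the data are invented and serve only to illustrate the test.

    ```python
    # Hypothetical thresholds (dB HL) at one frequency for the same ears measured
    # by the standard booth test and by the self-test system.
    from scipy import stats

    standard = [10, 15, 25, 40, 55, 30, 20, 45]
    self_test = [10, 20, 25, 45, 50, 30, 25, 45]

    t_stat, p_value = stats.ttest_rel(standard, self_test)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    ```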

  10. Characterizing the binaural contribution to speech-in-noise reception in elderly hearing-impaired listeners.

    PubMed

    Neher, Tobias

    2017-02-01

    To scrutinize the binaural contribution to speech-in-noise reception, four groups of elderly participants with or without audiometric asymmetry <2 kHz and with or without near-normal binaural intelligibility level difference (BILD) completed tests of monaural and binaural phase sensitivity as well as cognitive function. Groups did not differ in age, overall degree of hearing loss, or cognitive function. Analyses revealed an influence of BILD status but not audiometric asymmetry on monaural phase sensitivity, strong correlations between monaural and binaural detection thresholds, and monaural and binaural but not cognitive BILD contributions. Furthermore, the N0Sπ threshold at 500 Hz predicted BILD performance effectively.

  11. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    PubMed

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M

    2015-01-01

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

 12. Binaural release from masking with single- and multi-electrode stimulation in children with cochlear implants

    PubMed Central

    Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.

    2016-01-01

    Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132

  13. Binaural release from masking with single- and multi-electrode stimulation in children with cochlear implants.

    PubMed

    Todd, Ann E; Goupell, Matthew J; Litovsky, Ruth Y

    2016-07-01

    Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs.

  14. Accelerometer-Determined Physical Activity and Mortality in a National Prospective Cohort Study: Considerations by Hearing Sensitivity.

    PubMed

    Loprinzi, Paul D

    2015-12-01

    Previous work demonstrates that hearing impairment and physical inactivity are associated with premature all-cause mortality. The purpose of this study was to discern whether increased physical activity among those with hearing impairment can produce survival benefits. Data from the 2003-2006 National Health and Nutrition Examination Survey were used, with follow-up through 2011. Physical activity was objectively measured over 7 days via accelerometry. Hearing sensitivity was objectively measured using a modified Hughson Westlake procedure. Among the 1,482 participants, 152 died during the follow-up period (10.26%, unweighted); the unweighted median follow-up period was 89 months (interquartile range = 74-98 months). For those with normal hearing and after adjustments, for every 60-min increase in physical activity, adults had a 19% (HR [Hazard Ratio] = 0.81; 95% confidence interval [CI] [0.48-1.35]; p = .40) reduced risk of all-cause mortality; however, this association was not statistically significant. In a similar manner, physical activity was not associated with all-cause mortality among those with mild hearing loss (HR = 0.76; 95% CI [0.51-1.13]; p = .17). However, after adjustments, and for every 60-min increase in physical activity for those with moderate or greater hearing loss, there was a 20% (HR = 0.80; 95% CI [0.67-0.95]; p = .01) reduced risk of all-cause mortality. Physical activity may help to prolong survival among those with greater hearing impairment.
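
    The hazard ratios above come from survival models of accelerometer-measured activity. The sketch below shows one plausible way to express such a model so that the hazard ratio is reported per 60-minute increase in activity; the lifelines package, the column names, and the adjustment covariates are assumptions rather than the study's actual specification.

```python
# Hedged sketch of a Cox model fit within one hearing-status stratum, with
# physical activity rescaled so the hazard ratio is per 60-minute increase.
import pandas as pd
from lifelines import CoxPHFitter

def hr_per_60_min(df: pd.DataFrame, hearing_group: str) -> float:
    sub = df[df["hearing_status"] == hearing_group].copy()
    sub["mvpa_per_60"] = sub["mvpa_minutes"] / 60.0   # HR expressed per 60-min increase
    cph = CoxPHFitter()
    cph.fit(
        sub[["followup_months", "died", "mvpa_per_60", "age", "sex"]],  # covariates are illustrative
        duration_col="followup_months",
        event_col="died",
    )
    return float(cph.hazard_ratios_["mvpa_per_60"])
```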

  15. Cognitive spare capacity: evaluation data and its association with comprehension of dynamic conversations

    PubMed Central

    Keidser, Gitte; Best, Virginia; Freeston, Katrina; Boyce, Alexandra

    2015-01-01

    It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble- and cafeteria-noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing. PMID:25999904

  16. Effect of Dual Sensory Loss on Auditory Localization: Implications for Intervention

    PubMed Central

    Simon, Helen J.; Levitt, Harry

    2007-01-01

    Our sensory systems are remarkable in several respects. They are extremely sensitive, they each perform more than one function, and they interact in a complementary way, thereby providing a high degree of redundancy that is particularly helpful should one or more sensory systems be impaired. In this article, the problem of dual hearing and vision loss is addressed. A brief description is provided on the use of auditory cues in vision loss, the use of visual cues in hearing loss, and the additional difficulties encountered when both sensory systems are impaired. A major focus of this article is the use of sound localization by normal hearing, hearing impaired, and blind individuals and the special problem of sound localization in people with dual sensory loss. PMID:18003869

  17. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

    Kelly, R R; Tomlison-Keasey, C

    1976-12-01

    Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.

  18. Unilateral hearing during development: hemispheric specificity in plastic reorganizations

    PubMed Central

    Kral, Andrej; Heid, Silvia; Hubka, Peter; Tillein, Jochen

    2013-01-01

    The present study investigates the hemispheric contributions of neuronal reorganization following early single-sided hearing (unilateral deafness). The experiments were performed on ten cats from our colony of deaf white cats. Two were identified in early hearing screening as unilaterally congenitally deaf. The remaining eight were bilaterally congenitally deaf, unilaterally implanted at different ages with a cochlear implant. Implanted animals were chronically stimulated using a single-channel portable signal processor for two to five months. Microelectrode recordings were performed at the primary auditory cortex under stimulation at the hearing and deaf ear with bilateral cochlear implants. Local field potentials (LFPs) were compared at the cortex ipsilateral and contralateral to the hearing ear. The focus of the study was on the morphology and the onset latency of the LFPs. With respect to morphology of LFPs, pronounced hemisphere-specific effects were observed. Morphology of amplitude-normalized LFPs for stimulation of the deaf and the hearing ear was similar for responses recorded at the same hemisphere. However, when comparisons were performed between the hemispheres, the morphology was more dissimilar even though the same ear was stimulated. This demonstrates hemispheric specificity of some cortical adaptations irrespective of the ear stimulated. The results suggest a specific adaptation process at the hemisphere ipsilateral to the hearing ear, involving specific (down-regulated inhibitory) mechanisms not found in the contralateral hemisphere. Finally, onset latencies revealed that the sensitive period for the cortex ipsilateral to the hearing ear is shorter than that for the contralateral cortex. Unilateral hearing experience leads to a functionally-asymmetric brain with different neuronal reorganizations and different sensitive periods involved. PMID:24348345

  19. Unilateral hearing during development: hemispheric specificity in plastic reorganizations.

    PubMed

    Kral, Andrej; Heid, Silvia; Hubka, Peter; Tillein, Jochen

    2013-01-01

    The present study investigates the hemispheric contributions of neuronal reorganization following early single-sided hearing (unilateral deafness). The experiments were performed on ten cats from our colony of deaf white cats. Two were identified in early hearing screening as unilaterally congenitally deaf. The remaining eight were bilaterally congenitally deaf, unilaterally implanted at different ages with a cochlear implant. Implanted animals were chronically stimulated using a single-channel portable signal processor for two to five months. Microelectrode recordings were performed at the primary auditory cortex under stimulation at the hearing and deaf ear with bilateral cochlear implants. Local field potentials (LFPs) were compared at the cortex ipsilateral and contralateral to the hearing ear. The focus of the study was on the morphology and the onset latency of the LFPs. With respect to morphology of LFPs, pronounced hemisphere-specific effects were observed. Morphology of amplitude-normalized LFPs for stimulation of the deaf and the hearing ear was similar for responses recorded at the same hemisphere. However, when comparisons were performed between the hemispheres, the morphology was more dissimilar even though the same ear was stimulated. This demonstrates hemispheric specificity of some cortical adaptations irrespective of the ear stimulated. The results suggest a specific adaptation process at the hemisphere ipsilateral to the hearing ear, involving specific (down-regulated inhibitory) mechanisms not found in the contralateral hemisphere. Finally, onset latencies revealed that the sensitive period for the cortex ipsilateral to the hearing ear is shorter than that for the contralateral cortex. Unilateral hearing experience leads to a functionally-asymmetric brain with different neuronal reorganizations and different sensitive periods involved.

  20. Low Empathy in Deaf and Hard of Hearing (Pre)Adolescents Compared to Normal Hearing Controls

    PubMed Central

    Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365

  1. Evaluation of Extended-wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2017-07-01

    for a transparent hearing protection device that could protect the hearing of normal-hearing listeners without degrading auditory situational... method, suggest that continuous noise protection is also comparable to conventional earplug devices. Behavioral testing on listeners with normal... associated with the extended-wear hearing aid could be adapted to provide long-term hearing protection for listeners with normal hearing with minimal

  2. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  3. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  4. Perception of Binaural Cues Develops in Children Who Are Deaf through Bilateral Cochlear Implantation

    PubMed Central

    Gordon, Karen A.; Deighton, Michael R.; Abbasalipour, Parvaneh; Papsin, Blake C.

    2014-01-01

    There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click- or electrical pulse- trains. Interaural level cues were identified by most participants including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing. PMID:25531107

  5. Early Radiosurgery Improves Hearing Preservation in Vestibular Schwannoma Patients With Normal Hearing at the Time of Diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akpinar, Berkcan; Mousavi, Seyed H., E-mail: mousavish@upmc.edu; McDowell, Michael M.

    Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001, respectively). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal hearing patients resulted in improved retention of all hearing measures compared with later SRS.

  6. Right-Ear Advantage for Speech-in-Noise Recognition in Patients with Nonlateralized Tinnitus and Normal Hearing Sensitivity.

    PubMed

    Tai, Yihsin; Husain, Fatima T

    2018-04-01

    Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were only found at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies, that take into account both hearing and cognitive abilities of patients, are needed to better parse out the effect of tinnitus in the absence of hearing impairment.
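
    A minimal sketch of the kind of analysis reported above is given below: converting QuickSIN list scores to SNR loss and correlating the left-ear values with tinnitus loudness matches. The 25.5 minus words-correct scoring rule is the conventional QuickSIN formula; the use of a Pearson correlation and the variable names are assumptions rather than the paper's stated procedure.

```python
# Illustrative sketch: QuickSIN-style scoring and a simple correlation with
# tinnitus loudness matches (dB). Inputs are per-participant values.
import numpy as np
from scipy.stats import pearsonr

def quicksin_snr_loss(words_correct_per_list):
    """Each QuickSIN list has 30 key words; SNR loss = 25.5 - total words correct."""
    return 25.5 - np.asarray(words_correct_per_list, dtype=float)

def relate_to_tinnitus(words_correct_left, loudness_match_db):
    snr_loss = quicksin_snr_loss(words_correct_left)
    r, p = pearsonr(snr_loss, loudness_match_db)
    return r, p
```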

  7. Spectrotemporal modulation sensitivity for hearing-impaired listeners: dependence on carrier center frequency and the relationship to speech intelligibility.

    PubMed

    Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W

    2014-07-01

    Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
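
    The correlations above were computed after partialling out the audiogram-based speech-intelligibility index (SII). A minimal sketch of one standard way to do that, by correlating the residuals left after regressing the covariate out of both measures, is shown below; it is an assumption about the procedure, not the authors' code.

```python
# Sketch of a partial correlation: correlate STM detection thresholds with
# speech-in-noise scores after removing the linear effect of the SII from both.
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covariate):
    x, y, z = map(np.asarray, (x, y, covariate))
    Z = np.column_stack([np.ones_like(z), z])            # intercept + covariate
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x given SII
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y given SII
    return pearsonr(x_res, y_res)                         # (r, p) of the residuals
```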

  8. Binaural sensitivity in children who use bilateral cochlear implants.

    PubMed

    Ehlers, Erica; Goupell, Matthew J; Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y

    2017-06-01

    Children who are deaf and receive bilateral cochlear implants (BiCIs) perform better on spatial hearing tasks using bilateral rather than unilateral inputs; however, they underperform relative to normal-hearing (NH) peers. This gap in performance is multi-factorial, including the inability of speech processors to reliably deliver binaural cues. Although much is known regarding binaural sensitivity of adults with BiCIs, less is known about how binaural sensitivity develops in children with BiCIs compared to NH children. Sixteen children (ages 9-17 years) were tested using synchronized research processors. Interaural time differences and interaural level differences (ITDs and ILDs, respectively) were presented to pairs of pitch-matched electrodes. Stimuli were 300-ms, 100-pulses-per-second, constant-amplitude pulse trains. In the first and second experiments, discrimination of interaural cues (either ITDs or ILDs) was measured using a two-interval left/right task. In the third experiment, subjects reported the perceived intracranial position of ITDs and ILDs in a lateralization task. All children demonstrated sensitivity to ILDs, possibly due to monaural level cues. Children who were born deaf had weak or absent sensitivity to ITDs; in contrast, ITD sensitivity was noted in children with previous exposure to acoustic hearing. Therefore, factors such as auditory deprivation, in particular, lack of early exposure to consistent timing differences between the ears, may delay the maturation of binaural circuits and cause insensitivity to binaural differences.

  9. Suppression tuning of distortion-product otoacoustic emissions: Results from cochlear mechanics simulation

    PubMed Central

    Liu, Yi-Wen; Neely, Stephen T.

    2013-01-01

    This paper presents the results of simulating the acoustic suppression of distortion-product otoacoustic emissions (DPOAEs) from a computer model of cochlear mechanics. A tone suppressor was introduced, causing the DPOAE level to decrease, and the decrement was plotted against an increasing suppressor level. Suppression threshold was estimated from the resulting suppression growth functions (SGFs), and suppression tuning curves (STCs) were obtained by plotting the suppression threshold as a function of suppressor frequency. Results show that the slope of SGFs is generally higher for low-frequency suppressors than high-frequency suppressors, resembling those obtained from normal hearing human ears. By comparing responses of normal (100%) vs reduced (50%) outer-hair-cell sensitivities, the model predicts that the tip-to-tail difference of the STCs correlates well with that of intra-cochlear iso-displacement tuning curves. The correlation is poorer, however, between the sharpness of the STCs and that of the intra-cochlear tuning curves. These results agree qualitatively with what was recently reported from normal-hearing and hearing-impaired human subjects, and examination of intra-cochlear model responses can provide the needed insight regarding the interpretation of DPOAE STCs obtained in individual ears. PMID:23363112
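
    A short sketch of how a suppression tuning curve can be derived from suppression growth functions follows; the 3-dB decrement criterion and the data layout are assumptions for illustration only.

```python
# Illustrative sketch: for each suppressor frequency, interpolate the suppressor
# level at which the DPOAE decrement reaches a criterion value, then collect
# those thresholds into a suppression tuning curve.
import numpy as np

def suppression_threshold(levels_db, decrement_db, criterion_db=3.0):
    """levels_db, decrement_db: one suppression growth function (assumed monotonic)."""
    return float(np.interp(criterion_db, decrement_db, levels_db))

def suppression_tuning_curve(sgfs):
    """sgfs maps suppressor frequency (Hz) -> (levels_db, decrement_db) arrays."""
    return {f: suppression_threshold(lv, dec) for f, (lv, dec) in sorted(sgfs.items())}
```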

  10. The Envoy® Totally Implantable Hearing System, St. Croix Medical

    PubMed Central

    Kroll, Kai; Grant, Iain L.; Javel, Eric

    2002-01-01

    The Totally Implantable Envoy® System is currently undergoing clinical trials in both the United States and Europe. The fully implantable hearing device is intended for use in patients with sensorineural hearing loss. The device employs piezoelectric transducers to sense ossicle motion and drive the stapes. Programmable signal processing parameters include amplification, compression, and variable frequency response. The fully implantable attribute allows users to take advantage of normal external ear resonances and head-related transfer functions, while avoiding undesirable earmold effects. The high sensitivity, low power consumption, and high fidelity attributes of piezoelectric transducers minimize acoustic feedback and maximize battery life (Gyo, 1996; Yanagihara, 1987, 2001). The surgical procedure to install the device has been accurately defined and implantation is reversible. PMID:25425915

  11. Effects of Varying Reverberation on Music Perception for Young Normal-Hearing and Old Hearing-Impaired Listeners.

    PubMed

    Reinhart, Paul N; Souza, Pamela E

    2018-01-01

    Reverberation enhances music perception and is one of the most important acoustic factors in auditorium design. However, previous research on reverberant music perception has focused on young normal-hearing (YNH) listeners. Old hearing-impaired (OHI) listeners have degraded spatial auditory processing; therefore, they may perceive reverberant music differently. Two experiments were conducted examining the effects of varying reverberation on music perception for YNH and OHI listeners. Experiment 1 examined whether YNH listeners and OHI listeners prefer different amounts of reverberation for classical music listening. Symphonic excerpts were processed at a range of reverberation times using a point-source simulation. Listeners performed a paired-comparisons task in which they heard two excerpts with different reverberation times, and they indicated which they preferred. The YNH group preferred a reverberation time of 2.5 s; however, the OHI group did not demonstrate any significant preference. Experiment 2 examined whether OHI listeners are less sensitive to (i.e., less able to discriminate) differences in reverberation time than YNH listeners. YNH and OHI participants listened to pairs of music excerpts and indicated whether they perceived the same or different amount of reverberation. Results indicated that the ability of both groups to detect differences in reverberation time improved with increasing reverberation time difference. However, discrimination was poorer for the OHI group than for the YNH group. This suggests that OHI listeners are less sensitive to differences in reverberation when listening to music than YNH listeners, which might explain the lack of group reverberation time preferences of the OHI group.

  12. Audiometric Characteristics of Hyperacusis Patients

    PubMed Central

    Sheldrake, Jacqueline; Diehl, Peter U.; Schaette, Roland

    2015-01-01

    Hyperacusis is a frequent auditory disorder where sounds of normal volume are perceived as too loud or even painfully loud. There is a high degree of co-morbidity between hyperacusis and tinnitus: most hyperacusis patients also have tinnitus, but only about 30–40% of tinnitus patients also show symptoms of hyperacusis. In order to elucidate the mechanisms of hyperacusis, detailed measurements of loudness discomfort levels (LDLs) across the hearing range would be desirable. However, previous studies have only reported LDLs for a restricted frequency range, e.g., from 0.5 to 4 kHz or from 1 to 8 kHz. We have measured audiograms and LDLs in 381 patients with a primary complaint of hyperacusis for the full standard audiometric frequency range from 0.125 to 8 kHz. On average, patients had mild high-frequency hearing loss, but more than a third of the tested ears had normal hearing thresholds (HTs), i.e., ≤20 dB HL. LDLs were found to be significantly decreased compared to a normal-hearing reference group, with average values around 85 dB HL across the frequency range. However, receiver operating characteristic analysis showed that LDL measurements are neither sensitive nor specific enough to serve as a single test for hyperacusis. There was a moderate positive correlation between HTs and LDLs (r = 0.36), i.e., LDLs tended to be higher at frequencies where hearing loss was present, suggesting that hyperacusis is unlikely to be caused by HT increase, in contrast to tinnitus for which hearing loss is a main trigger. Moreover, our finding that LDLs are decreased across the full range of audiometric frequencies, regardless of the pattern or degree of hearing loss, indicates that hyperacusis might be due to a generalized increase in auditory gain. Tinnitus on the other hand is thought to be caused by neuroplastic changes in a restricted frequency range, suggesting that tinnitus and hyperacusis might not share a common mechanism. PMID:26029161
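
    The receiver operating characteristic analysis mentioned above can be illustrated with a short sketch: treating an ear's average LDL as a classification score for hyperacusis versus the normal-hearing reference group. Variable names, the sign convention, and the Youden-index cut-off are assumptions, not details taken from the study.

```python
# Illustrative-only ROC sketch: how well an average LDL (dB HL) separates
# hyperacusis ears from normal-hearing reference ears.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def ldl_roc(ldl_hyperacusis, ldl_reference):
    scores = np.concatenate([ldl_hyperacusis, ldl_reference])
    labels = np.concatenate([np.ones(len(ldl_hyperacusis)), np.zeros(len(ldl_reference))])
    # Lower LDLs indicate hyperacusis, so negate the score so higher = more likely case.
    auc = roc_auc_score(labels, -scores)
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    best = np.argmax(tpr - fpr)                        # Youden-index operating point
    return auc, -thresholds[best], tpr[best], 1 - fpr[best]  # AUC, LDL cut-off, sens., spec.
```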

  13. An Evaluation of the BKB-SIN, HINT, QuickSIN, and WIN Materials on Listeners with Normal Hearing and Listeners with Hearing Loss

    ERIC Educational Resources Information Center

    Wilson, Richard H.; McArdle, Rachel A.; Smith, Sherri L.

    2007-01-01

    Purpose: The purpose of this study was to examine in listeners with normal hearing and listeners with sensorineural hearing loss the within- and between-group differences obtained with 4 commonly available speech-in-noise protocols. Method: Recognition performances by 24 listeners with normal hearing and 72 listeners with sensorineural hearing…

  14. Spectrotemporal modulation sensitivity for hearing-impaired listeners: Dependence on carrier center frequency and the relationship to speech intelligibility

    PubMed Central

    Mehraei, Golbarg; Gallun, Frederick J.; Leek, Marjorie R.; Bernstein, Joshua G. W.

    2014-01-01

    Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4–32 Hz), spectral ripple density [0.5–4 cycles/octave (c/o)] and carrier center frequency (500–4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4–12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements. PMID:24993215

  15. Visual context processing deficits in schizophrenia: effects of deafness and disorganization.

    PubMed

    Horton, Heather K; Silverstein, Steven M

    2011-07-01

    Visual illusions allow for strong tests of perceptual functioning. Perceptual impairments can produce superior task performance on certain tasks (i.e., more veridical perception), thereby avoiding generalized deficit confounds while tapping mechanisms that are largely outside of conscious control. Using a task based on the Ebbinghaus illusion, a perceptual phenomenon where the perceived size of a central target object is affected by the size of surrounding inducers, we tested hypotheses related to visual integration in deaf (n = 31) and hearing (n = 34) patients with schizophrenia. In past studies, psychiatrically healthy samples displayed increased visual integration relative to schizophrenia samples and thus were less able to correctly judge target sizes. Deafness, and especially the use of sign language, leads to heightened sensitivity to peripheral visual cues and increased sensitivity to visual context. Therefore, relative to hearing subjects, deaf subjects were expected to display increased context sensitivity (i.e., a more normal illusion effect as evidenced by a decreased ability to correctly judge central target sizes). Confirming the hypothesis, deaf signers were significantly more sensitive to the illusion than nonsigning hearing patients. Moreover, an earlier age of sign language acquisition, higher levels of linguistic ability, and shorter illness duration were significantly related to increased context sensitivity. As predicted, disorganization was associated with reduced context sensitivity for all subjects. The primary implications of these data are that perceptual organization impairment in schizophrenia is plastic and that it is related to a broader failure in coordinating cognitive activity.

  16. Modulation Detection Interference for Asynchronous Presentation of Masker and Target in Listeners with Normal and Impaired Hearing

    ERIC Educational Resources Information Center

    Koopman, Jan; Houtgast, Tammo; Dreschler, Wouter A.

    2008-01-01

    Purpose: The sensitivity to sinusoidal amplitude modulations (SAMs) is reduced when other modulated maskers are presented simultaneously at a distant frequency (also referred to as "modulation detection interference" [MDI]). This article describes the results of onset differences between masker and target as a parameter. Method: Carrier…

  17. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss.

    PubMed

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-10-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim Analyze the head alignment in students with normal hearing and students with sensorineural hearing loss and compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. The analysis of head alignment occurred through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normally hearing students (p < 0.001). Forward head posture was the type of postural change observed most, occurring in greater proportion in children with hearing loss (p < 0.001), followed by the side slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in the head posture compared with children with normal hearing.
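
    The group comparison described above reduces to a test on a 2 x 2 contingency table. The sketch below illustrates that test with placeholder counts; the fallback rule for choosing Fisher's exact test is a common convention, not necessarily the authors' criterion.

```python
# Sketch: head-posture change (yes/no) by hearing status, tested with the
# chi-square test or, when expected counts are small, Fisher's exact test.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[40, 8],    # hearing loss: changed / not changed (placeholder counts)
                  [15, 33]])  # normal hearing: changed / not changed (placeholder counts)
chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():                 # sparse table: fall back to Fisher's exact test
    odds_ratio, p = fisher_exact(table)
print(f"p = {p:.4f}")
```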

  18. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss

    PubMed Central

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-01-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim Analyze the head alignment in students with normal hearing and students with sensorineural hearing loss and compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. The analysis of head alignment occurred through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normally hearing students (p < 0.001). Forward head posture was the type of postural change observed most, occurring in greater proportion in children with hearing loss (p < 0.001), followed by the side slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in the head posture compared with children with normal hearing. PMID:25992037

  19. High-Level Psychophysical Tuning Curves: Forward Masking in Normal-Hearing and Hearing-Impaired Listeners.

    ERIC Educational Resources Information Center

    Nelson, David A.

    1991-01-01

    Forward-masked psychophysical tuning curves were obtained at multiple probe levels from 26 normal-hearing listeners and 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Results indicated that some cochlear hearing losses influence the sharp tuning capabilities usually associated with outer hair cell function. (Author/JDD)

  20. Measurements of acoustic impedance at the input to the occluded ear canal.

    PubMed

    Larson, V D; Nelson, J A; Cooper, W A; Egolf, D P

    1993-01-01

    Multi-frequency (multi-component) acoustic impedance measurements may evolve into a sensitive technique for the remote detection of aural pathologies. Such data are also relevant to models used in hearing aid design and could be an asset to the hearing aid prescription and fitting process. This report describes the development and use of a broad-band procedure which acquires impedance data in 20 Hz intervals and describes a comparison of data collected at two sites by different investigators. Mean data were in excellent agreement, and an explanation for a single case of extreme normal variability is presented.

  1. Cortical Auditory Evoked Potentials in (Un)aided Normal-Hearing and Hearing-Impaired Adults

    PubMed Central

    Van Dun, Bram; Kania, Anna; Dillon, Harvey

    2016-01-01

    Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with a hearing loss participated in this study. Three speech sounds—/m/, /g/, and /t/—were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal hearers. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users. PMID:27587919

  2. Hearing and hearing loss: Causes, effects, and treatments

    NASA Astrophysics Data System (ADS)

    Schmiedt, Richard A.

    2003-04-01

    Hearing loss can have multiple causes. The outer and middle ears are conductive pathways for acoustic energy to the inner ear (cochlea) and help shape our spectral sensitivity. Conductive hearing loss is mechanical in nature such that the energy transfer to the cochlea is impeded, often from eardrum perforations or middle ear fluid buildup. Beyond the middle ear, the cochlea comprises three interdependent systems necessary for normal hearing. The first is that of basilar-membrane micromechanics including the outer hair cells. This system forms the basis of the cochlear amplifier and is the most vulnerable to noise and drug exposure. The second system comprises the ion pumps in the lateral wall tissues of the cochlea. These highly metabolic cells provide energy to the cochlear amplifier in the form of electrochemical potentials. This second system is particularly vulnerable to the effects of aging. The third system comprises the inner hair cells and their associated sensory nerve fibers. This system is the transduction stage, changing mechanical vibrations to nerve impulses. New treatments for hearing loss are on the horizon; however, at present the best strategy is avoidance of cochlear trauma and the proper use of hearing aids. [Work supported by NIA and MUSC.]

  3. Interaural time discrimination of envelopes carried on high-frequency tones as a function of level and interaural carrier mismatch

    PubMed Central

    Blanks, Deidra A.; Buss, Emily; Grose, John H.; Fitzpatrick, Douglas C.; Hall, Joseph W.

    2009-01-01

    Objectives The present study investigated interaural time discrimination for binaurally mismatched carrier frequencies in listeners with normal hearing. One goal of the investigation was to gain insights into binaural hearing in patients with bilateral cochlear implants, where the coding of interaural time differences may be limited by mismatches in the neural populations receiving stimulation on each side. Design Temporal envelopes were manipulated to present low frequency timing cues to high frequency auditory channels. Carrier frequencies near 4 kHz were amplitude modulated at 128 Hz via multiplication with a half-wave rectified sinusoid, and that modulation was either in-phase across ears or delayed to one ear. Detection thresholds for non-zero interaural time differences were measured for a range of stimulus levels and a range of carrier frequency mismatches. Data were also collected under conditions designed to limit cues based on stimulus spectral spread, including masking and truncation of sidebands associated with modulation. Results Listeners with normal hearing can detect interaural time differences in the face of substantial mismatches in carrier frequency across ears. Conclusions The processing of interaural time differences in listeners with normal hearing is likely based on spread of excitation into binaurally matched auditory channels. Sensitivity to interaural time differences in listeners with cochlear implants may depend upon spread of current that results in the stimulation of neural populations that share common tonotopic space bilaterally. PMID:18596646
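
    A minimal sketch of the stimulus described above is given below: a high-frequency carrier multiplied by a half-wave rectified low-frequency sinusoid, with the modulator delayed in one ear to impose an envelope interaural time difference. The sample rate, duration, and 500-µs delay are assumed values for illustration.

```python
# Illustrative stimulus sketch: 4-kHz carrier amplitude modulated at 128 Hz via
# multiplication with a half-wave rectified sinusoid; the envelope (not the
# carrier) is delayed in one ear.
import numpy as np

fs = 48_000                         # sample rate in Hz (assumption)
dur = 0.5                           # duration in seconds (assumption)
t = np.arange(int(fs * dur)) / fs

def modulated_carrier(fc, fm, envelope_delay_s=0.0):
    """Carrier at fc multiplied by a half-wave rectified sinusoid at fm."""
    modulator = np.maximum(np.sin(2 * np.pi * fm * (t - envelope_delay_s)), 0.0)
    return np.sin(2 * np.pi * fc * t) * modulator

left = modulated_carrier(fc=4000, fm=128, envelope_delay_s=0.0)
right = modulated_carrier(fc=4000, fm=128, envelope_delay_s=500e-6)  # 500-µs envelope ITD
stimulus = np.column_stack([left, right])                            # 2-channel array for playback
```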

  4. Cochlear neuropathy and the coding of supra-threshold sound.

    PubMed

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  5. Speech-evoked auditory brainstem responses in children with hearing loss.

    PubMed

    Koravand, Amineh; Al Osman, Rida; Rivest, Véronique; Poulin, Catherine

    2017-08-01

    The main objective of the present study was to investigate subcortical auditory processing in children with sensorineural hearing loss. Auditory Brainstem Responses (ABRs) were recorded using click and speech /da/ stimuli. Twenty-five children, aged 6-14 years old, participated in the study: 13 with normal hearing acuity and 12 with sensorineural hearing loss. No significant differences were observed for the click-evoked ABRs between normal hearing and hearing-impaired groups. For the speech-evoked ABRs, no significant differences were found for the latencies of the following responses between the two groups: onset (V and A), transition (C), one of the steady-state waves (F), and offset (O). However, the latency of the steady-state waves (D and E) was significantly longer for the hearing-impaired compared to the normal hearing group. Furthermore, the amplitude of the offset wave O and of the envelope frequency response (EFR) of the speech-evoked ABRs was significantly larger for the hearing-impaired compared to the normal hearing group. Results obtained from the speech-evoked ABRs suggest that children with a mild to moderately-severe sensorineural hearing loss have a specific pattern of subcortical auditory processing. Our results show differences for the speech-evoked ABRs in normal hearing children compared to hearing-impaired children. These results add to the body of the literature on how children with hearing loss process speech at the brainstem level. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Print Knowledge of Preschool Children with Hearing Loss

    ERIC Educational Resources Information Center

    Werfel, Krystal L.; Lund, Emily; Schuele, C. Melanie

    2015-01-01

    Measures of print knowledge were compared across preschoolers with hearing loss and normal hearing. Alphabet knowledge did not differ between groups, but preschoolers with hearing loss performed lower on measures of print concepts and concepts of written words than preschoolers with normal hearing. Further study is needed in this area.

  7. Social conversational skills development in early implanted children.

    PubMed

    Guerzoni, Letizia; Murri, Alessandra; Fabrizi, Enrico; Nicastri, Maria; Mancini, Patrizia; Cuda, Domenico

    2016-09-01

    Social conversational skills are a salient aspect of early pragmatic development in young children. These skills include two different abilities, assertiveness and responsiveness. This study investigated the development of these abilities in early implanted children and their relationships with lexical development and some language-sensitive variables. Prospective, observational, nonrandomized study. Participants included 28 children with congenital profound sensorineural hearing loss. The mean age at device activation was 13.3 months (standard deviation [SD] ±4.2). The Social-Conversational Skills Rating Scale was used to evaluate assertiveness and responsiveness. The MacArthur-Bates Communicative Development Inventory (Words and Sentences form) was used to analyze lexical development. The device experience was 12 months for each child, and the mean age at testing was 25.9 months (SD ±4.6). Assertiveness and responsiveness scores were within the normal range of normal-hearing age-matched peers. Age at cochlear implant activation exerted a significant impact, with the highest scores associated with the youngest patients. The residual correlations of assertiveness and responsiveness with lexical development were positive and strongly significant (r = 0.69 and 0.73, respectively). Preoperative hearing threshold had a significant coefficient for the assertiveness score. Age at diagnosis and maternal education level were not correlated with the social conversational skills. Early-implanted children developed social conversational skills similar to those of normal-hearing peers matched for age 1 year after device activation. Social conversational skills and lexical development were strongly correlated, but the present study design cannot specify the direction of this relationship. Children with better preoperative residual hearing exhibited better assertive ability. Laryngoscope, 126:2098-2105, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Motivation to Address Self-Reported Hearing Problems in Adults with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Alicea, Carly C. M.; Doherty, Karen A.

    2017-01-01

    Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…

  9. Use of Adaptive Digital Signal Processing to Improve Speech Communication for Normally Hearing and Hearing-Impaired Subjects.

    ERIC Educational Resources Information Center

    Harris, Richard W.; And Others

    1988-01-01

    A two-microphone adaptive digital noise cancellation technique improved word-recognition ability for 20 normal and 12 hearing-impaired adults by reducing multitalker speech babble and speech spectrum noise 18-22 dB. Word recognition improvements averaged 37-50 percent for normal and 27-40 percent for hearing-impaired subjects. Improvement was best…

  10. The effect of compression speed on intelligibility: simulated hearing-aid processing with and without original temporal fine structure information.

    PubMed

    Hopkins, Kathryn; King, Andrew; Moore, Brian C J

    2012-09-01

    Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.

  11. The relationship between loudness intensity functions and the click-ABR wave V latency.

    PubMed

    Serpanos, Y C; O'Malley, H; Gravel, J S

    1997-10-01

    To assess the relationship of loudness growth and the click-evoked auditory brain stem response (ABR) wave V latency-intensity function (LIF) in listeners with normal hearing or cochlear hearing loss. The effect of hearing loss configuration on the intensity functions was also examined. Behavioral and electrophysiological intensity functions were obtained using click stimuli of comparable intensities in listeners with normal hearing (Group I; n = 10), and cochlear hearing loss of flat (Group II; n = 10) or sloping (Group III; n = 10) configurations. Individual intensity functions were obtained from measures of loudness growth using the psychophysical methods of absolute magnitude estimation and production of loudness (geometrically averaged to provide the measured loudness function), and from the wave V latency measures of the ABR. Slope analyses for the behavioral and electrophysiological intensity functions were separately performed by group. The loudness growth functions for the groups with cochlear hearing loss approximated the normal function at high intensities, with overall slope values consistent with those reported from previous psychophysical research. The ABR wave V LIF for the group with a flat configuration of cochlear hearing loss approximated the normal function at high intensities, and was displaced parallel to the normal function for the group with sloping configuration. The relationship between the behavioral and electrophysiological intensity functions was examined at individual intensities across the range of the functions for each subject. A significant relationship was obtained between loudness and the ABR wave V LIFs for the groups with normal hearing and flat configuration of cochlear hearing loss; the association was not significant (p = 0.10) for the group with a sloping configuration of cochlear hearing loss. The results of this study established a relationship between loudness and the ABR wave V latency for listeners with normal hearing, and flat cochlear hearing loss. In listeners with a sloping configuration of cochlear hearing loss, the relationship was not significant. This suggests that the click-evoked ABR may be used to estimate loudness growth at least for individuals with normal hearing and those with a flat configuration of cochlear hearing loss. Predictive equations were derived to estimate loudness growth for these groups. The use of frequency-specific stimuli may provide more precise information on the nature of the relationship between loudness growth and the ABR wave V latency, particularly for listeners with sloping configurations of cochlear hearing loss.

  12. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

  13. Vestibular (dys)function in children with sensorineural hearing loss: a systematic review.

    PubMed

    Verbecque, Evi; Marijnissen, Tessa; De Belder, Niels; Van Rompaey, Vincent; Boudewyns, An; Van de Heyning, Paul; Vereeck, Luc; Hallemans, Ann

    2017-06-01

    The objective of this study is to provide an overview of the prevalence of vestibular dysfunction in children with SNHL, classified according to the applied test and its corresponding sensitivity and specificity. Data were gathered using a systematic search query including reference screening. Pubmed, Web of Science and Embase were searched. The strategy and reporting of this review were based on the Meta-analysis of Observational Studies in Epidemiology (MOOSE) guidelines. Methodological quality was assessed with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. All studies, regardless of the applied vestibular test, showed that vestibular function differs significantly between children with hearing loss and normal hearing (p < 0.05). Compared with caloric testing, the sensitivity of the Rotational Chair Test (RCT) varies between 61 and 80% and specificity between 21 and 80%, whereas these were, respectively, 71-100% and 30-100% for cervical Vestibular Evoked Myogenic Potentials (cVEMP). Compared with RCT, the sensitivity was 88-100% and the specificity was 69-100% for the Dynamic Visual Acuity test, respectively 67-100% and 71-100% for the (video) Head Impulse Test, and 83% and 86% for the ocular VEMP. Currently, due to methodological shortcomings, the evidence on sensitivity and specificity of vestibular tests ranges from unknown to moderate. Future research should focus on adequate sample sizes (subgroups >30).
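
    For context, sensitivity and specificity figures of this kind are obtained by scoring one vestibular test against another that serves as the reference; a minimal sketch with hypothetical counts (not taken from the review) is shown below.

```python
# Minimal sketch: sensitivity/specificity of an index vestibular test (e.g., cVEMP)
# against a reference test (e.g., caloric testing). Counts are hypothetical.
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """tp/fn: index test abnormal/normal when the reference test is abnormal;
    fp/tn: index test abnormal/normal when the reference test is normal."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical 2x2 counts for illustration only (not from the review):
sens, spec = sensitivity_specificity(tp=18, fp=4, fn=6, tn=12)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```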

  14. Application of distortion product otoacoustic emissions to inflation of the eustachian tube in low frequency tinnitus with normal hearing.

    PubMed

    Wang, Hui; Song, Ningying; Li, Xujing; Lv, Hongguang

    2013-06-01

    This study was designed to investigate the application of distortion product otoacoustic emissions to assess the efficacy of eustachian tube inflation on low frequency tinnitus with normal hearing. Ninety-four patients (155 ears) suffering from subjective tinnitus with normal hearing sensitivity participated in this study. The control group consisted of fifty volunteers (100 ears) without tinnitus. They were subjected to full history taking, otoscopy, basic audiologic evaluation and distortion product otoacoustic emissions (DPOAE). For the patients with decreased DPOAE amplitude over a limited frequency range from 0.5 to 1 kHz, we offered nasal drops and tubal inflation for a week, and DPOAE was performed again. The patients were followed up for a month. In the tinnitus group, 34.8% of DPOAE-grams showed decreased amplitude at frequencies from 0.5 to 1 kHz, and the perceived ringing was mostly low in pitch. Among the patients who accepted the treatment of eustachian tube inflation, the tinnitus disappeared in 16.7%, with no recurrence within one month; the tinnitus was reduced in 66.67% within one month; the DPOAE amplitude improved over the affected frequency range in 95.5%; and the tinnitus persisted in 16.7%. Changes in the mechanical properties of the ossicular chain or the tympanic membrane influenced by tympanum pressure may cause tinnitus, which is sub-clinical prior to changes in audiometry and tympanometry. Low frequency tinnitus may gain transitory relief from ringing with tubal inflation. DPOAE proved to be a useful tool in the evaluation of the efficacy of tubal inflation on low frequency tinnitus with normal hearing. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Verbal Working Memory in Older Adults: The Roles of Phonological Capacities and Processing Speed

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Lowenstein, Joanna H.; Wucinich, Taylor; Moberly, Aaron C.

    2016-01-01

    Purpose: This study examined the potential roles of phonological sensitivity and processing speed in age-related declines of verbal working memory. Method: Twenty younger and 25 older adults with age-normal hearing participated. Two measures of verbal working memory were collected: digit span and serial recall of words. Processing speed was…

  16. Factors Affecting Sensitivity to Frequency Change in School-Age Children and Adults

    ERIC Educational Resources Information Center

    Buss, Emily; Taylor, Crystal N.; Leibold, Lori J.

    2014-01-01

    Purpose: The factors affecting frequency discrimination in school-age children are poorly understood. The goal of the present study was to evaluate developmental effects related to memory for pitch and the utilization of temporal fine structure. Method: Listeners were 5.1- to 13.6-year-olds and adults, all with normal hearing. A subgroup of…

  17. How close should a student with unilateral hearing loss stay to a teacher in a noisy classroom?

    PubMed

    Noh, Heil; Park, Yong-Gyu

    2012-06-01

    To determine the optimal seating position in a noisy classroom for students with unilateral hearing loss (UHL) without any auditory rehabilitation, as compared to normal-hearing adults and student peers. Speech discrimination scores (SDS) for babble noise at distances of 3, 4, 6, 8, and 10 m from a speaker were measured in a simulated classroom measuring 300 m3 (reverberation time = 0.43 s). Students with UHL (n = 25, 10-19 years old), normal-hearing students (n = 25), and normal-hearing adults (n = 25) participated. The SDS for the normal-hearing adults at the 3, 4, 6, 8, and 10 m distances were 90.0±6.4%, 84.7±7.9%, 80.6±10.0%, 75.5±12.6%, and 68.8±13.0%, respectively. Those for the normal-hearing students were 90.1±6.2%, 78.1±9.4%, 66.4±10.7%, 61.8±11.2%, and 60.8±10.9%. Those for the UHL group were 81.7±9.0%, 70.2±12.4%, 62.1±17.2%, 52.4±17.1%, and 48.9±17.9%. The UHL group needed to be seated at 4.35 m to achieve a mean SDS equivalent to that of normal-hearing adults seated at 10 m. Likewise, the UHL group needed to be seated at 6.27 m to achieve an SDS equivalent to that of the normal-hearing students seated at 10 m. Students with UHL in noisy classrooms therefore require seating between 4.35 m and no further than 6.27 m from the teacher to obtain an SDS comparable to normal-hearing adults and student peers.
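
    The equivalent-distance figures (4.35 m and 6.27 m) follow from straight-line interpolation of the reported mean SDS values; the sketch below reproduces that arithmetic, assuming linear interpolation between measurement points (the paper's exact method is not stated).

```python
import numpy as np

# Reproduce the equivalent-seating-distance figures by linear interpolation
# of the reported mean SDS values for the UHL group.
distance = np.array([3.0, 4.0, 6.0, 8.0, 10.0])       # metres from the speaker
sds_uhl  = np.array([81.7, 70.2, 62.1, 52.4, 48.9])   # mean SDS (%) for UHL students

# np.interp needs increasing x-values, so interpolate on the reversed (ascending) SDS axis.
def distance_for_sds(target_sds: float) -> float:
    return float(np.interp(target_sds, sds_uhl[::-1], distance[::-1]))

print(distance_for_sds(68.8))   # ~4.35 m, matching normal-hearing adults at 10 m
print(distance_for_sds(60.8))   # ~6.27 m, matching normal-hearing students at 10 m
```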

  18. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  19. Effects of Anxiety Sensitivity and Hearing Loss on Tinnitus Symptom Severity

    PubMed Central

    Moon, Kyung Ray; Park, Subin; Jung, YouJi; Lee, AhReum

    2018-01-01

    Objective The aim of the present study was to examine the relative role of anxiety sensitivity and hearing loss on tinnitus symptom severity in a large clinical sample of patients with tinnitus. Methods A total of 1,705 patients with tinnitus who visited the tinnitus clinic underwent pure-tone audiometric testing and a battery of self-report questionnaires. Multiple linear regression analyses were performed to identify the relationship of anxiety sensitivity and hearing loss to tinnitus symptom severity. Results Both anxiety sensitivity and hearing loss showed a significant association with annoyance (anxiety sensitivity β=0.11, p=0.010; hearing loss β=0.09, p=0.005) and with THI score (anxiety sensitivity β=0.21, p<0.001; hearing loss β=0.10, p<0.001) after adjusting for confounding factors. Meanwhile, the awareness time (β=0.19, p<0.001) and loudness (β=0.11, p<0.001) of tinnitus were associated only with hearing loss and not with anxiety sensitivity. Conclusion Our results indicate that both hearing loss and anxiety sensitivity were associated with increased tinnitus symptom severity. Furthermore, these associations could differ according to the characteristics of tinnitus symptoms. PMID:29422923

  20. Functional changes in people with different hearing status and experiences of using Chinese sign language: an fMRI study.

    PubMed

    Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji

    2014-01-01

    The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects when they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either observation or imitation CSL tasks and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus which was also found in the HNS or the HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, the activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Predicting social functioning in children with a cochlear implant and in normal-hearing children: the role of emotion regulation.

    PubMed

    Wiefferink, Carin H; Rieffe, Carolien; Ketelaar, Lizet; Frijns, Johan H M

    2012-06-01

    The purpose of the present study was to compare children with a cochlear implant and normal hearing children on aspects of emotion regulation (emotion expression and coping strategies) and social functioning (social competence and externalizing behaviors), and to examine the relation between emotion regulation and social functioning. Participants were 69 children with cochlear implants (CI children) and 67 normal hearing children (NH children) aged 1.5-5 years. Parents answered questionnaires about their children's language skills, social functioning, and emotion regulation. Children also completed simple tasks to measure their emotion regulation abilities. Cochlear implant children had fewer adequate emotion regulation strategies and were less socially competent than normal hearing children. The parents of cochlear implant children did not report fewer externalizing behaviors than those of normal hearing children. While social competence in normal hearing children was strongly related to emotion regulation, cochlear implant children regulated their emotions in ways that were unrelated to social competence. On the other hand, emotion regulation explained externalizing behaviors better in cochlear implant children than in normal hearing children. While better language skills were related to higher social competence in both groups, they were related to fewer externalizing behaviors only in cochlear implant children. Our results indicate that cochlear implant children have less adequate emotion-regulation strategies and less social competence than normal hearing children. Since they received their implants relatively recently, they might eventually catch up with their hearing peers. Longitudinal studies should further explore the development of emotion regulation and social functioning in cochlear implant children. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach.

    PubMed

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.
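
    The network construction described here can be sketched as follows: words become nodes and each syntactic dependency contributes an edge, after which degree-distribution and clustering statistics indicate scale-free and small-world structure. The toy dependencies below are invented placeholders, not the Chinese treebanks used in the study.

```python
import collections
import networkx as nx

# Toy syntactic dependency network: each (head, dependent) pair is a directed edge.
dependencies = [
    ("eat", "I"), ("eat", "apple"), ("apple", "red"),
    ("read", "she"), ("read", "book"), ("book", "red"),
]

G = nx.DiGraph()
G.add_edges_from(dependencies)

# Degree distribution: a scale-free network shows a roughly power-law tail,
# i.e. log(count) falls approximately linearly with log(degree).
degree_counts = collections.Counter(dict(G.degree()).values())
for degree, count in sorted(degree_counts.items()):
    print(f"degree {degree}: {count} node(s)")

# One small-world indicator, computed on the undirected version:
print("average clustering:", nx.average_clustering(G.to_undirected()))
```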

  3. Sound localization in noise in hearing-impaired listeners.

    PubMed

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  4. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
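
    The "additional variance explained" comparison reported here can be illustrated with a small hierarchical-regression sketch: fit intelligibility on the Speech Intelligibility Index alone, then add STM sensitivity, and compare the R² values. The arrays below are random placeholders, not the study's measurements.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data for 12 hypothetical HI listeners (not the study's data).
rng = np.random.default_rng(0)
n = 12
sii = rng.uniform(0.3, 0.9, n)                     # audibility-based predictor
stm = rng.uniform(-10, 0, n)                       # STM detection threshold (dB)
intelligibility = 40 + 50 * sii - 2 * stm + rng.normal(0, 5, n)

# Base model (SII only) versus full model (SII + STM sensitivity).
base = sm.OLS(intelligibility, sm.add_constant(np.column_stack([sii]))).fit()
full = sm.OLS(intelligibility, sm.add_constant(np.column_stack([sii, stm]))).fit()

print(f"R^2 (SII only)      = {base.rsquared:.2f}")
print(f"R^2 (SII + STM)     = {full.rsquared:.2f}")
print(f"additional variance = {full.rsquared - base.rsquared:.2f}")
```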

  5. Speech prosody perception in cochlear implant users with and without residual hearing.

    PubMed

    Marx, Mathieu; James, Christopher; Foxton, Jessica; Capber, Amandine; Fraysse, Bernard; Barone, Pascal; Deguine, Olivier

    2015-01-01

    The detection of fundamental frequency (F0) variations plays a prominent role in the perception of intonation. Cochlear implant (CI) users with residual hearing might have access to these F0 cues. The objective was to study if and how residual hearing facilitates speech prosody perception in CI users. The authors compared F0 difference limen (F0DL) and question/statement discrimination performance for 15 normal-hearing subjects (NHS) and two distinct groups of CI subjects, according to the presence or absence of acoustic residual hearing: one "combined group" (n = 11) with residual hearing and one CI-only group (n = 10) without any residual hearing. To assess the relative contribution of the different acoustic cues for intonation perception, the sensitivity index d' was calculated for three distinct auditory conditions: one condition with original recordings, one condition with a constant F0, and one with equalized duration and amplitude. In the original condition, combined subjects showed better question/statement discrimination than CI-only subjects, d' 2.44 (SE 0.3) and 0.91 (SE 0.25), respectively. Mean d' score of NHS was 3.3 (SE 0.06). When F0 variations were removed, the scores decreased significantly for combined subjects (d' = 0.66, SE 0.51) and NHS (d' = 0.4, SE 0.09). Duration and amplitude equalization affected the scores of CI-only subjects (mean d' = 0.34, SE 0.28) but did not influence the scores of combined subjects (d' = 2.7, SE 0.02) or NHS (d' = 3.3, SE 0.33). Mean F0DL was poorer in CI-only subjects (34%, SE 15) compared with combined subjects (8.8%, SE 1.4) and NHS (2.4%, SE 0.05). In CI subjects with residual hearing, intonation d' score was correlated with mean residual hearing level (r = -0.86, n = 11, p < 0.001) and mean F0DL (r = 0.84, n = 11, p < 0.001). Where CI subjects with residual hearing had thresholds better than 60 dB HL in the low frequencies, they displayed near-normal question/statement discrimination abilities. Normal listeners mainly relied on F0 variations which were the most effective prosodic cue. In comparison, CI subjects without any residual hearing had poorer F0 discrimination and showed a strong deficit in speech prosody perception. However, this CI-only group appeared to be able to make some use of amplitude and duration cues for statement/question discrimination.

  6. The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise.

    PubMed

    Shen, Jing; Souza, Pamela E

    2017-09-18

    This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss.

  7. The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise

    PubMed Central

    Souza, Pamela E.

    2017-01-01

    Purpose This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Method Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. Results The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Conclusions Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss. PMID:28800370

  8. Modeling the time-varying and level-dependent effects of the medial olivocochlear reflex in auditory nerve responses.

    PubMed

    Smalt, Christopher J; Heinz, Michael G; Strickland, Elizabeth A

    2014-04-01

    The medial olivocochlear reflex (MOCR) has been hypothesized to provide benefit for listening in noisy environments. This advantage can be attributed to a feedback mechanism that suppresses auditory nerve (AN) firing in continuous background noise, resulting in increased sensitivity to a tone or speech. MOC neurons synapse on outer hair cells (OHCs), and their activity effectively reduces cochlear gain. The computational model developed in this study implements the time-varying, characteristic frequency (CF) and level-dependent effects of the MOCR within the framework of a well-established model for normal and hearing-impaired AN responses. A second-order linear system was used to model the time-course of the MOCR using physiological data in humans. The stimulus-level-dependent parameters of the efferent pathway were estimated by fitting AN sensitivity derived from responses in decerebrate cats using a tone-in-noise paradigm. The resulting model uses a binaural, time-varying, CF-dependent, level-dependent OHC gain reduction for both ipsilateral and contralateral stimuli that improves detection of a tone in noise, similarly to recorded AN responses. The MOCR may be important for speech recognition in continuous background noise as well as for protection from acoustic trauma. Further study of this model and its efferent feedback loop may improve our understanding of the effects of sensorineural hearing loss in noisy situations, a condition in which hearing aids currently struggle to restore normal speech perception.
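
    As a rough illustration of the second-order linear system used to model the MOCR time course, the sketch below simulates the gradual gain reduction following a step of elicitor noise; the time constants and the maximum gain reduction are illustrative assumptions, not the fitted parameters from the study.

```python
import numpy as np
from scipy.signal import TransferFunction, step

# Second-order low-pass system driving a reduction in simulated OHC gain.
tau1, tau2 = 0.05, 0.30          # seconds; fast and slow components (assumed)
max_gain_reduction_db = 15.0     # asymptotic OHC gain reduction (assumed)

# H(s) = K / ((tau1*s + 1) * (tau2*s + 1))
num = [max_gain_reduction_db]
den = np.polymul([tau1, 1.0], [tau2, 1.0])
system = TransferFunction(num, den)

# Step response: gain reduction (dB) versus time after elicitor onset.
t, gain_reduction = step(system, T=np.linspace(0.0, 2.0, 500))
print(f"gain reduction after 0.5 s: {np.interp(0.5, t, gain_reduction):.1f} dB")
```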

  9. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability - Implications for Cochlear Implant Candidacy

    PubMed Central

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.

    2016-01-01

    Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750

  10. Earless toads sense low frequencies but miss the high notes.

    PubMed

    Womack, Molly C; Christensen-Dalsgaard, Jakob; Coloma, Luis A; Chaparro, Juan C; Hoke, Kim L

    2017-10-11

    Sensory losses or reductions are frequently attributed to relaxed selection. However, anuran species have lost tympanic middle ears many times, despite anurans' use of acoustic communication and the benefit of middle ears for hearing airborne sound. Here we determine whether pre-existing alternative sensory pathways enable anurans lacking tympanic middle ears (termed earless anurans) to hear airborne sound as well as eared species or to better sense vibrations in the environment. We used auditory brainstem recordings to compare hearing and vibrational sensitivity among 10 species (six eared, four earless) within the Neotropical true toad family (Bufonidae). We found that species lacking middle ears are less sensitive to high-frequency sounds; however, low-frequency hearing and vibrational sensitivity are equivalent between eared and earless species. Furthermore, extratympanic hearing sensitivity varies among earless species, highlighting potential species differences in extratympanic hearing mechanisms. We argue that ancestral bufonids may have had sufficient extratympanic hearing and vibrational sensitivity such that earless lineages tolerated the loss of high-frequency hearing sensitivity by adopting species-specific behavioural strategies to detect conspecifics, predators and prey. © 2017 The Author(s).

  11. The Perception of Stress Pattern in Young Cochlear Implanted Children: An EEG Study.

    PubMed

    Vavatzanidis, Niki K; Mürbe, Dirk; Friederici, Angela D; Hahne, Anja

    2016-01-01

    Children with sensorineural hearing loss may (re)gain hearing with a cochlear implant, a device that transforms sounds into electric pulses and bypasses the dysfunctioning inner ear by stimulating the auditory nerve directly with an electrode array. Many implanted children master the acquisition of spoken language successfully, yet we still have little knowledge of the actual input they receive with the implant and specifically which language-sensitive cues they hear. This would be important, however, both for understanding the flexibility of the auditory system when presented with stimuli after a (life-)long phase of deprivation and for planning therapeutic intervention. In rhythmic languages the general stress pattern conveys important information about word boundaries. Infant language acquisition relies on such cues and can be severely hampered when this information is missing, as seen for dyslexic children and children with specific language impairment. Here we ask whether children with a cochlear implant perceive differences in stress patterns during their language acquisition phase and, if they do, whether this ability is present directly following implant stimulation or whether and how much time is needed for the auditory system to adapt to the new sensory modality. We performed a longitudinal ERP study, testing at bimonthly intervals the stress pattern perception of 17 young hearing-impaired children (age range: 9-50 months; mean: 22 months) during their first 6 months of implant use. An additional session before the implantation served as a control baseline. During a session they passively listened to an oddball paradigm featuring the disyllable "baba," which was stressed either on the first or the second syllable (trochaic vs. iambic stress pattern). A group of age-matched normal-hearing children participated as controls. Our results show that, within the first 6 months of implant use, the implanted children develop a negative mismatch response for iambic but not for trochaic deviants, thus showing the same result as the normal-hearing controls. Even congenitally deaf children show the same developing pattern. We therefore conclude (a) that young implanted children have early access to stress pattern information and (b) that they develop ERP responses similar to those of normal-hearing children.

  12. Postural control assessment in students with normal hearing and sensorineural hearing loss.

    PubMed

    Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2015-01-01

    Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of their vestibular system due to internal ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, using the Balance Error Scoring System scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical data analysis between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups on both surfaces (stable and unstable; p < 0.001). Students with sensorineural hearing loss showed greater instability in postural control compared to normal hearing students of the same gender and age. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners.

    PubMed

    Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh

    2018-03-01

    Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that, at least, part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.

  14. Comparison of single-microphone noise reduction schemes: can hearing impaired listeners tell the difference?

    PubMed

    Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger

    2018-06-01

    The perceived quality of nine different single-microphone noise reduction (SMNR) algorithms was evaluated and compared in subjective listening tests with normal-hearing and hearing-impaired (HI) listeners. Speech samples mixed with traffic noise or with party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortions, intrusiveness of background noise, listening effort and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. 18 normal-hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal-hearing subjects clearly differentiated between different SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal-hearing subjects. It seems harder for HI listeners to distinguish between additive noise and speech distortions, and/or they might have a different understanding of the term "speech distortion" than normal-hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.

  15. Effects of Noise on Speech Recognition and Listening Effort in Children with Normal Hearing and Children with Mild Bilateral or Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-01-01

    Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method: Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…

  16. Validation of the use of self-reported hearing loss and the Hearing Handicap Inventory for elderly among rural Indian elderly population.

    PubMed

    Deepthi, R; Kasthuri, Arvind

    2012-01-01

    Hearing loss is a potentially disabling problem among the elderly, leading to physical and social dysfunction. Though audiometric assessment of hearing loss is considered the gold standard, it is not feasible in community settings. Several questionnaires measuring hearing handicap have been developed. Knowledge regarding the applicability of these questionnaires among rural elderly is limited; hence, a study was planned to validate a single question and the Shortened Hearing Handicap Inventory for the Elderly (HHIE-S) in detecting hearing loss against pure-tone audiometry among rural Indian elderly. A single question, 'do you feel you have a hearing loss?', and the HHIE-S were administered to 175 elderly in two rural areas. Hearing ability was assessed using pure tone audiometry. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of both screening tools were compared with pure tone averages (PTAs) greater than 25, 40 and 55 dB hearing level (mild, moderate and severe hearing loss, respectively). The single question yielded low sensitivity (30.9%) and high specificity (93.9%) for mild hearing loss. Similarly, the HHIE-S yielded a sensitivity of 26.2% and a specificity of 95.9%. Sensitivity with the single question increased to 76.2% and specificity decreased to 83.1% with severe hearing loss. Sensitivity with the HHIE-S also increased to 76.2% and specificity decreased to 87.7% with severe hearing loss. These hearing screening questionnaires will be useful in identifying more disabling hearing losses among rural elderly, which helps in planning rehabilitation services. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
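
    How such sensitivity and specificity values translate into predictive values depends on the prevalence of hearing loss in the screened population; the sketch below applies Bayes' rule to the reported single-question figures, with the 30% prevalence chosen purely for illustration (it is not a figure reported in the study).

```python
# Predictive values from sensitivity, specificity, and an assumed prevalence.
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Reported single-question values for mild hearing loss: sens 30.9%, spec 93.9%.
ppv, npv = predictive_values(0.309, 0.939, prevalence=0.30)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```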

  17. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach

    PubMed Central

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences. PMID:27920733

  18. Emergent Literacy Skills in Preschool Children With Hearing Loss Who Use Spoken Language: Initial Findings From the Early Language and Literacy Acquisition (ELLA) Study.

    PubMed

    Werfel, Krystal L

    2017-10-05

    The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed measures of oral language, phonological processing, and print knowledge twice at a 6-month interval. A series of repeated-measures analyses of variance were used to compare change across groups. Main effects of time were observed for all variables except phonological recoding. Main effects of group were observed for vocabulary, morphosyntax, phonological memory, and concepts of print. Interaction effects were observed for phonological awareness and concepts of print. Children with hearing loss performed more poorly than children with normal hearing on measures of oral language, phonological memory, and conceptual print knowledge. Two interaction effects were present: for phonological awareness and concepts of print, children with hearing loss demonstrated less positive change than children with normal hearing. Although children with hearing loss generally demonstrated positive growth in emergent literacy skills, their initial performance was lower than that of children with normal hearing, and their rates of change were not sufficient to catch up with their peers over time.

  19. Screening an elderly hearing impaired population for mild cognitive impairment using Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA).

    PubMed

    Lim, Magdalene Yeok Leng; Loo, Jenny Hooi Yin

    2018-07-01

    To determine if there is an association between hearing loss and poorer cognitive scores on the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) and to determine if poor hearing acuity affects scoring on the cognitive screening tests of MMSE and MoCA. One hundred fourteen elderly patients (Singapore residents) aged between 55 and 86 years were sampled. Participants completed a brief history questionnaire, pure tone audiometry, and 2 cognitive screening tests: the MMSE and MoCA. Average hearing thresholds of the better ear in the frequencies of 0.5, 1, 2, and 4 kHz were used for data analysis. Hearing loss was significantly associated with poorer cognitive scores in Poisson regression models adjusted for age. Mini-Mental State Examination scores were shown to decrease by 2.8% (P = .029), and MoCA scores by 3.5% (P = .013), for every 10 dB of hearing loss. Analysis of the hearing-sensitive components of "Registration" and "Recall" in MMSE and MoCA using chi-square tests showed significantly poorer performance in the hearing loss group as compared to the normal hearing group. Phonetic analysis of target words with high error rates showed that the poor performance was likely attributable to decreased hearing acuity, on top of a possible true deficit in cognition in the hearing impaired. Hearing loss is associated with poorer cognitive scores on MMSE and MoCA, and cognitive scoring is likely confounded by poor hearing ability. This highlights an important, often overlooked aspect of sensory impairment during cognitive screening. Provisions should be made when testing for cognition in the hearing-impaired population to avoid over-referral and subsequent misdiagnoses of cognitive impairment. Copyright © 2018 John Wiley & Sons, Ltd.
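
    As a worked illustration of how a "% decrease per 10 dB" figure follows from a Poisson regression coefficient (assuming the usual log-link model, log E[score] = b0 + b_hl × hearing loss in dB), the sketch below uses a hypothetical coefficient chosen only to reproduce roughly the MMSE figure quoted above; it is not the study's model or data.

    ```python
    # Converting a per-dB Poisson regression coefficient into a percent change per 10 dB.
    import math

    def percent_change_per_10db(beta_per_db):
        """Percent change in the expected score for a 10 dB increase in hearing loss."""
        rate_ratio = math.exp(10.0 * beta_per_db)   # multiplicative effect of +10 dB
        return (rate_ratio - 1.0) * 100.0

    # Hypothetical coefficient chosen so the change is roughly -2.8% per 10 dB.
    beta = math.log(1 - 0.028) / 10.0
    print(f"{percent_change_per_10db(beta):.1f}% per 10 dB")   # approx. -2.8%
    ```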

  20. [Emotional response to music by postlingually-deafened adult cochlear implant users].

    PubMed

    Wang, Shuo; Dong, Ruijuan; Zhou, Yun; Li, Jing; Qi, Beier; Liu, Bo

    2012-10-01

    To assess the emotional response to music by postlingually-deafened adult cochlear implant users. The Munich Music Questionnaire (MUMU) was used to match the music experience and the motivation for music use between 12 normal-hearing and 12 cochlear implant subjects. The emotion rating test in the Musical Sounds in Cochlear Implants (MuSIC) test battery was used to assess emotion perception ability for both normal-hearing and cochlear implant subjects. A total of 15 music phrases were used. Responses were given by selecting a rating on a scale from 1 to 10, where "1" represents a "very sad" feeling and "10" represents a "very happy" feeling. In comparison with normal-hearing subjects, the 12 cochlear implant subjects made less active use of music for emotional purposes. The emotion ratings of cochlear implant subjects were similar to those of normal-hearing subjects, but with large variability. Postlingually-deafened cochlear implant subjects on average performed similarly to normal-hearing subjects in emotion rating tasks, but their active use of music for emotional purposes was markedly less than that of normal-hearing subjects.

  1. Individual Differences in Auditory Brainstem Response Wave Characteristics

    PubMed Central

    Jagadeesh, Anoop; Mauermann, Manfred; Ernst, Frauke

    2016-01-01

    Little is known about how outer hair cell loss interacts with noise-induced and age-related auditory nerve degradation (i.e., cochlear synaptopathy) to affect auditory brainstem response (ABR) wave characteristics. Given that listeners with impaired audiograms likely suffer from mixtures of these hearing deficits and that ABR amplitudes have successfully been used to isolate synaptopathy in listeners with normal audiograms, an improved understanding of how different hearing pathologies affect the ABR source generators will improve their sensitivity in hearing diagnostics. We employed a functional model for human ABRs in which different combinations of hearing deficits were simulated and show that high-frequency cochlear gain loss steepens the slopes of the ABR Wave-V latency-versus-intensity and amplitude-versus-intensity curves. We propose that grouping listeners according to a ratio of these slope metrics (i.e., the ABR growth ratio) might offer a way to factor out the outer hair cell loss deficit and maximally relate individual differences for constant ratios to other peripheral hearing deficits such as cochlear synaptopathy. We compared the model predictions to recorded click-ABRs from 30 participants with normal or high-frequency sloping audiograms and confirm the predicted relationship between the ABR latency growth curve and audiogram slope. Experimental ABR amplitude growth showed large individual differences and was compared with the Wave-I amplitude, Wave-V/I ratio, or the interwave I–V latency in the same listeners. The model simulations along with the ABR recordings suggest that a hearing loss profile depicting the ABR growth ratio versus the Wave-I amplitude or Wave-V/I ratio might be able to differentiate outer hair cell deficits from cochlear synaptopathy in listeners with mixed pathologies. PMID:27837052
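
    A minimal sketch of the slope-ratio idea described above, assuming the ABR growth ratio is the Wave-V latency-growth slope divided by the amplitude-growth slope; the least-squares fit, the sign convention, and the level/latency/amplitude values are illustrative, not the study's recordings.

    ```python
    # Sketch of an "ABR growth ratio": ratio of latency-vs-level slope to amplitude-vs-level slope.
    import numpy as np

    def abr_growth_ratio(levels_db, latencies_ms, amplitudes_uv):
        """Fit straight lines across level and return the ratio of the two slopes."""
        latency_slope = np.polyfit(levels_db, latencies_ms, 1)[0]     # ms per dB
        amplitude_slope = np.polyfit(levels_db, amplitudes_uv, 1)[0]  # microvolts per dB
        return latency_slope / amplitude_slope  # sign convention is illustrative only

    levels = np.array([60, 70, 80, 90, 100])                # click level (dB)
    latency = np.array([7.2, 6.8, 6.4, 6.1, 5.9])           # Wave-V latency (ms), made up
    amplitude = np.array([0.25, 0.33, 0.42, 0.50, 0.57])    # Wave-V amplitude (uV), made up
    print(abr_growth_ratio(levels, latency, amplitude))
    ```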

  2. The Sensitivity of Adolescent Hearing Screens Significantly Improves by Adding High Frequencies.

    PubMed

    Sekhar, Deepa L; Zalewski, Thomas R; Beiler, Jessica S; Czarnecki, Beth; Barr, Ashley L; King, Tonya S; Paul, Ian M

    2016-09-01

    One in 6 US adolescents has high-frequency hearing loss, often related to hazardous noise. Yet, the American Academy of Pediatrics (AAP) hearing screen (500, 1,000, 2,000, 4,000 Hertz) primarily includes low frequencies (<3,000 Hertz). Study objectives were to determine (1) sensitivity and specificity of the AAP hearing screen for adolescent hearing loss and (2) if adding high frequencies increases sensitivity, while repeat screening of initial referrals reduces false positive results (maintaining acceptable specificity). Eleventh graders (n = 134) participated in hearing screening (2013-2014) including "gold-standard" sound-treated booth testing to calculate sensitivity and specificity. Of the 43 referrals, 27 (63%) had high-frequency hearing loss. AAP screen sensitivity and specificity were 58.1% (95% confidence interval 42.1%-73.0%) and 91.2% (95% confidence interval 83.4%-96.1%), respectively. Adding high frequencies (6,000, 8,000 Hertz) significantly increased sensitivity to 79.1% (64.0%-90.0%; p = .003). Specificity with repeat screening was 81.3% (71.8%-88.7%; p = .003). Adolescent hearing screen sensitivity improves with high frequencies. Repeat testing maintains acceptable specificity. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
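
    The confidence intervals quoted above are of the kind produced by an exact binomial (Clopper-Pearson) calculation; the sketch below shows one common way to compute such an interval for a sensitivity estimate. The counts are illustrative, SciPy is an assumed dependency, and this is not the study's analysis code.

    ```python
    # Exact binomial (Clopper-Pearson) confidence interval for a proportion.
    from scipy.stats import beta

    def clopper_pearson(k, n, alpha=0.05):
        """95% CI (by default) for a proportion k/n using the beta-quantile formulation."""
        lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lower, upper

    # Illustrative counts only (not taken from the study): 25 of 43 cases detected.
    lo, hi = clopper_pearson(25, 43)
    print(f"sensitivity = {25/43:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
    ```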

  3. Correlated evolution between hearing sensitivity and social calls in bats

    PubMed Central

    Bohn, Kirsten M; Moss, Cynthia F; Wilkinson, Gerald S

    2006-01-01

    Echolocating bats are auditory specialists, with exquisite hearing that spans several octaves. In the ultrasonic range, bat audiograms typically show highest sensitivity in the spectral region of their species-specific echolocation calls. Well-developed hearing in the audible range has been commonly attributed to a need to detect sounds produced by prey. However, bat pups often emit isolation calls with low-frequency components that facilitate mother–young reunions. In this study, we examine whether low-frequency hearing in bats exhibits correlated evolution with (i) body size; (ii) high-frequency hearing sensitivity or (iii) pup isolation call frequency. Using published audiograms, we found that low-frequency hearing sensitivity is not dependent on body size but is related to high-frequency hearing. After controlling for high-frequency hearing, we found that low-frequency hearing exhibits correlated evolution with isolation call frequency. We infer that detection and discrimination of isolation calls have favoured enhanced low-frequency hearing because accurate parental investment is critical: bats have low reproductive rates, non-volant altricial young and must often identify their pups within large crèches. PMID:17148288

  4. Evaluation of gap filling skills and reading mistakes of cochlear implanted and normally hearing students.

    PubMed

    Çizmeci, Hülya; Çiprut, Ayça

    2018-06-01

    This study aimed to (1) evaluate the gap filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total time of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, who were enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in both reading error percentages and gap filling scores. On average, students using cochlear implants made more reading errors than normal-hearing students, and their gap filling performance on the passages was lower than that of their normal-hearing peers. Neither implantation age nor duration of implant use had a significant effect on the reading performance of the implanted students. Even when implanted early, implanted students in the older grades showed significant differences in reading performance compared with their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Perception of stochastic envelopes by normal-hearing and cochlear-implant listeners

    PubMed Central

    Gomersall, Philip A.; Turner, Richard E.; Baguley, David M.; Deeks, John M.; Gockel, Hedwig E.; Carlyon, Robert P.

    2016-01-01

    We assessed auditory sensitivity to three classes of temporal-envelope statistics (modulation depth, modulation rate, and comodulation) that are important for the perception of ‘sound textures’. The textures were generated by a probabilistic model that prescribes the temporal statistics of a selected number of modulation envelopes, superimposed onto noise carriers. Discrimination thresholds were measured for normal-hearing (NH) listeners and users of a MED-EL pulsar cochlear implant (CI), for separate manipulations of the average rate and modulation depth of the envelope in each frequency band of the stimulus, and of the co-modulation between bands. Normal-hearing (NH) listeners' discrimination of envelope rate was similar for baseline modulation rates of 5 and 34 Hz, and much poorer than previously reported for sinusoidally amplitude-modulated sounds. In contrast, discrimination of model parameters that controlled modulation depth was poorer at the lower baseline rate, consistent with the idea that, at the lower rate, subjects get fewer ‘looks’ at the relevant information when comparing stimuli differing in modulation depth. NH listeners could discriminate differences in co-modulation across bands; a multidimensional scaling study revealed that this was likely due to genuine across-frequency processing, rather than within-channel cues. CI users' discrimination performance was worse overall than for NH listeners, but showed a similar dependence on stimulus parameters. PMID:26706708

  6. Hearing screening in children with skeletal dysplasia.

    PubMed

    Tunkel, David E; Kerbavaz, Richard; Smith, Beth; Rose-Hardison, Danielle; Alade, Yewande; Hoover-Fong, Julie

    2011-12-01

    To determine the prevalence of hearing loss and abnormal tympanometry in children with skeletal dysplasia. Clinical screening program. National convention of the Little People of America. Convenience sample of volunteers aged 18 years or younger with skeletal dysplasias. Hearing screening with behavioral testing and/or otoacoustic emissions, otoscopy, and tympanometry. A failed hearing screen was defined as a hearing threshold of 35 dB HL (hearing level) or greater at 1 or more tested frequencies or by a "fail" otoacoustic emissions response. Types B and C tympanograms were considered abnormal. A total of 58 children (aged ≤18 years) with skeletal dysplasia enrolled, and 56 completed hearing screening. Forty-one children had normal hearing (71%); 9 failed in 1 ear (16%); and 6 failed in both ears (10%). Forty-four children had achondroplasia, and 31 had normal hearing in both ears (71%); 8 failed hearing screening in 1 ear (18%), and 3 in both ears (7%). Tympanometry was performed in 45 children, with normal tympanograms found in 21 (47%), bilateral abnormal tympanograms in 15 (33%), and unilateral abnormal tympanograms in 9 (20%). Fourteen children with achondroplasia had normal tympanograms (42%); 11 had bilateral abnormal tympanograms (33%); and 8 had unilateral abnormal tympanograms (24%). For those children without functioning tympanostomy tubes, the odds of hearing loss were 9.5 times greater when tympanometry was abnormal (P = .03). Hearing loss and middle-ear disease are both highly prevalent in children with skeletal dysplasias. Abnormal tympanometry is highly associated with the presence of hearing loss, as expected in children with eustachian tube dysfunction. Hearing screening with medical intervention is recommended for these children.
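
    A small illustration of the odds-ratio arithmetic behind the "9.5 times greater odds" statement above; the 2x2 counts below are hypothetical, not the study's data.

    ```python
    # Odds ratio from a 2x2 table: rows = tympanometry abnormal/normal, columns = hearing loss yes/no.
    def odds_ratio(a, b, c, d):
        """a, b = loss / no loss with abnormal tympanometry; c, d = loss / no loss with normal tympanometry."""
        return (a / b) / (c / d)

    # Hypothetical counts chosen only to give an odds ratio near 9.5.
    print(odds_ratio(12, 10, 4, 31))  # approx. 9.3 with these illustrative counts
    ```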

  7. A comparative evaluation of dental caries status among hearing-impaired and normal children of Malda, West Bengal, evaluated with the Caries Assessment Spectrum and Treatment.

    PubMed

    Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu

    2016-01-01

    Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, utilizing the Caries Assessment Spectrum and Treatment (CAST) instrument. In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed, and statistical analysis was carried out utilizing the Z-test. A statistically significant difference was found between the study group (hearing-impaired children) and the control group (normal children): about 30.51% of hearing-impaired children were affected by caries compared to 15.81% of normal children, and the result was statistically significant (P < 0.05). Regarding the individual caries assessment criteria, nearly all subgroups showed a statistically significant difference, except for the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine subgroups. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children when studied in relation to dental caries status evaluated with CAST.
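
    The abstract mentions a Z-test; the sketch below shows a standard two-proportion Z-test of the kind that could compare the two prevalences. The sample sizes are hypothetical, chosen only to reproduce roughly the quoted percentages, and this is not the study's analysis code.

    ```python
    # Two-proportion Z-test with a pooled-proportion standard error.
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """Z statistic for H0: p1 == p2, where x/n are affected counts over group sizes."""
        p1, p2 = x1 / n1, x2 / n2
        p_pool = (x1 + x2) / (n1 + n2)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Hypothetical counts giving roughly 30.5% vs 15.8% prevalence.
    print(f"z = {two_proportion_z(61, 200, 32, 202):.2f}")
    ```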

  8. Improving flexible thinking in deaf and hard of hearing children with virtual reality technology.

    PubMed

    Passig, D; Eden, S

    2000-07-01

    The study investigated whether rotating three-dimensional (3-D) objects using virtual reality (VR) will affect flexible thinking in deaf and hard of hearing children. Deaf and hard of hearing subjects were distributed into experimental and control groups. The experimental group played virtual 3-D Tetris (a game using VR technology) individually, 15 minutes once weekly over 3 months. The control group played conventional two-dimensional (2-D) Tetris over the same period. Children with normal hearing participated as a second control group in order to establish whether deaf and hard of hearing children really are disadvantaged in flexible thinking. Before-and-after testing showed significantly improved flexible thinking in the experimental group; the deaf and hard of hearing control group showed no significant improvement. Also, before the experiment, the deaf and hard of hearing children scored lower in flexible thinking than the children with normal hearing. After the experiment, the difference between the experimental group and the control group of children with normal hearing was smaller.

  9. Affective Properties of Mothers' Speech to Infants with Hearing Impairment and Cochlear Implants

    ERIC Educational Resources Information Center

    Kondaurova, Maria V.; Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose: The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method:…

  10. Coordination of Gaze and Speech in Communication between Children with Hearing Impairment and Normal-Hearing Peers

    ERIC Educational Resources Information Center

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-01-01

    Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…

  11. Story retelling skills in Persian speaking hearing-impaired children.

    PubMed

    Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza

    2017-05-01

    Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. 15 normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using the story retelling test, which had a content validity of 89%, construct validity of 85%, and reliability of 83%. Three macro structure criteria, including topic maintenance, event sequencing, and explicitness, and four micro structure criteria, including referencing, conjunctive cohesion, syntax complexity, and utterance length, were assessed. The test was administered with live voice in a quiet room, and the children were then asked to retell the story. The children's retellings were recorded, transcribed, scored, and analyzed. On the macro structure criteria, the utterances of hearing-impaired students were less consistent, did not give listeners enough information for a full understanding of the subject, and expressed the story events in a rational order less frequently than those of the normal-hearing group (P < 0.0001). On the micro structure criteria of the test, unlike the normal-hearing students, who obtained high scores, hearing-impaired students failed to gain any scores on the items of this section. These results suggest that hearing-impaired children were not able to use language as effectively as their hearing peers and utilized quite different pragmatic functions. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    PubMed

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant of increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist audiologists in clinical NFC hearing aid fittings in achieving a balance between high frequency audibility and sound quality.

  13. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception

    PubMed Central

    Litovsky, Ruth Y.; Gordon, Karen

    2017-01-01

    Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. PMID:26828740

  14. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    ERIC Educational Resources Information Center

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  15. The effect of sensorineural hearing loss and tinnitus on speech recognition over air and bone conduction military communications headsets.

    PubMed

    Manning, Candice; Mermagen, Timothy; Scharine, Angelique

    2017-06-01

    Military personnel are at risk for hearing loss due to noise exposure during deployment (USACHPPM, 2008). Despite mandated use of hearing protection, hearing loss and tinnitus are prevalent due to reluctance to use hearing protection. Bone conduction headsets can offer good speech intelligibility for normal hearing (NH) listeners while allowing the ears to remain open in quiet environments and the use of hearing protection when needed. Those who suffer from tinnitus, the experience of perceiving a sound not produced by an external source, often show degraded speech recognition; however, it is unclear whether this is a result of decreased hearing sensitivity or increased distractibility (Moon et al., 2015). It has been suggested that the vibratory stimulation of a bone conduction headset might ameliorate the effects of tinnitus on speech perception; however, there is currently no research to support or refute this claim (Hoare et al., 2014). Speech recognition of words presented over air conduction and bone conduction headsets was measured for three groups of listeners: NH listeners, sensorineural hearing-impaired listeners, and/or tinnitus sufferers. Three speech-to-noise ratios (SNR = 0, -6, -12 dB) were created by embedding speech items in pink noise. Better speech recognition performance was observed with the bone conduction headset regardless of hearing profile, and speech intelligibility was a function of SNR. Discussion will include study limitations and the implications of these findings for those serving in the military. Published by Elsevier B.V.
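
    A short sketch of how speech can be embedded in noise at a target speech-to-noise ratio such as the 0, -6 and -12 dB conditions above. The RMS-based scaling, the placeholder signals, and the use of white rather than pink noise are assumptions for illustration, not the study's procedure.

    ```python
    # Scale a noise signal so that the speech-to-noise ratio of the mixture hits a target value.
    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Return speech + scaled noise such that the RMS-based SNR equals snr_db."""
        rms_speech = np.sqrt(np.mean(speech ** 2))
        rms_noise = np.sqrt(np.mean(noise ** 2))
        target_rms_noise = rms_speech / (10 ** (snr_db / 20.0))
        return speech + noise * (target_rms_noise / rms_noise)

    # Placeholder signals: a tone standing in for speech, white noise standing in for pink noise.
    rng = np.random.default_rng(0)
    speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    noise = rng.standard_normal(16000)
    mixed = mix_at_snr(speech, noise, snr_db=-6)
    ```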

  16. Extended high-frequency thresholds in college students: effects of music player use and other recreational noise.

    PubMed

    Le Prell, Colleen G; Spankovich, Christopher; Lobariñas, Edward; Griffiths, Scott K

    2013-09-01

    Human hearing is sensitive to sounds from as low as 20 Hz to as high as 20,000 Hz in normal ears. However, clinical tests of human hearing rarely include extended high-frequency (EHF) threshold assessments, at frequencies extending beyond 8000 Hz. EHF thresholds have been suggested for use in monitoring the earliest effects of noise on the inner ear, although the clinical usefulness of EHF threshold testing is not well established for this purpose. The primary objective of this study was to determine if EHF thresholds in healthy, young adult college students vary as a function of recreational noise exposure. A retrospective analysis of a laboratory database was conducted; all participants with both EHF threshold testing and noise history data were included. The potential for "preclinical" EHF deficits was assessed based on the measured thresholds, with the noise surveys used to estimate recreational noise exposure. EHF thresholds measured during participation in other ongoing studies were available from 87 participants (34 male and 53 female); all participants had hearing within normal clinical limits (≤25 dB HL) at conventional frequencies (0.25-8 kHz). EHF thresholds closely matched standard reference thresholds [ANSI S3.6 (1996) Annex C]. There were statistically reliable threshold differences in participants who used music players, with 3-6 dB worse thresholds at the highest test frequencies (10-16 kHz) in participants who reported long-term use of music player devices (>5 yr), or higher listening levels during music player use. It should be possible to detect small changes in high-frequency hearing for patients or participants who undergo repeated testing at periodic intervals. However, the increased population-level variability in thresholds at the highest frequencies will make it difficult to identify the presence of small but potentially important deficits in otherwise normal-hearing individuals who do not have previously established baseline data. American Academy of Audiology.

  17. Cochlear implant effectiveness in postlingual single-sided deaf individuals: what's the point?

    PubMed

    Finke, Mareike; Bönitz, Hanna; Lyxell, Björn; Illg, Angelika

    2017-06-01

    By extending the indication criteria for cochlear implants (CI), the population of CI candidates has increased in age, as well as in the range and type of hearing loss. This qualitative study identified factors that led single-sided deaf individuals to seek CI treatment and gained insights into how single-sided deafness (SSD) and hearing with a CI affect their lives. An open-ended questionnaire and a standardised inventory (IOI-HA) were used. Qualitative data reflecting the reasons for seeking CI treatment and the individual experiences after CI switch-on were collected. Participants were 19 postlingually deafened single-sided deaf CI users. Participants use their CI daily and stated that their life satisfaction has increased since CI activation. The analysis of the qualitative data revealed four core categories: sound localisation, tinnitus and noise sensitivity, fear of losing the second ear, and quality of life. Our results show how strongly and diversely quality of hearing and quality of life are affected by acquired SSD and improved after CI activation. Our data suggest that the fear of hearing loss (HL) in the normal hearing (NH) ear is an important but so far neglected reason to seek treatment with a CI in individuals with postlingual SSD.

  18. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  19. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    PubMed

    Zheng, Yi; Godar, Shelly P; Litovsky, Ruth Y

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  20. Initial Stop Voicing in Bilingual Children with Cochlear Implants and Their Typically Developing Peers with Normal Hearing

    ERIC Educational Resources Information Center

    Bunta, Ferenc; Goodin-Mayeda, C. Elizabeth; Procter, Amanda; Hernandez, Arturo

    2016-01-01

    Purpose: This study focuses on stop voicing differentiation in bilingual children with normal hearing (NH) and their bilingual peers with hearing loss who use cochlear implants (CIs). Method: Twenty-two bilingual children participated in our study (11 with NH, "M" age = 5;1 [years;months], and 11 with CIs, "M" hearing age =…

  1. False Belief Development in Children Who Are Hard of Hearing Compared with Peers with Normal Hearing

    ERIC Educational Resources Information Center

    Walker, Elizabeth A.; Ambrose, Sophie E.; Oleson, Jacob; Moeller, Mary Pat

    2017-01-01

    Purpose: This study investigates false belief (FB) understanding in children who are hard of hearing (CHH) compared with children with normal hearing (CNH) at ages 5 and 6 years and at 2nd grade. Research with this population has theoretical significance, given that the early auditory-linguistic experiences of CHH are less restricted compared with…

  2. Judgments of Emotion in Clear and Conversational Speech by Young Adults with Normal Hearing and Older Adults with Hearing Impairment

    ERIC Educational Resources Information Center

    Morgan, Shae D.; Ferguson, Sarah Hargus

    2017-01-01

    Purpose: In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method: The first experiment included 18 YNH listeners, and the second included 10 additional…

  3. How Hearing Loss and Age Affect Emotional Responses to Nonspeech Sounds

    ERIC Educational Resources Information Center

    Picou, Erin M.

    2016-01-01

    Purpose: The purpose of this study was to evaluate the effects of hearing loss and age on subjective ratings of emotional valence and arousal in response to nonspeech sounds. Method: Three groups of adults participated: 20 younger listeners with normal hearing (M = 24.8 years), 20 older listeners with normal hearing (M = 55.8 years), and 20 older…

  4. Auditory and tactile gap discrimination by observers with normal and impaired hearing.

    PubMed

    Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J

    2014-02-01

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.

  5. Cochlear Implants Special Issue Article: Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users

    PubMed Central

    Luo, Xin; Fu, Qian-Jie; Galvin, John J.

    2007-01-01

    The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas listeners with cochlear implants recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in listeners with cochlear implants as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution. PMID:18003871

  6. Spinster Homolog 2 (Spns2) Deficiency Causes Early Onset Progressive Hearing Loss

    PubMed Central

    Chen, Jing; Ingham, Neil; Kelly, John; Jadeja, Shalini; Goulding, David; Pass, Johanna; Mahajan, Vinit B.; Tsang, Stephen H.; Nijnik, Anastasia; Jackson, Ian J.; White, Jacqueline K.; Forge, Andrew; Jagger, Daniel; Steel, Karen P.

    2014-01-01

    Spinster homolog 2 (Spns2) acts as a Sphingosine-1-phosphate (S1P) transporter in zebrafish and mice, regulating heart development and lymphocyte trafficking respectively. S1P is a biologically active lysophospholipid with multiple roles in signalling. The mechanism of action of Spns2 is still elusive in mammals. Here, we report that Spns2-deficient mice rapidly lost auditory sensitivity and endocochlear potential (EP) from 2 to 3 weeks old. We found progressive degeneration of sensory hair cells in the organ of Corti, but the earliest defect was a decline in the EP, suggesting that dysfunction of the lateral wall was the primary lesion. In the lateral wall of adult mutants, we observed structural changes of marginal cell boundaries and of strial capillaries, and reduced expression of several key proteins involved in the generation of the EP (Kcnj10, Kcnq1, Gjb2 and Gjb6), but these changes were likely to be secondary. Permeability of the boundaries of the stria vascularis and of the strial capillaries appeared normal. We also found focal retinal degeneration and anomalies of retinal capillaries together with anterior eye defects in Spns2 mutant mice. Targeted inactivation of Spns2 in red blood cells, platelets, or lymphatic or vascular endothelial cells did not affect hearing, but targeted ablation of Spns2 in the cochlea using a Sox10-Cre allele produced a similar auditory phenotype to the original mutation, suggesting that local Spns2 expression is critical for hearing in mammals. These findings indicate that Spns2 is required for normal maintenance of the EP and hence for normal auditory function, and support a role for S1P signalling in hearing. PMID:25356849

  7. Risk Factors for Hearing Decrement Among U.S. Air Force Aviation-Related Personnel.

    PubMed

    Greenwell, Brandon M; Tvaryanas, Anthony P; Maupin, Genny M

    2018-02-01

    The purpose of this study was to analyze historical hearing sensitivity data to determine factors associated with an occupationally significant change in hearing sensitivity in U.S. Air Force aviation-related personnel. This study was a longitudinal, retrospective cohort analysis of audiogram records for Air Force aviation-related personnel on active duty during calendar year 2013 without a diagnosis of non-noise-related hearing loss. The outcomes of interest were raw change in hearing sensitivity from initial baseline to 2013 audiogram and initial occurrence of a significant threshold shift (STS) and non-H1 audiogram profile. Potential predictor variables included age and elapsed time in cohort for each audiogram, gender, and Air Force Specialty Code. Random forest analyses conducted on a learning sample were used to identify relevant predictor variables. Mixed effects models were fitted to a separate validation sample to make statistical inferences. The final dataset included 167,253 nonbaseline audiograms on 10,567 participants. Only the interaction between time since baseline audiogram and age was significantly associated with raw change in hearing sensitivity by the STS metric. None of the potential predictors were associated with the likelihood for an STS. Time since baseline audiogram, age, and their interaction were significantly associated with the likelihood for a non-H1 hearing profile. In this study population, age and elapsed time since baseline audiogram were modestly associated with decreased hearing sensitivity and increased likelihood for a non-H1 hearing profile. Aircraft type, as determined from Air Force Specialty Code, was not associated with changes in hearing sensitivity by the STS metric. Greenwell BM, Tvaryanas AP, Maupin GM. Risk factors for hearing decrement among U.S. Air Force aviation-related personnel. Aerosp Med Hum Perform. 2018; 89(2):80-86.

  8. Visual Field Abnormalities among Adolescent Boys with Hearing Impairments

    PubMed Central

    KHORRAMI-NEJAD, Masoud; HERAVIAN, Javad; SEDAGHAT, Mohamad-Reza; MOMENI-MOGHADAM, Hamed; SOBHANI-RAD, Davood; ASKARIZADEH, Farshad

    2016-01-01

    The aim of this study was to compare the visual field (VF) categorizations (based on the severity of VF defects) between adolescent boys with hearing impairments and those with normal hearing. This cross-sectional study involved the evaluation of the VF of 64 adolescent boys with hearing impairments and 68 age-matched boys with normal hearing at high schools in Tehran, Iran, in 2013. All subjects had an intelligence quotient (IQ) > 70. The hearing impairments were classified based on severity and time of onset. Participants underwent a complete eye examination, and the VFs were investigated using automated perimetry with a Humphrey Visual Field Analyzer. This device was used to determine their foveal threshold (FT), mean deviation (MD), and Glaucoma Hemifield Test (GHT) results. Half (50%) of the boys with hearing impairments had profound hearing impairments. There was no significant between-group difference in age (P = 0.49) or IQ (P = 0.13). There was no between-group difference in the corrected distance visual acuity (P = 0.183). According to the FT, MD, and GHT results, the percentage of boys with abnormal VFs in the hearing impairment group was significantly greater than that in the normal hearing group: 40.6% vs. 22.1%, 59.4% vs. 19.1%, and 31.2% vs. 8.8%, respectively (P < 0.0001). The mean MD was significantly worse in the hearing impairment group than in the normal hearing group (-4.61 ± 6.52 vs. -0.79 ± 2.04 dB, respectively; P < 0.0001), as was the mean FT (35.30 ± 1.43 vs. 38.97 ± 1.66 dB, respectively; P < 0.0001). Moreover, there was a significant between-group difference in the GHT results (P < 0.0001). Thus, there were higher percentages of boys with VF abnormalities and worse MD, FT, and GHT results among those with hearing impairments compared to those with normal hearing. These findings emphasize the need for detailed VF assessments for patients with hearing impairments. PMID:28293650

  9. Socio-demographic determinants of hearing impairment studied in 103,835 term babies.

    PubMed

    Van Kerschaver, Erwin; Boudewyns, An N; Declau, Frank; Van de Heyning, Paul H; Wuyts, Floris L

    2013-02-01

    Serious hearing problems appear in approximately one in 1000 newborns. In 2000, the Joint Committee on Infant Hearing defined a list of risk factors for neonatal hearing impairment relating to health, physical characteristics and family history. The aim of this study was to determine which personal, environmental and social factors are associated with the prevalence of congenital hearing impairment (CHI). The entire population of 103,835 term newborns in Flanders, Belgium, was tested by a universal neonatal hearing screening (UNHS) programme using automated auditory brainstem responses (AABR). In the case of a positive result, a CHI diagnosis was verified in specialized referral centres. Socio-demographic risk factors were investigated across the entire population to study any relationship with CHI. The prevalence of bilateral CHI of 35 dB nHL (normal hearing level) or more was 0.87/1000 newborns. The sensitivity and specificity of the screening test were 94.02% and 99.96%, respectively. The socio-demographic factors of gender, birth order, birth length, feeding type, level of education and origin of the mother were found to be independent predictors of CHI. The socio-demographic factors found to be associated with CHI extend the list of classic risk factors as defined by the American Academy of Pediatrics (AAP). Assessment of these additional factors may alert the treating physician to the increased risk of newborn hearing impairment and underscore the need for accurate follow-up. Moreover, this extended assessment may improve decision making in medical practice and screening policy.
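
    As a worked illustration (our calculation, not part of the study), the reported sensitivity, specificity and prevalence imply the following positive predictive value for a positive screen:

    ```python
    # Bayes-style PPV calculation from sensitivity, specificity and prevalence.
    def ppv(sensitivity, specificity, prevalence):
        """Probability of true hearing impairment given a positive screening result."""
        true_positive_rate = sensitivity * prevalence
        false_positive_rate = (1 - specificity) * (1 - prevalence)
        return true_positive_rate / (true_positive_rate + false_positive_rate)

    # Figures quoted in the abstract: sens 94.02%, spec 99.96%, prevalence 0.87/1000.
    print(ppv(0.9402, 0.9996, 0.87 / 1000))  # approx. 0.67
    ```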

  10. Audiological follow-up of 24 patients affected by Williams syndrome.

    PubMed

    Barozzi, Stefania; Soi, Daniela; Spreafico, Emanuela; Borghi, Anna; Comiotto, Elisabetta; Gagliardi, Chiara; Selicorni, Angelo; Forti, Stella; Cesarani, Antonio; Brambilla, Daniele

    2013-09-01

    Williams syndrome is a neurodevelopmental disorder associated with cardiovascular problems, facial abnormalities and several behavioural and neurological disabilities. It is also characterized by some typical audiological features including abnormal sensitivity to sounds, cochlear impairment related to the outer hair cells of the basal turn of the cochlea, and sensorineural or mixed hearing loss, predominantly in the high frequency range. The aim of this report is to describe a follow-up study of auditory function in a cohort of children affected by this syndrome. 24 patients, aged 5-14 years, were tested by means of air/bone conduction pure-tone audiometry, immittance test and transient evoked otoacoustic emissions. They were evaluated again 5 years after the first assessment, and 10 of them underwent a second follow-up examination after a further 5 years. The audiometric results showed hearing loss, defined by a pure tone average >15 dB HL, in 12.5% of the participants. The incidence of hearing loss did not change over the 5-year period and increased to 30% in the patients who underwent the 10-year follow-up. Progressive sensorineural hearing loss was detected in 20% of the patients. A remarkable finding of our study regarded sensorineural hearing impairment in the high frequency range, which increased significantly from 25% to 50% of the participants over the 5-year period. The increase became even more significant in the group of patients who underwent the 10-year follow-up, by which time the majority of them (80%) had developed sensorineural hearing loss. Otoacoustic emissions were found to be absent in a high percentage of patients, thus confirming the cochlear fragility of individuals with Williams syndrome. Our study verified that most of the young Williams syndrome patients had normal hearing sensitivity within the low-middle frequency range, but showed a weakness regarding the high frequencies, the threshold of which worsened significantly over time in most patients. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  11. Talker Differences in Clear and Conversational Speech: Perceived Sentence Clarity for Young Adults with Normal Hearing and Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus; Morgan, Shae D.

    2018-01-01

    Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…

  12. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  13. Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry

    PubMed Central

    Grysiński, Tomasz; Kręcicki, Tomasz

    2018-01-01

    Background: Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer, together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. Objective: This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Methods: Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients, who own Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth, and it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. Results: A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11) participated in the trial. The hearing threshold obtained on mobile devices was significantly different from the one determined by pure-tone audiometry with a mean difference of 2.6 dB (95% CI 2.0-3.1) and SD of 8.3 dB (95% CI 7.9-8.7). The number of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was obtained at 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for a mobile-based screening method were calculated at 98% (95% CI 93-100.0) and 79% (95% CI 71-87), respectively. Conclusions: The method of hearing self-test carried out on mobile devices with bundled headphones demonstrates high compatibility with pure-tone audiometry, which confirms its potential application in hearing monitoring, screening tests, or epidemiological examinations on a large scale. PMID:29321124

  14. Hearing Tests Based on Biologically Calibrated Mobile Devices: Comparison With Pure-Tone Audiometry.

    PubMed

    Masalski, Marcin; Grysiński, Tomasz; Kręcicki, Tomasz

    2018-01-10

    Hearing screening tests based on pure-tone audiometry may be conducted on mobile devices, provided that the devices are specially calibrated for the purpose. Calibration consists of determining the reference sound level and can be performed in relation to the hearing threshold of normal-hearing persons. In the case of devices provided by the manufacturer, together with bundled headphones, the reference sound level can be calculated once for all devices of the same model. This study aimed to compare the hearing threshold measured by a mobile device that was calibrated using a model-specific, biologically determined reference sound level with the hearing threshold obtained in pure-tone audiometry. Trial participants were recruited offline using face-to-face prompting from among Otolaryngology Clinic patients, who own Android-based mobile devices with bundled headphones. The hearing threshold was obtained on a mobile device by means of an open access app, Hearing Test, with incorporated model-specific reference sound levels. These reference sound levels were previously determined in uncontrolled conditions in relation to the hearing threshold of normal-hearing persons. An audiologist-assisted self-measurement was conducted by the participants in a sound booth, and it involved determining the lowest audible sound generated by the device within the frequency range of 250 Hz to 8 kHz. The results were compared with pure-tone audiometry. A total of 70 subjects, 34 men and 36 women, aged 18-71 years (mean 36, standard deviation [SD] 11) participated in the trial. The hearing threshold obtained on mobile devices was significantly different from the one determined by pure-tone audiometry with a mean difference of 2.6 dB (95% CI 2.0-3.1) and SD of 8.3 dB (95% CI 7.9-8.7). The number of differences not greater than 10 dB reached 89% (95% CI 88-91), whereas the mean absolute difference was obtained at 6.5 dB (95% CI 6.2-6.9). Sensitivity and specificity for a mobile-based screening method were calculated at 98% (95% CI 93-100.0) and 79% (95% CI 71-87), respectively. The method of hearing self-test carried out on mobile devices with bundled headphones demonstrates high compatibility with pure-tone audiometry, which confirms its potential application in hearing monitoring, screening tests, or epidemiological examinations on a large scale. ©Marcin Masalski, Tomasz Grysiński, Tomasz Kręcicki. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 10.01.2018.
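
    A minimal sketch of the agreement metrics reported in the two records above (mean difference, SD of differences, mean absolute difference, and the fraction of differences within 10 dB), computed here on made-up paired thresholds rather than the study's data.

    ```python
    # Agreement between mobile-based and booth-based thresholds, Bland-Altman-style summary.
    import numpy as np

    def agreement(mobile_db, audiometry_db, tolerance_db=10.0):
        """Summarize paired threshold differences (mobile minus pure-tone audiometry)."""
        d = np.asarray(mobile_db, float) - np.asarray(audiometry_db, float)
        return {
            "mean_difference_db": d.mean(),
            "sd_of_differences_db": d.std(ddof=1),
            "mean_absolute_difference_db": np.abs(d).mean(),
            "fraction_within_tolerance": np.mean(np.abs(d) <= tolerance_db),
        }

    # Illustrative paired thresholds only.
    print(agreement([15, 20, 35, 10, 25], [10, 25, 30, 10, 20]))
    ```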

  15. Underwater hearing sensitivity of a male and a female Steller sea lion (Eumetopias jubatus).

    PubMed

    Kastelein, Ronald A; van Schie, Robbert; Verboom, Wim C; de Haan, Dick

    2005-09-01

    The unmasked underwater hearing sensitivities of an 8-year-old male and a 7-year-old female Steller sea lion were measured in a pool, by using behavioral psychophysics. The animals were trained with positive reinforcement to respond when they detected an acoustic signal and not to respond when they did not. The signals were narrow-band, frequency-modulated stimuli with a duration of 600 ms and center frequencies ranging from 0.5 to 32 kHz for the male and from 4 to 32 kHz for the female. Detection thresholds at each frequency were measured by varying signal amplitude according to the up-down staircase method. The resulting underwater audiogram (50% detection thresholds) for the male Steller sea lion showed the typical mammalian U-shape. His maximum sensitivity (77 dB re: 1 microPa, rms) occurred at 1 kHz. The range of best hearing (10 dB from the maximum sensitivity) was from 1 to 16 kHz (4 octaves). Higher hearing thresholds (indicating poorer sensitivity) were observed below 1 kHz and above 16 kHz. The maximum sensitivity of the female (73 dB re: 1 microPa, rms) occurred at 25 kHz. Higher hearing thresholds (indicating poorer sensitivity) were observed for signals below 16 kHz and above 25 kHz. At frequencies for which both subjects were tested, hearing thresholds of the male were significantly higher than those of the female. The hearing sensitivity differences between the male and female Steller sea lion in this study may be due to individual differences in sensitivity between the subjects or due to sexual dimorphism in hearing.
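
    The up-down staircase method mentioned above raises the level after a miss and lowers it after a detection, and the 50% threshold is estimated from the reversal points. A minimal Python sketch with an assumed 4-dB step and a crude simulated listener (all parameter values are illustrative, not those of the study):

      import math, random

      def detected(level_db, true_threshold=77.0, slope=0.5):
          """Crude logistic listener: detection probability grows with level."""
          p = 1.0 / (1.0 + math.exp(-slope * (level_db - true_threshold)))
          return random.random() < p

      def staircase(start_db=100.0, step_db=4.0, n_reversals=10):
          level, going_down, reversals = start_db, True, []
          while len(reversals) < n_reversals:
              go_down = detected(level)            # 1-down/1-up rule converges on 50%
              if go_down != going_down:            # direction change marks a reversal
                  reversals.append(level)
                  going_down = go_down
              level += -step_db if go_down else step_db
          return sum(reversals[-6:]) / 6.0         # mean of the last six reversal levels

      random.seed(1)
      print("estimated 50% threshold:", round(staircase(), 1), "dB re 1 uPa")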

  16. Air and Bone Conduction Thresholds of Deaf and Normal Hearing Subjects before and during the Elimination of Cutaneous-Tactile Interference with Anesthesia. Final Report.

    ERIC Educational Resources Information Center

    Nober, E. Harris

    The study investigated whether low frequency air and bone thresholds elicited at high intensity levels from deaf children with a sensory-neural diagnosis reflect valid auditory sensitivity or are mediated through cutaneous-tactile receptors. Subjects were five totally deaf (mean age 17.0) yielding vibrotactile thresholds but with no air and bone…

  17. Using individual differences to test the role of temporal and place cues in coding frequency modulation

    PubMed Central

    Whiteford, Kelly L.; Oxenham, Andrew J.

    2015-01-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783

  18. Using individual differences to test the role of temporal and place cues in coding frequency modulation.

    PubMed

    Whiteford, Kelly L; Oxenham, Andrew J

    2015-11-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.

  19. Binaural masking release in children with Down syndrome.

    PubMed

    Porter, Heather L; Grantham, D Wesley; Ashmead, Daniel H; Tharpe, Anne Marie

    2014-01-01

    Binaural hearing results in a number of listening advantages relative to monaural hearing, including enhanced hearing sensitivity and better speech understanding in adverse listening conditions. These advantages are facilitated in part by the ability to detect and use interaural cues within the central auditory system. Binaural hearing for children with Down syndrome could be impacted by multiple factors including, structural anomalies within the peripheral and central auditory system, alterations in synaptic communication, and chronic otitis media with effusion. However, binaural hearing capabilities have not been investigated in these children. This study tested the hypothesis that children with Down syndrome experience less binaural benefit than typically developing peers. Participants included children with Down syndrome aged 6 to 16 years (n = 11), typically developing children aged 3 to 12 years (n = 46), adults with Down syndrome (n = 3), and adults with no known neurological delays (n = 6). Inclusionary criteria included normal to near-normal hearing sensitivity. Two tasks were used to assess binaural ability. Masking level difference (MLD) was calculated by comparing threshold for a 500-Hz pure-tone signal in 300-Hz wide Gaussian noise for N0S0 and N0Sπ signal configurations. Binaural intelligibility level difference was calculated using simulated free-field conditions. Speech recognition threshold was measured for closed-set spondees presented from 0-degree azimuth in speech-shaped noise presented from 0-, 45- and 90-degree azimuth, respectively. The developmental ability of children with Down syndrome was estimated and information regarding history of otitis media was obtained for all child participants via parent survey. Individuals with Down syndrome had higher masked thresholds for pure-tone and speech stimuli than typically developing individuals. Children with Down syndrome had significantly smaller MLDs than typically developing children. Adults with Down syndrome and control adults had similar MLDs. Similarities in simulated spatial release from masking were observed for all groups for the experimental parameters used in this study. No association was observed for any measure of binaural ability and developmental age for children with Down syndrome. Similar group psychometric functions were observed for children with Down syndrome and typically developing children in most instances, suggesting that attentiveness and motivation contributed equally to performance for both groups on most tasks. The binaural advantages afforded to typically developing children, such as enhanced hearing sensitivity in noise, were not as robust for children with Down syndrome in this study. Children with Down syndrome experienced less binaural benefit than typically developing peers for some stimuli, suggesting that they could require more favorable signal-to-noise ratios to achieve optimal performance in some adverse listening conditions. The reduced release from masking observed for children with Down syndrome could represent a delay in ability rather than a deficit that persists into adulthood. This could have implications for the planning of interventions for individuals with Down syndrome.
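
    The masking level difference above is just the threshold improvement gained by inverting the signal phase at one ear relative to the diotic reference; a hypothetical worked example in Python (values are illustrative, not the study's data):

      # Hypothetical masked thresholds (dB) for a 500-Hz tone in 300-Hz-wide noise.
      threshold_n0s0 = 62.0    # noise and signal identical at both ears
      threshold_n0spi = 51.0   # signal phase-inverted at one ear
      mld = threshold_n0s0 - threshold_n0spi
      print(f"MLD = {mld:.1f} dB")   # a larger MLD indicates greater binaural release from masking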

  20. Normal-Hearing Listeners’ and Cochlear Implant Users’ Perception of Pitch Cues in Emotional Speech

    PubMed Central

    Fuller, Christina; Gilbers, Dicky; Broersma, Mirjam; Goudbeek, Martijn; Free, Rolien; Başkent, Deniz

    2015-01-01

    In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study’s aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings’ pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners’ and 20 CI users’ emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor’s emotions worse than the other actors’. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker’s deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues. PMID:27648210

  1. Large-scale Phenotyping of Noise-Induced Hearing Loss in 100 Strains of Mice

    PubMed Central

    Myint, Anthony; White, Cory H.; Ohmen, Jeffrey D.; Li, Xin; Wang, Juemei; Lavinsky, Joel; Salehi, Pezhman; Crow, Amanda L.; Ohyama, Takahiro; Friedman, Rick A.

    2015-01-01

    A cornerstone technique in the study of hearing is the Auditory Brainstem Response (ABR), an electrophysiologic technique that can be used as a quantitative measure of hearing function. Previous studies have published databases of baseline ABR thresholds for mouse strains, providing a valuable resource for the study of baseline hearing function and genetic mapping of hearing traits in mice. In this study, we further expand upon the existing literature by characterizing the baseline ABR characteristics of 100 inbred mouse strains, 47 of which are newly characterized for hearing function. We identify several distinct patterns of baseline hearing deficits and provide potential avenues for further investigation. Additionally, we characterize the sensitivity of the same 100 strains to noise exposure using permanent threshold shifts, identifying several distinct patterns of noise sensitivity. The resulting data provide a new resource for studying hearing loss and noise sensitivity in mice. PMID:26706709
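
    Noise sensitivity in such a screen is commonly summarized as the permanent threshold shift, i.e., the post-exposure ABR threshold minus the baseline threshold at each test frequency. A minimal sketch with hypothetical values:

      import numpy as np

      freqs_khz = np.array([4, 8, 16, 32])
      baseline_db = np.array([25, 20, 15, 40])     # hypothetical pre-exposure ABR thresholds
      post_noise_db = np.array([50, 60, 65, 75])   # hypothetical post-exposure thresholds

      pts = post_noise_db - baseline_db            # permanent threshold shift per frequency
      for f, shift in zip(freqs_khz, pts):
          print(f"{f} kHz: PTS = {shift} dB")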

  2. Hearing in Noise Test Brazil: standardization for young adults with normal hearing.

    PubMed

    Sbompato, Andressa Forlevise; Corteletti, Lilian Cassia Bornia Jacob; Moret, Adriane de Lima Mortari; Jacob, Regina Tangerino de Souza

    2015-01-01

    Individuals with the same speech recognition ability in quiet can have extremely different results in noisy environments. The aim was to standardize speech perception in adults with normal hearing in the free field using the Brazilian Hearing in Noise Test. Contemporary, cross-sectional cohort study. 79 adults with normal hearing and without cognitive impairment participated in the study. Lists of Hearing in Noise Test sentences were presented in random order in quiet and in the noise-front, noise-right, and noise-left conditions. There were no significant differences between right and left ears at any of the frequencies tested (paired t test), nor were significant differences observed for gender or for the interaction between these conditions. A difference was observed among the free-field positions tested, except between the noise-right and noise-left conditions. Speech perception results for adults with normal hearing in the free field during different listening situations in noise indicated poorer performance in the condition with noise and speech both in front, i.e., 0°/0°. The values found in this free-field standardization of the Hearing in Noise Test can be used as a reference in the development of protocols for tests of speech perception in noise and for monitoring individuals with hearing impairment. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
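
    The ear, gender, and loudspeaker-position comparisons above rest on paired t tests over the measured reception thresholds for sentences (RTS). A minimal sketch with hypothetical RTS values (scipy's ttest_rel is used purely for illustration):

      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(0)
      rts_noise_front = rng.normal(-2.0, 1.0, 79)                   # hypothetical RTS (dB SNR), 0°/0°
      rts_noise_right = rts_noise_front - rng.normal(3.0, 1.0, 79)  # hypothetical spatial release

      t_stat, p_value = ttest_rel(rts_noise_front, rts_noise_right)
      print(f"t = {t_stat:.2f}, p = {p_value:.4f}")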

  3. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    ERIC Educational Resources Information Center

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  4. Effects of fundamental frequency and vocal-tract length cues on sentence segregation by listeners with hearing loss

    PubMed Central

    Mackersie, Carol L.; Dewey, James; Guthrie, Lesli A.

    2011-01-01

    The purpose was to determine the effect of hearing loss on the ability to separate competing talkers using talker differences in fundamental frequency (F0) and apparent vocal-tract length (VTL). Performance of 13 adults with hearing loss and 6 adults with normal hearing was measured using the Coordinate Response Measure. For listeners with hearing loss, the speech was amplified and filtered according to the NAL-RP hearing aid prescription. Target-to-competition ratios varied from 0 to 9 dB. The target sentence was randomly assigned to the higher or lower values of F0 or VTL on each trial. Performance improved for F0 differences up to 9 and 6 semitones for people with normal hearing and hearing loss, respectively, but only when the target talker had the higher F0. Recognition for the lower F0 target improved when trial-to-trial uncertainty was removed (9-semitone condition). Scores improved with increasing differences in VTL for the normal-hearing group. On average, hearing-impaired listeners did not benefit from VTL cues, but substantial inter-subject variability was observed. The amount of benefit from VTL cues was related to the average hearing loss in the 1–3-kHz region when the target talker had the shorter VTL. PMID:21877813

  5. Baseline hearing abilities and variability in wild beluga whales (Delphinapterus leucas).

    PubMed

    Castellote, Manuel; Mooney, T Aran; Quakenbush, Lori; Hobbs, Roderick; Goertz, Caroline; Gaglione, Eric

    2014-05-15

    While hearing is the primary sensory modality for odontocetes, there are few data addressing variation within a natural population. This work describes the hearing ranges (4-150 kHz) and sensitivities of seven apparently healthy, wild beluga whales (Delphinapterus leucas) during a population health assessment project that captured and released belugas in Bristol Bay, Alaska. The baseline hearing abilities and subsequent variations were addressed. Hearing was measured using auditory evoked potentials (AEPs). All audiograms showed a typical cetacean U-shape; substantial variation (>30 dB) was found between most and least sensitive thresholds. All animals heard well, up to at least 128 kHz. Two heard up to 150 kHz. Lowest auditory thresholds (35-45 dB) were identified in the range 45-80 kHz. Greatest differences in hearing abilities occurred at both the high end of the auditory range and at frequencies of maximum sensitivity. In general, wild beluga hearing was quite sensitive. Hearing abilities were similar to those of belugas measured in zoological settings, reinforcing the comparative importance of both settings. The relative degree of variability across the wild belugas suggests that audiograms from multiple individuals are needed to properly describe the maximum sensitivity and population variance for odontocetes. Hearing measures were easily incorporated into field-based settings. This detailed examination of hearing abilities in wild Bristol Bay belugas provides a basis for a better understanding of the potential impact of anthropogenic noise on a noise-sensitive species. Such information may help design noise-limiting mitigation measures that could be applied to areas heavily influenced and inhabited by endangered belugas. © 2014. Published by The Company of Biologists Ltd.

  6. Inquiring Ears Want to Know: A Fact Sheet about Your Hearing Test

    MedlinePlus

    ... track changes in hearing over time • Your hearing threshold levels (the quietest sounds you can hear) are ... Do I have normal hearing? Compare your hearing threshold levels to this scale: -10 – 25 dB 26 – ...

  7. Gender Identification Using High-Frequency Speech Energy: Effects of Increasing the Low-Frequency Limit.

    PubMed

    Donai, Jeremy J; Halbritter, Rachel M

    The purpose of this study was to investigate the ability of normal-hearing listeners to use high-frequency energy for gender identification from naturally produced speech signals. Two experiments were conducted using a repeated-measures design. Experiment 1 investigated the effects of increasing high-pass filter cutoff (i.e., increasing the low-frequency spectral limit) on gender identification from naturally produced vowel segments. Experiment 2 studied the effects of increasing high-pass filter cutoff on gender identification from naturally produced sentences. Confidence ratings for the gender identification task were also obtained for both experiments. Listeners in experiment 1 were capable of extracting talker gender information at levels significantly above chance from vowel segments high-pass filtered up to 8.5 kHz. Listeners in experiment 2 also performed above chance on the gender identification task from sentences high-pass filtered up to 12 kHz. Cumulatively, the results of both experiments provide evidence that normal-hearing listeners can utilize information from the very high-frequency region (above 4 to 5 kHz) of the speech signal for talker gender identification. These findings are at variance with current assumptions regarding the perceptual information regarding talker gender within this frequency region. The current results also corroborate and extend previous studies of the use of high-frequency speech energy for perceptual tasks. These findings have potential implications for the study of information contained within the high-frequency region of the speech spectrum and the role this region may play in navigating the auditory scene, particularly when the low-frequency portion of the spectrum is masked by environmental noise sources or for listeners with substantial hearing loss in the low-frequency region and better hearing sensitivity in the high-frequency region (i.e., reverse slope hearing loss).
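
    The increasing low-frequency limit described above amounts to high-pass filtering the recordings at progressively higher cutoffs. A minimal sketch using a Butterworth filter (the filter order, cutoffs, and the noise stand-in for speech are assumptions, not the study's exact settings):

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs = 44100                                   # sampling rate in Hz, assumed
      t = np.arange(0, 1.0, 1 / fs)
      speech = np.random.randn(t.size)             # stand-in for a recorded vowel or sentence

      def highpass(signal, cutoff_hz, order=8):
          """Keep only energy above cutoff_hz."""
          sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
          return sosfiltfilt(sos, signal)

      for cutoff in (4000, 8500, 12000):           # increasing low-frequency limits (Hz)
          filtered = highpass(speech, cutoff)
          print(cutoff, "Hz cutoff -> RMS:", round(float(np.sqrt(np.mean(filtered ** 2))), 3))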

  8. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    PubMed

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    To analyze the 50% correct recognition data that were from the Wilson et al (this issue) study and that were obtained from 24 listeners with normal hearing; also to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). The descriptive, correlational study will examine the influence of acoustic, phonetic, and lexical variables on speech recognition in noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word-recognition-in-noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
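
    The variance-accounted-for comparison above is ordinary least-squares regression of the 50% points on each predictor set. A minimal sketch with hypothetical predictor matrices (the study's actual variables are the acoustic, phonetic, and lexical measures listed in the record):

      import numpy as np

      rng = np.random.default_rng(1)
      n_words = 70
      y = rng.normal(0.0, 4.0, n_words)                    # hypothetical 50%-correct points (dB S/N)
      X_acoustic_phonetic = rng.normal(size=(n_words, 4))  # e.g., RMS level, duration, manner, voicing
      X_lexical = rng.normal(size=(n_words, 3))            # e.g., word frequency, familiarity, density

      def r_squared(X, y):
          """Proportion of variance in y explained by a linear model on X."""
          X1 = np.column_stack([np.ones(len(y)), X])       # add an intercept column
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1.0 - resid.var() / y.var()

      print("acoustic + phonetic R^2:", round(r_squared(X_acoustic_phonetic, y), 2))
      print("lexical R^2:", round(r_squared(X_lexical, y), 2))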

  9. Self-Monitoring of Listening Abilities in Normal-Hearing Children, Normal-Hearing Adults, and Children with Cochlear Implants

    PubMed Central

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Background Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one’s speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech spectrum noise were compared to their objective performance on the same task. Study Sample Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners’ threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task. Results Nearly half of the children with normal hearing significantly overestimated their listening in noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (age 7–12 yr) tended to overestimate their listening ability more than the adolescents and adults. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (as compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence. Conclusions Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings. PMID:22436118
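
    The Objective-Subjective discrepancy above comes from fitting a psychometric function to each condition and reading off the target/masker ratio at 51%. A minimal sketch assuming a logistic shape and made-up trial proportions:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(tmr_db, midpoint, slope):
          """Proportion correct (or perceived intelligible) versus target/masker ratio."""
          return 1.0 / (1.0 + np.exp(-slope * (tmr_db - midpoint)))

      tmr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])            # target/masker ratios tested (dB)
      p_objective = np.array([0.05, 0.20, 0.55, 0.85, 0.95])    # hypothetical proportion correct
      p_subjective = np.array([0.20, 0.45, 0.80, 0.95, 0.99])   # hypothetical proportion judged intelligible

      def threshold_at(p_target, proportions):
          (midpoint, slope), _ = curve_fit(logistic, tmr, proportions, p0=[-6.0, 1.0])
          return midpoint - np.log(1.0 / p_target - 1.0) / slope   # invert the fitted logistic

      discrepancy = threshold_at(0.51, p_objective) - threshold_at(0.51, p_subjective)
      print(f"objective - subjective threshold = {discrepancy:.1f} dB")  # positive = overestimated ability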

  10. Sudden onset unilateral sensorineural hearing loss after rabies vaccination.

    PubMed

    Okhovat, Saleh; Fox, Richard; Magill, Jennifer; Narula, Antony

    2015-12-15

    A 33-year-old man developed profound sudden-onset right-sided hearing loss with tinnitus and vertigo within 24 h of pretravel rabies vaccination. There was no history of upper respiratory tract infection, systemic illness, ototoxic medication, or trauma, and otoscopic examination was normal. Pure tone audiograms (PTA) demonstrated right-sided sensorineural hearing loss (thresholds 90-100 dB) and normal left-sided hearing. MRI of the internal acoustic meatus, viral serology (hepatitis B, C, HIV, and cytomegalovirus), and syphilis screen were normal. Positive Epstein-Barr virus IgG, viral capsid IgG, and anticochlear antibodies (anti-HSP-70) were noted. Initial treatment involved a course of high-dose oral prednisolone and acyclovir. Repeat PTAs after 12 days of treatment showed a small improvement in hearing thresholds. Salvage intratympanic steroid injections were attempted but failed to improve hearing further. Sudden onset sensorineural hearing loss (SSNHL) is an uncommon but frightening experience for patients. This is the first report of SSNHL following rabies immunisation in an adult. 2015 BMJ Publishing Group Ltd.

  11. Investigations in mechanisms and strategies to enhance hearing with cochlear implants

    NASA Astrophysics Data System (ADS)

    Churchill, Tyler H.

    Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.
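
    The vocoder simulations above follow the usual analyze-envelope-and-resynthesize recipe; the study used harmonic complexes with controlled component phases and simulated current spread, whereas the minimal sketch below uses plain tone carriers simply to show the envelope-extraction step (band edges and channel count are assumptions):

      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 16000
      t = np.arange(0, 1.0, 1 / fs)
      speech = np.random.randn(t.size)                 # stand-in for a speech waveform

      edges = np.geomspace(100.0, 7000.0, 9)           # 8 analysis bands (Hz), assumed
      vocoded = np.zeros_like(speech)
      for lo, hi in zip(edges[:-1], edges[1:]):
          sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
          band = sosfiltfilt(sos, speech)
          envelope = np.abs(hilbert(band))             # temporal envelope of this band
          carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)   # tone at the band's centre frequency
          vocoded += envelope * carrier
      print("vocoded samples:", vocoded.size)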

  12. Audiometric Predictions Using SFOAE and Middle-Ear Measurements

    PubMed Central

    Ellison, John C.; Keefe, Douglas H.

    2006-01-01

    Objective The goals of the study are to determine how well stimulus-frequency otoacoustic emissions (SFOAEs) identify hearing loss, classify hearing loss as mild or moderate-severe, and correlate with pure-tone thresholds in a population of adults with normal middle-ear function. Other goals are to determine if middle-ear function as assessed by wideband acoustic transfer function (ATF) measurements in the ear canal account for the variability in normal thresholds, and if the inclusion of ATFs improves the ability of SFOAEs to identify hearing loss and predict pure-tone thresholds. Design The total suppressed SFOAE signal and its corresponding noise were recorded in 85 ears (22 normal ears and 63 ears with sensorineural hearing loss) at octave frequencies from 0.5 – 8 kHz using a nonlinear residual method. SFOAEs were recorded a second time in three impaired ears to assess repeatability. Ambient-pressure ATFs were obtained in all but one of these 85 ears, and were also obtained from an additional 31 normal-hearing subjects in whom SFOAE data were not obtained. Pure-tone air-and bone-conduction thresholds and 226-Hz tympanograms were obtained on all subjects. Normal tympanometry and the absence of air-bone gaps were used to screen subjects for normal middle-ear function. Clinical decision theory was used to assess the performance of SFOAE and ATF predictors in classifying ears as normal or impaired, and linear regression analysis was used to test the ability of SFOAE and ATF variables to predict the air-conduction audiogram. Results The ability of SFOAEs to classify ears as normal or hearing impaired was significant at all test frequencies. The ability of SFOAEs to classify impaired ears as either mild or moderate-severe was significant at test frequencies from 0.5 to 4 kHz. SFOAEs were present in cases of severe hearing loss. SFOAEs were also significantly correlated with air-conduction thresholds from 0.5 to 8 kHz. The best performance occurred using the SFOAE signal-to-noise ratio (S/N) as the predictor, and the overall best performance was at 2 kHz. The SFOAE S/N measures were repeatable to within 3.5 dB in impaired ears. The ATF measures explained up to 25% of the variance in the normal audiogram; however, ATF measures did not improve SFOAEs predictors of hearing loss except at 4 kHz. Conclusions In common with other OAE types, SFOAEs are capable of identifying the presence of hearing loss. In particular, SFOAEs performed better than distortion-product and click-evoked OAEs in predicting auditory status at 0.5 kHz; SFOAE performance was similar to that of other OAE types at higher frequencies except for a slight performance reduction at 4 kHz. Because SFOAEs were detected in ears with mild to severe cases of hearing loss they may also provide an estimate of the classification of hearing loss. Although SFOAEs were significantly correlated with hearing threshold, they do not appear to have clinical utility in predicting a specific behavioral threshold. Information on middle-ear status as assessed by ATF measures offered minimal improvement in SFOAE predictions of auditory status in a population of normal and impaired ears with normal middle-ear function. However, ATF variables did explain a significant fraction of the variability in the audiograms of normal ears, suggesting that audiometric thresholds in normal ears are partially constrained by middle-ear function as assessed by ATF tests. PMID:16230898
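
    The clinical-decision-theory analysis above amounts to sweeping a criterion across the SFOAE signal-to-noise ratios and summarizing hits versus false alarms, for example as the area under the ROC curve. A minimal sketch with hypothetical S/N values (a rank-based AUC, not the study's exact procedure):

      import numpy as np

      rng = np.random.default_rng(0)
      snr_normal = rng.normal(15.0, 5.0, 22)     # hypothetical SFOAE S/N (dB), normal-hearing ears
      snr_impaired = rng.normal(5.0, 5.0, 63)    # hypothetical SFOAE S/N (dB), impaired ears

      def roc_area(scores_normal, scores_impaired):
          """Probability that a random normal ear outscores a random impaired ear (ROC area)."""
          wins = sum((n > i) + 0.5 * (n == i) for n in scores_normal for i in scores_impaired)
          return wins / (len(scores_normal) * len(scores_impaired))

      print("ROC area:", round(float(roc_area(snr_normal, snr_impaired)), 2))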

  13. Effects of Hearing Loss on Heart-Rate Variability and Skin Conductance Measured During Sentence Recognition in Noise

    PubMed Central

    Mackersie, Carol L.; MacPhee, Imola X.; Heldt, Emily W.

    2014-01-01

    SHORT SUMMARY (précis) Sentence recognition by participants with and without hearing loss was measured in quiet and in babble noise while monitoring two autonomic nervous system measures: heart-rate variability and skin conductance. Heart-rate variability decreased under difficult listening conditions for participants with hearing loss, but not for participants with normal hearing. Skin-conductance reactivity to noise was greater for those with hearing loss than for those with normal hearing, but did not vary with the signal-to-noise ratio. Subjective ratings of workload/stress obtained after each listening condition were similar for the two participant groups. PMID:25170782

  14. Laryngeal Aerodynamics in Children with Hearing Impairment versus Age and Height Matched Normal Hearing Peers.

    PubMed

    Das, Barshapriya; Chatterjee, Indranil; Kumar, Suman

    2013-01-01

    Lack of proper auditory feedback in hearing-impaired subjects results in functional voice disorder. It is directly related to discoordination of the intrinsic and extrinsic laryngeal muscles and to disturbed contraction and relaxation of antagonistic muscles. A total of twenty children in the age range of 5-10 years were considered for the study. They were divided into two groups: normal-hearing children and hearing aid user children. Results showed a significant difference between the normal-hearing and hearing aid user children (assuming equal variances) in vital capacity, maximum sustained phonation, and fast adduction-abduction rate, but no significant difference in peak flow. A reduced vital capacity in hearing aid user children suggests a limited use of the lung volume for speech production. It may be inferred from the study that the hearing aid user children have poor vocal proficiency, which is reflected in their voice. The use of the voicing component by hearing-impaired subjects appears to be affected by improper auditory feedback. In summary, there was a significant difference in vital capacity, maximum sustained phonation (MSP), and fast adduction-abduction rate, and no significant difference in peak flow.

  15. Reading vocabulary in children with and without hearing loss: the roles of task and word type.

    PubMed

    Coppens, Karien M; Tellings, Agnes; Verhoeven, Ludo; Schreuder, Robert

    2013-04-01

    To address the problem of low reading comprehension scores among children with hearing impairment, it is necessary to have a better understanding of their reading vocabulary. In this study, the authors investigated whether task and word type differentiate the reading vocabulary knowledge of children with and without severe hearing loss. Seventy-two children with hearing loss and 72 children with normal hearing performed a lexical and a use decision task. Both tasks contained the same 180 words divided over 7 clusters, each cluster containing words with a similar pattern of scores on 8 word properties (word class, frequency, morphological family size, length, age of acquisition, mode of acquisition, imageability, and familiarity). Whereas the children with normal hearing scored better on the 2 tasks than the children with hearing loss, the size of the difference varied depending on the type of task and word. Performance differences between the 2 groups increased as words and tasks became more complex. Despite delays, children with hearing loss showed a similar pattern of vocabulary acquisition as their peers with normal hearing. For the most precise assessment of reading vocabulary possible, a range of tasks and word types should be used.

  16. Delayed auditory pathway maturation and prematurity.

    PubMed

    Koenighofer, Martin; Parzefall, Thomas; Ramsebner, Reinhard; Lucas, Trevor; Frei, Klemens

    2015-06-01

    Hearing loss is the most common sensory disorder in developed countries and leads to a severe reduction in quality of life. In this uncontrolled case series, we evaluated auditory development in patients suffering from congenital nonsyndromic hearing impairment related to preterm birth. Six patients delivered preterm (25th-35th gestational weeks), suffering from mild to profound congenital nonsyndromic hearing impairment and born to healthy, nonconsanguineous parents, were evaluated by otoacoustic emissions, tympanometry, brainstem-evoked response audiometry, and genetic testing. All patients were treated with hearing aids, and one patient required cochlear implantation. One preterm infant (32nd gestational week) initially presented with a 70 dB hearing loss, accompanied by absent otoacoustic emissions and normal tympanometric findings. The patient was treated with hearing aids and displayed a gradual improvement in bilateral hearing that completely normalized by 14 months of age, accompanied by the emergence of otoacoustic emission responses. In conclusion, we present here for the first time a fully documented preterm patient with delayed auditory pathway maturation and normalization of hearing within 14 months of birth. Although rare, postpartum development of the auditory system should therefore be considered in the initial stages of treating preterm hearing-impaired patients.

  17. The effects of familiarity and complexity on appraisal of complex songs by cochlear implant recipients and normal hearing adults.

    PubMed

    Gfeller, Kate; Christ, Aaron; Knutson, John; Witt, Shelley; Mehr, Maureen

    2003-01-01

    The purposes of this study were (a) to develop a test of complex song appraisal that would be suitable for use with adults who use a cochlear implant (assistive hearing device) and (b) to compare the appraisal ratings (liking) of complex songs by adults who use cochlear implants (n = 66) with a comparison group of adults with normal hearing (n = 36). The article describes the development of a computerized test for appraisal, with emphasis on its theoretical basis and the process for item selection of naturalistic stimuli. The appraisal test was administered to the 2 groups to determine the effects of prior song familiarity and subjective complexity on complex song appraisal. Comparison of the 2 groups indicates that the implant users rate 2 of 3 musical genres (country western, pop) as significantly more complex than do normal hearing adults, and give significantly less positive ratings to classical music than do normal hearing adults. Appraisal responses of implant recipients were examined in relation to hearing history, age, performance on speech perception and cognitive tests, and musical background.

  18. Inner ear contribution to bone conduction hearing in the human.

    PubMed

    Stenfelt, Stefan

    2015-11-01

    Bone conduction (BC) hearing relies on sound vibration transmission in the skull bone. Several clinical findings indicate that in the human, vibration of the skull bone surrounding the inner ear dominates the response to BC sound. Two phenomena transform the vibrations of the skull surrounding the inner ear into an excitation of the basilar membrane: (1) inertia of the inner ear fluid and (2) compression and expansion of the inner ear space. The relative importance of these two contributors was investigated using a lumped-element impedance model. By dividing the motion of the inner ear boundary into common and differential motion, it was found that the common motion dominated at frequencies below 7 kHz, but above this frequency the differential motion was greatest. When these motions were used to excite the model, it was found that for the normal ear the fluid inertia response was up to 20 dB greater than the compression response. This changed in the pathological ear where, for example, otosclerosis of the stapes depressed the fluid inertia response and improved the compression response, so that inner ear compression dominated BC hearing at frequencies above 400 Hz. The model was also able to predict experimental and clinical findings of BC sensitivity in the literature, for example the so-called Carhart notch in otosclerosis, increased BC sensitivity in superior semicircular canal dehiscence, and altered BC sensitivity following a vestibular fenestration and RW atresia. Copyright © 2014 Elsevier B.V. All rights reserved.
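
    The split into common and differential motion referred to above is the standard symmetric/antisymmetric decomposition of the two opposing boundary velocities. Writing v_1 and v_2 for those velocities (hypothetical symbols, not necessarily the paper's notation), in LaTeX:

      v_c = \tfrac{1}{2}\,(v_1 + v_2), \qquad
      v_d = \tfrac{1}{2}\,(v_1 - v_2), \qquad
      \text{so that } v_1 = v_c + v_d, \quad v_2 = v_c - v_d .

    On this reading, the common component v_c moves the inner ear roughly as a rigid body (the fluid-inertia pathway), while the differential component v_d compresses or expands the inner ear space (the compression pathway); the exact split used in the model may differ in detail.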

  19. Comparison of the revised Hearing Performance Inventory with audiometric measures.

    PubMed

    Hawes, N A; Niswander, P S

    1985-01-01

    Hearing Performance Inventory scores were correlated with sensitivity, discrimination, and sensitivity + discrimination measures for 39 subjects with noise-induced hearing loss. The highest correlation obtained (0.67) was with monosyllabic speech discrimination in noise. However, there were no significant differences in correlations among the three types of audiometric measures. The audiometric variables accounted for less than half of the variance in Hearing Performance Inventory scores; therefore they are inadequate for predicting the amount of self-perceived hearing difficulty. The need for a variety of hearing handicap scales is discussed.

  20. Nonword repetition in children with cochlear implants: a potential clinical marker of poor language acquisition.

    PubMed

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Sansom, Emily; Twersky, Jill; Lowenstein, Joanna H

    2014-11-01

    Cochlear implants (CIs) can facilitate the acquisition of spoken language for deaf children, but challenges remain. Language skills dependent on phonological sensitivity are most at risk for these children, so having an effective way to diagnose problems at this level would be of value for school speech-language pathologists. The goal of this study was to assess whether a nonword repetition (NWR) task could serve that purpose. Participants were 104 second graders: 49 with normal hearing (NH) and 55 with CIs. In addition to NWR, children were tested on 10 measures involving phonological awareness and processing, serial recall of words, vocabulary, reading, and grammar. Children with CIs performed more poorly than children with NH on NWR, and sensitivity to phonological structure alone explained that performance for children in both groups. For children with CIs, 2 audiological factors positively influenced outcomes on NWR: being identified with hearing loss at a younger age and having experience with wearing a hearing aid on the unimplanted ear at the time of receiving a 1st CI. NWR scores were better able to rule out than to rule in such language deficits. Well-designed NWR tasks could have clinical utility in assessments of language acquisition for school-age children with CIs.

  1. Binaural hearing in children using Gaussian enveloped and transposed tones.

    PubMed

    Ehlers, Erica; Kan, Alan; Winn, Matthew B; Stoelb, Corey; Litovsky, Ruth Y

    2016-04-01

    Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10, and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulse-per-second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
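
    The Gaussian-enveloped tone pulse trains above can be approximated by windowing a tone carrier with Gaussian pulses at the pulse rate and then imposing an interaural time or level difference across the two channels. A minimal sketch (carrier frequency, pulse width, and cue sizes are illustrative assumptions):

      import numpy as np

      fs = 44100                                       # sample rate (Hz)
      dur, rate, fc, sigma = 0.3, 100, 4000.0, 0.0005  # duration (s), pulses/s, carrier (Hz), pulse width (s)

      t = np.arange(0, dur, 1 / fs)
      carrier = np.sin(2 * np.pi * fc * t)
      envelope = np.zeros_like(t)
      for center in np.arange(0.5 / rate, dur, 1 / rate):      # one Gaussian pulse per period
          envelope += np.exp(-0.5 * ((t - center) / sigma) ** 2)
      mono = envelope * carrier

      itd_s, ild_db = 300e-6, 4.0                      # assumed interaural cues
      shift = int(round(itd_s * fs))                   # whole-waveform ITD as a sample shift
      left = np.pad(mono, (0, shift))
      right = np.pad(mono, (shift, 0)) * 10 ** (-ild_db / 20)  # right channel delayed and attenuated
      stereo = np.stack([left, right], axis=1)         # both cues favour the left ear
      print("stereo shape:", stereo.shape)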

  2. Rapid word-learning in normal-hearing and hearing-impaired children: effects of age, receptive vocabulary, and high-frequency amplification.

    PubMed

    Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G

    2005-12-01

    This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. Results also revealed a small increase in performance for both groups in the extended bandwidth condition but the difference was not significant at the traditional p = 0.05 level. The ability to learn words rapidly appears to be poorer in children with hearing loss over a wide range of ages. These results coincide with the consistently poorer receptive vocabularies for these children. Neither the word-learning or receptive-vocabulary measures were related to the amplification histories of these children. Finally, providing an extended high-frequency bandwidth did not significantly improve rapid word-learning for either group with these stimuli.

  3. Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments.

    PubMed

    Ricketts, Todd A; Picou, Erin M

    2013-09-01

    This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments. This study also aimed to evaluate speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, bilateral directional microphone was best, and when the signal of interest was behind the listener, bilateral omnidirectional microphone was best. Performance with asymmetric directional microphones was between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. In comparison with their peers with normal hearing, children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech was from the front. In contrast, children with normal hearing still outperformed children with hearing loss if the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation. Bilateral directional microphones can be effective in improving speech recognition performance for children in the classroom, as long as child is facing the talker of interest. Bilateral directional microphones, however, can impair performance if the signal originates from behind a listener. However, these data suggest that the magnitude of decrement is not predictable from an individual's benefit. The results re-emphasize the importance of appropriate switching between microphone modes so children can take full advantage of directional benefits without being hurt by directional decrements. An asymmetric fitting limits decrements, but does not lead to maximum speech recognition scores when compared with the optimal symmetrical fitting. Therefore, the asymmetric mode may not be the best option as a default fitting for children in a classroom environment. While directional microphones improve performance for children with hearing loss, their performance in most conditions continues to be impaired relative to their normal-hearing peers, particularly when the signals of interest originate from behind or from an unpredictable location.

  4. Speech Perception in Noise in Normally Hearing Children: Does Binaural Frequency Modulated Fitting Provide More Benefit than Monaural Frequency Modulated Fitting?

    PubMed

    Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul

    2011-07-01

    The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide a benefit similar to binaural use. The individual variation in binaural FM benefit over monaural FM suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see whether similar findings are obtained.

  5. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  6. Categorical loudness scaling and equal-loudness contours in listeners with normal hearing and hearing loss

    PubMed Central

    Rasetshwane, Daniel M.; Trevino, Andrea C.; Gombert, Jessa N.; Liebig-Trehearn, Lauren; Kopun, Judy G.; Jesteadt, Walt; Neely, Stephen T.; Gorga, Michael P.

    2015-01-01

    This study describes procedures for constructing equal-loudness contours (ELCs) in units of phons from categorical loudness scaling (CLS) data and characterizes the impact of hearing loss on these estimates of loudness. Additionally, this study developed a metric, level-dependent loudness loss, which uses CLS data to specify the deviation from normal loudness perception at various loudness levels and as a function of frequency for an individual listener with hearing loss. CLS measurements were made in 87 participants with hearing loss and 61 participants with normal hearing. An assessment of the reliability of CLS measurements was conducted on a subset of the data. CLS measurements were reliable. There was a systematic increase in the slope of the low-level segment of the CLS functions with increasing degree of hearing loss. ELCs derived from CLS measurements were similar to standardized ELCs (International Organization for Standardization, ISO 226:2003). The presence of hearing loss decreased the vertical spacing of the ELCs, reflecting loudness recruitment and reduced cochlear compression. Representing CLS data in phons may lead to wider acceptance of CLS measurements. Like the audiogram that specifies hearing loss at threshold, level-dependent loudness loss describes the deficit for suprathreshold sounds. Such information may have implications for the fitting of hearing aids. PMID:25920842

  7. The Sensitivity of Adolescent School-Based Hearing Screens Is Significantly Improved by Adding High Frequencies

    ERIC Educational Resources Information Center

    Sekhar, Deepa L.; Zalewski, Thomas R.; Beiler, Jessica S.; Czarnecki, Beth; Barr, Ashley L.; King, Tonya S.; Paul, Ian M.

    2016-01-01

    High frequency hearing loss (HFHL), often related to hazardous noise, affects one in six U.S. adolescents. Yet, only 20 states include school-based hearing screens for adolescents. Only six states test multiple high frequencies. Study objectives were to (1) compare the sensitivity of state school-based hearing screens for adolescents to gold…

  8. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation.

    PubMed

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2017-07-01

    Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without an instruction to imitate an acoustic vowel, were given to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants were able to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss.

    PubMed

    Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew

    In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults of age between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.
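
    The gains in the study above were derived from modified one-third gain targets. The exact modifications are not described in the abstract, so the sketch below only illustrates the plain one-third gain rule (gain at each frequency equal to one third of the hearing threshold level); the audiogram values and function name are hypothetical.

```python
# Hypothetical sketch of a plain one-third gain prescription; the study used
# *modified* targets whose exact form is not given in the abstract.
audiogram = {500: 30, 1000: 35, 2000: 45, 4000: 55}   # hearing level in dB HL (made up)

def one_third_gain(thresholds_db_hl):
    """Insertion gain (dB) per frequency = one third of the hearing loss."""
    return {f: round(hl / 3.0, 1) for f, hl in thresholds_db_hl.items()}

print(one_third_gain(audiogram))
# {500: 10.0, 1000: 11.7, 2000: 15.0, 4000: 18.3}
```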

  10. Effects of serum zinc level on tinnitus.

    PubMed

    Berkiten, Güler; Kumral, Tolgar Lütfi; Yıldırım, Güven; Salturk, Ziya; Uyar, Yavuz; Atar, Yavuz

    2015-01-01

    The aim of this study was to assess zinc levels in tinnitus patients, and to evaluate the effects of zinc deficiency on tinnitus and hearing loss. One-hundred patients, who presented to an outpatient clinic with tinnitus between June 2009 and 2014, were included in the study. Patients were divided into three groups according to age: Group I (patients between 18 and 30years of age); Group II (patients between 31 and 60years of age); and Group III (patients between 61 and 78years of age). Following a complete ear, nose and throat examination, serum zinc levels were measured and the severity of tinnitus was quantified using the Tinnitus Severity Index Questionnaire (TSIQ). Patients were subsequently asked to provide a subjective judgment regarding the loudness of their tinnitus. The hearing status of patients was evaluated by audiometry and high-frequency audiometry. An average hearing sensitivity was calculated as the mean value of hearing thresholds between 250 and 20,000Hz. Serum zinc levels between 70 and 120μg/dl were considered normal. The severity and loudness of tinnitus, and the hearing thresholds of the normal zinc level and zinc-deficient groups, were compared. Twelve of 100 (12%) patients exhibited low zinc levels. The mean age of the zinc-deficient group was 65.41±12.77years. Serum zinc levels were significantly lower in group III (p<0.01). The severity and loudness of tinnitus were greater in zinc-deficient patients (p=0.011 and p=0.015, respectively). Moreover, the mean thresholds of air conduction were significantly higher in zinc-deficient patients (p=0.000). We observed that zinc levels decrease as age increases. In addition, there was a significant correlation between zinc level and the severity and loudness of tinnitus. Zinc deficiency was also associated with impairments in hearing thresholds. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Better late than never: effective air-borne hearing of toads delayed by late maturation of the tympanic middle ear structures.

    PubMed

    Womack, Molly C; Christensen-Dalsgaard, Jakob; Hoke, Kim L

    2016-10-15

    Most vertebrates have evolved a tympanic middle ear that enables effective hearing of airborne sound on land. Although inner ears develop during the tadpole stages of toads, tympanic middle ear structures are not complete until months after metamorphosis, potentially limiting the sensitivity of post-metamorphic juveniles to sounds in their environment. We tested the hearing of five species of toads to determine how delayed ear development impairs airborne auditory sensitivity. We performed auditory brainstem recordings to test the hearing of the toads and used micro-computed tomography and histology to relate the development of ear structures to hearing ability. We found a large (14-27 dB) increase in hearing sensitivity from 900 to 2500 Hz over the course of ear development. Thickening of the tympanic annulus cartilage and full ossification of the middle ear bone are associated with increased hearing ability in the final stages of ear maturation. Thus, juvenile toads are at a hearing disadvantage, at least in the high-frequency range, throughout much of their development, because late-forming ear elements are critical to middle ear function at these frequencies. We discuss the potential fitness consequences of late hearing development, although research directly addressing selective pressures on hearing sensitivity across ontogeny is lacking. Given that most vertebrate sensory systems function very early in life, toad tympanic hearing may be a sensory development anomaly. © 2016. Published by The Company of Biologists Ltd.

  12. Symbolic Play and Novel Noun Learning in Deaf and Hearing Children: Longitudinal Effects of Access to Sound on Early Precursors of Language.

    PubMed

    Quittner, Alexandra L; Cejas, Ivette; Wang, Nae-Yuh; Niparko, John K; Barker, David H

    2016-01-01

    In the largest, longitudinal study of young, deaf children before and three years after cochlear implantation, we compared symbolic play and novel noun learning to age-matched hearing peers. Participants were 180 children from six cochlear implant centers and 96 hearing children. Symbolic play was measured during five minutes of videotaped, structured solitary play. Play was coded as "symbolic" if the child used substitution (e.g., a wooden block as a bed). Novel noun learning was measured in 10 trials using a novel object and a distractor. Children with cochlear implants were delayed in their use of symbolic play relative to normal hearing children; however, those implanted before age two performed significantly better than those implanted later. Children with cochlear implants were also delayed in novel noun learning (median delay 1.54 years), with minimal evidence of catch-up growth. Quality of parent-child interactions was positively related to performance on the novel noun learning task, but not the symbolic play task. Early implantation was beneficial for both achievement of symbolic play and novel noun learning. Further, maternal sensitivity and linguistic stimulation by parents positively affected noun learning skills, although children with cochlear implants still lagged in comparison to hearing peers.

  13. The Speech Intelligibility Index and the pure-tone average as predictors of lexical ability in children fit with hearing aids.

    PubMed

    Stiles, Derek J; Bentler, Ruth A; McGregor, Karla K

    2012-06-01

    To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). School-age CHA and age-matched children with normal hearing (CNH) repeated words and nonwords, learned novel words, and completed a standardized receptive vocabulary test. Analyses of covariance allowed comparison of the 2 groups. For CHA, regression analyses determined whether SII held predictive value over and beyond PTA. CHA demonstrated poorer performance than CNH on tests of word and nonword repetition and receptive vocabulary. Groups did not differ on word learning. Aided SII was a stronger predictor of word and nonword repetition and receptive vocabulary than PTA. After accounting for PTA, aided SII remained a significant predictor of nonword repetition and receptive vocabulary. Despite wearing hearing aids, CHA performed more poorly on 3 of 4 lexical measures. Individual differences among CHA were predicted by aided SII. Unlike PTA, aided SII incorporates hearing aid amplification characteristics and speech-frequency weightings and may provide a more valid estimate of the child's access to and ability to learn from auditory input in real-world environments.
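
    The aided SII referenced above is, at its core, a band-importance-weighted sum of audibility. The following sketch illustrates only that core idea; the band importances, levels, and 30-dB dynamic range are simplified assumptions, and the full ANSI S3.5 procedure includes corrections (e.g., level distortion, spread of masking) omitted here.

```python
import numpy as np

# Hypothetical octave-band data; a real SII calculation follows ANSI S3.5-1997
# and includes corrections not shown in this sketch.
band_importance = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # sums to 1.0
speech_level    = np.array([55, 58, 60, 57, 50])              # aided speech spectrum, dB
noise_floor     = np.array([40, 38, 35, 45, 48])              # effective noise/threshold, dB

def simple_sii(importance, speech, noise, dynamic_range=30.0):
    """Importance-weighted audibility; each band's audibility is the proportion of a
    30-dB speech dynamic range lying above the noise/threshold, clipped to [0, 1]."""
    audibility = np.clip((speech - noise) / dynamic_range, 0.0, 1.0)
    return float(np.sum(importance * audibility))

print(f"approximate SII = {simple_sii(band_importance, speech_level, noise_floor):.2f}")
```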

  14. Symbolic Play and Novel Noun Learning in Deaf and Hearing Children: Longitudinal Effects of Access to Sound on Early Precursors of Language

    PubMed Central

    Quittner, Alexandra L.; Cejas, Ivette; Wang, Nae-Yuh; Niparko, John K.; Barker, David H.

    2016-01-01

    In the largest, longitudinal study of young, deaf children before and three years after cochlear implantation, we compared symbolic play and novel noun learning to age-matched hearing peers. Participants were 180 children from six cochlear implant centers and 96 hearing children. Symbolic play was measured during five minutes of videotaped, structured solitary play. Play was coded as "symbolic" if the child used substitution (e.g., a wooden block as a bed). Novel noun learning was measured in 10 trials using a novel object and a distractor. Children with cochlear implants were delayed in their use of symbolic play relative to normal hearing children; however, those implanted before age two performed significantly better than those implanted later. Children with cochlear implants were also delayed in novel noun learning (median delay 1.54 years), with minimal evidence of catch-up growth. Quality of parent-child interactions was positively related to performance on the novel noun learning task, but not the symbolic play task. Early implantation was beneficial for both achievement of symbolic play and novel noun learning. Further, maternal sensitivity and linguistic stimulation by parents positively affected noun learning skills, although children with cochlear implants still lagged in comparison to hearing peers. PMID:27228032

  15. Upward spread of informational masking in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Alexander, Joshua M.; Lutfi, Robert A.

    2003-04-01

    Thresholds for pure-tone signals of 0.8, 2.0, and 5.0 kHz were measured in the presence of a simultaneous multitone masker in 15 normal-hearing and 8 hearing-impaired listeners. The masker consisted of fixed-frequency tones ranging from 522-8346 Hz at 1/3-octave intervals, excluding the 2/3-octave interval on either side of the signal. Masker uncertainty was manipulated by independently and randomly playing individual masker tones with probability p=0.5 or p=1.0 on each trial. Informational masking (IM) was estimated by the threshold difference (p=0.5 minus p=1.0). Decision weights were estimated from correlations of the listener's response with the occurrence of the signal and individual masker components on each trial. IM was greater for normal-hearing listeners than for hearing-impaired listeners, and most listeners had at least 10 dB of IM for one of the signal frequencies. For both groups, IM increased as the number of masker components below the signal frequency increased. Decision weights were also similar for both groups: masker frequencies below the signal were weighted more than those above. Implications are that normal-hearing and hearing-impaired individuals do not weight information differently in these masking conditions and that factors associated with listening may be partially responsible for the greater effectiveness of low-frequency maskers. [Work supported by NIDCD.]
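
    The decision-weight analysis described above correlates each listener's trial-by-trial responses with the levels (or occurrence) of the signal and individual masker components. A minimal simulation of that idea, with an invented observer and data, is sketched below; it is not the authors' exact analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_components = 1000, 10

# Per-trial level perturbations (dB) of each masker component.
levels = rng.normal(0.0, 5.0, size=(n_trials, n_components))

# A simulated listener whose decision leans on the low-frequency components.
true_weights = np.linspace(1.0, 0.2, n_components)
decision_variable = levels @ true_weights + rng.normal(0.0, 5.0, n_trials)
responses = (decision_variable > 0).astype(float)     # 1 = "signal present"

def decision_weights(component_levels, binary_responses):
    """Point-biserial correlation between each component's level and the response."""
    return np.array([np.corrcoef(component_levels[:, k], binary_responses)[0, 1]
                     for k in range(component_levels.shape[1])])

w = decision_weights(levels, responses)
print(np.round(w / np.abs(w).max(), 2))   # weights normalized to the largest magnitude
```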

  16. Perception of Musical Emotion in Students with Congenital and Acquired Hearing Loss.

    PubMed

    Mazaheryazdi, Malihe; Aghasoleimani, Mina; Karimi, Maryam; Arjmand, Pirooz

    2018-01-01

    Hearing loss can affect the perception of emotion in music. The present study investigated whether students with congenital hearing loss who are exposed to Deaf culture perceive the same emotions from music as students with acquired hearing loss. Participants were divided into two groups: 30 students with bilateral congenital moderate-to-severe hearing loss selected from deaf schools located in Tehran, Iran, and 30 students with acquired hearing loss of the same degree selected from Amiralam Hospital, Tehran, Iran; they were compared with a group of 30 age- and gender-matched normal hearing subjects who served as controls in 2012. The musical stimuli consisted of three different musical sequences (sadness, happiness, and fear), each 60 s in duration. The students were asked to point to the words from a list that best matched their emotions. Emotional perception of sadness, happiness, and fear in children with congenital hearing loss was significantly poorer than in the acquired hearing loss and normal hearing groups (P<0.001). There was no significant difference in the emotional perception of sadness, happiness, and fear between the acquired hearing loss and normal hearing groups (P=0.75, P=1, and P=0.16, respectively). Neural plasticity induced by hearing assistive devices may be affected by the time when a hearing aid was first fitted and how the auditory system responds to the reintroduction of certain sounds via amplification. Therefore, children who experienced auditory input of different sound patterns in their early childhood will show more perceptual flexibility in different situations than children with congenital hearing loss raised in Deaf culture.

  17. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    PubMed Central

    Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology. PMID:26317976

  18. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing.

    PubMed

    Giannantonio, Sara; Polonenko, Melissa J; Papsin, Blake C; Paludetti, Gaetano; Gordon, Karen A

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in cochlear implant technology.

  19. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    PubMed

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions.

  20. Safety of the HyperSound® Audio System in Subjects with Normal Hearing

    PubMed Central

    Mattson, Sara L.; Kappus, Brian A.; Seitzman, Robin L.

    2015-01-01

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal hearing subjects under normal use conditions, using a pre-exposure and post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire) followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otological symptoms were identified. HSS demonstrates excellent safety in normal hearing subjects under normal use conditions. PMID:26779330

  1. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study

    ERIC Educational Resources Information Center

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-01-01

    Purpose: The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Method: Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different…

  2. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    ERIC Educational Resources Information Center

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  3. The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise

    PubMed Central

    Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.

    2011-01-01

    Objectives: Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users' binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design: HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners' performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results: The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the directional microphone when the speech and masker were spatially separated, emphasizing the importance of measuring binaural benefits separately for each HRTF. Evaluation of binaural benefits indicated that binaural squelch and spatial release from masking were found for all HRTFs and binaural summation was found for all but one HRTF, although binaural summation was less robust than the other types of binaural benefits. Additionally, the results indicated that neither interaural time nor level cues dominated binaural benefits for the normal hearing participants. Conclusions: This study provides a means to measure the degree to which cochlear implant microphones affect acoustic hearing with respect to speech perception in noise. It also provides measures that can be used to evaluate the independent contributions of interaural time and level cues. These measures provide tools that can aid researchers in understanding and improving binaural benefits in acoustic hearing individuals listening via cochlear implant microphones. PMID:21412155
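
    Testing normal-hearing listeners with microphone-specific HRTFs, as in the study above, amounts to filtering the test stimuli through left- and right-ear impulse responses measured via each cochlear implant microphone. A bare-bones sketch of that processing step follows; the impulse responses here are random placeholders, not measured HRTFs.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
rng = np.random.default_rng(1)
stimulus = rng.standard_normal(fs)              # 1 s of noise as a stand-in for speech
hrtf_left = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)    # placeholder IR
hrtf_right = rng.standard_normal(256) * np.exp(-np.arange(256) / 40.0)   # placeholder IR

def apply_hrtf(signal, ir_left, ir_right):
    """Return a 2-channel signal as heard through the left/right impulse responses."""
    return np.stack([fftconvolve(signal, ir_left), fftconvolve(signal, ir_right)], axis=0)

binaural = apply_hrtf(stimulus, hrtf_left, hrtf_right)
print(binaural.shape)   # (2, 44355)
```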

  4. Sensitivity in Interactions between Hearing Mothers and their Toddlers with Hearing Loss: The Effect of Cochlear Implantation

    ERIC Educational Resources Information Center

    Bakar, Zaharah Abu; Brown, P. Margaret; Remine, Maria D.

    2010-01-01

    This study investigated the potential effects of cochlear implantation and age at implantation on maternal interactional sensitivity. Three groups of dyads were studied at two points over 1 year. The hearing aid (HA) group wore hearing aids throughout the study, the early cochlear implanted (ECI) group were implanted prior to 22 months of age, and…

  5. The impact of aging and hearing status on verbal short-term memory.

    PubMed

    Verhaegen, Clémence; Collette, Fabienne; Majerus, Steve

    2014-01-01

    The aim of this study is to assess the impact of hearing status on age-related decrease in verbal short-term memory (STM) performance. This was done by administering a battery of verbal STM tasks to elderly and young adult participants matched for hearing thresholds, as well as to young normal-hearing control participants. The matching procedure allowed us to assess the importance of hearing loss as an explanatory factor of age-related STM decline. We observed that elderly participants and hearing-matched young participants showed equal levels of performance in all verbal STM tasks, and performed overall lower than the normal-hearing young control participants. This study provides evidence for recent theoretical accounts considering reduced hearing level as an important explanatory factor of poor auditory-verbal STM performance in older adults.

  6. Recognition and production of emotions in children with cochlear implants.

    PubMed

    Mildner, Vesna; Koska, Tena

    2014-01-01

    The aim of this study was to examine auditory recognition and vocal production of emotions in three prelingually bilaterally profoundly deaf children aged 6-7 who received cochlear implants before age 2, and compare them with age-matched normally hearing children. No consistent advantage was found for the normally hearing participants. In both groups, sadness was recognized best and disgust was the most difficult. Confusion matrices among other emotions (anger, happiness, and fear) showed that children with and without hearing impairment may rely on different cues. Both groups of children showed that perception is superior to production. Normally hearing children were more successful in the production of sadness, happiness, and fear, but not anger or disgust. The data set is too small to draw any definite conclusions, but it seems that a combination of early implantation and regular auditory-oral-based therapy enables children with cochlear implants to process and produce emotional content comparable with children with normal hearing.

  7. [Examination of relationship between level of hearing and written language skills in 10-14-year-old hearing impaired children].

    PubMed

    Turğut, Nedim; Karlıdağ, Turgut; Başar, Figen; Yalçın, Şinasi; Kaygusuz, İrfan; Keleş, Erol; Birkent, Ömer Faruk

    2015-01-01

    This study aims to examine the relationship between written language skills and factors thought to affect this skill, such as mean hearing loss, duration of auditory deprivation, speech discrimination score, pre-school education attendance, and socioeconomic status, in hearing impaired children attending 4th-7th grades of primary school in an inclusive environment. The study included 25 hearing impaired children (14 males, 11 females; mean age 11.4±1.4 years; range 10 to 14 years) (study group) and 20 children with normal hearing (9 males, 11 females; mean age 11.5±1.3 years; range 10 to 14 years) in the same age group and studying in the same class (control group). The study group was divided into two subgroups, group 1a and group 1b, because some of the children with hearing disability used hearing aids while others used cochlear implants. Intragroup comparisons and relational screening were performed for those who used hearing aids and cochlear implants. Intergroup comparisons were performed to evaluate the effect of the parameters on written language skills. The written expression skill level of children with hearing disability was significantly lower than that of their normal hearing peers (p=0.001). A significant relationship was detected between written language skills and mean hearing loss (p=0.048), duration of auditory deprivation (p=0.021), speech discrimination score (p=0.014), and preschool attendance (p=0.005); no significant relationship was found for socioeconomic status (p=0.636). It can be said that hearing loss affects written language skills negatively and that hearing impaired individuals develop lower-level written language skills compared to their normal hearing peers.

  8. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  9. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  10. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  11. The effect of noise-induced hearing loss on the intelligibility of speech in noise

    NASA Astrophysics Data System (ADS)

    Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.

    1981-06-01

    Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10%, respectively). Hearing loss for speech in quiet measured with respect to the median speech reception threshold for normal ears ranged from 1.8 dB to 13.4 dB. For speech in noise the numbers are 1.2 dB to 7.0 dB which means that the subjects with noise-induced hearing loss need a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap conceived as a reduced ability to understand speech and tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.
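
    The reported slope of roughly 15 to 20% loss of sentence intelligibility per decibel of speech-reception-threshold elevation allows a quick back-of-the-envelope conversion. The helper below simply applies that slope and is an illustration, not part of the original study.

```python
def intelligibility_drop(srt_elevation_db, slope_pct_per_db=(15.0, 20.0)):
    """Rough intelligibility decrease implied by an SRT-in-noise elevation,
    using the 15-20 %/dB slope reported above (capped at 100%)."""
    lo, hi = slope_pct_per_db
    return (min(lo * srt_elevation_db, 100.0), min(hi * srt_elevation_db, 100.0))

# A listener needing a 3-dB-higher signal-to-noise ratio than normal:
print(intelligibility_drop(3.0))   # (45.0, 60.0) percent, roughly
```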

  12. Glimpsing Speech in the Presence of Nonsimultaneous Amplitude Modulations from a Competing Talker: Effect of Modulation Rate, Age, and Hearing Loss

    ERIC Educational Resources Information Center

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2016-01-01

    Purpose: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker. Method: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences with consonants or vowels replaced with noise…

  13. One-year audiologic monitoring of individuals exposed to the 1995 Oklahoma City bombing.

    PubMed

    Van Campen, L E; Dennis, J M; Hanlin, R C; King, S B; Velderman, A M

    1999-05-01

    This longitudinal study evaluated subjective, behavioral, and objective auditory function in 83 explosion survivors. Subjects were evaluated quarterly for 1 year with conventional pure-tone and extended high-frequencies audiometry, otoscopic inspections, immittance and speech audiometry, and questionnaires. There was no obvious relationship between subject location and symptoms or test results. Tinnitus, distorted hearing, loudness sensitivity, and otalgia were common symptoms. On average, 76 percent of subjects had predominantly sensorineural hearing loss at one or more frequencies. Twenty-four percent of subjects required amplification. Extended high frequencies showed evidence of acoustic trauma even when conventional frequencies fell within the normal range. Males had significantly poorer responses than females across frequencies. Auditory status of the group was significantly compromised and unchanged at the end of 1-year postblast.

  14. The relationship between mismatch response and the acoustic change complex in normal hearing infants.

    PubMed

    Uhler, Kristin M; Hunter, Sharon K; Tierney, Elyse; Gilley, Phillip M

    2018-06-01

    To examine the utility of the mismatch response (MMR) and acoustic change complex (ACC) for assessing speech discrimination in infants. Continuous EEG was recorded during sleep from 48 normally hearing infants (24 male, 20 female) aged 1.77 to 4.57 months in response to two auditory discrimination tasks. The ACC was recorded in response to a three-vowel sequence (/i/-/a/-/i/). The MMR was recorded in response to a standard vowel, /a/ (probability 85%), and to a deviant vowel, /i/ (probability 15%). A priori comparisons included age, sex, and sleep state. These were conducted separately for each of three bandpass filter settings (1-18, 1-30, and 1-40 Hz). A priori tests revealed no differences in MMR or ACC for age, sex, or sleep state for any of the three filter settings. ACC and MMR responses were prominently observed in all 44 sleeping infants (data from four infants were excluded). Significant differences observed for the ACC were at the onset and offset of stimuli. However, neither group nor individual differences were observed in response to changes in speech stimuli in the ACC. The MMR revealed two prominent peaks occurring at the stimulus onset and at the stimulus offset. Permutation t-tests revealed significant differences between the standard and deviant stimuli for both the onset and offset MMR peaks (p < 0.01). The 1-18 Hz filter setting revealed significant differences for all participants in the MMR paradigm. Both ACC and MMR responses were observed to auditory stimulation, suggesting that infants perceive and process speech information even during sleep. Significant differences between the standard and deviant responses were observed in the MMR, but not the ACC, paradigm. These findings suggest that the MMR is sensitive to detecting auditory/speech discrimination processing. This paper showed that the MMR can be used to identify discrimination in normal hearing infants, suggesting that the MMR has potential for use in infants with hearing loss to validate hearing aid fittings. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
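
    The three band-pass settings compared above (1-18, 1-30, and 1-40 Hz) can be applied to continuous EEG with a standard zero-phase filter. The sketch below shows one conventional way to do this; the sampling rate, filter order, and data are assumptions, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0   # hypothetical EEG sampling rate in Hz

def bandpass(eeg, low_hz, high_hz, fs, order=4):
    """Zero-phase Butterworth band-pass, as might be used for the 1-18/1-30/1-40 Hz settings."""
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

eeg = np.random.default_rng(2).standard_normal(int(10 * fs))   # 10 s of fake EEG
filtered_1_18 = bandpass(eeg, 1.0, 18.0, fs)
print(filtered_1_18.shape)   # (10000,)
```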

  15. Sad and happy emotion discrimination in music by children with cochlear implants.

    PubMed

    Hopyan, Talar; Manno, Francis A M; Papsin, Blake C; Gordon, Karen A

    2016-01-01

    Children using cochlear implants (CIs) develop speech perception but have difficulty perceiving complex acoustic signals. Mode and tempo are the two components used to recognize emotion in music. Based on CI limitations, we hypothesized that children using CIs would have impaired perception of mode cues relative to their normal hearing peers and would rely more heavily on tempo cues to distinguish happy from sad music. Study participants were 16 children using CIs (13 right, 3 left; M = 12.7, SD = 2.6 years) and 16 normal hearing peers. Participants judged 96 brief piano excerpts from the classical genre as happy or sad in a forced-choice task. Music was randomly presented with alterations of transposed mode, tempo, or both. When music was presented in original form, children using CIs discriminated between happy and sad music with accuracy well above chance levels (87.5%) but significantly below those with normal hearing (98%). The CI group primarily used tempo cues, whereas normal hearing children relied more on mode cues. Transposing both mode and tempo cues in the same musical excerpt obliterated cues to emotion for both groups. Children using CIs showed significantly slower response times across all conditions. Children using CIs use tempo cues to discriminate happy versus sad music, reflecting a very different hearing strategy than their normal hearing peers. Slower reaction times by children using CIs indicate that they found the task more difficult and support the possibility that they require different strategies to process emotion in music than their normal hearing peers.

  16. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions.

    PubMed

    Wang, M D; Reed, C M; Bilger, R C

    1978-03-01

    It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation is given in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram.

  17. Tinnitus in normally hearing patients: clinical aspects and repercussions.

    PubMed

    Sanchez, Tanit Ganz; Medeiros, Italo Roberto Torres de; Levy, Cristiane Passos Dias; Ramalho, Jeanne da Rosa Oiticica; Bento, Ricardo Ferreira

    2005-01-01

    Patients with tinnitus and normal hearing constitute an important group, given that findings are not influenced by hearing loss. However, this group is rarely studied, so we do not know whether its clinical characteristics and interference in daily life are the same as those of patients with tinnitus and hearing loss. To compare tinnitus characteristics and interference in daily life among patients with and without hearing loss. Historical cohort. Among 744 tinnitus patients seen at a Tinnitus Clinic, 55 with normal audiometry were retrospectively evaluated. The control group consisted of 198 patients with tinnitus and hearing loss, following the same protocol. We analyzed the patients' data as well as the tinnitus characteristics and interference in daily life. The mean age of the studied group (43.1 +/- 13.4 years) was significantly lower than that of the control group (49.9 +/- 14.5 years). In both groups, tinnitus was predominantly reported by women and was bilateral, single tone, and constant, with no differences between the groups. Interference with concentration and emotional status (25.5% and 36.4%) was significantly lower in the studied group than in the control group (46% and 61.6%), but no such difference was found for interference with sleep and social life. Patients with tinnitus and normal hearing showed similar characteristics when compared to those with hearing loss. However, the age of the patients and the interference with concentration and emotional status were significantly lower in this group.

  18. Comparative assessment of amphibious hearing in pinnipeds.

    PubMed

    Reichmuth, Colleen; Holt, Marla M; Mulsow, Jason; Sills, Jillian M; Southall, Brandon L

    2013-06-01

    Auditory sensitivity in pinnipeds is influenced by the need to balance efficient sound detection in two vastly different physical environments. Previous comparisons between aerial and underwater hearing capabilities have considered media-dependent differences relative to auditory anatomy, acoustic communication, ecology, and amphibious life history. New data for several species, including recently published audiograms and previously unreported measurements obtained in quiet conditions, necessitate a re-evaluation of amphibious hearing in pinnipeds. Several findings related to underwater hearing are consistent with earlier assessments, including an expanded frequency range of best hearing in true seals that spans at least six octaves. The most notable new results indicate markedly better aerial sensitivity in two seals (Phoca vitulina and Mirounga angustirostris) and one sea lion (Zalophus californianus), likely attributable to improved ambient noise control in test enclosures. An updated comparative analysis alters conventional views and demonstrates that these amphibious pinnipeds have not necessarily sacrificed aerial hearing capabilities in favor of enhanced underwater sound reception. Despite possessing underwater hearing that is nearly as sensitive as fully aquatic cetaceans and sirenians, many seals and sea lions have retained acute aerial hearing capabilities rivaling those of terrestrial carnivores.

  19. The effects of elevated hearing thresholds on performance in a paintball simulation of individual dismounted combat.

    PubMed

    Sheffield, Benjamin; Brungart, Douglas; Tufts, Jennifer; Ness, James

    2017-01-01

    To examine the relationship between hearing acuity and operational performance in simulated dismounted combat. Individuals wearing hearing loss simulation systems competed in a paintball-based exercise where the objective was to be the last player remaining. Four hearing loss profiles were tested in each round (no hearing loss, mild, moderate and severe) and four rounds were played to make up a match. This allowed counterbalancing of simulated hearing loss across participants. Forty-three participants across two data collection sites (Fort Detrick, Maryland and the United States Military Academy, New York). All participants self-reported normal hearing except for two who reported mild hearing loss. Impaired hearing had a greater impact on the offensive capabilities of participants than it did on their "survival", likely due to the tendency for individuals with simulated impairment to adopt a more conservative behavioural strategy than those with normal hearing. These preliminary results provide valuable insights into the impact of impaired hearing on combat effectiveness, with implications for the development of improved auditory fitness-for-duty standards, the establishment of performance requirements for hearing protection technologies, and the refinement of strategies to train military personnel on how to use hearing protection in combat environments.

  20. Impact of a Moving Noise Masker on Speech Perception in Cochlear Implant Users

    PubMed Central

    Weissgerber, Tobias; Rader, Tobias; Baumann, Uwe

    2015-01-01

    Objectives: Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users. Design: Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved in a shape of a half-circle from one ear to the contralateral ear. Noise was presented in either of two conditions: continuous or modulated. Results: SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming improved SRT in both CI user groups substantially by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type. Conclusions: CI users showed SRM that was comparable to normal hearing subjects. In listening situations of everyday life with spatial separation of source and masker, directional microphones significantly improved speech perception with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit. PMID:25970594
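
    Spatial release from masking, as reported above, is simply the difference between the speech reception threshold in the co-located condition (S0N0) and in the spatially separated or moving-masker condition (S0Nmove). A trivial sketch with invented SRT values:

```python
# Hypothetical SRTs in dB SNR; lower is better.  Numbers are illustrative only.
srt_colocated = {"NH": -6.0, "CI bilateral": 2.5, "CI bimodal": 4.0}
srt_separated = {"NH": -10.5, "CI bilateral": -1.0, "CI bimodal": 2.0}

def spatial_release_from_masking(s0n0, s0nmove):
    """SRM (dB) = SRT in the co-located condition minus SRT with a separated/moving masker."""
    return {group: s0n0[group] - s0nmove[group] for group in s0n0}

print(spatial_release_from_masking(srt_colocated, srt_separated))
# {'NH': 4.5, 'CI bilateral': 3.5, 'CI bimodal': 2.0}
```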

  1. [Subclinical sensorineural hearing loss in female patients with rheumatoid arthritis].

    PubMed

    Treviño-González, José Luis; Villegas-González, Mario Jesús; Muñoz-Maldonado, Gerardo Enrique; Montero-Cantu, Carlos Alberto; Nava-Zavala, Arnulfo Hernán; Garza-Elizondo, Mario Alberto

    2015-01-01

    Rheumatoid arthritis is a clinical entity capable of causing hearing impairment that can be diagnosed promptly with high-frequency audiometry. To detect subclinical sensorineural hearing loss in patients with rheumatoid arthritis. Cross-sectional study of patients with rheumatoid arthritis, performing high-frequency audiometry (125 Hz to 16,000 Hz) and tympanometry. The results were correlated with markers of disease activity and response to therapy. High-frequency audiometry was performed in 117 female patients aged from 19 to 65 years. Sensorineural hearing loss was observed for pure tones from 125 to 8,000 Hz in 43.59% of patients, and for tone thresholds from 10,000 to 16,000 Hz in 94.02% of patients in the right ear and 95.73% in the left ear. Hearing was normal in 8 (6.84%) patients. Hearing loss was observed in 109 (93.16%), and was asymmetric in 36 (30.77%), symmetric in 73 (62.37%), bilateral in 107 (91.45%), and unilateral in 2 (1.71%); no conductive and/or mixed hearing loss was encountered. Eight (6.83%) patients presented vertigo and 24 (20.51%) tinnitus. Tympanogram type A was present in 88.90% in the right ear and 91.46% in the left ear, with 5.98 to 10.25% type As. The stapedius reflex was present in 75.3 to 85.2%. Speech discrimination in the left ear was significantly different (p = 0.02) in the group older than 50 years. No association was found with markers of disease activity, but there was an association with the onset of rheumatoid arthritis. Patients with rheumatoid arthritis had a high prevalence of sensorineural hearing loss for high and very high frequencies. Copyright © 2015 Academia Mexicana de Cirugía A.C. Published by Masson Doyma México S.A. All rights reserved.

  2. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    PubMed Central

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699

  3. [Relationship between the Mandarin acceptable noise level and the personality traits in normal hearing adults].

    PubMed

    Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua

    2013-03-01

    To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged 21 to 27 years, participated in this study. ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and the personality traits of normal-hearing subjects. SPSS 17.0 was used to analyze the results. The mean ANL was (7.8 ± 2.9) dB in normal-hearing participants. The P and N scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, P < 0.01). No significant correlations were found between ANL and the E and L scores (r = -0.036 and -0.167, P > 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and unstable; no ANL differences were found between listeners who differed on the introversion-extraversion or lie scales.

  4. Childhood Otitis Media: A Cohort Study With 30-Year Follow-Up of Hearing (The HUNT Study).

    PubMed

    Aarhus, Lisa; Tambs, Kristian; Kvestad, Ellen; Engdahl, Bo

    2015-01-01

    To study the extent to which otitis media (OM) in childhood is associated with adult hearing thresholds. Furthermore, to study whether the effects of OM on adult hearing thresholds are moderated by age or noise exposure. Population-based cohort study of 32,786 participants who had their hearing tested by pure-tone audiometry in primary school and again at ages ranging from 20 to 56 years. Three thousand sixty-six children were diagnosed with hearing loss; the remaining sample had normal childhood hearing. Compared with participants with normal childhood hearing, those diagnosed with childhood hearing loss caused by otitis media with effusion (n = 1255), chronic suppurative otitis media (CSOM; n = 108), or hearing loss after recurrent acute otitis media (rAOM; n = 613) had significantly increased adult hearing thresholds in the whole frequency range (2 dB/17-20 dB/7-10 dB, respectively). The effects were adjusted for age, sex, and noise exposure. Children diagnosed with hearing loss after rAOM had somewhat improved hearing thresholds as adults. The effects of CSOM and hearing loss after rAOM on adult hearing thresholds were larger in participants tested in middle adulthood (ages 40 to 56 years) than in those tested in young adulthood (ages 20 to 40 years). Eardrum pathology added a marginally increased risk of adult hearing loss (1-3 dB) in children with otitis media with effusion or hearing loss after rAOM. The study could not reveal significant differences in the effect of self-reported noise exposure on adult hearing thresholds between the groups with OM and the group with normal childhood hearing. This cohort study indicates that CSOM and rAOM in childhood are associated with adult hearing loss, underlining the importance of optimal treatment in these conditions. It appears that ears with hearing loss after childhood OM age at a faster rate than those without; however, this should be confirmed by studies with several follow-up tests through adulthood.

  5. Temporary and Permanent Noise-induced Threshold Shifts: A Review of Basic and Clinical Observations.

    PubMed

    Ryan, Allen F; Kujawa, Sharon G; Hammill, Tanisha; Le Prell, Colleen; Kil, Jonathan

    2016-09-01

    To review basic and clinical findings relevant to defining temporary (TTS) and permanent (PTS) threshold shifts and their sequelae. Relevant scientific literature and government definitions were broadly reviewed. The definitions and characteristics of TTS and PTS were assessed and recent advances that expand our knowledge of the extent, nature, and consequences of noise-induced hearing loss were reviewed. Exposure to intense sound can produce TTS, acute changes in hearing sensitivity that recover over time, or PTS, a loss that does not recover to preexposure levels. In general, a threshold shift ≥10 dB at 2, 3, and 4 kHz is required for reporting purposes in human studies. The high-frequency regions of the cochlea are most sensitive to noise damage. Resonance of the ear canal also results in a frequency region of high-noise sensitivity at 4 to 6 kHz. A primary noise target is the cochlear hair cell. Although the mechanisms that underlie such hair cell damage remain unclear, there is evidence to support a role for reactive oxygen species, stress pathway signaling, and apoptosis. Another target is the synapse between the hair cell and the primary afferent neurons. Large numbers of these synapses and their neurons can be lost after noise, even though hearing thresholds may return to normal. This affects auditory processing and detection of signals in noise. The consequences of TTS and PTS include significant deficits in communication that can impact performance of military duties or obtaining/retaining civilian employment. Tinnitus and exacerbation of posttraumatic stress disorder are also potential sequelae.
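
    The reporting criterion mentioned here (a threshold shift of ≥10 dB at 2, 3, and 4 kHz) lends itself to a short worked example. The sketch below is a hedged illustration only: it checks the average shift across those three audiometric frequencies, in the spirit of an OSHA-style standard threshold shift, and the baseline and follow-up audiograms are hypothetical values, not data from this review.

```python
# Illustrative sketch: flag a threshold shift of >= 10 dB averaged over
# 2, 3, and 4 kHz relative to a baseline audiogram. Frequencies in Hz,
# thresholds in dB HL. The exact reporting rule varies by regulation;
# this is an assumed, simplified version for illustration only.

def mean_threshold_shift(baseline, current, freqs=(2000, 3000, 4000)):
    """Mean shift in dB across the given audiometric frequencies."""
    return sum(current[f] - baseline[f] for f in freqs) / len(freqs)

def meets_reporting_criterion(baseline, current, criterion_db=10.0):
    return mean_threshold_shift(baseline, current) >= criterion_db

baseline = {2000: 5, 3000: 10, 4000: 15}    # hypothetical pre-exposure audiogram
followup = {2000: 15, 3000: 25, 4000: 30}   # hypothetical post-exposure audiogram
print(round(mean_threshold_shift(baseline, followup), 1))  # 13.3
print(meets_reporting_criterion(baseline, followup))       # True
```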

  6. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

    This study examined vowel perception by young normal-hearing (YNH) adults, in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in terms of performance on vowel discrimination in conditions of similar audibility, achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  7. Infant vocalizations and the early diagnosis of severe hearing impairment.

    PubMed

    Eilers, R E; Oller, D K

    1994-02-01

    To determine whether late onset of canonical babbling could be used as a criterion to determine risk of hearing impairment, we obtained vocalization samples longitudinally from 94 infants with normal hearing and 37 infants with severe to profound hearing impairment. Parents were instructed to report the onset of canonical babbling (the production of well-formed syllables such as "da," "na," "bee," "yaya"). Verification that the infants were producing canonical syllables was collected in laboratory audio recordings. Infants with normal hearing produced canonical vocalizations before 11 months of age (range, 3 to 10 months; mode, 7 months); infants who were deaf failed to produce canonical syllables until 11 months of age or older, often well into the third year of life (range, 11 to 49 months; mode, 24 months). The correlation between age at onset of the canonical stage and age at auditory amplification was 0.68, indicating that early identification and fitting of hearing aids is of significant benefit to infants learning language. The fact that there is no overlap in the distribution of the onset of canonical babbling between infants with normal hearing and infants with hearing impairment means that the failure of otherwise healthy infants to produce canonical syllables before 11 months of age should be considered a serious risk factor for hearing impairment and, when observed, should result in immediate referral for audiologic evaluation.

  8. Masking Release in Children and Adults With Hearing Loss When Using Amplification

    PubMed Central

    McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia

    2016-01-01

    Purpose This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. Results Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. Conclusions The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed. PMID:26540194

  9. Static and dynamic balance of children and adolescents with sensorineural hearing loss.

    PubMed

    Melo, Renato de Souza; Marinho, Sônia Elvira Dos Santos; Freire, Maryelly Evelly Araújo; Souza, Robson Arruda; Damasceno, Hélio Anderson Melo; Raposo, Maria Cristina Falcão

    2017-01-01

    To assess the static and dynamic balance performance of students with normal hearing and with sensorineural hearing loss. A cross-sectional study assessed 96 students of both sexes, 48 with normal hearing and 48 with sensorineural hearing loss, aged 7 to 18 years. To evaluate static balance, the Romberg, Romberg-Barré and Fournier tests were used; for dynamic balance, we applied the Unterberger test. Students with hearing loss showed more changes in static and dynamic balance than normal-hearing students in all tests used (p<0.001). The same difference was found when subjects were grouped by sex. For females, the Romberg, Romberg-Barré, Fournier and Unterberger test p values were, respectively, p=0.004, p<0.001, p<0.001 and p=0.023; for males, the p values were p=0.009, p<0.001, p<0.001 and p=0.002, respectively. The same difference was observed when students were classified by age. For students aged 7 to 10 years, the p values for the Romberg, Romberg-Barré and Fournier tests were, respectively, p=0.007, p<0.001 and p=0.001; for those aged 11 to 14 years, the p values for the Romberg, Romberg-Barré, Fournier and Unterberger tests were p=0.002, p<0.001, p<0.001 and p=0.015, respectively; and for those aged 15 to 18 years, the p values for the Romberg-Barré, Fournier and Unterberger tests were, respectively, p=0.037, p<0.001 and p=0.037. Students with hearing loss showed more changes in static and dynamic balance than normal-hearing students of the same sex and age groups.

  10. Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.

    PubMed

    Huber, Maria; Kipman, Ulrike; Pletzer, Belinda

    2014-07-01

    The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower than or comparable to that of their normal-hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular, we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. Twenty-three children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal-hearing peers matched by age, gender, and social background participated in this case control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal-hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks compared to normal-hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.

  11. Intelligibility of foreign-accented speech: Effects of listening condition, listener age, and listener hearing status

    NASA Astrophysics Data System (ADS)

    Ferguson, Sarah Hargus

    2005-09-01

    It is well known that, for listeners with normal hearing, speech produced by non-native speakers of the listener's first language is less intelligible than speech produced by native speakers. Intelligibility is well correlated with listeners' ratings of talker comprehensibility and accentedness, which have been shown to be related to several talker factors, including age of second language acquisition and level of similarity between the talker's native and second language phoneme inventories. Relatively few studies have focused on factors extrinsic to the talker. The current project explored the effects of listener and environmental factors on the intelligibility of foreign-accented speech. Specifically, monosyllabic English words previously recorded from two talkers, one a native speaker of American English and the other a native speaker of Spanish, were presented to three groups of listeners (young listeners with normal hearing, elderly listeners with normal hearing, and elderly listeners with hearing impairment; n=20 each) in three different listening conditions (undistorted words in quiet, undistorted words in 12-talker babble, and filtered words in quiet). Data analysis will focus on interactions between talker accent, listener age, listener hearing status, and listening condition. [Project supported by American Speech-Language-Hearing Association AARC Award.]

  12. Masking Release in Children and Adults with Hearing Loss When Using Amplification

    ERIC Educational Resources Information Center

    Brennan, Marc; McCreery, Ryan; Kopun, Judy; Lewis, Dawna; Alexander, Joshua; Stelmachowicz, Patricia

    2016-01-01

    Purpose: This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. Method: Sentence recognition in unmodulated noise was compared with recognition…

  13. Speech-in-Noise Tests and Supra-threshold Auditory Evoked Potentials as Metrics for Noise Damage and Clinical Trial Outcome Measures.

    PubMed

    Le Prell, Colleen G; Brungart, Douglas S

    2016-09-01

    In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.

  14. Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

    PubMed Central

    Dietz, Mathias; Hohmann, Volker; Jürgens, Tim

    2015-01-01

    For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
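
    As a rough illustration of the two simulation strategies described above, the sketch below implements a generic noise vocoder (a stand-in for cochlear implant processing) and a low-pass filter (a stand-in for residual low-frequency acoustic hearing). The channel count, band edges, cutoff frequency, and filter orders are illustrative assumptions, not the parameters used in this study.

```python
# Hedged sketch: noise-vocoder "CI ear" and low-pass "acoustic ear".
# All parameters below are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass_sos(lo, hi, fs, order=4):
    return butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")

def noise_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Replace the fine structure in each band with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = bandpass_sos(lo, hi, fs)
        band = sosfiltfilt(sos, x)
        envelope = np.abs(hilbert(band))               # temporal envelope of the band
        carrier = sosfiltfilt(sos, np.random.randn(len(x)))
        out += envelope * carrier
    return out

def residual_low_pass(x, fs, cutoff=500.0, order=4):
    """Crude stand-in for residual low-frequency acoustic hearing."""
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 16000
x = np.random.randn(fs)              # placeholder; use a speech recording in practice
ci_ear = noise_vocoder(x, fs)
acoustic_ear = residual_low_pass(x, fs)
```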

  15. Audiogram of a striped dolphin (Stenella coeruleoalba)

    NASA Astrophysics Data System (ADS)

    Kastelein, Ronald A.; Hagedoorn, Monique; Au, Whitlow W. L.; de Haan, Dick

    2003-02-01

    The underwater hearing sensitivity of a striped dolphin was measured in a pool using standard psycho-acoustic techniques. The go/no-go response paradigm and up-down staircase psychometric method were used. Auditory sensitivity was measured by using 12 narrow-band frequency-modulated signals having center frequencies between 0.5 and 160 kHz. The 50% detection threshold was determined for each frequency. The resulting audiogram for this animal was U-shaped, with hearing capabilities from 0.5 to 160 kHz (8 1/3 oct). Maximum sensitivity (42 dB re 1 μPa) occurred at 64 kHz. The range of most sensitive hearing (defined as the frequency range with sensitivities within 10 dB of maximum sensitivity) was from 29 to 123 kHz (approximately 2 oct). The animal's hearing became less sensitive below 32 kHz and above 120 kHz. Sensitivity decreased by about 8 dB per octave below 1 kHz and fell sharply at a rate of about 390 dB per octave above 140 kHz.
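
    For readers unfamiliar with the up-down staircase procedure mentioned above, the sketch below simulates a basic 1-up/1-down track, which converges on the 50% detection point used for the audiogram. The simulated listener, starting level, step size, and stopping rule are assumptions for illustration, not the parameters of this study.

```python
# Hedged sketch of a 1-up/1-down adaptive staircase converging on the
# 50% detection threshold. The "listener" is a simulated psychometric function.
import random

def simulated_response(level_db, true_threshold_db, slope=1.0):
    """Return True ('detected') with probability rising around the true threshold."""
    p = 1.0 / (1.0 + 10 ** (-slope * (level_db - true_threshold_db) / 10.0))
    return random.random() < p

def staircase(true_threshold_db, start_db=80.0, step_db=4.0, n_reversals=8):
    level, direction, reversals = start_db, -1, []
    while len(reversals) < n_reversals:
        detected = simulated_response(level, true_threshold_db)
        new_direction = -1 if detected else +1      # step down after a hit, up after a miss
        if new_direction != direction:              # direction change = reversal
            reversals.append(level)
            direction = new_direction
        level += new_direction * step_db
    return sum(reversals[2:]) / len(reversals[2:])  # average, discarding early reversals

random.seed(1)
print(round(staircase(true_threshold_db=42.0), 1))  # estimate lands near 42 dB
```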

  16. A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.

    PubMed

    Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan

    2016-07-01

    Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests. American Academy of Audiology.
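
    A minimal sketch of how the localization component of such a test might be scored is shown below. It wraps the angular difference between reported and true talker azimuths into the range [-180, 180) and applies a simple talker-count check; the scene layout, listener responses, and nearest-talker matching rule are hypothetical simplifications, not the study's scoring procedure.

```python
# Hedged sketch: counting accuracy and angular localization error for a
# hypothetical multitalker scene. Matching each response to the nearest true
# talker is a simplification (no one-to-one assignment is enforced).
def angular_error(true_deg, est_deg):
    """Smallest signed azimuth difference, wrapped into [-180, 180)."""
    return ((est_deg - true_deg + 180.0) % 360.0) - 180.0

true_talkers = [-60.0, 0.0, 45.0]   # simulated talker azimuths (degrees)
responses = [-45.0, 10.0]           # listener reported only two talkers
count_correct = len(responses) == len(true_talkers)
errors = [min(abs(angular_error(t, r)) for t in true_talkers) for r in responses]
print(count_correct, errors)        # False, [15.0, 10.0]
```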

  17. Evaluation of Extended-Wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2016-07-01

    listeners without degrading auditory situational awareness. To this point, significant progress has been made in this evaluation process. The devices...provide long-term hearing protection for listeners with normal hearing with minimal impact on auditory situational awareness and minimal annoyance due to...Test Plan: A comprehensive test plan is complete for the measurements at AFRL, which will incorporate goals 1-2 and 4-5 above using a normal

  18. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and nonnative english speakers.

    PubMed

    Calandruccio, Lauren; Bradlow, Ann R; Dhar, Sumitrajit

    2014-04-01

    Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared with native-accented English speech was reported in Calandruccio et al. (2010a). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Three listener groups were tested, including monolingual English speakers with normal hearing, nonnative English speakers with normal hearing, and monolingual English speakers with hearing loss. The nonnative English speakers were from various native language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetric mild sloping to moderate sensorineural hearing loss. Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the key words within the sentences (100 key words per masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and listener groups. Monolingual English speakers with normal hearing benefited when the competing speech signal was foreign accented compared with native accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the nonnative English-speaking listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Slight modifications between the target and the masker speech allowed monolingual English speakers with normal hearing to improve their recognition of native-accented English, even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker or determining why monolingual normal-hearing listeners can take advantage of these differences could help improve speech recognition for those with hearing loss in the future. American Academy of Audiology.
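
    The spectral normalization step described above (matching the long-term average speech spectra of the maskers) can be approximated with a short filtering sketch. The function below is an assumed, generic implementation: it estimates each signal's long-term spectrum with Welch's method, derives per-frequency gains toward a reference spectrum, and applies them with an FIR filter designed by frequency sampling. It is not the authors' procedure.

```python
# Hedged sketch: shape one masker's long-term average spectrum toward a
# reference signal's spectrum. Segment length, filter length, and the
# placeholder signals are assumptions for illustration.
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def ltass_match(masker, reference, fs, numtaps=513, eps=1e-12):
    """Filter `masker` so its long-term average spectrum approximates `reference`'s."""
    f, p_mask = welch(masker, fs=fs, nperseg=1024)
    _, p_ref = welch(reference, fs=fs, nperseg=1024)
    gain = np.sqrt((p_ref + eps) / (p_mask + eps))   # magnitude correction per frequency
    fir = firwin2(numtaps, f, gain, fs=fs)           # odd numtaps: no Nyquist-gain constraint
    return lfilter(fir, [1.0], masker)

fs = 16000
rng = np.random.default_rng(0)
masker = rng.standard_normal(fs * 5)                                   # placeholder masker
reference = lfilter([1.0], [1.0, -0.9], rng.standard_normal(fs * 5))   # spectrally tilted reference
matched = ltass_match(masker, reference, fs)
```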

  19. Children with minimal sensorineural hearing loss: prevalence, educational performance, and functional status.

    PubMed

    Bess, F H; Dodd-Murphy, J; Parker, R A

    1998-10-01

    This study was designed to determine the prevalence of minimal sensorineural hearing loss (MSHL) in school-age children and to assess the relationship of MSHL to educational performance and functional status. To determine prevalence, a single-staged sampling frame of all schools in the district was created for 3rd, 6th, and 9th grades. Schools were selected with probability proportional to size in each grade group. The final study sample was 1218 children. To assess the association of MSHL with educational performance, children identified with MSHL were assigned as cases into a subsequent case-control study. Scores of the Comprehensive Test of Basic Skills (4th Edition) (CTBS/4) then were compared between children with MSHL and children with normal hearing. School teachers completed the Screening Instrument for Targeting Education Risk (SIFTER) and the Revised Behavior Problem Checklist for a subsample of children with MSHL and their normally hearing counterparts. Finally, data on grade retention for a sample of children with MSHL were obtained from school records and compared with school district norm data. To assess the relationship between MSHL and functional status, test scores of all children with MSHL and all children with normal hearing in grades 6 and 9 were compared on the COOP Adolescent Chart Method (COOP), a screening tool for functional status. MSHL was exhibited by 5.4% of the study sample. The prevalence of all types of hearing impairment was 11.3%. Third grade children with MSHL exhibited significantly lower scores than normally hearing controls on a series of subtests of the CTBS/4; however, no differences were noted at the 6th and 9th grade levels. The SIFTER results revealed that children with MSHL scored poorer on the communication subtest than normal-hearing controls. Thirty-seven percent of the children with MSHL failed at least one grade. Finally, children with MSHL exhibited significantly greater dysfunction than children with normal hearing on several subtests of the COOP including behavior, energy, stress, social support, and self-esteem. The prevalence of hearing loss in the schools almost doubles when children with MSHL are included. This large, education-based study shows clinically important associations between MSHL and school behavior and performance. Children with MSHL experienced more difficulty than normally hearing children on a series of educational and functional test measures. Although additional research is necessary, results suggest the need for audiologists, speech-language pathologists, and educators to evaluate carefully our identification and management approaches with this population. Better efforts to manage these children could result in meaningful improvement in their educational progress and psychosocial well-being.

  20. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    PubMed

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly among Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare the social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants who were selected by convenience sampling. The Raven test was administered to both groups to ensure a normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using Statistical Package for Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  1. The missing link in language development of deaf and hard of hearing children: pragmatic language development.

    PubMed

    Goberis, Dianne; Beams, Dinah; Dalpes, Molly; Abrisch, Amanda; Baca, Rosalinda; Yoshinaga-Itano, Christine

    2012-11-01

    This article will provide information about the Pragmatics Checklist, which consists of 45 items and is scored as: (1) not present, (2) present but preverbal, (3) present with one to three words, and (4) present with complex language. Information for both children who are deaf or hard of hearing and those with normal hearing is presented. Children who are deaf or hard of hearing are significantly older when demonstrating skill with complex language than their normal-hearing peers. In general, even at the age of 7 years, there are several items that are not mastered by 75% of the deaf or hard of hearing children. Additionally, the article will provide some suggestions for strategies that can be considered as a means to facilitate the development of these pragmatic language skills for children who are deaf or hard of hearing. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  2. Stability of the Medial Olivocochlear Reflex as Measured by Distortion Product Otoacoustic Emissions

    ERIC Educational Resources Information Center

    Mishra, Srikanta K.; Abdala, Carolina

    2015-01-01

    Purpose: The purpose of this study was to assess the repeatability of a fine-resolution, distortion product otoacoustic emission (DPOAE)-based assay of the medial olivocochlear (MOC) reflex in normal-hearing adults. Method: Data were collected during 36 test sessions from 4 normal-hearing adults to assess short-term stability and 5 normal-hearing…

  3. Variation in hearing within a wild population of beluga whales (Delphinapterus leucas).

    PubMed

    Mooney, T Aran; Castellote, Manuel; Quakenbush, Lori; Hobbs, Roderick; Gaglione, Eric; Goertz, Caroline

    2018-05-08

    Documenting hearing abilities is vital to understanding a species' acoustic ecology and for predicting the impacts of increasing anthropogenic noise. Cetaceans use sound for essential biological functions such as foraging, navigation and communication; hearing is considered to be their primary sensory modality. Yet, we know little regarding the hearing of most, if not all, cetacean populations, which limits our understanding of their sensory ecology, population level variability and the potential impacts of increasing anthropogenic noise. We obtained audiograms (5.6-150 kHz) of 26 wild beluga whales to measure hearing thresholds during capture-release events in Bristol Bay, AK, USA, using auditory evoked potential methods. The goal was to establish the baseline population audiogram, incidences of hearing loss and general variability in wild beluga whales. In general, belugas showed sensitive hearing with low thresholds (<80 dB) from 16 to 100 kHz, and most individuals (76%) responded to at least 120 kHz. Despite belugas often showing sensitive hearing, thresholds were usually above or approached the low ambient noise levels measured in the area, suggesting that a quiet environment may be associated with hearing sensitivity and that hearing thresholds in the most sensitive animals may have been masked. Although this is just one wild population, the success of the method suggests that it should be applied to other populations and species to better assess potential differences. Bristol Bay beluga audiograms showed substantial (30-70 dB) variation among individuals; this variation increased at higher frequencies. Differences among individual belugas reflect that testing multiple individuals of a population is necessary to best describe maximum sensitivity and population variance. The results of this study quadruple the number of individual beluga whales for which audiograms have been conducted and provide the first auditory data for a population of healthy wild odontocetes. © 2018. Published by The Company of Biologists Ltd.

  4. Speech Intelligibility and Prosody Production in Children with Cochlear Implants

    PubMed Central

    Chin, Steven B.; Bergeson, Tonya R.; Phan, Jennifer

    2012-01-01

    Objectives The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants. Methods The Beginner's Intelligibility Test (BIT) and Prosodic Utterance Production (PUP) task were administered to 15 children who use cochlear implants and 10 children with normal hearing. Adult listeners with normal hearing judged the intelligibility of the words in the BIT sentences, identified the PUP sentences as one of four grammatical or emotional moods (i.e., declarative, interrogative, happy, or sad), and rated the PUP sentences according to how well they thought the child conveyed the designated mood. Results Percent correct scores were higher for intelligibility than for prosody and higher for children with normal hearing than for children with cochlear implants. Declarative sentences were most readily identified and received the highest ratings by adult listeners; interrogative sentences were least readily identified and received the lowest ratings. Correlations between intelligibility and all mood identification and rating scores except declarative were not significant. Discussion The findings suggest that the development of speech intelligibility progresses ahead of prosody in both children with cochlear implants and children with normal hearing; however, children with normal hearing still perform better than children with cochlear implants on measures of intelligibility and prosody even after accounting for hearing age. Problems with interrogative intonation may be related to more general restrictions on rising intonation, and the correlation results indicate that intelligibility and sentence intonation may be relatively dissociated at these ages. PMID:22717120

  5. Free Field Word recognition test in the presence of noise in normal hearing adults.

    PubMed

    Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge

    In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. To present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure-tone audiometry, a speech recognition test was applied in a free field condition with monosyllables and disyllables, with standardized material, in three listening situations: optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free field environment was arranged in which speech was presented to the subject being tested from two speakers located at 45°, and noise from a third speaker located at 180°. All participants had speech audiometry results in the free field between 88% and 100% in the three listening situations. The Word Recognition Test in Free Field in the Presence of Noise proved to be easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring noise interference with speech perception abilities. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
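
    The fixed signal-to-noise ratios used in this test (0 dB and -10 dB) imply a simple level-scaling step when stimuli are prepared digitally. The sketch below shows one conventional, RMS-based way to scale a noise signal to a target SNR relative to a speech signal; the placeholder signals are assumptions, not the test's recorded material.

```python
# Hedged sketch: mix speech and noise at a prescribed signal-to-noise ratio,
# with SNR defined from the RMS levels of the two signals.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so 20*log10(rms(speech)/rms(scaled noise)) equals snr_db, then mix."""
    noise = noise[: len(speech)]
    scale = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + scale * noise

fs = 44100
speech = np.random.randn(2 * fs)   # placeholder for recorded monosyllables/disyllables
noise = np.random.randn(2 * fs)    # placeholder for the competing noise
mixes = {snr: mix_at_snr(speech, noise, snr) for snr in (0, -10)}
```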

  6. Auditory Evoked Potentials for the Evaluation of Hearing Sensitivity in Navy Dolphins. Modification P00002: Assessment of Hearing Sensitivity in Adult Male Elephant Seals

    DTIC Science & Technology

    2006-12-30

    hearing in the potential and underwater behavioral hearing thresholds in four bottlenose beluga Delphinapterus leucas ," Dokl. Akad. Nauk SSSR 294...313, "Auditory filter shapes for the bottlenose dolphin (Tursiops truncatus) and 238-241. the white whale ( Delphinapterus leucas ) derived with...Rickards, F. W., Cohen, L. T., De Vidi, S., and Clark, G. M. of a beluga whale, Delphinapterus leucas ," Aquat. Mamm. 26, 212-228. (1995). "The

  7. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  8. Influences of Working Memory and Audibility on Word Learning in Children with Hearing Loss

    ERIC Educational Resources Information Center

    Stiles, Derek Jason

    2010-01-01

    As a group, children with hearing loss demonstrate delays in language development relative to their peers with normal hearing. Early intervention has a profound impact on language outcomes in children with hearing loss. Data examining the relationship between degree of hearing loss and language outcomes are variable. Two approaches are used in the…

  9. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech, and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. Therefore, the aim of the study was to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample consisted of 92 Persian 5-to-7-year-old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. The sentence repetition scores for the V-only, A-only, and AV presentations differed significantly in all three groups; in other words, the highest to lowest scores belonged respectively to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P < 0.001). According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.

  10. Positive Experiences and Life Aspirations among Adolescents with and without Hearing Impairments.

    ERIC Educational Resources Information Center

    Magen, Zipora

    1990-01-01

    Comparison of 79 normally hearing and 42 hearing-impaired adolescents found no differences regarding the intensity of their remembered positive experiences. Hearing-impaired subjects reported more positive interpersonal experiences, rarely experienced positive experiences "with self," and showed less desire for transpersonal commitment,…

  11. 38 CFR 17.149 - Sensori-neural aids.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... attendance or by reason of being permanently housebound; (6) Those who have a visual or hearing impairment... normally occurring visual or hearing impairments; and (8) Those visually or hearing impaired so severely... frequency ranges which contribute to a loss of communication ability; however, hearing aids are to be...

  12. Auditory maturation in premature infants: a potential pitfall for early cochlear implantation.

    PubMed

    Hof, Janny R; Stokroos, Robert J; Wix, Eduard; Chenault, Mickey; Gelders, Els; Brokx, Jan

    2013-08-01

    To describe spontaneous hearing improvement in the first years of life of a number of preterm neonates relative to cochlear implant candidacy. Retrospective case study. Hearing levels of 14 preterm neonates (mean gestational age at birth = 29 weeks) referred after newborn hearing screening were evaluated. Initial hearing thresholds ranged from 40 to 105 dBHL (mean = 85 dBHL). Hearing level improved to normal levels for four neonates and to moderate levels for five, whereas for five neonates, no improvement in hearing thresholds was observed and cochlear implantation was recommended. Three of the four neonates in whom the hearing improved to normal levels were born prior to 28 weeks gestational age. Hearing improvement was mainly observed prior to a gestational age of 80 weeks. Delayed maturation of an immature auditory pathway might be an important reason for referral after newborn hearing screening in premature infants. Caution is advised regarding early cochlear implantation in preterm born infants. Audiological follow-ups until at least 80 weeks gestational age are therefore recommended. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  13. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was a (1) simple visual probe, (2) a complex visual probe, or (3) the category of word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions, (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.

  14. 29 CFR 101.20 - Formal hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... proceeding. (c) The hearing, usually open to the public, is held before a hearing officer who normally is an attorney or field examiner attached to the Regional Office but may be another qualified Agency official...

  15. The Effects of Acoustic Bandwidth on Simulated Bimodal Benefit in Children and Adults with Normal Hearing.

    PubMed

    Sheffield, Sterling W; Simha, Michelle; Jahn, Kelly N; Gifford, René H

    2016-01-01

    The primary purpose of this study was to examine the effect of acoustic bandwidth on bimodal benefit for speech recognition in normal-hearing children with a cochlear implant (CI) simulation in one ear and low-pass filtered stimuli in the contralateral ear. The effect of acoustic bandwidth on bimodal benefit in children was compared with the pattern of adults with normal hearing. Our hypothesis was that children would require a wider acoustic bandwidth than adults to (1) derive bimodal benefit, and (2) obtain asymptotic bimodal benefit. Nineteen children (6 to 12 years) and 10 adults with normal hearing participated in the study. Speech recognition was assessed via recorded sentences presented in a 20-talker babble. The AzBio female-talker sentences were used for the adults and the pediatric AzBio sentences (BabyBio) were used for the children. A CI simulation was presented to the right ear and low-pass filtered stimuli were presented to the left ear with the following cutoff frequencies: 250, 500, 750, 1000, and 1500 Hz. The primary findings were (1) adults achieved higher performance than children when presented with only low-pass filtered acoustic stimuli, (2) adults and children performed similarly in all the simulated CI and bimodal conditions, (3) children gained significant bimodal benefit with the addition of low-pass filtered speech at 250 Hz, and (4) unlike previous studies completed with adult bimodal patients, adults and children with normal hearing gained additional significant bimodal benefit with cutoff frequencies up to 1500 Hz with most of the additional benefit gained with energy below 750 Hz. Acoustic bandwidth effects on simulated bimodal benefit were similar in children and adults with normal hearing. Should the current results generalize to children with CIs, these results suggest pediatric CI recipients may derive significant benefit from minimal acoustic hearing (<250 Hz) in the nonimplanted ear and increasing benefit with broader bandwidth. Knowledge of the effect of acoustic bandwidth on bimodal benefit in children may help direct clinical decisions regarding a second CI, continued bimodal hearing, and even optimizing acoustic amplification for the nonimplanted ear.

  16. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  17. Audio-visual integration during speech perception in prelingually deafened Japanese children revealed by the McGurk effect.

    PubMed

    Tona, Risa; Naito, Yasushi; Moroto, Saburo; Yamamoto, Rinko; Fujiwara, Keizo; Yamazaki, Hiroshi; Shinohara, Shogo; Kikuchi, Masahiro

    2015-12-01

    To investigate the McGurk effect in profoundly deafened Japanese children with cochlear implants (CI) and in normal-hearing children. This was done to identify how children with profound deafness using CI established audiovisual integration during the speech acquisition period. Twenty-four prelingually deafened children with CI and 12 age-matched normal-hearing children participated in this study. Responses to audiovisual stimuli were compared between deafened and normal-hearing controls. Additionally, responses of the children with CI younger than 6 years of age were compared with those of the children with CI at least 6 years of age at the time of the test. Responses to stimuli combining auditory labials and visual non-labials were significantly different between deafened children with CI and normal-hearing controls (p<0.05). Additionally, the McGurk effect tended to be more induced in deafened children older than 6 years of age than in their younger counterparts. The McGurk effect was more significantly induced in prelingually deafened Japanese children with CI than in normal-hearing, age-matched Japanese children. Despite having good speech-perception skills and auditory input through their CI, from early childhood, deafened children may use more visual information in speech perception than normal-hearing children. As children using CI need to communicate based on insufficient speech signals coded by CI, additional activities of higher-order brain function may be necessary to compensate for the incomplete auditory input. This study provided information on the influence of deafness on the development of audiovisual integration related to speech, which could contribute to our further understanding of the strategies used in spoken language communication by prelingually deafened children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
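    The ERB representation referred to above can be illustrated with the standard Glasberg and Moore approximation, ERB(f) = 24.7 (4.37 f/1000 + 1). The sketch below is illustrative only: it shows ERB-spaced analysis bands of the kind such an algorithm would use, and the chosen frequency range and number of bands are assumptions; it does not reproduce the paper's GMMSE gain rule or masking-threshold estimation.

```python
import numpy as np

def erb_bandwidth(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter centered
    at f_hz, using the Glasberg & Moore (1990) approximation."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_number(f_hz):
    """Frequency (Hz) mapped to the ERB-number scale."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_center_frequencies(f_lo, f_hi, n_bands):
    """Center frequencies spaced uniformly on the ERB-number scale."""
    e = np.linspace(erb_number(f_lo), erb_number(f_hi), n_bands)
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

if __name__ == "__main__":
    # Example: 32 ERB-spaced bands spanning an assumed hearing-aid bandwidth.
    for fc in erb_center_frequencies(100.0, 8000.0, 32):
        print(f"fc = {fc:7.1f} Hz, ERB = {erb_bandwidth(fc):6.1f} Hz")
```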

  19. Modern prescription theory and application: realistic expectations for speech recognition with hearing aids.

    PubMed

    Johnson, Earl E

    2013-01-01

    A major decision at the time of hearing aid fitting and dispensing is the amount of amplification to provide listeners (both adult and pediatric populations) for the appropriate compensation of sensorineural hearing impairment across a range of frequencies (e.g., 160-10000 Hz) and input levels (e.g., 50-75 dB sound pressure level). This article describes modern prescription theory for hearing aids within the context of a risk versus return trade-off and efficient frontier analyses. The expected return of amplification recommendations (i.e., generic prescriptions such as National Acoustic Laboratories-Non-Linear 2, NAL-NL2, and Desired Sensation Level Multiple Input/Output, DSL m[i/o]) for the Speech Intelligibility Index (SII) and high-frequency audibility was traded against a potential risk (i.e., loudness). The modeled performance of each prescription was compared one with another and with the efficient frontier of normal hearing sensitivity (i.e., a reference point for the most return with the least risk). For the pediatric population, NAL-NL2 was more efficient for SII, while DSL m[i/o] was more efficient for high-frequency audibility. For the adult population, NAL-NL2 was more efficient for SII, while the two prescriptions were similar with regard to high-frequency audibility. In terms of absolute return (i.e., not considering the risk of loudness), however, DSL m[i/o] prescribed more outright high-frequency audibility than NAL-NL2 for either population, particularly as hearing loss increased. Given the principles and demonstrated accuracy of desensitization (reduced utility of audibility with increasing hearing loss) observed at the group level, additional high-frequency audibility beyond that of NAL-NL2 is not expected to make further contributions to speech intelligibility (recognition) for the average listener.
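    The efficient-frontier comparison described above can be sketched generically: each candidate fitting is characterized by an expected return (e.g., SII) and a risk (e.g., predicted loudness), and the frontier is the set of non-dominated candidates. The numbers and labels below are hypothetical and purely for illustration; they are not values produced by NAL-NL2 or DSL m[i/o].

```python
from typing import List, Tuple

def efficient_frontier(points: List[Tuple[str, float, float]]) -> List[str]:
    """Return labels of non-dominated candidates, where each candidate is
    (label, risk, return) and a candidate is dominated if another has
    risk <= and return >= with at least one strict inequality."""
    frontier = []
    for name, risk, ret in points:
        dominated = any(
            (r2 <= risk and g2 >= ret) and (r2 < risk or g2 > ret)
            for _, r2, g2 in points
        )
        if not dominated:
            frontier.append(name)
    return frontier

if __name__ == "__main__":
    # Hypothetical (loudness risk, SII return) pairs for illustration only.
    candidates = [
        ("prescription A", 20.0, 0.62),
        ("prescription B", 24.0, 0.66),
        ("prescription C", 27.0, 0.64),  # dominated by B: more risk, less return
    ]
    print(efficient_frontier(candidates))  # ['prescription A', 'prescription B']
```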

  20. A Point Mutation in the Gene for Asparagine-Linked Glycosylation 10B (Alg10b) Causes Nonsyndromic Hearing Impairment in Mice (Mus musculus)

    PubMed Central

    Probst, Frank J.; Corrigan, Rebecca R.; del Gaudio, Daniela; Salinger, Andrew P.; Lorenzo, Isabel; Gao, Simon S.; Chiu, Ilene; Xia, Anping

    2013-01-01

    The study of mouse hearing impairment mutants has led to the identification of a number of human hearing impairment genes and has greatly furthered our understanding of the physiology of hearing. The novel mouse mutant neurological/sensory 5 (nse5) demonstrates a significantly reduced or absent startle response to sound and is therefore a potential murine model of human hearing impairment. Genetic analysis of 500 intercross progeny localized the mutant locus to a 524 kilobase (kb) interval on mouse chromosome 15. A missense mutation in a highly-conserved amino acid was found in the asparagine-linked glycosylation 10B gene (Alg10b), which is within the critical interval for the nse5 mutation. A 20.4 kb transgene containing a wildtype copy of the Alg10b gene rescued the mutant phenotype in nse5/nse5 homozygous animals, confirming that the mutation in Alg10b is responsible for the nse5/nse5 mutant phenotype. Homozygous nse5/nse5 mutants had abnormal auditory brainstem responses (ABRs), distortion product otoacoustic emissions (DPOAEs), and cochlear microphonics (CMs). Endocochlear potentials (EPs), on the other hand, were normal. ABRs and DPOAEs also confirmed the rescue of the mutant nse5/nse5 phenotype by the wildtype Alg10b transgene. These results suggested a defect in the outer hair cells of mutant animals, which was confirmed by histologic analysis. This is the first report of a mutation in a gene involved in the asparagine (N)-linked glycosylation pathway causing nonsyndromic hearing impairment, and it suggests that the hearing apparatus, and the outer hair cells in particular, are exquisitely sensitive to perturbations of the N-linked glycosylation pathway. PMID:24303013

  1. Effects of Sex and Gender on Adaptation to Space: Neurosensory Systems

    PubMed Central

    Cohen, Helen S.; Cerisano, Jody M.; Clayton, Janine A.; Cromwell, Ronita; Danielson, Richard W.; Hwang, Emma Y.; Tingen, Candace; Allen, John R.; Tomko, David L.

    2014-01-01

    Sex and gender differences have long been a research topic of interest, yet few studies have explored the specific differences in neurological responses between men and women during and after spaceflight. Knowledge in this field is limited due to the significant disproportion of sexes enrolled in the astronaut corps. Research indicates that general neurological and sensory differences exist between the sexes, such as those in laterality of amygdala activity, sensitivity and discrimination in vision processing, and neuronal cell death (apoptosis) pathways. In spaceflight, sex differences may include a higher incidence of entry and space motion sickness and of post-flight vestibular instability in female as opposed to male astronauts who flew on both short- and long-duration missions. Hearing and auditory function in crewmembers shows the expected hearing threshold differences between men and women, in which female astronauts exhibit better hearing thresholds. Longitudinal observations of hearing thresholds for crewmembers yield normal age-related decrements; however, no evidence of sex-related differences from spaceflight has been observed. The impact of sex and gender differences should be studied by making spaceflight accessible and flying more women into space. Only in this way will we know if increasingly longer-duration missions cause significantly different neurophysiological responses in men and women. PMID:25401941

  2. Travels of "Sound"

    ERIC Educational Resources Information Center

    Raghuraman, Renuka Sundaram

    2009-01-01

    Some children are born with a hearing loss. Other children, initially, hear normally, but progressively lose their hearing over time. Other reasons for hearing loss include illness, accidents, genes, trauma, or, simply, a fluke of nature. With the right tools and optimal intervention, most children adapt well and lead active lives just like anyone…

  3. The Concept of Fractional Number among Hearing-Impaired Students.

    ERIC Educational Resources Information Center

    Titus, Janet C.

    This study investigated hearing-impaired students' understanding of the mathematical concept of fractional numbers, as measured by their ability to determine the order and equivalence of fractional numbers. Twenty-one students (ages 10-16) with hearing impairments were compared with 26 students with normal hearing. The study concluded that…

  4. Clinical Validity of hearScreen™ Smartphone Hearing Screening for School Children.

    PubMed

    Mahomed-Asmail, Faheema; Swanepoel, De Wet; Eikelboom, Robert H; Myburgh, Hermanus C; Hall, James

    2016-01-01

    The study aimed to determine the validity of a smartphone hearing screening technology (hearScreen™) compared with conventional screening audiometry in terms of (1) sensitivity and specificity, (2) referral rate, and (3) test time. One thousand and seventy school-age children in grades 1 to 3 (mean age 8 ± 1.1 years) were recruited from five public schools. Children were screened twice, once using conventional audiometry and once with the smartphone hearing screening. Screening was conducted in a counterbalanced sequence, alternating the initial screen between conventional and smartphone hearing screening. No statistically significant difference in performance between techniques was noted, with smartphone screening demonstrating equivalent sensitivity (75.0%) and specificity (98.5%) to conventional screening audiometry. Referral rates were lower with the smartphone screening (3.2 vs. 4.6%), but the difference was not significant (p > 0.05). Smartphone screening (hearScreen™) was 12.3% faster than conventional screening. Smartphone hearing screening using the hearScreen™ application is accurate and time efficient.
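    For reference, the screening metrics quoted above follow from a standard two-by-two comparison against the reference test. The counts in the sketch below are hypothetical and serve only to illustrate the formulas; they are not the study data.

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and referral rate from a 2x2 table in which
    the reference (conventional) screen defines true hearing status."""
    sensitivity = tp / (tp + fn)            # proportion of true failures referred
    specificity = tn / (tn + fp)            # proportion of true passes not referred
    referral_rate = (tp + fp) / (tp + fp + tn + fn)
    return sensitivity, specificity, referral_rate

if __name__ == "__main__":
    # Hypothetical counts for a cohort of 1000 children (not study data).
    sens, spec, refer = screening_metrics(tp=30, fp=15, tn=945, fn=10)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, referral rate {refer:.1%}")
```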

  5. Development of Joint Engagement in Young Deaf and Hearing Children: Effects of Chronological Age and Language Skills

    ERIC Educational Resources Information Center

    Cejas, Ivette; Barker, David H.; Quittner, Alexandra L.; Niparko, John K.

    2014-01-01

    Purpose: To evaluate joint engagement (JE) in age-matched children with and without hearing and its relationship to oral language skills. Method: Participants were 180 children with severe-to-profound hearing loss prior to cochlear implant surgery, and 96 age-matched children with normal hearing; all parents were hearing. JE was evaluated in a…

  6. Comparison of self-reported and audiometrically-measured hearing loss in the Australian Defence Force.

    PubMed

    Kirk, Katherine M; McGuire, Annabel; Nasveld, Peter E; Treloar, Susan A

    2012-04-01

    To investigate the relationship between self-reported and audiometrically-measured hearing loss in a sample of Australian Defence Force personnel. Responses to a question regarding hearing problems were compared with contemporaneous audiometric data. Participants were 3335 members of the Australian Defence Force for whom anonymised medical records were available. The sensitivity of self-report data to identify higher-frequency hearing loss was lower than sensitivity at other frequencies, and positive predictive values were moderate to poor at all frequencies. Performance characteristics of self-report compared with audiometric data also varied with age, sex, and rank. While self-report hearing loss data have good performance characteristics for estimating prevalence of hearing loss as defined by audiometric criteria, this study indicates that the usefulness of self-report data in identifying individuals with hearing loss may be limited in this population.

  7. Sentence intelligibility during segmental interruption and masking by speech-modulated noise: Effects of age and hearing loss

    PubMed Central

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2015-01-01

    This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities. In contrast, the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436

  8. Development and validation of a smartphone-based digits-in-noise hearing test in South African English.

    PubMed

    Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas

    2015-07-01

    The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) spoken by a female first-language English speaker were recorded. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. Results obtained with five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South Africa. The mean SRT and speech recognition functions correspond to previously developed telephone-based digits-in-noise tests.
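    Digits-in-noise tests commonly estimate the SRT with a one-up/one-down adaptive track on whole-triplet scoring, taking the SRT as the mean SNR over the later trials. The sketch below follows that general rule; the 2 dB step size, track length, discard count, and the toy listener model are assumptions, not the published parameters of this smartphone application.

```python
import random

def simulate_listener(snr_db, srt_true=-8.0, slope_per_db=0.20):
    """Toy response model: probability of repeating a whole triplet correctly
    rises at roughly slope_per_db around srt_true (for demonstration only)."""
    p = min(1.0, max(0.0, 0.5 + slope_per_db * (snr_db - srt_true)))
    return random.random() < p

def digits_in_noise_srt(respond, start_snr=0.0, step_db=2.0, n_trials=24, n_discard=4):
    """One-up/one-down adaptive track on triplet correctness.
    respond(snr) must return True if all three digits were reported correctly."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step_db if respond(snr) else step_db
    return sum(track[n_discard:]) / len(track[n_discard:])

if __name__ == "__main__":
    random.seed(1)
    print(f"estimated SRT: {digits_in_noise_srt(simulate_listener):.1f} dB SNR")
```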

  9. Speech intelligibility with helicopter noise: tests of three helmet-mounted communication systems.

    PubMed

    Ribera, John E; Mozo, Ben T; Murphy, Barbara A

    2004-02-01

    Military aviator helmet communications systems are designed to enhance speech intelligibility (SI) in background noise and reduce exposure to harmful levels of noise. Some aviators, over the course of their aviation career, develop noise-induced hearing loss that may affect their ability to perform required tasks. New technology can improve SI in noise for aviators with normal hearing as well as those with hearing loss. SI in noise scores were obtained from 40 rotary-wing aviators (20 with normal hearing and 20 with hearing-loss waivers). Three communication systems were evaluated: a standard SPH-4B, an SPH-4B aviator helmet modified with a communications earplug (CEP), and an SPH-4B modified with active noise reduction (ANR). Subjects' SI was better in noise with newer technologies than with the standard issue aviator helmet. A significant number of aviators on waivers for hearing loss performed within the range of their normal hearing counterparts when wearing the newer technology. The rank order of perceived speech clarity was 1) CEP, 2) ANR, and 3) unmodified SPH-4B. To ensure optimum SI in noise for rotary-wing aviators, consideration should be given to retrofitting existing aviator helmets with new technology, and incorporating such advances in communication systems of the future. Review of standards for determining fitness to fly is needed.

  10. Longitudinal Development of Distortion Product Otoacoustic Emissions in Infants With Normal Hearing.

    PubMed

    Hunter, Lisa L; Blankenship, Chelsea M; Keefe, Douglas H; Feeney, M Patrick; Brown, David K; McCune, Annie; Fitzpatrick, Denis F; Lin, Li

    2018-01-23

    The purpose of this study was to describe normal characteristics of distortion product otoacoustic emission (DPOAE) signal and noise level in a group of newborns and infants with normal hearing followed longitudinally from birth to 15 months of age. This is a prospective, longitudinal study of 231 infants who passed newborn hearing screening and were verified to have normal hearing. Infants were enrolled from a well-baby nursery and two neonatal intensive care units (NICUs) in Cincinnati, OH. Normal hearing was confirmed with threshold auditory brainstem response and visual reinforcement audiometry. DPOAEs were measured in up to four study visits over the first year after birth. Stimulus frequencies f1 and f2 were used with f2/f1 = 1.22, and the DPOAE was recorded at frequency 2f1-f2. A longitudinal repeated-measure linear mixed model design was used to study changes in DPOAE level and noise level as related to age, middle ear transfer, race, and NICU history. Significant changes in the DPOAE and noise levels occurred from birth to 12 months of age. DPOAE levels were the highest at 1 month of age. The largest decrease in DPOAE level occurred between 1 and 5 months of age in the mid to high frequencies (2 to 8 kHz) with minimal changes occurring between 6, 9, and 12 months of age. The decrease in DPOAE level was significantly related to a decrease in wideband absorbance at the same f2 frequencies. DPOAE noise level increased only slightly with age over the first year with the highest noise levels in the 12-month-old age range. Minor, nonsystematic effects for NICU history, race, and gestational age at birth were found; thus, these results are generalizable to commonly seen clinical populations. DPOAE levels were related to wideband middle ear absorbance changes in this large sample of infants confirmed to have normal hearing at auditory brainstem response and visual reinforcement audiometry testing. This normative database can be used to evaluate clinical results from birth to 1 year of age. The distributions of DPOAE level and signal to noise ratio data reported herein across frequency and age in normal-hearing infants who were healthy or had NICU histories may be helpful to detect the presence of hearing loss in infants.
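    Given the stimulus parameters stated in this protocol (f2/f1 = 1.22, response recorded at 2f1 - f2), the primary and distortion-product frequencies can be computed directly for any nominal f2; the particular f2 values in the example below are assumptions chosen only for illustration.

```python
def dpoae_frequencies(f2_hz, ratio=1.22):
    """Return (f1, f2, fdp) for a DPOAE paradigm with f2/f1 = ratio and the
    cubic difference tone recorded at 2*f1 - f2."""
    f1 = f2_hz / ratio
    fdp = 2.0 * f1 - f2_hz
    return f1, f2_hz, fdp

if __name__ == "__main__":
    for f2 in (2000, 4000, 8000):   # illustrative nominal f2 frequencies, Hz
        f1, f2_out, fdp = dpoae_frequencies(f2)
        print(f"f2 = {f2_out:5d} Hz -> f1 = {f1:7.1f} Hz, 2f1-f2 = {fdp:7.1f} Hz")
```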

  11. Effect of Stimulus Level and Bandwidth on Speech-Evoked Envelope Following Responses in Adults With Normal Hearing.

    PubMed

    Easwar, Vijayalakshmi; Purcell, David W; Aiken, Steven J; Parsa, Vijay; Scollie, Susan D

    2015-01-01

    The use of auditory evoked potentials as an objective outcome measure in infants fitted with hearing aids has gained interest in recent years. This article proposes a test paradigm using speech-evoked envelope following responses (EFRs) for use as an objective-aided outcome measure. The method uses a running speech-like, naturally spoken stimulus token /susa∫i/ (fundamental frequency [f0] = 98 Hz; duration 2.05 sec), to elicit EFRs by eight carriers representing low, mid, and high frequencies. Each vowel elicited two EFRs simultaneously, one from the region of formant one (F1) and one from the higher formants region (F2+). The simultaneous recording of two EFRs was enabled by lowering f0 in the region of F1 alone. Fricatives were amplitude modulated to enable recording of EFRs from high-frequency spectral regions. The present study aimed to evaluate the effect of level and bandwidth on speech-evoked EFRs in adults with normal hearing. As well, the study aimed to test convergent validity of the EFR paradigm by comparing it with changes in behavioral tasks due to bandwidth. Single-channel electroencephalogram was recorded from the vertex to the nape of the neck over 300 sweeps in two polarities from 20 young adults with normal hearing. To evaluate the effects of level in experiment I, EFRs were recorded at test levels of 50 and 65 dB SPL. To evaluate the effects of bandwidth in experiment II, EFRs were elicited by /susa∫i/ low-pass filtered at 1, 2, and 4 kHz, presented at 65 dB SPL. The 65 dB SPL condition from experiment I represented the full bandwidth condition. EFRs were averaged across the two polarities and estimated using a Fourier analyzer. An F test was used to determine whether an EFR was detected. Speech discrimination using the University of Western Ontario Distinctive Feature Differences test and sound quality rating using the Multiple Stimulus Hidden Reference and Anchors paradigm were measured in identical bandwidth conditions. In experiment I, the increase in level resulted in a significant increase in response amplitudes for all eight carriers (mean increase of 14 to 50 nV) and the number of detections (mean increase of 1.4 detections). In experiment II, an increase in bandwidth resulted in a significant increase in the number of EFRs detected until the low-pass filtered 4 kHz condition and carrier-specific changes in response amplitude until the full bandwidth condition. Scores in both behavioral tasks increased with bandwidth up to the full bandwidth condition. The number of detections and composite amplitude (sum of all eight EFR amplitudes) significantly correlated with changes in behavioral test scores. Results suggest that the EFR paradigm is sensitive to changes in level and audible bandwidth. This may be a useful tool as an objective-aided outcome measure considering its running speech-like stimulus, representation of spectral regions important for speech understanding, level and bandwidth sensitivity, and clinically feasible test times. This paradigm requires further validation in individuals with hearing loss, with and without hearing aids.
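    EFR detection with a Fourier-analyzer/F-test approach is conventionally done by comparing the power in the FFT bin at the response frequency with the mean power in neighboring bins. The sketch below follows that convention; the number of noise bins, the alpha level, and the simulated input are assumptions, and it does not reproduce the authors' exact analyzer.

```python
import numpy as np
from scipy.stats import f as f_dist

def efr_f_test(averaged_sweep, fs, target_hz, n_noise_bins=30, alpha=0.05):
    """F-test for a response at target_hz: power in the target FFT bin versus
    the mean power of n_noise_bins bins on either side."""
    spec = np.fft.rfft(averaged_sweep * np.hanning(len(averaged_sweep)))
    power = np.abs(spec) ** 2
    freqs = np.fft.rfftfreq(len(averaged_sweep), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    noise_idx = [i for i in range(k - n_noise_bins, k + n_noise_bins + 1)
                 if i != k and 0 <= i < len(power)]
    f_ratio = power[k] / np.mean(power[noise_idx])
    # Signal bin contributes ~2 df; the noise estimate ~2 * len(noise_idx) df.
    f_crit = f_dist.ppf(1.0 - alpha, 2, 2 * len(noise_idx))
    return f_ratio, f_crit, f_ratio > f_crit

if __name__ == "__main__":
    fs, dur, f0 = 8000, 2.05, 98.0   # duration and f0 taken from the stimulus description
    t = np.arange(int(fs * dur)) / fs
    sweep = 1e-6 * np.sin(2 * np.pi * f0 * t) + 1e-6 * np.random.randn(len(t))
    print(efr_f_test(sweep, fs, f0))
```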

  12. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and non-native speakers of English

    PubMed Central

    Calandruccio, Lauren; Bradlow, Ann R.; Dhar, Sumitrajit

    2013-01-01

    Background: Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared to native-accented English speech was reported in Calandruccio, Dhar and Bradlow (2010). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. Purpose: The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. Research Design: A mixed model design with within- (four two-talker speech maskers) and between-subject (listener group) factors was conducted. Speech maskers included native-accented English speech, and high-intelligibility, moderate-intelligibility and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Study Sample: Three listener groups were tested including monolingual English speakers with normal hearing, non-native speakers of English with normal hearing, and monolingual speakers of English with hearing loss. The non-native speakers of English were from various native-language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetrical, mild sloping to moderate sensorineural hearing loss. Data Collection and Analysis: Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the keywords within the sentences (100 keywords/masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and the listener groups. Results: Monolingual speakers of English with normal hearing benefited when the competing speech signal was foreign-accented compared to native-accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the non-native English listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Conclusions: Slight modifications between the target and the masker speech allowed monolingual speakers of English with normal hearing to improve their recognition of native-accented English even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences could help improve speech recognition for those with hearing loss in the future. PMID:25126683
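    One simple way to approximate the long-term average speech spectrum (LTASS) normalization described above is to estimate each masker's LTASS with Welch's method and apply a frequency-dependent gain in the STFT domain so that its spectrum matches a reference masker. This is a sketch under those assumptions, not the authors' exact procedure; the filter lengths and the toy signals are illustrative.

```python
import numpy as np
from scipy.signal import welch, stft, istft

def match_ltass(signal, reference, fs, nperseg=1024):
    """Filter `signal` so its long-term average spectrum approximates that of
    `reference`, using per-bin gains derived from Welch spectral estimates."""
    f, p_sig = welch(signal, fs, nperseg=nperseg)
    _, p_ref = welch(reference, fs, nperseg=nperseg)
    gain = np.sqrt((p_ref + 1e-12) / (p_sig + 1e-12))
    f_stft, _, z = stft(signal, fs, nperseg=nperseg)
    z *= np.interp(f_stft, f, gain)[:, None]   # apply the gain to every frame
    _, matched = istft(z, fs, nperseg=nperseg)
    return matched[: len(signal)]

if __name__ == "__main__":
    # Toy example with noise standing in for the recorded speech maskers.
    fs = 16000
    rng = np.random.default_rng(0)
    reference = rng.standard_normal(fs * 5)                      # "native-accented" stand-in
    masker = np.convolve(rng.standard_normal(fs * 5),
                         np.ones(8) / 8, mode="same")            # spectrally tilted stand-in
    equalized = match_ltass(masker, reference, fs)
    print(equalized.shape)
```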

  13. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants.

    PubMed

    Spitzer, Stephanie; Liss, Julie; Spahr, Tony; Dorman, Michael; Lansford, Kaitlin

    2009-06-01

    Fundamental frequency (F0) variation is one of a number of acoustic cues normal hearing listeners use for guiding lexical segmentation of degraded speech. This study examined whether F0 contour facilitates lexical segmentation by listeners fitted with cochlear implants (CIs). Lexical boundary error patterns elicited under unaltered and flattened F0 conditions were compared across three groups: listeners with conventional CI, listeners with CI and preserved low-frequency acoustic hearing, and normal hearing listeners subjected to CI simulations. Results indicate that all groups attended to syllabic stress cues to guide lexical segmentation, and that F0 contours facilitated performance for listeners with low-frequency hearing.

  14. The time course of learning during a vowel discrimination task by hearing-impaired and masked normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Davis, Carrie; Kewley-Port, Diane; Coughlin, Maureen

    2002-05-01

    Vowel discrimination was compared between a group of young, well-trained listeners with mild-to-moderate sensorineural hearing impairment (YHI), and a matched group of normal hearing, noise-masked listeners (YNH). Unexpectedly, discrimination of F1 and F2 in the YHI listeners was equal to or better than that observed in YNH listeners in three conditions of similar audibility [Davis et al., J. Acoust. Soc. Am. 109, 2501 (2001)]. However, in the same time interval, the YHI subjects completed an average of 55% more blocks of testing than the YNH group. New analyses were undertaken to examine the time course of learning during the vowel discrimination task, to determine whether performance was affected by the number of trials. Learning curves for a set of vowels in the F1 and F2 regions showed no significant differences between the YHI and YNH listeners. Thus, while the YHI subjects completed more trials overall, they achieved a level of discrimination similar to that of their normal-hearing peers within the same number of blocks. Implications of discrimination performance in relation to hearing status and listening strategies will be discussed. [Work supported by NIHDCD-02229.]

  15. An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students.

    PubMed

    Makagon, Maja M; Funayama, E Sumie; Owren, Michael J

    2008-07-01

    Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations.

  16. Temporal and spatio-temporal vibrotactile displays for voice fundamental frequency: an initial evaluation of a new vibrotactile speech perception aid with normal-hearing and hearing-impaired individuals.

    PubMed

    Auer, E T; Bernstein, L E; Coulter, D C

    1998-10-01

    Four experiments were performed to evaluate a new wearable vibrotactile speech perception aid that extracts fundamental frequency (F0) and displays the extracted F0 as a single-channel temporal or an eight-channel spatio-temporal stimulus. Specifically, we investigated the perception of intonation (i.e., question versus statement) and emphatic stress (i.e., stress on the first, second, or third word) under Visual-Alone (VA), Visual-Tactile (VT), and Tactile-Alone (TA) conditions and compared performance using the temporal and spatio-temporal vibrotactile display. Subjects were adults with normal hearing in experiments I-III and adults with severe to profound hearing impairments in experiment IV. Both versions of the vibrotactile speech perception aid successfully conveyed intonation. Vibrotactile stress information was successfully conveyed, but vibrotactile stress information did not enhance performance in VT conditions beyond performance in VA conditions. In experiment III, which involved only intonation identification, a reliable advantage for the spatio-temporal display was obtained. Differences between subject groups were obtained for intonation identification, with more accurate VT performance by those with normal hearing. Possible effects of long-term hearing status are discussed.

  17. Role of DFNB1 mutations in hereditary hearing loss among assortative mating hearing impaired families from South India.

    PubMed

    Amritkumar, Pavithra; Jeffrey, Justin Margret; Chandru, Jayasankaran; Vanniya S, Paridhy; Kalaimathi, M; Ramakrishnan, Rajagopalan; Karthikeyen, N P; Srikumari Srisailapathy, C R

    2018-06-19

    DFNB1, the first locus to have been associated with deafness, has two major genes, GJB2 and GJB6, whose mutations have played a vital role in hearing impairment across many ethnicities in the world. In the present study we focused on the role of these mutations in assortative mating hearing impaired families from south India. One hundred and six assortatively mating hearing impaired (HI) families of south Indian origin, comprising two subsets (60 deaf marrying deaf [DXD] families and 46 deaf marrying normal hearing [DXN] families), were recruited for this study. In the 60 DXD families, 335 members comprising 118 HI mates, 63 other HI members and 154 normal hearing members and in the 46 DXN families, 281 members comprising 46 HI and their 43 normal hearing partners, 50 other HI members and 142 normal hearing family members, participated in the molecular study. One hundred and sixty five (165) healthy normal hearing volunteers were recruited as controls for this study. All participating members were screened for variants in the GJB2 and GJB6 genes, and the outcomes of these mutations in begetting deaf offspring were compared across generations. The DFNB1 allele frequencies for DXD mates and their offspring were 36.98 and 38.67%, respectively, and for the DXN mates and their offspring were 22.84 and 24.38%, respectively. There was a 4.6% increase in the subsequent generation in the DXD families and a 6.75% increase in the DXN families, which demonstrates the role of assortative mating along with consanguinity in the increase of DFNB1 mutations in consecutive generations. Four novel variants, p.E42D (in the GJB2 gene) and p.Q57R, p.E101Q, and p.R104H (in the GJB6 gene), were also identified in this study. This is the first study from the Indian subcontinent reporting novel variants in the coding region of the GJB6 gene. This is perhaps the first study in the world to test, in real time, the hypothesis proposed by Nance et al. in 2000 (that intense phenotypic assortative mating can double the frequency of the commonest forms of recessive deafness [DFNB1]) in an assortatively mating HI parental generation and their offspring.
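    The generational increases quoted above are consistent with relative (not absolute) changes in DFNB1 allele frequency. The short check below only re-derives the reported percentages from the figures given in the abstract; small differences reflect rounding of the underlying frequencies.

```python
def relative_increase(parent_pct, offspring_pct):
    """Relative change in allele frequency across one generation, in percent."""
    return 100.0 * (offspring_pct - parent_pct) / parent_pct

if __name__ == "__main__":
    print(f"DXD: {relative_increase(36.98, 38.67):.1f}%")  # ~4.6%, as reported
    print(f"DXN: {relative_increase(22.84, 24.38):.1f}%")  # ~6.7%, reported as 6.75%
```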

  18. Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease.

    PubMed

    Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa; Chung, Won-Ho

    2013-06-01

    Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD.

  19. Hearing impairment in preterm very low birthweight babies detected at term by brainstem auditory evoked responses.

    PubMed

    Jiang, Z D; Brosi, D M; Wilkinson, A R

    2001-12-01

    Seventy preterm babies who were born with a birthweight <1500 g were studied with brainstem auditory evoked responses (BAER) at 37-42 wk of postconceptional age. The data were compared with those of normal term neonates to determine the prevalence of hearing impairment in preterm very low birthweight (VLBW) babies when they reached term. The BAER was recorded with click stimuli at 21 s⁻¹. Wave I and V latencies increased significantly (ANOVA p < 0.01 and 0.001). I-V and III-V intervals also increased significantly (p < 0.05 and 0.001). Wave V amplitude and V/I amplitude ratio did not differ significantly from those in the normal term controls. Ten of the 70 VLBW babies had a significant elevation in BAER threshold (>30 dB normal hearing level). Eleven had an increase in I-V interval (>2.5 SD above the mean in the normal controls) and one had a decrease in V/I amplitude ratio (<0.45). These results suggest that 14% (10/70) of the VLBW babies had a peripheral hearing impairment and 17% (12/70) a central impairment. Three babies had both an increase in I-V interval and an elevation in BAER threshold, suggesting that 4% (3/70) had both peripheral and central impairments. Thus, the total prevalence of hearing impairment was 27% (19/70). About one in four preterm VLBW babies has peripheral and/or central hearing impairment at term. VLBW and its associated unfavourable perinatal factors predispose the babies to hearing impairment.
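    The prevalence figures above combine the peripheral and central findings with an overlap of three babies; the totals can be checked directly from the counts reported in the abstract, as in the short calculation below.

```python
def prevalence(count, total):
    """Prevalence as a percentage."""
    return 100.0 * count / total

if __name__ == "__main__":
    total_babies = 70
    peripheral, central, both = 10, 12, 3            # counts reported in the abstract
    any_impairment = peripheral + central - both     # union of the two groups
    print(f"peripheral: {prevalence(peripheral, total_babies):.0f}%")      # 14%
    print(f"central:    {prevalence(central, total_babies):.0f}%")         # 17%
    print(f"either:     {prevalence(any_impairment, total_babies):.0f}%")  # 27%
```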

  20. Influence of implantable hearing aids and neuroprostheses on music perception.

    PubMed

    Rahne, Torsten; Böhme, Lars; Götze, Gerrit

    2012-01-01

    The identification and discrimination of timbre are essential features of music perception. One dominating parameter within the multidimensional timbre space is the spectral shape of complex sounds. As hearing loss interferes with the perception and enjoyment of music, we assessed individual timbre discrimination skills in individuals with severe to profound hearing loss using a cochlear implant (CI) and normal hearing individuals using a bone-anchored hearing aid (Baha). With a recently developed behavioral test relying on synthetic sounds forming a spectral continuum, the timbre difference was changed adaptively to measure the individual just noticeable difference (JND) in a forced-choice paradigm. To explore the differences in timbre perception abilities caused by the hearing mode, the sound stimuli were varied in their fundamental frequency, thus generating different spectra which are not completely covered by a CI or Baha system. The resulting JNDs demonstrate differences in timbre perception between normal hearing individuals, Baha users, and CI users. Besides physiological factors, technical limitations appear to be the main contributing factors.

  1. Informational Masking and Spatial Hearing in Listeners with and without Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Purpose: This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. Method: Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and…

  2. Using Standardized Psychometric Tests to Identify Learning Disabilities in Students with Sensorineural Hearing Impairments.

    ERIC Educational Resources Information Center

    Sikora, Darryn M.; Plapinger, Donald S.

    1994-01-01

    The use of standardized psychoeducational diagnostic instruments to identify learning disabilities was evaluated with 19 students (ages 7 to 13) with sensorineural hearing impairments. Students with hearing impairment were found to demonstrate learning disabilities with a frequency similar to that found in students with normal hearing, suggesting…

  3. [Bilateral cochlear implants].

    PubMed

    Müller, J

    2017-07-01

    Cochlear implants (CI) are standard for the hearing rehabilitation of severe to profound deafness. Nowadays, if bilaterally indicated, bilateral implantation is usually recommended (in accordance with German guidelines). Bilateral implantation enables better speech discrimination in quiet and in noise, and restores directional and spatial hearing. Children with bilateral CI are able to undergo hearing-based hearing and speech development. Within the scope of their individual possibilities, bilaterally implanted children develop faster than children with unilateral CI and attain, e.g., a larger vocabulary within a certain time interval. Only bilateral implantation allows "binaural hearing," with all the benefits that people with normal hearing profit from, namely better speech discrimination in quiet and in noise, as well as directional and spatial hearing. Naturally, these developments take time. Binaural CI users benefit from the same effects as normal hearing persons: head shadow effect, squelch effect, and summation and redundancy effects. Sequential CI fitting is not necessarily disadvantageous; both simultaneously and sequentially fitted patients benefit in a similar way. For children, the earliest possible fitting and the shortest possible interval between the two surgeries seem to positively influence the outcome if bilateral CI are indicated.

  4. Four odontocete species change hearing levels when warned of impending loud sound.

    PubMed

    Nachtigall, Paul E; Supin, Alexander Ya; Pacini, Aude F; Kastelein, Ronald A

    2018-03-01

    Hearing sensitivity change was investigated when a warning sound preceded a loud sound in the false killer whale (Pseudorca crassidens), the bottlenose dolphin (Tursiops truncatus), the beluga whale (Delphinapterus leucas) and the harbor porpoise (Phocoena phocoena). Hearing sensitivity was measured using pip-train test stimuli and auditory evoked potential recording. When the test/warning stimuli preceded a loud sound, hearing thresholds before the loud sound increased relative to the baseline by 13 to 17 dB. Experiments with multiple frequencies of exposure and shift provided evidence of different amounts of hearing change depending on frequency, indicating that the hearing sensation level changes were not likely due to a simple stapedial reflex. © 2017 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  5. Four cases of acoustic neuromas with normal hearing.

    PubMed

    Valente, M; Peterein, J; Goebel, J; Neely, J G

    1995-05-01

    In 95 percent of the cases, patients with acoustic neuromas will have some magnitude of hearing loss in the affected ear. This paper reports on four patients who had acoustic neuromas and normal hearing. Results from the case history, audiometric evaluation, auditory brainstem response (ABR), electroneurography (ENOG), and vestibular evaluation are reported for each patient. For all patients, the presence of unilateral tinnitus was the most common complaint. Audiologically, elevated or absent acoustic reflex thresholds and abnormal ABR findings were the most powerful diagnostic tools.

  6. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  7. Assessment of Language Comprehension of 6-Year-Old Deaf Children.

    ERIC Educational Resources Information Center

    Geffner, Donna S.; Freeman, Lisa Rothman

    1980-01-01

    Results show that comprehension of word types (nouns, verbs, etc.) and linguistic structure can be orderly, producing a hierarchy of complexity similar to that found in normally hearing children. However, performance was about three years behind that of normally hearing peers. Journal availability: Elsevier North Holland, Inc., 52 Vanderbilt…

  8. The carrier rate and mutation spectrum of genes associated with hearing loss in South China hearing female population of childbearing age

    PubMed Central

    2013-01-01

    Background: Given that hearing loss occurs in 1 to 3 of 1,000 live births and approximately 90 to 95 percent of them are born into hearing families, it is important to gain a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the general population. Methods: 7,263 unrelated women of childbearing age with normal hearing and without family history of hearing loss were tested with allele-specific PCR-based universal array. Further genetic testing was provided to the spouses of the screened carriers. For those couples at risk, multiple choices were provided, including prenatal diagnosis. Results: Among the 7,263 normal hearing participants, 303 subjects carried pathogenic mutations included in the screening chip, yielding a carrier rate of 4.17%. Of the 303 screened carriers, 282 harbored heterozygous mutated genes associated with autosomal recessive hearing loss, and 95 spouses took further genetic tests. Eight of the 9 couples who harbored deafness-causing mutations in the same gene received prenatal diagnosis. Conclusions: Given that nearly 90 to 95 percent of deaf and hard-of-hearing babies are born into hearing families, a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the female population of childbearing age may be important for carrier screening and genetic counseling. PMID:23718755

  9. The carrier rate and mutation spectrum of genes associated with hearing loss in South China hearing female population of childbearing age.

    PubMed

    Yin, Aihua; Liu, Chang; Zhang, Yan; Wu, Jing; Mai, Mingqin; Ding, Hongke; Yang, Jiexia; Zhang, Xiaozhuang

    2013-05-29

    Given that hearing loss occurs in 1 to 3 of 1,000 live births and approximately 90 to 95 percent of them are born into hearing families, it is important to gain a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the general population. 7,263 unrelated women of childbearing age with normal hearing and without family history of hearing loss were tested with allele-specific PCR-based universal array. Further genetic testing was provided to the spouses of the screened carriers. For those couples at risk, multiple choices were provided, including prenatal diagnosis. Among the 7,263 normal hearing participants, 303 subjects carried pathogenic mutations included in the screening chip, yielding a carrier rate of 4.17%. Of the 303 screened carriers, 282 harbored heterozygous mutated genes associated with autosomal recessive hearing loss, and 95 spouses took further genetic tests. Eight of the 9 couples who harbored deafness-causing mutations in the same gene received prenatal diagnosis. Given that nearly 90 to 95 percent of deaf and hard-of-hearing babies are born into hearing families, a better understanding of the carrier rate and mutation spectrum of genes associated with hearing impairment in the female population of childbearing age may be important for carrier screening and genetic counseling.

  10. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Binaural pitch fusion: Comparison of normal-hearing and hearing-impaired listeners

    PubMed Central

    Reiss, Lina A. J.; Shayman, Corey S.; Walker, Emily P.; Bennett, Keri O.; Fowler, Jennifer R.; Hartling, Curtis L.; Glickman, Bess; Lasarev, Michael R.; Oh, Yonghee

    2017-01-01

    Binaural pitch fusion is the fusion of dichotically presented tones that evoke different pitches between the ears. In normal-hearing (NH) listeners, the frequency range over which binaural pitch fusion occurs is usually <0.2 octaves. Recently, broad fusion ranges of 1–4 octaves were demonstrated in bimodal cochlear implant users. In the current study, it was hypothesized that hearing aid (HA) users would also exhibit broad fusion. Fusion ranges were measured in both NH and hearing-impaired (HI) listeners with hearing losses ranging from mild-moderate to severe-profound, and relationships of fusion range with demographic factors and with diplacusis were examined. Fusion ranges of NH and HI listeners averaged 0.17 ± 0.13 octaves and 1.7 ± 1.5 octaves, respectively. In HI listeners, fusion ranges were positively correlated with a principal component measure of the covarying factors of young age, early age of hearing loss onset, and long durations of hearing loss and HA use, but not with hearing threshold, amplification level, or diplacusis. In NH listeners, no correlations were observed with age, hearing threshold, or diplacusis. The association of broad fusion with early onset, long duration of hearing loss suggests a possible role of long-term experience with hearing loss and amplification in the development of broad fusion. PMID:28372056

  12. Children Using Cochlear Implants Capitalize on Acoustical Hearing for Music Perception

    PubMed Central

    Hopyan, Talar; Peretz, Isabelle; Chan, Lisa P.; Papsin, Blake C.; Gordon, Karen A.

    2012-01-01

    Cochlear implants (CIs) electrically stimulate the auditory nerve, providing children who are deaf with access to speech and music. Because of device limitations, it was hypothesized that children using CIs develop abnormal perception of musical cues. Perception of pitch and rhythm as well as memory for music was measured by the children’s version of the Montreal Battery of Evaluation of Amusia (MBEA) in 23 unilateral CI users and 22 age-matched children with normal hearing. Children with CIs were less accurate than their normal hearing peers (p < 0.05). CI users were best able to discern rhythm changes (p < 0.01) and to remember musical pieces (p < 0.01). Contrary to expectations, abilities to hear cues in music improved as the age at implantation increased (p < 0.01), because the children implanted at older ages also had better low-frequency hearing prior to cochlear implantation and were able to use this hearing by wearing hearing aids. Access to early acoustical hearing in the lower frequency ranges appears to establish a base for music perception, which can be accessed with later electrical CI hearing. PMID:23133430

  13. Audiological findings in Noonan syndrome.

    PubMed

    Tokgoz-Yilmaz, Suna; Turkyilmaz, Meral Didem; Cengiz, Filiz Basak; Sjöstrand, Alev Pektas; Kose, Serdal Kenan; Tekin, Mustafa

    2016-10-01

    The aim of this study was to evaluate audiologic properties of patients with Noonan syndrome and compare these findings with those of unaffected peers. The study included 17 children with Noonan syndrome and 20 typically developing children without Noonan syndrome. Pure tone and speech audiometry, immittancemetric measurement, otoacoustic emissions measurement and auditory brainstem response tests were applied to all (n = 37) children. Hearing thresholds of children with Noonan syndrome were higher (poorer) than those observed in unaffected peers, although the hearing sensitivity of both groups was within normal limits (p = 0.013 for right, p = 0.031 for left ear). Transient evoked otoacoustic emissions amplitudes of the children with Noonan syndrome were lower than those of the children without Noonan syndrome (p = 0.005 for right, p = 0.002 for left ear). Middle ear pressures and auditory brainstem response values were within normal limits and there was no difference between the two groups (p > 0.05). A general benefit of the present study is that it characterizes the audiologic findings of children with Noonan syndrome, which is useful for clinics evaluating these children. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

    The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker can generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker was then compared with performance on those same words spoken by three novel talkers and on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list in which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than a focus on talker-specific acoustic characteristics. On return in 6 mo, listeners performed significantly better on the trained words relative to their initial baseline performance. Although the listeners performed significantly better on trained versus untrained words in isolation, once the trained words were embedded in sentences, no improvement in recognition over untrained words within the same sentences was shown. Older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker and to the same degree as young normal-hearing listeners. The improved performance was maintained across talkers and across time. This might imply that training a listener using a standardized list and talker may still provide benefit when these same words are presented by novel talkers outside the clinic. However, training on isolated words was not sufficient to transfer to fluent speech for the specific sentence materials used within this study. Further investigation is needed regarding approaches to improve a hearing aid user's speech understanding in everyday communication situations.

  15. Emergent Literacy Skills in Preschool Children with Hearing Loss Who Use Spoken Language: Initial Findings from the Early Language and Literacy Acquisition (ELLA) Study

    ERIC Educational Resources Information Center

    Werfel, Krystal L.

    2017-01-01

    Purpose: The purpose of this study was to compare change in emergent literacy skills of preschool children with and without hearing loss over a 6-month period. Method: Participants included 19 children with hearing loss and 14 children with normal hearing. Children with hearing loss used amplification and spoken language. Participants completed…

  16. Speech Intelligibility in Persian Hearing Impaired Children with Cochlear Implants and Hearing Aids.

    PubMed

    Rezaei, Mohammad; Emadi, Maryam; Zamani, Peyman; Farahani, Farhad; Lotfi, Gohar

    2017-04-01

    The aim of the present study was to evaluate and compare speech intelligibility in hearing impaired children with cochlear implants (CI), hearing aid (HA) users, and children with normal hearing (NH). The sample consisted of 45 Persian-speaking children aged 3 to 5 years in Hamadan, divided into three groups of 15 children each: children with NH, children with CI, and children using HAs. Participants were evaluated with a test of speech intelligibility level. Results of an ANOVA on the speech intelligibility test showed that NH children performed significantly better than hearing impaired children with CI and HA. Post-hoc analysis using the Scheffe test indicated that the mean speech intelligibility score of normal children was higher than that of the HA and CI groups, but the difference between the mean speech intelligibility of children with hearing loss using cochlear implants and those using HAs was not significant. It is clear that, even with remarkable advances in HA technology, many hearing impaired children continue to find speech production a challenging problem. Given that speech intelligibility is a key element in proper communication and social interaction, educational and rehabilitation programs are essential to improve the speech intelligibility of children with hearing loss.

  17. The Relationship between the Behavioral Hearing Thresholds and Maximum Bilirubin Levels at Birth in Children with a History of Neonatal Hyperbilirubinemia

    PubMed Central

    Panahi, Rasool; Jafari, Zahra; Sheibanizade, Abdoreza; Salehi, Masoud; Esteghamati, Abdoreza; Hasani, Sara

    2013-01-01

    Introduction: Neonatal hyperbilirubinemia is one of the most important factors affecting the auditory system and can cause sensorineural hearing loss. This study investigated the relationship between behavioral hearing thresholds in children with a history of jaundice and the maximum level of bilirubin concentration in the blood. Materials and Methods: This study was performed on 18 children with a mean age of 5.6 years and a history of neonatal hyperbilirubinemia. Behavioral hearing thresholds, transient evoked emissions and brainstem evoked responses were evaluated in all children. Results: Six children (33.3%) had normal hearing thresholds and the remaining children (66.7%) had some degree of hearing loss. There was no significant relationship (r=-0.28, P=0.09) between the mean total bilirubin levels and behavioral hearing thresholds across the whole sample. Transient evoked emissions were observed only in children with normal hearing thresholds; in eight cases, brainstem evoked responses were not detected. Conclusion: Increased blood levels of bilirubin in the neonatal period are potentially one of the causes of hearing loss. The lack of a direct relationship between neonatal bilirubin levels and average hearing thresholds emphasizes the necessity of monitoring hearing across the range of bilirubin levels. PMID:24303432

  18. High-frequency amplification and sound quality in listeners with normal through moderate hearing loss.

    PubMed

    Ricketts, Todd A; Dittberner, Andrew B; Johnson, Earl E

    2008-02-01

    One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the magnitude of hearing loss in individual listeners. Ten participants with normal hearing and 20 participants with mild-to-moderate hearing loss completed the study. Signals were processed using hearing aid-style compression algorithms and filtered using two cutoff frequencies, 5.5 and 9 kHz, which were selected to represent bandwidths that are achievable in modern hearing aids. Round-robin paired comparisons based on the criteria of preferred sound quality were made for 2 different monaurally presented brief sound segments, including music and a movie. Results revealed that preference for either the wider or narrower bandwidth (9- or 5.5-kHz cutoff frequency, respectively) was correlated with the slope of hearing loss from 4 to 12 kHz, with steep threshold slopes associated with preference for narrower bandwidths. Consistent preference for wider bandwidth is present in some listeners with mild-to-moderate hearing loss.

  19. Tone perception in Mandarin-speaking school age children with otitis media with effusion

    PubMed Central

    McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-01-01

    Objectives The present study explored tone perception ability in school age Mandarin-speaking children with otitis media with effusion (OME) in noisy listening environments. The study investigated the interaction effects of noise, tone type, age, and hearing status on monaural tone perception, and assessed the application of a hierarchical clustering algorithm for profiling hearing impairment in children with OME. Methods Forty-one children with normal hearing and normal middle ear status and 84 children with OME with or without hearing loss participated in this study. The children with OME were further divided into two subgroups based on their severity and pattern of hearing loss using a hierarchical clustering algorithm. Monaural tone recognition was measured using a picture-identification test format incorporating six sets of monosyllabic words conveying four lexical tones under speech spectrum noise, with the signal-to-noise ratio (SNR) conditions ranging from -9 to -21 dB. Results Linear correlation indicated tone recognition thresholds of children with OME were significantly correlated with age and pure tone hearing thresholds at every frequency tested. Children with hearing thresholds less affected by OME performed similarly to their peers with normal hearing. Tone recognition thresholds of children with auditory status more affected by OME were significantly inferior to those of children with normal hearing or with minor hearing loss. Younger children demonstrated poorer tone recognition performance than older children with OME. A mixed design repeated-measure ANCOVA showed significant main effects of listening condition, hearing status, and tone type on tone recognition. Contrast comparisons revealed that tone recognition scores were significantly better under -12 dB SNR than under -15 dB SNR conditions and tone recognition scores were significantly worse under -18 dB SNR than those obtained under -15 dB SNR conditions. Tone 1 was the easiest tone to identify and Tone 3 was the most difficult tone to identify for all participants, when considering -12, -15, and -18 dB SNR as within-subject variables. The interaction effect between hearing status and tone type indicated that children with greater levels of OME-related hearing loss had more impaired tone perception of Tone 1 and Tone 2 compared to their peers with lesser levels of OME-related hearing loss. However, tone perception of Tone 3 and Tone 4 remained similar among all three groups. Tone 2 and Tone 3 were the most perceptually difficult tones for children with or without OME-related hearing loss in all listening conditions. Conclusions The hierarchical clustering algorithm demonstrated usefulness in risk stratification for tone perception deficiency in children with OME-related hearing loss. There was marked impairment in tone perception in noise for children with greater levels of OME-related hearing loss. Monaural lexical tone perception in younger children was more vulnerable to noise and OME-related hearing loss than that in older children. PMID:28829840

  20. Tone perception in Mandarin-speaking school age children with otitis media with effusion.

    PubMed

    Cai, Ting; McPherson, Bradley; Li, Caiwei; Yang, Feng

    2017-01-01

    The present study explored tone perception ability in school age Mandarin-speaking children with otitis media with effusion (OME) in noisy listening environments. The study investigated the interaction effects of noise, tone type, age, and hearing status on monaural tone perception, and assessed the application of a hierarchical clustering algorithm for profiling hearing impairment in children with OME. Forty-one children with normal hearing and normal middle ear status and 84 children with OME with or without hearing loss participated in this study. The children with OME were further divided into two subgroups based on their severity and pattern of hearing loss using a hierarchical clustering algorithm. Monaural tone recognition was measured using a picture-identification test format incorporating six sets of monosyllabic words conveying four lexical tones under speech spectrum noise, with the signal-to-noise ratio (SNR) conditions ranging from -9 to -21 dB. Linear correlation indicated tone recognition thresholds of children with OME were significantly correlated with age and pure tone hearing thresholds at every frequency tested. Children with hearing thresholds less affected by OME performed similarly to their peers with normal hearing. Tone recognition thresholds of children with auditory status more affected by OME were significantly inferior to those of children with normal hearing or with minor hearing loss. Younger children demonstrated poorer tone recognition performance than older children with OME. A mixed design repeated-measure ANCOVA showed significant main effects of listening condition, hearing status, and tone type on tone recognition. Contrast comparisons revealed that tone recognition scores were significantly better under -12 dB SNR than under -15 dB SNR conditions and tone recognition scores were significantly worse under -18 dB SNR than those obtained under -15 dB SNR conditions. Tone 1 was the easiest tone to identify and Tone 3 was the most difficult tone to identify for all participants, when considering -12, -15, and -18 dB SNR as within-subject variables. The interaction effect between hearing status and tone type indicated that children with greater levels of OME-related hearing loss had more impaired tone perception of Tone 1 and Tone 2 compared to their peers with lesser levels of OME-related hearing loss. However, tone perception of Tone 3 and Tone 4 remained similar among all three groups. Tone 2 and Tone 3 were the most perceptually difficult tones for children with or without OME-related hearing loss in all listening conditions. The hierarchical clustering algorithm demonstrated usefulness in risk stratification for tone perception deficiency in children with OME-related hearing loss. There was marked impairment in tone perception in noise for children with greater levels of OME-related hearing loss. Monaural lexical tone perception in younger children was more vulnerable to noise and OME-related hearing loss than that in older children.
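
    The subgrouping step described above (hierarchical clustering of audiometric data into less- and more-affected OME profiles) can be illustrated with a minimal sketch. The feature set (pure-tone thresholds at four frequencies), the Ward linkage and the two-cluster cut used here are assumptions for illustration only; the abstract does not specify the exact variables or linkage criterion.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Hypothetical pure-tone thresholds (dB HL) at 0.5, 1, 2 and 4 kHz, one row per child.
      thresholds = np.array([
          [10,  5, 10, 15],   # essentially unaffected
          [15, 20, 25, 20],   # mild involvement
          [30, 35, 40, 35],   # more affected by OME
          [ 5, 10,  5, 10],
          [25, 30, 30, 25],
      ])

      # Agglomerative (hierarchical) clustering with Ward linkage on the audiometric profiles.
      Z = linkage(thresholds, method="ward")

      # Cut the dendrogram into two subgroups ("less affected" vs. "more affected").
      labels = fcluster(Z, t=2, criterion="maxclust")
      print(labels)   # e.g. [1 1 2 1 2]

    In practice, the number of clusters and the audiometric features would be chosen to reflect the clinically relevant severity patterns rather than the arbitrary values shown here.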

  1. Loudness growth in 1/2-octave bands (LGOB)--a procedure for the assessment of loudness.

    PubMed

    Allen, J B; Hall, J L; Jeng, P S

    1990-08-01

    In this paper, a method that has been developed for the assessment and quantification of loudness perception in normal-hearing and hearing-impaired persons is described. The method has been named LGOB, which stands for loudness growth in 1/2-octave bands. The method uses 1/2-octave bands of noise, centered at 0.25, 0.5, 1.0, 2.0, and 4.0 kHz, with subjective levels between a subject's threshold of hearing and the "too loud" level. The noise bands are presented to the subject, randomized over frequency and level, and the subject is asked to respond with a loudness rating (one of: VERY SOFT, SOFT, OK, LOUD, VERY LOUD, TOO LOUD). Subject responses (normal and hearing-impaired) are then compared to the average responses of a group of normal-hearing subjects. This procedure allows one to estimate the subject's loudness growth relative to normals, as a function of frequency and level. The results may be displayed either as isoloudness contours or as recruitment curves. In its present form, the measurements take less than 30 min. The signal presentation and analysis are done using a PC and a PC plug-in board with a digital-to-analog converter.
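
    As a rough illustration of the presentation logic described above (not the published implementation), the following sketch randomizes 1/2-octave-band trials over frequency and level and collects the categorical loudness ratings per frequency. The level range and the numeric coding of the categories are assumptions.

      import itertools
      import random

      CENTER_FREQS_HZ = [250, 500, 1000, 2000, 4000]
      LEVELS_DB = list(range(20, 105, 10))          # assumed presentation levels
      CATEGORIES = ["VERY SOFT", "SOFT", "OK", "LOUD", "VERY LOUD", "TOO LOUD"]

      def presentation_order():
          """Return all frequency x level trials in randomized order."""
          trials = list(itertools.product(CENTER_FREQS_HZ, LEVELS_DB))
          random.shuffle(trials)
          return trials

      def loudness_growth(responses):
          """responses: dict mapping (freq, level) -> category index (0-5).
          Returns, per frequency, (level, rating) pairs sorted by level, from which
          a loudness-growth curve can be plotted against normative group data."""
          curves = {f: [] for f in CENTER_FREQS_HZ}
          for (freq, level), rating in responses.items():
              curves[freq].append((level, rating))
          for f in curves:
              curves[f].sort()
          return curves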

  2. Intelligence and Academic Achievement With Asymptomatic Congenital Cytomegalovirus Infection.

    PubMed

    Lopez, Adriana S; Lanzieri, Tatiana M; Claussen, Angelika H; Vinson, Sherry S; Turcich, Marie R; Iovino, Isabella R; Voigt, Robert G; Caviness, A Chantal; Miller, Jerry A; Williamson, W Daniel; Hales, Craig M; Bialek, Stephanie R; Demmler-Harrison, Gail

    2017-11-01

    To examine intelligence, language, and academic achievement through 18 years of age among children with congenital cytomegalovirus infection identified through hospital-based newborn screening who were asymptomatic at birth compared with uninfected infants. We used growth curve modeling to analyze trends in IQ (full-scale, verbal, and nonverbal intelligence), receptive and expressive vocabulary, and academic achievement in math and reading. Separate models were fit for each outcome, modeling the change in overall scores with increasing age for patients with normal hearing (n = 78) or with sensorineural hearing loss (SNHL) diagnosed by 2 years of age (n = 11) and controls (n = 40). Patients with SNHL had full-scale intelligence and receptive vocabulary scores that were 7.0 and 13.1 points lower, respectively, compared with controls, but no significant differences were noted in these scores between patients with normal hearing and controls. No significant differences were noted in scores for verbal and nonverbal intelligence, expressive vocabulary, and academic achievement in math and reading among patients with normal hearing or with SNHL and controls. Infants with asymptomatic congenital cytomegalovirus infection identified through newborn screening with normal hearing by age 2 years do not appear to have differences in IQ, vocabulary, or academic achievement scores during childhood or adolescence compared with uninfected children. Copyright © 2017 by the American Academy of Pediatrics.
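
    The growth curve modeling described above can be sketched as a mixed-effects model with a random intercept per child and an age-by-group interaction. The data below are simulated placeholders and the variable names and model terms are assumptions; the published analysis may have used different software and specifications.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_children, n_visits = 30, 4
      df = pd.DataFrame({
          "child_id": np.repeat(np.arange(n_children), n_visits),
          "age_years": np.tile([3, 6, 10, 14], n_children),
          "group": np.repeat(rng.choice(["control", "cCMV_NH", "cCMV_SNHL"], n_children), n_visits),
      })
      # Simulated full-scale IQ: lower mean for the (hypothetical) SNHL group plus noise.
      df["fsiq"] = 100 - 7.0 * (df["group"] == "cCMV_SNHL") + rng.normal(0, 10, len(df))

      # Random-intercept growth curve model: overall score trend with age, by group.
      model = smf.mixedlm("fsiq ~ age_years * group", data=df, groups="child_id")
      print(model.fit().summary())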

  3. Presbycusis, sociocusis, and nosocusis

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The establishment of a baseline of normal hearing is investigated through the examination of pure tone hearing level surveys and variables such as age, sociocusis, sex, race, and otological disorders. Mathematical formulae used to predict hearing levels in industrial and nonindustrial surveys are included.

  4. [Music therapy in adults with cochlear implants : Effects on music perception and subjective sound quality].

    PubMed

    Hutter, E; Grapp, M; Argstatter, H

    2016-12-01

    People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed in the subjective sound quality of the CI, in pitch discrimination in the high and low pitch ranges, and in timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy during the rehabilitation process, improvements in these areas could be achieved.

  5. Hearing versus Listening: Attention to Speech and Its Role in Language Acquisition in Deaf Infants with Cochlear Implants

    PubMed Central

    Houston, Derek M.; Bergeson, Tonya R.

    2013-01-01

    The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception – Jusczyk’s WRAPSA model – and by reviewing the kinds of speech input that maintains normal-hearing infants’ attention. We then review recent findings suggesting that cochlear-implanted infants’ attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children’s language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk’s WRAPSA model. PMID:24729634

  6. Ranking Hearing Aid Input-Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble

    ERIC Educational Resources Information Center

    Chung, King; Killion, Mead C.; Christensen, Laurel A.

    2007-01-01

    Purpose: To determine the rankings of 6 input-output functions for understanding low-level, conversational, and high-level speech in multitalker babble without manipulating volume control for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss. Method: Peak clipping, compression limiting,…

  7. Spoken and Written Narratives in Swedish Children and Adolescents with Hearing Impairment

    ERIC Educational Resources Information Center

    Asker-Arnason, Lena; Akerlund, Viktoria; Skoglund, Cecilia; Ek-Lagergren, Ingela; Wengelin, Asa; Sahlen, Birgitta

    2012-01-01

    Twenty 10- to 18-year-old children and adolescents with varying degrees of hearing impairment (HI) and hearing aids (HA), ranging from mild-moderate to severe, produced picture-elicited narratives in a spoken and written version. Their performance was compared to that of 63 normally hearing (NH) peers within the same age span. The participants…

  8. Vowel Identification by Listeners with Hearing Impairment in Response to Variation in Formant Frequencies

    ERIC Educational Resources Information Center

    Molis, Michelle R.; Leek, Marjorie R.

    2011-01-01

    Purpose: This study examined the influence of presentation level and mild-to-moderate hearing loss on the identification of a set of vowel tokens systematically varying in the frequency locations of their second and third formants. Method: Five listeners with normal hearing (NH listeners) and five listeners with hearing impairment (HI listeners)…

  9. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    ERIC Educational Resources Information Center

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  10. Psychosocial health of cochlear implant users compared to that of adults with and without hearing aids: Results of a nationwide cohort study.

    PubMed

    Bosdriesz, J R; Stam, M; Smits, C; Kramer, S E

    2018-06-01

    This study aimed to examine the psychosocial health status of adult cochlear implant (CI) users, compared to that of hearing aid (HA) users, hearing-impaired adults without hearing aids and normally hearing adults. Cross-sectional observational study, using both self-reported survey data and a speech-in-noise test. Data as collected within the Netherlands Longitudinal Study on Hearing (NL-SH) between September 2011 and June 2016 were used. Data from 1254 Dutch adults (aged 23-74), selected in a convenience sample design, were included for analyses. Psychosocial health measures included emotional and social loneliness, anxiety, depression, distress and somatisation. Psychosocial health, hearing status, use of hearing technology and covariates were measured by self-report; hearing ability was assessed through an online digit triplet speech-in-noise test. After adjusting for the degree of hearing impairment, HA users (N = 418) and hearing-impaired adults (N = 247) had significantly worse scores on emotional loneliness than CI users (N = 37). HA users had significantly higher anxiety scores than CI users in some analyses. Non-significant differences were found between normally hearing (N = 552) and CI users for all psychosocial outcomes. Psychosocial health of CI users is not worse than that of hearing-impaired adults with or without hearing aids. CI users' level of emotional loneliness is even lower than that of their hearing-impaired peers using hearing aids. A possible explanation is that CI patients receive more professional and family support, and guidance along their patient journey than adults who are fitted with hearing aids. © 2017 The Authors. Clinical Otolaryngology Published by John Wiley & Sons Ltd.

  11. A comparison of speech intonation production and perception abilities of Farsi speaking cochlear implanted and normal hearing children.

    PubMed

    Moein, Narges; Khoddami, Seyyedeh Maryam; Shahbodaghi, Mohammad Rahim

    2017-10-01

    Cochlear implant prostheses facilitate spoken language development and speech comprehension in children with severe-profound hearing loss. However, these prostheses are limited in encoding information about fundamental frequency and pitch, which is essential for the recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in cochlear implanted children in comparison with normal hearing children. The study was carried out on 25 cochlear implanted children and 50 children with normal hearing. First, statement and question sentences were elicited using 10 action pictures. Fundamental frequency and pitch changes were identified using Praat software. Then, these sentences were judged by 7 adult listeners. In the second stage, 20 sentences were played for each child, who determined whether each was a question or a statement. Performance of cochlear implanted children in perception and production of intonation was significantly lower than that of children with normal hearing. The difference between fundamental frequency and pitch changes in cochlear implanted children and children with normal hearing was significant (P < 0.05). Cochlear implanted children's performance in perception and production of intonation was significantly correlated with the child's age at surgery and the duration of prosthesis use (P < 0.05). The findings of the current study show that cochlear prostheses have limited application in facilitating the perception and production of intonation in cochlear implanted children. It should be noted that the child's age at surgery and the duration of prosthesis use are important in reducing this limitation. According to these findings, speech and language pathologists should consider intervention on intonation in the treatment programs of cochlear implanted children. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Spatial hearing benefits demonstrated with presentation of acoustic temporal fine structure cues in bilateral cochlear implant listeners.

    PubMed

    Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Litovsky, Ruth Y

    2014-09-01

    Most contemporary cochlear implant (CI) processing strategies discard acoustic temporal fine structure (TFS) information, and this may contribute to the observed deficits in bilateral CI listeners' ability to localize sounds when compared to normal hearing listeners. Additionally, for best speech envelope representation, most contemporary speech processing strategies use high-rate carriers (≥900 Hz) that exceed the limit for interaural pulse timing to provide useful binaural information. Many bilateral CI listeners are sensitive to interaural time differences (ITDs) in low-rate (<300 Hz) constant-amplitude pulse trains. This study explored the trade-off between superior speech temporal envelope representation with high-rate carriers and binaural pulse timing sensitivity with low-rate carriers. The effects of carrier pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition in quiet were examined in eight bilateral CI listeners. Stimuli consisted of speech tokens processed at different electrical stimulation rates, and pulse timings that either preserved or did not preserve acoustic TFS cues. Results showed that CI listeners were able to use low-rate pulse timing cues derived from acoustic TFS when presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli.

  13. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability among Listeners with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Shinn-Cunningham, Barbara

    2017-01-01

    Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…

  14. Speech Timing and Working Memory in Profoundly Deaf Children after Cochlear Implantation.

    ERIC Educational Resources Information Center

    Burkholder, Rose A.; Pisoni, David B.

    2003-01-01

    Compared speaking rates, digit span, and speech timing in profoundly deaf 8- and 9-year-olds with cochlear implants and normal-hearing children. Found that deaf children displayed longer sentence durations and pauses during recall and shorter digit spans than normal-hearing children. Articulation rates strongly correlated with immediate memory…

  15. Taxonomic Knowledge of Children with and without Cochlear Implants

    ERIC Educational Resources Information Center

    Lund, Emily; Dinsmoor, Jessica

    2016-01-01

    Purpose: The purpose of this study was to compare the taxonomic vocabulary knowledge and organization of children with cochlear implants to (a) children with normal hearing matched for age, and (b) children matched for vocabulary development. Method: Ten children with cochlear implants, 10 age-matched children with normal hearing, and 10…

  16. Verbal Working Memory in Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.

    2017-01-01

    Purpose: Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants: Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method: A dual-component model of…

  17. Binaural Advantage for Younger and Older Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Dubno, Judy R.; Ahlstrom, Jayne B.; Horwitz, Amy R.

    2008-01-01

    Purpose: Three experiments measured benefit of spatial separation, benefit of binaural listening, and masking-level differences (MLDs) to assess age-related differences in binaural advantage. Method: Participants were younger and older adults with normal hearing through 4.0 kHz. Experiment 1 compared spatial benefit with and without head shadow.…

  18. Differences in the perceived music pleasantness between monolateral cochlear implanted and normal hearing children assessed by EEG.

    PubMed

    Vecchiato, G; Maglione, A G; Scorpecci, A; Malerba, P; Graziani, I; Cherubino, P; Astolfi, L; Marsella, P; Colosimo, A; Babiloni, Fabio

    2013-01-01

    The perception of music in cochlear implanted (CI) patients is an important aspect of their quality of life. The pleasantness of music perception by such CI patients can be analyzed through a particular analysis of EEG rhythms. Studies on healthy subjects show that there exists a particular frontal asymmetry of the EEG alpha rhythm which can be correlated with the pleasantness of the perceived stimuli (approach-withdrawal theory). In particular, here we describe differences between EEG activities estimated in the alpha frequency band for a group of children with a monolateral CI and a normal hearing group while they watched a musical cartoon. The results of the present analysis showed that the alpha EEG asymmetry patterns of the normal hearing group indicate a higher pleasantness perception when compared to the cerebral activity of the monolateral CI patients. The present results therefore support the statement that a monolateral CI group could perceive the music in a less pleasant way when compared to normal hearing children.

  19. Clinical Value of Vestibular Evoked Myogenic Potential in Assessing the Stage and Predicting the Hearing Results in Ménière's Disease

    PubMed Central

    Kim, Min-Beom; Choi, Jeesun; Park, Ga Young; Cho, Yang-Sun; Hong, Sung Hwa

    2013-01-01

    Objectives Our goal was to find the clinical value of cervical vestibular evoked myogenic potential (VEMP) in Ménière's disease (MD) and to evaluate whether the VEMP results can be useful in assessing the stage of MD. Furthermore, we tried to evaluate the clinical effectiveness of VEMP in predicting hearing outcomes. Methods The amplitude, peak latency and interaural amplitude difference (IAD) ratio were obtained using cervical VEMP. The VEMP results of MD were compared with those of normal subjects, and the MD stages were compared with the IAD ratio. Finally, the hearing changes were analyzed according to their VEMP results. Results In clinically definite unilateral MD (n=41), the prevalence of cervical VEMP abnormality in the IAD ratio was 34.1%. When compared with normal subjects (n=33), the VEMP profile of MD patients showed a low amplitude and a similar latency. The mean IAD ratio in MD was 23%, which was significantly different from that of normal subjects (P=0.01). As the stage increased, the IAD ratio significantly increased (P=0.09). After stratification by initial hearing level, stage I and II subjects (hearing threshold, 0-40 dB) with an abnormal IAD ratio showed a decrease in hearing over time compared to those with a normal IAD ratio (P=0.08). Conclusion VEMP parameters have an important clinical role in MD. Especially, the IAD ratio can be used to assess the stage of MD. An abnormal IAD ratio may be used as a predictor of poor hearing outcomes in subjects with early stage MD. PMID:23799160

  20. Hearing with an atympanic ear: good vibration and poor sound-pressure detection in the royal python, Python regius.

    PubMed

    Christensen, Christian Bech; Christensen-Dalsgaard, Jakob; Brandt, Christian; Madsen, Peter Teglberg

    2012-01-15

    Snakes lack both an outer ear and a tympanic middle ear, which in most tetrapods provide impedance matching between the air and inner ear fluids and hence improve pressure hearing in air. Snakes would therefore be expected to have very poor pressure hearing and generally be insensitive to airborne sound, whereas the connection of the middle ear bone to the jaw bones in snakes should confer acute sensitivity to substrate vibrations. Some studies have nevertheless claimed that snakes are quite sensitive to both vibration and sound pressure. Here we test the two hypotheses that: (1) snakes are sensitive to sound pressure and (2) snakes are sensitive to vibrations, but cannot hear the sound pressure per se. Vibration and sound-pressure sensitivities were quantified by measuring brainstem evoked potentials in 11 royal pythons, Python regius. Vibrograms and audiograms showed greatest sensitivity at low frequencies of 80-160 Hz, with sensitivities of -54 dB re. 1 m s⁻² and 78 dB re. 20 μPa, respectively. To investigate whether pythons detect sound pressure or sound-induced head vibrations, we measured the sound-induced head vibrations in three dimensions when snakes were exposed to sound pressure at threshold levels. In general, head vibrations induced by threshold-level sound pressure were equal to or greater than those induced by threshold-level vibrations, and therefore sound-pressure sensitivity can be explained by sound-induced head vibration. From this we conclude that pythons, and possibly all snakes, lost effective pressure hearing with the complete reduction of a functional outer and middle ear, but have an acute vibration sensitivity that may be used for communication and detection of predators and prey.

  1. Syntagmatic and paradigmatic development of cochlear implanted children in comparison with normally hearing peers up to age 7.

    PubMed

    Faes, Jolien; Gillis, Joris; Gillis, Steven

    2015-09-01

    Grammatical development is shown to be delayed in CI children. However, the literature has focussed mainly on one aspect of grammatical development, either morphology or syntax, and on standard tests instead of spontaneous speech. The aim of the present study was to compare grammatical development in the spontaneous speech of Dutch-speaking children with cochlear implants and normally hearing peers. Both syntagmatic and paradigmatic development were assessed and compared with each other. Nine children with cochlear implants were followed yearly between ages 2 and 7. There was a cross-sectional control group of 10 normally hearing peers at each age. Syntactic development was measured by means of Mean Length of Utterance (MLU) and morphological development by means of Mean Size of Paradigm (MSP). This last measure is relatively new in child language research. The MLU and MSP of children with cochlear implants lag behind those of their normally hearing peers up to age 4 and up to age 6, respectively. By age 5, CI children catch up on MSP, and by age 7 they catch up on MLU. Children with cochlear implants thus catch up with their normally hearing peers on both the syntactic and the morphological measure; however, inflection becomes age-appropriate earlier than sentence length in CI children. Possible explanations for this difference in developmental pace are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
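
    For readers unfamiliar with the two measures, a minimal sketch of how MLU and MSP can be computed from transcribed utterances is given below. MLU is calculated here in morphemes per utterance; MSP is approximated as the mean number of distinct word forms per lemma. The study's exact operationalization (for example, computing MSP over fixed-size word windows) may differ, and the toy data are invented.

      def mlu(utterances):
          """utterances: list of utterances, each a list of morphemes."""
          total_morphemes = sum(len(u) for u in utterances)
          return total_morphemes / len(utterances)

      def msp(tokens):
          """tokens: list of (word_form, lemma) pairs from a transcript sample."""
          forms_per_lemma = {}
          for form, lemma in tokens:
              forms_per_lemma.setdefault(lemma, set()).add(form)
          return sum(len(v) for v in forms_per_lemma.values()) / len(forms_per_lemma)

      # Toy Dutch-like example: "ik loop" + "wij lopen" (lopen segmented as loop + en).
      print(mlu([["ik", "loop"], ["wij", "loop", "en"]]))               # 2.5 morphemes per utterance
      print(msp([("loop", "lopen"), ("lopen", "lopen"), ("ik", "ik")])) # (2 + 1) / 2 = 1.5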

  2. Hearing Impairment and Incident Dementia: Findings from the English Longitudinal Study of Ageing.

    PubMed

    Davies, Hilary R; Cadar, Dorina; Herbert, Annie; Orrell, Martin; Steptoe, Andrew

    2017-09-01

    To determine whether hearing loss is associated with incident physician-diagnosed dementia in a representative sample. Retrospective cohort study. English Longitudinal Study of Ageing. Adults aged 50 and older. Cross-sectional associations between self-reported (n = 7,865) and objective hearing measures (n = 6,902) and dementia were examined using multinomial logistic regression. The longitudinal association between self-reported hearing at Wave 2 (2004/05) and cumulative physician-diagnosed dementia up to Wave 7 (2014/15) was modelled using Cox proportional hazards regression. After adjustment for potential confounders, in cross-sectional analysis, participants who had self-reported or objective moderate and poor hearing were more likely to have a dementia diagnosis than those with normal hearing (self-reported: odds ratio [OR] = 1.6, 95% CI = 1.1-2.4 for moderate hearing and OR = 2.6, 95% CI = 1.7-3.9 for poor hearing; objective: OR = 1.6, 95% CI = 1.0-2.8 for moderate hearing and OR = 4.4, 95% CI = 1.9-9.9 for poor hearing). Longitudinally, the hazard of developing dementia was 1.4 (95% CI = 1.0-1.9) times as high in individuals who reported moderate hearing and 1.6 (95% CI = 1.1-2.0) times as high in those who reported poor hearing. Older adults with hearing loss are at greater risk of dementia than those with normal hearing. These findings are consistent with the rationale that correction of hearing loss could help delay the onset of dementia, or that hearing loss itself could serve as a risk indicator for cognitive decline. © 2017, The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.
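
    The longitudinal analysis above is a Cox proportional hazards regression of time to dementia diagnosis on self-reported hearing category. A minimal sketch using the lifelines package is shown below; the variable names, covariates and data are simulated placeholders and not the study's analysis set.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(1)
      n = 500
      hearing = rng.choice(["normal", "moderate", "poor"], size=n)
      df = pd.DataFrame({
          "followup_years": rng.uniform(1, 10, n),           # time to diagnosis or censoring
          "dementia": rng.integers(0, 2, n),                 # 1 = diagnosed, 0 = censored
          "age": rng.integers(50, 90, n),
          "hearing_moderate": (hearing == "moderate").astype(int),
          "hearing_poor": (hearing == "poor").astype(int),   # reference category: normal hearing
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="followup_years", event_col="dementia")
      cph.print_summary()   # hazard ratios here reflect the simulated data, not the study's estimates

    Exponentiating the coefficients for the hearing dummies gives hazard ratios relative to normal hearing, which is the form in which the results above are reported.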

  3. Hearing speech in music.

    PubMed

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  4. Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing

    PubMed Central

    Bidelman, Gavin M.; Heinz, Michael G.

    2011-01-01

    Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089

  5. An acoustic analysis of laughter produced by congenitally deaf and normally hearing college students

    PubMed Central

    Makagon, Maja M.; Funayama, E. Sumie; Owren, Michael J.

    2008-01-01

    Relatively few empirical data are available concerning the role of auditory experience in nonverbal human vocal behavior, such as laughter production. This study compared the acoustic properties of laughter in 19 congenitally, bilaterally, and profoundly deaf college students and in 23 normally hearing control participants. Analyses focused on degree of voicing, mouth position, air-flow direction, temporal features, relative amplitude, fundamental frequency, and formant frequencies. Results showed that laughter produced by the deaf participants was fundamentally similar to that produced by the normally hearing individuals, which in turn was consistent with previously reported findings. Finding comparable acoustic properties in the sounds produced by deaf and hearing vocalizers confirms the presumption that laughter is importantly grounded in human biology, and that auditory experience with this vocalization is not necessary for it to emerge in species-typical form. Some differences were found between the laughter of deaf and hearing groups; the most important being that the deaf participants produced lower-amplitude and longer-duration laughs. These discrepancies are likely due to a combination of the physiological and social factors that routinely affect profoundly deaf individuals, including low overall rates of vocal fold use and pressure from the hearing world to suppress spontaneous vocalizations. PMID:18646991

  6. Children with unilateral hearing loss may have lower intelligence quotient scores: A meta-analysis.

    PubMed

    Purcell, Patricia L; Shinn, Justin R; Davis, Greg E; Sie, Kathleen C Y

    2016-03-01

    In this meta-analysis, we reviewed observational studies investigating differences in intelligence quotient (IQ) scores of children with unilateral hearing loss compared to children with normal hearing. Data sources were PubMed Medline, the Cumulative Index to Nursing and Allied Health Literature, Embase, and PsycINFO. A query identified all English-language studies related to pediatric unilateral hearing loss published between January 1980 and December 2014. Titles, abstracts, and articles were reviewed to identify observational studies reporting IQ scores. There were 261 unique titles, with 29 articles undergoing full review. Four articles were identified, which included 173 children with unilateral hearing loss and 202 children with normal hearing. Ages ranged from 6 to 18 years. Three studies were conducted in the United States and one in Mexico. All were of high quality. All studies reported full-scale IQ results; three reported verbal IQ results; and two reported performance IQ results. Children with unilateral hearing loss scored 6.3 points lower on full-scale IQ, 95% confidence interval (CI) [-9.1, -3.5], P value < 0.001; and 3.8 points lower on performance IQ, 95% CI [-7.3, -0.2], P value 0.04. When investigating verbal IQ, we detected substantial heterogeneity among studies; exclusion of the outlying study resulted in a significant difference in verbal IQ of 4 points, 95% CI [-7.5, -0.4], P value 0.028. This meta-analysis suggests that children with unilateral hearing loss have lower full-scale and performance IQ scores than children with normal hearing. There may also be a disparity in verbal IQ scores. Laryngoscope, 126:746-754, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
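
    The pooled differences reported above come from standard meta-analytic weighting of study-level mean differences. As a simplified illustration, a fixed-effect inverse-variance model is sketched below with invented study values; the published analysis may have used a different model (for example, random effects) and, of course, the actual extracted data.

      import numpy as np

      # (mean difference in full-scale IQ, standard error) for four hypothetical studies
      studies = [(-5.0, 2.0), (-8.0, 2.5), (-4.5, 3.0), (-7.0, 2.2)]

      diffs = np.array([d for d, _ in studies])
      weights = 1.0 / np.array([se for _, se in studies]) ** 2   # inverse-variance weights

      pooled = np.sum(weights * diffs) / np.sum(weights)
      pooled_se = np.sqrt(1.0 / np.sum(weights))
      ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
      print(f"pooled difference = {pooled:.1f}, 95% CI [{ci_low:.1f}, {ci_high:.1f}]")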

  7. Relations Between Self-Reported Daily-Life Fatigue, Hearing Status, and Pupil Dilation During a Speech Perception in Noise Task.

    PubMed

    Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas

    People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during the speech processing, and we used peak pupil dilation (PPD) as the main pupillometric outcome measure. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech in noise test. Less fatigue and better hearing acuity were associated with a larger pupil dilation. To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation, as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.
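
    The main outcome, peak pupil dilation (PPD), is typically obtained by baseline-correcting the pupil trace and taking the maximum dilation within an analysis window. The sketch below illustrates that computation on a toy trace; the sampling rate, baseline duration and window limits are assumptions rather than the study's exact parameters.

      import numpy as np

      def peak_pupil_dilation(trace, fs=60, baseline_s=1.0, window_s=(0.0, 3.0)):
          """trace: 1-D array of pupil diameter (mm), aligned so that the stimulus
          starts after `baseline_s` seconds of pre-stimulus recording."""
          baseline = np.mean(trace[: int(baseline_s * fs)])
          start = int((baseline_s + window_s[0]) * fs)
          stop = int((baseline_s + window_s[1]) * fs)
          return np.max(trace[start:stop] - baseline)

      # Toy trace: 1 s baseline at 4.0 mm followed by a transient dilation of ~0.3 mm.
      fs = 60
      t = np.arange(0, 4, 1 / fs)
      trace = 4.0 + 0.3 * np.exp(-((t - 2.0) ** 2) / 0.2) * (t > 1.0)
      print(round(peak_pupil_dilation(trace, fs=fs), 2))   # ~0.3 mm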

  8. [Access by hearing-disabled individuals to health services in a southern Brazilian city].

    PubMed

    Freire, Daniela Buchrieser; Gigante, Luciana Petrucci; Béria, Jorge Umberto; Palazzo, Lílian dos Santos; Figueiredo, Andréia Cristina Leal; Raymann, Beatriz Carmen Warth

    2009-04-01

    This cross-sectional study aimed to compare access to health services and preventive measures by persons with hearing disability and those with normal hearing in Canoas, Rio Grande do Sul State, Brazil. The sample included 1,842 individuals 15 years or older (52.9% of whom were female). The most frequent income bracket was twice the minimum wage or more, approximately US$360/month (42.7%). Individuals with hearing disability were more likely to have visited a physician in the previous two months (PR = 1.3, 95%CI: 1.10-1.51) and to have been hospitalized in the previous 12 months (PR = 2.1, 95%CI: 1.42-3.14). Regarding mental health, individuals with hearing disability showed 1.5 times greater probability of receiving health care for mental disorders and 4.2 times greater probability of psychiatric hospitalization compared to those with normal hearing. Consistent with other studies, women with hearing disability performed breast self-examination less often and had fewer Pap smears. The data indicate the need to invest in specific campaigns for this group of individuals with special needs.

  9. Aphasia and Auditory Processing after Stroke through an International Classification of Functioning, Disability and Health Lens

    PubMed Central

    Purdy, Suzanne C.; Wanigasekara, Iruni; Cañete, Oscar M.; Moore, Celia; McCann, Clare M.

    2016-01-01

    Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke. PMID:27489401

  10. Descending projections from the inferior colliculus to medial olivocochlear efferents: Mice with normal hearing, early onset hearing loss, and congenital deafness.

    PubMed

    Suthakar, Kirupa; Ryugo, David K

    2017-01-01

    Auditory efferent neurons reside in the brain and innervate the sensory hair cells of the cochlea to modulate incoming acoustic signals. Two groups of efferents have been described in mouse and this report will focus on the medial olivocochlear (MOC) system. Electrophysiological data suggest the MOC efferents function in selective listening by differentially attenuating auditory nerve fiber activity in quiet and noisy conditions. Because speech understanding in noise is impaired in age-related hearing loss, we asked whether pathologic changes in input to MOC neurons from higher centers could be involved. The present study investigated the anatomical nature of descending projections from the inferior colliculus (IC) to MOCs in 3-month old mice with normal hearing, and in 6-month old mice with normal hearing (CBA/CaH), early onset progressive hearing loss (DBA/2), and congenital deafness (homozygous Shaker-2). Anterograde tracers were injected into the IC and retrograde tracers into the cochlea. Electron microscopic analysis of double-labelled tissue confirmed direct synaptic contact from the IC onto MOCs in all cohorts. These labelled terminals are indicative of excitatory neurotransmission because they contain round synaptic vesicles, exhibit asymmetric membrane specializations, and are co-labelled with antibodies against VGlut2, a glutamate transporter. 3D reconstructions of the terminal fields indicate that in normal hearing mice, descending projections from the IC are arranged tonotopically with low frequencies projecting laterally and progressively higher frequencies projecting more medially. Along the mediolateral axis, the projections of DBA/2 mice with acquired high frequency hearing loss were shifted medially towards expected higher frequency projecting regions. Shaker-2 mice with congenital deafness had a much broader spatial projection, revealing abnormalities in the topography of connections. These data suggest that loss in precision of IC directed MOC activation could contribute to impaired signal detection in noise. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. The Effects of Age and Hearing Loss on Tasks of Perception and Production of Intonation.

    ERIC Educational Resources Information Center

    Most, Tova; Frank, Yael

    1994-01-01

    Hearing-impaired and normal hearing children in 2 age groups (5-6 years and 9-12 years) were observed for possible differences in their perception and production of intonation. Results indicated that imitation of intonation carried on nonsense syllables was not affected by age. Hearing-impaired subjects scored much lower than controls in imitating…

  12. Talker Differences in Clear and Conversational Speech: Vowel Intelligibility for Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus

    2012-01-01

    Purpose: To establish the range of talker variability for vowel intelligibility in clear versus conversational speech for older adults with hearing loss and to determine whether talkers who produced a clear speech benefit for young listeners with normal hearing also did so for older adults with hearing loss. Method: Clear and conversational vowels…

  13. Lipreading in School-Age Children: The Roles of Age, Hearing Status, and Cognitive Ability

    ERIC Educational Resources Information Center

    Tye-Murray, Nancy; Hale, Sandra; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S.

    2014-01-01

    Purpose: The study addressed three research questions: Does lipreading improve between the ages of 7 and 14 years? Does hearing loss affect the development of lipreading? How do individual differences in lipreading relate to other abilities? Method: Forty children with normal hearing (NH) and 24 with hearing loss (HL) were tested using 4…

  14. The Use of Standardized Test Batteries in Assessing the Skill Development of Children with Mild-to-Moderate Sensorineural Hearing Loss.

    ERIC Educational Resources Information Center

    Plapinger, Donald S.; Sikora, Darryn M.

    1995-01-01

    This study of 12 children (ages 7-13) with mild to moderate bilateral sensorineural hearing loss found that psychoeducational diagnostic tests standardized on students with normal hearing may be used with confidence to assess both cognitive and academic levels of functioning in students with sensorineural hearing loss. (Author/JDD)

  15. Psychoacoustical Measures in Individuals with Congenital Visual Impairment.

    PubMed

    Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh

    2017-12-01

    In individuals with congenital visual impairment, one modality (vision) is impaired, and this impairment is compensated for by other sensory modalities. There is evidence that visually impaired individuals perform better than normally sighted individuals in different auditory tasks such as localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise ability between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited, along with an equal number of normally sighted participants. All the participants had normal hearing sensitivity with normal middle ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT and SNR50 compared to normally sighted individuals. This may be due to the complexity of the tasks; MDT, SRDT and SNR50 are more complex tasks than GDT and DDT. The visually impaired group thus showed superior performance in auditory processing and speech perception on complex auditory perceptual tasks.

  16. Binaural fusion and listening effort in children who use bilateral cochlear implants: a psychoacoustic and pupillometric study.

    PubMed

    Steel, Morrison M; Papsin, Blake C; Gordon, Karen A

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

  17. Assessment of hearing threshold in adults with hearing loss using an automated system of cortical auditory evoked potential detection.

    PubMed

    Durante, Alessandra Spada; Wieselberg, Margarita Bernal; Roque, Nayara; Carvalho, Sheila; Pucci, Beatriz; Gudayol, Nicolly; de Almeida, Kátia

    The use of hearing aids by individuals with hearing loss brings a better quality of life. Access to and benefit from these devices may be compromised in patients who present difficulties or limitations in traditional behavioral audiological evaluation, such as newborns and small children, individuals with auditory neuropathy spectrum disorder, autism, or intellectual deficits, and adults and the elderly with dementia. These populations are unable to undergo a behavioral assessment and generate a growing demand for objective methods to assess hearing. Cortical auditory evoked potentials have been used for decades to estimate hearing thresholds. Current technological advances have led to the development of equipment that allows their clinical use, with features that enable greater accuracy, sensitivity, and specificity, and the possibility of automated detection, analysis, and recording of cortical responses. To determine and correlate behavioral auditory thresholds with cortical auditory thresholds obtained from an automated response analysis technique. The study included 52 adults, divided into two groups: 21 adults with moderate to severe hearing loss (study group) and 31 adults with normal hearing (control group). An automated system of detection, analysis, and recording of cortical responses (HEARLab®) was used to record the behavioral and cortical thresholds. The subjects remained awake in an acoustically treated environment. Altogether, 150 tone bursts at 500, 1000, 2000, and 4000 Hz were presented through insert earphones in descending-ascending intensity. The lowest level at which the subject detected the sound stimulus was defined as the behavioral (hearing) threshold (BT). The lowest level at which a cortical response was observed was defined as the cortical electrophysiological threshold. These two responses were correlated using linear regression. The cortical electrophysiological threshold was, on average, 7.8 dB higher than the behavioral threshold for the group with hearing loss and, on average, 14.5 dB higher for the group without hearing loss across all studied frequencies. The cortical electrophysiological thresholds obtained with the use of an automated response detection system were highly correlated with behavioral thresholds in the group of individuals with hearing loss. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
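
    The linear-regression step described above can be illustrated with a short, hedged sketch. The threshold values, variable names, and use of NumPy below are assumptions for demonstration and are not taken from the study; the sketch only shows how behavioral and cortical thresholds might be related by an ordinary least-squares fit and a mean offset in dB.

      # Illustrative only: invented thresholds (dB HL); not data from the study.
      import numpy as np

      behavioral = np.array([25, 40, 55, 60, 35, 50, 45, 65], dtype=float)
      cortical = np.array([35, 46, 62, 70, 41, 59, 51, 72], dtype=float)

      slope, intercept = np.polyfit(behavioral, cortical, 1)  # least-squares line
      r = np.corrcoef(behavioral, cortical)[0, 1]             # Pearson correlation
      offset = float(np.mean(cortical - behavioral))          # mean dB difference

      print(f"cortical ~ {slope:.2f} * behavioral + {intercept:.1f} dB, r = {r:.2f}")
      print(f"mean cortical-minus-behavioral offset: {offset:.1f} dB")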

  18. Underwater audiogram of a tucuxi (Sotalia fluviatilis guianensis).

    PubMed

    Sauerland, M; Dehnhardt, G

    1998-02-01

    Using a go/no go response paradigm, a tucuxi (Sotalia fluviatilis guianensis) was trained to respond to pure-tone signals for an underwater hearing test. Auditory thresholds were obtained from 4 to 135 kHz. The audiogram curve shows that this Sotalia had an upper limit of hearing at 135 kHz; from 125 to 135 kHz sensitivity decreased by 475 dB/oct. This coincides with results from electrophysiological threshold measurements. The range of best hearing (defined as 10 dB from maximum sensitivity) was between 64 and 105 kHz. This range appears to be narrower and more restricted to higher frequencies in Sotalia fluviatilis guianensis than in other odontocete species that had been tested before. Peak frequencies of echolocation pulses reported from free-ranging Sotalia correspond with the range of most sensitive hearing of this test subject.
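
    The steep high-frequency cutoff reported above is expressed in dB per octave, i.e. the change in threshold divided by the number of octaves between the two frequencies. A minimal sketch of that arithmetic, with hypothetical threshold values chosen only for illustration:

      import math

      def rolloff_db_per_octave(f1_khz, f2_khz, thr1_db, thr2_db):
          """Slope of an audiogram's high-frequency cutoff in dB per octave."""
          octaves = math.log2(f2_khz / f1_khz)  # 125 -> 135 kHz is about 0.11 octave
          return (thr2_db - thr1_db) / octaves

      # Hypothetical thresholds at 125 and 135 kHz, chosen only to show the arithmetic.
      print(round(rolloff_db_per_octave(125, 135, 60, 113)))  # ~477 dB/octave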

  19. A comparison of vowel productions in prelingually deaf children using cochlear implants, severe hearing-impaired children using conventional hearing aids and normal-hearing children.

    PubMed

    Baudonck, Nele; Van Lierde, K; Dhooge, I; Corthals, P

    2011-01-01

    The purpose of this study was to compare vowel productions by deaf cochlear implant (CI) children, hearing-impaired hearing aid (HA) children and normal-hearing (NH) children. 73 children [mean age: 9;14 years (years;months)] participated: 40 deaf CI children, 34 moderately to profoundly hearing-impaired HA children and 42 NH children. For the 3 corner vowels [a], [i] and [u], F(1), F(2) and the intrasubject SD were measured using the Praat software. Spectral separation between these vowel formants and vowel space were calculated. The significant effects in the CI group all pertain to a higher intrasubject variability in formant values, whereas the significant effects in the HA group all pertain to lower formant values. Both hearing-impaired subgroups showed a tendency toward greater intervowel distances and vowel space. Several subtle deviations in the vowel production of deaf CI children and hearing-impaired HA children could be established, using a well-defined acoustic analysis. CI children as well as HA children in this study tended to overarticulate, which hypothetically can be explained by a lack of auditory feedback and an attempt to compensate it by proprioceptive feedback during articulatory maneuvers. Copyright © 2010 S. Karger AG, Basel.
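
    The acoustic measures named above (spectral separation between corner-vowel formants and vowel space) can be sketched as Euclidean F1/F2 distances and the area of the [a]-[i]-[u] triangle. The formant values below are hypothetical and are not taken from the study; Praat itself is not required for this arithmetic.

      from math import hypot

      formants = {"a": (850, 1300), "i": (300, 2600), "u": (320, 800)}  # (F1, F2) in Hz, hypothetical

      def separation(v1, v2):
          (f1a, f2a), (f1b, f2b) = formants[v1], formants[v2]
          return hypot(f1a - f1b, f2a - f2b)  # Euclidean distance in the F1/F2 plane

      def vowel_space_area():
          (x1, y1), (x2, y2), (x3, y3) = (formants[v] for v in ("a", "i", "u"))
          # Shoelace formula for the area of the corner-vowel triangle
          return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

      for pair in (("a", "i"), ("a", "u"), ("i", "u")):
          print(pair, round(separation(*pair)), "Hz")
      print("vowel space area:", round(vowel_space_area()), "Hz^2")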

  20. Development and evaluation of the British English coordinate response measure speech-in-noise test as an occupational hearing assessment tool.

    PubMed

    Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A

    2017-10-01

    The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3, the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing impaired military personnel. The results display good test-retest reliability (95% confidence interval (CI) < 2.1 dB) and concurrent validity when compared to the triple-digit test (r ≤ 0.65), and the CRM is sensitive to hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
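
    The adaptive speech-reception-threshold (SRT) procedure mentioned above can be illustrated with a generic one-down/one-up track; the step size, trial count, toy response model, and reversal-averaging rule below are assumptions for illustration and are not the exact CRM protocol.

      import random

      def run_adaptive_track(true_srt_db=-6.0, start_snr_db=0.0, step_db=2.0, n_trials=40):
          """Toy 1-down/1-up SNR track; returns the mean of the final reversals."""
          snr, last_correct, reversals = start_snr_db, None, []
          for _ in range(n_trials):
              # Hypothetical listener: correctness probability rises with SNR above the true SRT.
              p_correct = 1.0 / (1.0 + 10 ** (-(snr - true_srt_db) / 4.0))
              correct = random.random() < p_correct
              if last_correct is not None and correct != last_correct:
                  reversals.append(snr)                    # track changed direction
              snr += -step_db if correct else step_db      # down after correct, up after error
              last_correct = correct
          tail = reversals[-6:] or [snr]
          return sum(tail) / len(tail)

      print(f"estimated SRT: {run_adaptive_track():.1f} dB SNR")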

  1. Validation of the Korean Version of the Spatial Hearing Questionnaire for Assessing the Severity and Symmetry of Hearing Impairment.

    PubMed

    Kong, Tae Hoon; Park, Yoon Ah; Bong, Jeong Pyo; Park, Sang Yoo

    2017-07-01

    Spatial hearing refers to the ability to understand speech and identify sounds in various environments. We assessed the validity of the Korean version of the Spatial Hearing Questionnaire (K-SHQ). We performed forward translation of the original English SHQ to Korean and backward translation from the Korean to English. Forty-eight patients who were able to read and understand Korean and received a score of 24 or higher on the Mini-Mental Status Examination were included in the study. Patients underwent pure tone audiometry (PTA) using a standard protocol and completed the K-SHQ. Internal consistency was evaluated using Cronbach's alpha, and factor analysis was performed to assess reliability. Construct validity was tested by comparing K-SHQ scores from patients with normal hearing to those with hearing impairment. Scores were compared between subjects with unilateral or bilateral hearing loss and between symmetrical and asymmetrical hearing impairment. Cronbach's alpha showed good internal consistency (0.982). Two factors were identified by factor analysis. There was a significant difference in K-SHQ scores for patients with normal hearing compared to those with hearing impairment. Patients with asymmetric hearing impairment had higher K-SHQ scores than those with symmetric hearing impairment, which was related to a lower PTA threshold in the better ear of these subjects. The hearing ability of the better ear is correlated with K-SHQ score. The K-SHQ is a reliable and valid tool with which to assess spatial hearing in patients who speak and read Korean. K-SHQ score reflects the severity and symmetry of hearing impairment. © Copyright: Yonsei University College of Medicine 2017
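
    Cronbach's alpha, the internal-consistency statistic reported above, is k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with a fabricated response matrix (the real K-SHQ data are not reproduced here):

      import numpy as np

      def cronbach_alpha(item_scores):
          """item_scores: rows = respondents, columns = questionnaire items."""
          x = np.asarray(item_scores, dtype=float)
          k = x.shape[1]
          item_variances = x.var(axis=0, ddof=1).sum()
          total_variance = x.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1 - item_variances / total_variance)

      responses = [[4, 5, 4, 5], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3], [1, 2, 1, 2]]
      print(f"alpha = {cronbach_alpha(responses):.3f}")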

  2. The BEACH protein LRBA is required for hair bundle maintenance in cochlear hair cells and for hearing.

    PubMed

    Vogl, Christian; Butola, Tanvi; Haag, Natja; Hausrat, Torben J; Leitner, Michael G; Moutschen, Michel; Lefèbvre, Philippe P; Speckmann, Carsten; Garrett, Lillian; Becker, Lore; Fuchs, Helmut; Hrabe de Angelis, Martin; Nietzsche, Sandor; Kessels, Michael M; Oliver, Dominik; Kneussel, Matthias; Kilimann, Manfred W; Strenzke, Nicola

    2017-11-01

    Lipopolysaccharide-responsive beige-like anchor protein (LRBA) belongs to the enigmatic class of BEACH domain-containing proteins, which have been attributed various cellular functions, typically involving intracellular protein and membrane transport processes. Here, we show that LRBA deficiency in mice leads to progressive sensorineural hearing loss. In LRBA knockout mice, inner and outer hair cell stereociliary bundles initially develop normally, but then partially degenerate during the second postnatal week. LRBA deficiency is associated with a reduced abundance of radixin and Nherf2, two adaptor proteins, which are important for the mechanical stability of the basal taper region of stereocilia. Our data suggest that due to the loss of structural integrity of the central parts of the hair bundle, the hair cell receptor potential is reduced, resulting in a loss of cochlear sensitivity and functional loss of the fraction of spiral ganglion neurons with low spontaneous firing rates. Clinical data obtained from two human patients with protein-truncating nonsense or frameshift mutations suggest that LRBA deficiency may likewise cause syndromic sensorineural hearing impairment in humans, albeit less severe than in our mouse model. © 2017 The Authors.

  3. Development of Spatial Release from Masking in Mandarin-Speaking Children with Normal Hearing

    ERIC Educational Resources Information Center

    Yuen, Kevin C. P.; Yuan, Meng

    2014-01-01

    Purpose: This study investigated the development of spatial release from masking in children using closed-set Mandarin disyllabic words and monosyllabic words carrying lexical tones as test stimuli and speech spectrum-weighted noise as a masker. Method: Twenty-six children ages 4-9 years and 12 adults, all with normal hearing, participated in…

  4. Nonword Repetition by Children with Cochlear Implants: Accuracy Ratings from Normal-Hearing Listeners.

    ERIC Educational Resources Information Center

    Dillon, Caitlin M.; Burkholder, Rose A.; Cleary, Miranda; Pisoni, David B.

    2004-01-01

    Seventy-six children with cochlear implants completed a nonword repetition task. The children were presented with 20 nonword auditory patterns over a loudspeaker and were asked to repeat them aloud to the experimenter. The children's responses were recorded on digital audiotape and then played back to normal-hearing adult listeners to obtain…

  5. Domain Specificity and Everyday Biological, Physical, and Psychological Thinking in Normal, Autistic, and Deaf Children.

    ERIC Educational Resources Information Center

    Peterson, Candida C.; Siegal, Michael

    1997-01-01

    Examined reasoning in normal, autistic, and deaf individuals. Found that deaf individuals who grow up in hearing homes without fluent signers show selective impairments in theory of mind similar to those of autistic individuals. Results suggest that conversational differences in the language children hear accounts for distinctive patterns of…

  6. Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners

    ERIC Educational Resources Information Center

    Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie

    2012-01-01

    Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…

  7. Effects of hearing loss on speech recognition under distracting conditions and working memory in the elderly.

    PubMed

    Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook

    2017-01-01

    The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, the 30% time-compressed sentences revealed a clear differentiation between moderate hearing loss and moderate-to-severe hearing loss. 3) Although all the groups showed a longer span on the forward-digit span test than the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects the recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to be applied to appropriate rehabilitation strategies for hearing-impaired elderly people who experience difficulty in communication.

  8. Hearing Sensation Changes When a Warning Predicts a Loud Sound in the False Killer Whale (Pseudorca crassidens).

    PubMed

    Nachtigall, Paul E; Supin, Alexander Y

    2016-01-01

    Stranded whales and dolphins have sometimes been associated with loud anthropogenic sounds. Echolocating whales produce very loud sounds themselves and have developed the ability to protect their hearing from their own signals. A false killer whale's hearing sensitivity was measured when a faint warning sound was given just before the presentation of an increase in intensity to 170 dB. If the warning occurred within 1-9 s, as opposed to 20-40 s, the whale showed a 13-dB reduction in hearing sensitivity. Warning sounds before loud pulses may help mitigate the effects of loud anthropogenic sounds on wild animals.

  9. Infrasonic and Ultrasonic Hearing Evolved after the Emergence of Modern Whales.

    PubMed

    Mourlam, Mickaël J; Orliac, Maeva J

    2017-06-19

    Mysticeti (baleen whales) and Odontoceti (toothed whales) today greatly differ in their hearing abilities: Mysticeti are presumed to be sensitive to infrasonic noises [1-3], whereas Odontoceti are sensitive to ultrasonic sounds [4-6]. Two competing hypotheses exist regarding the attainment of hearing abilities in modern whales: ancestral low-frequency sensitivity [7-13] or ancestral high-frequency sensitivity [14, 15]. The significance of these evolutionary scenarios is limited by the undersampling of both early-diverging cetaceans (archaeocetes) and terrestrial hoofed relatives of cetaceans (non-cetacean artiodactyls). Here, we document for the first time the bony labyrinth, the hollow cavity housing the hearing organ, of two species of protocetid whales from Lutetian deposits (ca. 46-43 Ma) of Kpogamé, Togo. These archaeocete cetaceans, which are transitional between terrestrial and aquatic forms, prove to be a key for determining the hearing abilities of early whales. We propose a new evolutionary picture for the early stages of this history, based on qualitative and quantitative studies of the cochlear morphology of an unparalleled sample of extant and extinct land artiodactyls and cetaceans. Contrary to the hypothesis that archaeocetes have been more sensitive to high-frequency sounds than their terrestrial ancestors [15], we demonstrate that early cetaceans presented a cochlear functional pattern close to that of their terrestrial relatives, and that specialization for infrasonic or ultrasonic hearing in Mysticeti or Odontoceti, respectively, instead only occurred in fully aquatic whales, after the emergence of Neoceti (Mysticeti+Odontoceti). Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Otitis media and hearing loss among 12-16-year-old Inuit of Inukjuak, Quebec, Canada.

    PubMed

    Ayukawa, Hannah; Bruneau, Suzanne; Proulx, Jean-François; Macarthur, Judy; Baxter, James

    2004-01-01

    Chronic otitis media (COM) and associated hearing loss is a frequent problem for many Inuit children in Canada. In this study, we evaluated individuals aged 12-16 years living in Inukjuak to determine the prevalence of middle ear disease and hearing loss, and the effect of hearing loss on academic performance. Otological examination, hearing tests, and medical and school file reviews were performed in November 1997. Eighty-eight individuals were seen. Otological examination revealed maximal scarring in 1.8%, minimal scarring in 34.9%, normal eardrums in 49.1%, and chronic otitis media in 16.9%. There were 62 individuals whose ear exams could be directly compared with a previous exam done in 1987. Of those, three ears had developed COM, and 4 of the 13 ears with COM in 1987 had healed. Hearing tests found bilateral normal hearing in 80% (PTA < 20 dB), unilateral loss in 15%, and bilateral loss in 5%. Hearing loss was associated with poorer academic performance in Language (p < .05). A similar trend was found in Mathematics but not in Inuttitut. Chronic otitis media remains a significant problem among the Inuit, with a prevalence of 16.9% in individuals aged 12-16 years. One in five in this age group has hearing loss, and this hearing loss impacts academic performance.
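
    The hearing categories above rest on a pure-tone-average (PTA) cutoff of 20 dB per ear. A minimal sketch of that rule, assuming the conventional 0.5, 1, 2, and 4 kHz average (the exact frequency set is not stated in the record) and hypothetical audiograms:

      def pta(thresholds_db, freqs=(500, 1000, 2000, 4000)):
          """Pure-tone average over the assumed frequency set, in dB HL."""
          return sum(thresholds_db[f] for f in freqs) / len(freqs)

      def classify_ears(left, right, cutoff_db=20):
          left_ok, right_ok = pta(left) < cutoff_db, pta(right) < cutoff_db
          if left_ok and right_ok:
              return "bilateral normal hearing"
          return "unilateral hearing loss" if left_ok or right_ok else "bilateral hearing loss"

      left = {500: 10, 1000: 15, 2000: 10, 4000: 20}    # hypothetical thresholds, dB HL
      right = {500: 25, 1000: 30, 2000: 35, 4000: 40}
      print(classify_ears(left, right))                 # -> unilateral hearing loss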

  11. Sensory deprivation due to otitis media episodes in early childhood and its effect at later age: A psychoacoustic and speech perception measure.

    PubMed

    Shetty, Hemanth Narayan; Koonoor, Vishal

    2016-11-01

    Past research has reported that repeated occurrences of otitis media (OM) at an early age have a negative impact on speech perception at a later age. The present study documents temporal and spectral processing and their contribution to speech perception in noise in normal and atypical groups, and evaluates the relation between speech perception in noise and temporal and spectral processing abilities in children from these groups. The study included two experiments. In the first experiment, temporal resolution and frequency discrimination were evaluated, using measures of the temporal modulation transfer function and a frequency discrimination test, in listeners from a normal group and from three subgroups of an atypical group (children with a history of OM between the chronological ages of 6 months and 2 years): a) fewer than four episodes, b) four to nine episodes, and c) more than nine episodes. In the second experiment, SNR-50 was evaluated in each group of study participants. All participants had normal hearing and normal middle ear status at the time of testing. Results demonstrated that children in the atypical group had significantly poorer modulation detection thresholds, peak sensitivity, and bandwidth, and poorer frequency discrimination at each F0, than normal-hearing listeners. Furthermore, there was a significant correlation between the measures of temporal resolution and frequency discrimination and speech perception in noise. This suggests that the atypical groups have significant impairment in extracting envelope as well as fine-structure cues from the signal. The results support the idea that episodes of OM before 2 years of age can produce periods of sensory deprivation that alter temporal and spectral skills, which in turn has negative consequences for speech perception in noise. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  13. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603

  14. Social inclusion and career development--transition from upper secondary school to work or post-secondary education among hard of hearing students.

    PubMed

    Danermark, B; Antonson, S; Lundström, I

    2001-01-01

    The aim of this study was to investigate the decision process and to analyse the mechanisms involved in the transition from upper secondary education to post-secondary education or the labour market. Sixteen students with sensorineural hearing loss were selected; eight of these students continued to university and eight did not. Twenty-five per cent of the students were women, and the average age was 28 years. The investigation was conducted about 5 years after graduation from upper secondary school. Both quantitative and qualitative methods were used. The results showed that none of the students came from a family where one or both of the parents had a university or comparable education. The differences in choice between the two groups cannot be explained in terms of social inheritance. Our study indicates that, given normal intellectual capacity, the level of hearing loss seems to have no predictive value regarding future educational performance and academic career. The conclusion is that it is of great importance that a hearing-impaired pupil with normal intellectual capacity is encouraged and guided to choose an upper secondary educational programme that is orientated towards post-secondary education (instead of a narrow vocational programme). In addition to their hearing impairment and related educational problems, hard of hearing students have much more difficulty than normal-hearing peers in coping with changes in intentions and goals regarding their educational career during their upper secondary education.

  15. Affective Properties of Mothers' Speech to Infants With Hearing Impairment and Cochlear Implants

    PubMed Central

    Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. Results Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting compared with such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. Conclusions Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants. PMID:25679195

  16. 30 CFR 62.170 - Audiometric testing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... audiometric testing of the miner's hearing sensitivity for the purpose of establishing a valid baseline... the miner's hearing sensitivity as the baseline audiogram if it meets the audiometric testing... the testing. (2) The mine operator must notify the miner to avoid high levels of noise for at least 14...

  17. 30 CFR 62.170 - Audiometric testing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... audiometric testing of the miner's hearing sensitivity for the purpose of establishing a valid baseline... the miner's hearing sensitivity as the baseline audiogram if it meets the audiometric testing... the testing. (2) The mine operator must notify the miner to avoid high levels of noise for at least 14...

  18. 30 CFR 62.170 - Audiometric testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... audiometric testing of the miner's hearing sensitivity for the purpose of establishing a valid baseline... the miner's hearing sensitivity as the baseline audiogram if it meets the audiometric testing... the testing. (2) The mine operator must notify the miner to avoid high levels of noise for at least 14...

  19. 30 CFR 62.170 - Audiometric testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... audiometric testing of the miner's hearing sensitivity for the purpose of establishing a valid baseline... the miner's hearing sensitivity as the baseline audiogram if it meets the audiometric testing... the testing. (2) The mine operator must notify the miner to avoid high levels of noise for at least 14...

  20. 30 CFR 62.170 - Audiometric testing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... audiometric testing of the miner's hearing sensitivity for the purpose of establishing a valid baseline... the miner's hearing sensitivity as the baseline audiogram if it meets the audiometric testing... the testing. (2) The mine operator must notify the miner to avoid high levels of noise for at least 14...

  1. Comparison of characteristics observed in tinnitus patients with unilateral vs bilateral symptoms, with both normal hearing threshold and distortion-product otoacoustic emissions.

    PubMed

    Zagólski, Olaf; Stręk, Paweł

    2017-02-01

    Tinnitus characteristics in normal-hearing patients differ between the groups with unilateral and bilateral complaints. The aim of the study was to determine the differences between tinnitus characteristics observed in patients with unilateral vs bilateral symptoms who had a normal hearing threshold and normal results of distortion-product otoacoustic emissions (DPOAEs). The patients answered questions concerning tinnitus duration, laterality, character, accompanying symptoms, and circumstances of onset. The results of tympanometry, auditory brainstem responses, tinnitus likeness spectrum, minimum masking level (MML), and uncomfortable loudness level were evaluated. Records of 380 tinnitus sufferers were examined. Patients with abnormal audiograms and/or DPOAEs were excluded. The remaining 66 participants were divided into groups with unilateral and bilateral tinnitus. Unilateral tinnitus in normal-hearing patients was diagnosed twice as frequently as bilateral tinnitus. Tinnitus pitch was higher in the group with bilateral tinnitus (p < .001). MML was lower in unilateral tinnitus (p < .05). Mean age of patients was higher in the unilateral tinnitus group (p < .05). Mean tinnitus duration was longer (p < .05) and hypersensitivity to sound was more frequent (p < .05) in the bilateral tinnitus group. Repeated exposure to excessive noise was the most frequent cause of onset in the bilateral tinnitus group.

  2. Comparison of distortion product otoacoustic emissions with auditory brain-stem response for clinical use in neonatal intensive care unit.

    PubMed

    Ochi, A; Yasuhara, A; Kobayashi, Y

    1998-11-01

    This study compares the clinical usefulness of distortion product otoacoustic emissions (DPOAEs) with the auditory brain-stem response (ABR) for neonates in the neonatal intensive care unit (NICU) for the evaluation of hearing impairment. Both DPOAEs and ABR were performed on 36 neonates (67 ears) on the same day. We defined neonates as having normal hearing when the thresholds of wave V of the ABR were ≤45 dB hearing level. (1) We could not obtain DPOAEs at f2 = 977 Hz in neonates with normal hearing because of high noise floors. DPOAE recording time was 36 min shorter than that of ABR. (2) DPOAEs were defined as normal when the DPgram exceeded the noise floor by ≥4 dB at four or more of the 6 f2 frequencies, from 1416 Hz to 7959 Hz. (3) Normal ABR thresholds and normal DPOAEs were found in the same percentage of ears, 68.7%, and the percentage of discordant results between ABR and DPOAEs was 6.0%. Our study indicates that DPOAEs represent a simple procedure, which can be easily performed in the NICU to obtain reliable results in high-risk neonates. Results obtained by DPOAEs were comparable to those obtained by the more complex procedure of ABR.
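
    The DPOAE pass rule described above (response at least 4 dB above the noise floor at four or more of the six f2 frequencies) can be sketched as a simple count; the level values below are invented for illustration.

      def dpoae_normal(dp_levels_db, noise_floor_db, min_snr_db=4, min_freqs=4):
          """True if enough f2 frequencies show a DP level >= noise floor + min_snr_db."""
          passing = sum(1 for dp, nf in zip(dp_levels_db, noise_floor_db) if dp - nf >= min_snr_db)
          return passing >= min_freqs

      dp_levels = [3, 8, 10, 9, 6, -2]      # dB SPL at the six f2 frequencies, hypothetical
      noise_floor = [-4, -1, 2, 1, 0, -3]
      print(dpoae_normal(dp_levels, noise_floor))   # True: 5 of 6 frequencies meet the criterion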

  3. Maternal Distancing Strategies toward Twin Sons, One with Mild Hearing Loss: A Case Study

    ERIC Educational Resources Information Center

    Munoz-Silva, Alicia; Sanchez-Garcia, Manuel

    2004-01-01

    The authors apply descriptive and sequential analyses to a mother's distancing strategies toward her 3-year-old twin sons in puzzle assembly and book reading tasks. One boy had normal hearing and the other a mild hearing loss (threshold: 30 dB). The results show that the mother used more distancing behaviors with the son with a hearing loss, and…

  4. Effects of a cochlear implant simulation on immediate memory in normal-hearing adults

    PubMed Central

    Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.

    2012-01-01

    This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than memory processing errors related to cognitive load. These findings provide complementary information to earlier research on auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of a hearing-impairment. PMID:16317807

  5. Auditory Evoked Potentials for the Evaluation of Hearing Sensitivity in Navy Dolphins. Assessment of Hearing Sensitivity in Adult Male Elephant Seals

    DTIC Science & Technology

    2006-12-01

    Biology of Marine Mammals, San Diego, California, 12 - 16 December. Finneran, J. J. and Houser, D. S. 2004. Objective measures of steady-state...Gervais’ beaked whale auditory evoked potential hearing measurements. 16th Biennial Conference on the Biology of Marine Mammals, San Diego, California...Biennial Conference on the Biology of Marine Mammals, San Diego, California, 12 - 16 December. 16 FTR N00014-04-1-0455 BIOMIMETICA Invited Lectures

  6. Hearing loss is associated with decreased nonverbal intelligence in rural Nepal.

    PubMed

    Emmett, Susan D; Schmitz, Jane; Pillion, Joseph; Wu, Lee; Khatry, Subarna K; Karna, Sureshwar L; LeClerq, Steven C; West, Keith P

    2015-01-01

    To evaluate the association between adolescent and young-adult hearing loss and nonverbal intelligence in rural Nepal. Cross-sectional assessment of hearing loss among a population cohort of adolescents and young adults. Sarlahi District, southern Nepal. Seven hundred sixty-four individuals aged 14 to 23 years. Evaluation of hearing loss, defined by World Health Organization criteria of pure-tone average greater than 25 decibels (0.5, 1, 2, 4 kHz), unilaterally and bilaterally. Nonverbal intelligence, as measured by the Test of Nonverbal Intelligence, 3rd Edition standardized score (mean, 100; standard deviation, 15). Nonverbal intelligence scores differed between participants with normal hearing and those with bilateral (p = 0.04) but not unilateral (p = 0.74) hearing loss. Demographic and socioeconomic factors including male sex; higher caste; literacy; education level; occupation reported as student; and ownership of a bicycle, watch, and latrine were strongly associated with higher nonverbal intelligence scores (all p < 0.001). Subjects with bilateral hearing loss scored an average of 3.16 points lower (95% confidence interval, -5.56 to -0.75; p = 0.01) than subjects with normal hearing after controlling for socioeconomic factors. There was no difference in nonverbal intelligence score based on unilateral hearing loss (0.97; 95% confidence interval, -1.67 to 3.61; p = 0.47). Nonverbal intelligence is adversely affected by bilateral hearing loss even at mild hearing loss levels. Socioeconomic well-being appears compromised in individuals with lower nonverbal intelligence test scores.

  7. Pilot study of cognition in children with unilateral hearing loss.

    PubMed

    Ead, Banan; Hale, Sandra; DeAlwis, Duneesha; Lieu, Judith E C

    2013-11-01

    The objective of this study was to obtain preliminary data on the cognitive function of children with unilateral hearing loss in order to identify, quantify, and interpret differences in cognitive and language functions between children with unilateral hearing loss and with normal hearing. Fourteen children ages 9-14 years old (7 with severe-to-profound sensorineural unilateral hearing loss and 7 sibling controls with normal hearing) were administered five tests that assessed cognitive functions of working memory, processing speed, attention, and phonological processing. Mean composite scores for phonological processing were significantly lower for the group with unilateral hearing loss than for controls on one composite and four subtests. The unilateral hearing loss group trended toward worse performance on one additional composite and on two additional phonological processing subtests. The unilateral hearing loss group also performed worse than the control group on the complex letter span task. Analysis examining performance on the two levels of task difficulty revealed a significant main effect of task difficulty and an interaction between task difficulty and group. Cognitive function and phonological processing test results suggest two related deficits associated with unilateral hearing loss: (1) reduced accuracy and efficiency associated with phonological processing, and (2) impaired executive control function when engaged in maintaining verbal information in the face of processing incoming, irrelevant verbal information. These results provide a possible explanation for the educational difficulties experienced by children with unilateral hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Hearing Loss is Associated with Decreased Nonverbal Intelligence in Rural Nepal

    PubMed Central

    Emmett, Susan D.; Schmitz, Jane; Pillion, Joseph; Wu, Lee; Khatry, Subarna K.; Karna, Sureshwar L.; LeClerq, Steven C.; West, Keith P.

    2014-01-01

    Objective Evaluate the association between adolescent and young adult hearing loss and nonverbal intelligence in rural Nepal Study Design Cross-sectional assessment of hearing loss among a population cohort of adolescents and young adults Setting Sarlahi District, southern Nepal Patients 764 individuals aged 14–23 years Intervention Evaluation of hearing loss, defined by WHO criteria of pure-tone average (PTA) >25 decibels (0.5, 1, 2, 4 kHz), unilaterally and bilaterally Main Outcome Measure Nonverbal intelligence, measured by the Test of Nonverbal Intelligence, 3rd Edition (TONI-3) standardized score (mean 100; standard deviation (SD) 15) Results Nonverbal intelligence scores differed between participants with normal hearing and those with bilateral (p =0.04) but not unilateral (p =0.74) hearing loss. Demographic and socioeconomic factors including male sex, higher caste, literacy, education level, occupation reported as student, and ownership of a bicycle, watch, and latrine were strongly associated with higher nonverbal intelligence scores (all p <0.001). Subjects with bilateral hearing loss scored an average of 3.16 points lower (95% CI: −5.56, −0.75; p =0.01) than subjects with normal hearing after controlling for socioeconomic factors. There was no difference in nonverbal intelligence score based on unilateral hearing loss (0.97; 95% CI: −1.67, 3.61; p =0.47). Conclusions Nonverbal intelligence is adversely affected by bilateral hearing loss, even at mild hearing loss levels. Social and economic well being appear compromised in individuals with lower nonverbal intelligence test scores. PMID:25299832

  9. How age affects memory task performance in clinically normal hearing persons.

    PubMed

    Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-05-01

    The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment (MoCA)). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.

  10. Quality of Life and Hearing Eight Years After Sudden Sensorineural Hearing Loss.

    PubMed

    Härkönen, Kati; Kivekäs, Ilkka; Rautiainen, Markus; Kotti, Voitto; Vasama, Juha-Pekka

    2017-04-01

    To explore long-term hearing results, quality of life (QoL), quality of hearing (QoH), work-related stress, tinnitus, and balance problems after idiopathic sudden sensorineural hearing loss (ISSNHL). Cross-sectional study. We reviewed the audiograms of 680 patients with unilateral ISSNHL on average 8 years after the hearing impairment, and then divided the patients into two study groups based on whether their ISSNHL had recovered to normal (pure tone average [PTA] ≤ 30 dB) or not (PTA > 30 dB). The inclusion criteria were a hearing threshold decrease of 30 dB or more in at least three contiguous frequencies occurring within 72 hours in the affected ear and normal hearing in the contralateral ear. Audiograms of 217 patients fulfilled the criteria. We reviewed their medical records; measured present QoL, QoH, and work-related stress with specific questionnaires; and updated the hearing status. Poor hearing outcome after ISSNHL was correlated with age, severity of hearing loss, and vertigo accompanying the ISSNHL. Quality of life and QoH were statistically significantly better in patients with recovered hearing, and these patients had statistically significantly less tinnitus and fewer balance problems. During the 8-year follow-up, the PTA of the affected ear deteriorated on average 7 dB, and that of the healthy ear deteriorated 6 dB. Idiopathic sudden sensorineural hearing loss that failed to recover had a negative impact on long-term QoL and QoH. Hearing deteriorated as a function of age similarly in both the affected and the healthy ear, and there were no differences between the groups. The cumulative recurrence rate for ISSNHL was 3.5%. Level of evidence: 4. Laryngoscope, 127:927-931, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Effects of modulation phase on profile analysis in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna; Lentz, Jennifer

    2003-04-01

    The ability to discriminate between sounds with different spectral shapes in the presence of amplitude modulation was measured in normal-hearing and hearing-impaired listeners. The standard stimulus was the sum of equal-amplitude modulated tones, and the signal stimulus was generated by increasing the level of half the tones (up components) and decreasing the level of half the tones (down components). The down components had the same modulation phase, and a phase shift was applied to the up components to encourage segregation from the down tones. The same phase shift was used in both standard and signal stimuli. Profile-analysis thresholds were measured as a function of the phase shift between up and down components. The phase shifts were 0, 30, 45, 60, 90, and 180 deg. As expected, thresholds were lowest when all tones had the same modulation phase and increased somewhat with increasing phase disparity. This small increase in thresholds was similar for both groups. These results suggest that hearing-impaired listeners are able to use modulation phase to group sounds in a manner similar to that of normal listeners. [Work supported by NIH (DC 05835).]

  12. Validation of the second version of the LittlEARS® Early Speech Production Questionnaire (LEESPQ) in German-speaking children with normal hearing.

    PubMed

    Keilmann, Annerose; Friese, Barbara; Lässig, Anne; Hoffmann, Vanessa

    2018-04-01

    The introduction of neonatal hearing screening and the increasingly early age at which children can receive a cochlear implant has intensified the need for a validated questionnaire to assess the speech production of children aged 0‒18. Such a questionnaire has been created, the LittlEARS ® Early Speech Production Questionnaire (LEESPQ). This study aimed to validate a second, revised edition of the LEESPQ. Questionnaires were returned for 362 children with normal hearing. Completed questionnaires were analysed to determine if the LEESPQ is reliable, prognostically accurate, internally consistent, and if gender or multilingualism affects total scores. Total scores correlated positively with age. The LEESPQ is reliable, accurate, and consistent, and independent of gender or lingual status. A norm curve was created. This second version of the LEESPQ is a valid tool to assess the speech production development of children with normal hearing, aged 0‒18, regardless of their gender. As such, the LEESPQ may be a useful tool to monitor the development of paediatric hearing device users. The second version of the LEESPQ is a valid instrument for assessing early speech production of children aged 0‒18 months.

  13. Attitudes toward noise, perceived hearing symptoms, and reported use of hearing protection among college students: Influence of youth culture

    PubMed Central

    Balanay, Jo Anne G.; Kearney, Gregory D.

    2015-01-01

    The purpose of this study was to assess the attitude toward noise, perceived hearing symptoms, noisy activities that were participated in, and factors associated with hearing protection use among college students. A 44-item online survey was completed by 2,151 college students (aged 17 years and above) to assess the attitudes toward noise, perceived hearing symptoms related to noise exposure, and use of hearing protection around noisy activities. Among the participants, 39.6% experienced at least one hearing symptom, with ear pain as the most frequently reported (22.5%). About 80% of the participants were involved in at least one noise activity, out of which 41% reported the use of hearing protection. A large majority of those with ear pain, hearing loss, permanent tinnitus, and noise sensitivity was involved in attending a sporting event, which was the most reported noisy activity. The highest reported hearing protection use was in the use of firearms, and the lowest in discos/ dances. The reported use of hearing protection is associated with having at least one hearing symptom but the relationship is stronger with tinnitus, hearing loss, and ear pain (χ2 = 30.5-43.5, P < 0.01) as compared to noise sensitivity (χ2 = 3.8, P = 0.03); it is also associated with anti-noise attitudes, particularly in youth social events. Universities and colleges have important roles in protecting young adults’ hearing by integrating hearing conservation topic in the college curriculum, promoting hearing health by student health services, involving student groups in noise-induced hearing loss (NIHL) awareness and prevention, and establishing noise level limitations for all on-campus events. PMID:26572699

  14. Attitudes toward noise, perceived hearing symptoms, and reported use of hearing protection among college students: Influence of youth culture.

    PubMed

    Balanay, Jo Anne G; Kearney, Gregory D

    2015-01-01

    The purpose of this study was to assess the attitude toward noise, perceived hearing symptoms, noisy activities that were participated in, and factors associated with hearing protection use among college students. A 44-item online survey was completed by 2,151 college students (aged 17 years and above) to assess the attitudes toward noise, perceived hearing symptoms related to noise exposure, and use of hearing protection around noisy activities. Among the participants, 39.6% experienced at least one hearing symptom, with ear pain as the most frequently reported (22.5%). About 80% of the participants were involved in at least one noise activity, out of which 41% reported the use of hearing protection. A large majority of those with ear pain, hearing loss, permanent tinnitus, and noise sensitivity was involved in attending a sporting event, which was the most reported noisy activity. The highest reported hearing protection use was in the use of firearms, and the lowest in discos/ dances. The reported use of hearing protection is associated with having at least one hearing symptom but the relationship is stronger with tinnitus, hearing loss, and ear pain (χ² = 30.5-43.5, P< 0.01) as compared to noise sensitivity (χ² = 3.8, P= 0.03); it is also associated with anti-noise attitudes, particularly in youth social events. Universities and colleges have important roles in protecting young adults' hearing by integrating hearing conservation topic in the college curriculum, promoting hearing health by student health services, involving student groups in noise-induced hearing loss (NIHL) awareness and prevention, and establishing noise level limitations for all on-campus events.

  15. Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M

    2011-01-01

    The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman correlation coefficients indicated that the cognitive load was larger in listeners with better TRT performances as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when increasing the intelligibility of speech in noise. In difficult listening conditions, these factors may induce cognitive overload relatively early or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
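
    The pupil measures named above (peak amplitude, peak latency, and mean dilation) can be sketched as simple operations on a baseline-corrected pupil trace. The sampling rate, baseline window, and synthetic trace below are assumptions for illustration, not the study's recording or analysis settings.

      import numpy as np

      def pupil_metrics(trace_mm, fs_hz, baseline_s=1.0):
          """Peak amplitude/latency and mean dilation relative to a pre-stimulus baseline."""
          trace = np.asarray(trace_mm, dtype=float)
          n_base = int(baseline_s * fs_hz)
          dilation = trace - trace[:n_base].mean()     # baseline-corrected diameter
          response = dilation[n_base:]                 # post-baseline portion
          peak_idx = int(np.argmax(response))
          return {
              "peak_amplitude_mm": float(response[peak_idx]),
              "peak_latency_s": peak_idx / fs_hz,
              "mean_dilation_mm": float(response.mean()),
          }

      fs = 60                                          # Hz, hypothetical eye-tracker rate
      t = np.arange(0, 5, 1 / fs)
      trace = 3.0 + 0.4 * np.exp(-((t - 2.5) ** 2) / 0.5)   # synthetic pupil diameter (mm)
      print(pupil_metrics(trace, fs))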

  16. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.

    2017-01-01

    Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This…

  17. Hearing

    ERIC Educational Resources Information Center

    Koehlinger, Keegan M.; Van Horne, Amanda J. Owen; Moeller, Mary Pat

    2013-01-01

    Purpose: Spoken language skills of 3- and 6-year-old children who are hard of hearing (HH) were compared with those of children with normal hearing (NH). Method: Language skills were measured via mean length of utterance in words (MLUw) and percent correct use of finite verb morphology in obligatory contexts based on spontaneous conversational…

  18. The Hearing Environment

    ERIC Educational Resources Information Center

    Capewell, Carmel

    2014-01-01

    Glue ear, a condition resulting in intermittent hearing loss, affects about 80% of children under seven years old. About 60% of children will spend a third of their time unable to hear within normal thresholds. Teachers are unlikely to consider the sound quality in classrooms. In my research young people provided…

  19. Binaural Loudness Summation in the Hearing Impaired.

    ERIC Educational Resources Information Center

    Hawkins, David B.; And Others

    1987-01-01

    Binaural loudness summation was measured using three different paradigms with 10 normally hearing and 20 bilaterally symmetrical high-frequency sensorineural hearing loss subjects. Binaural summation increased with presentation level using the loudness matching procedure, with values in the 6-10 dB range. Summation decreased with level using the…

  20. The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children

    ERIC Educational Resources Information Center

    Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin

    2012-01-01

    Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…

  1. Word Learning in Deaf Children with Cochlear Implants: Effects of Early Auditory Experience

    ERIC Educational Resources Information Center

    Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.

    2012-01-01

    Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this…

  2. Auditory Brainstem Response Thresholds to Air- and Bone-Conducted CE-Chirps in Neonates and Adults

    ERIC Educational Resources Information Center

    Cobb, Kensi M.; Stuart, Andrew

    2016-01-01

    Purpose The purpose of this study was to compare auditory brainstem response (ABR) thresholds to air- and bone-conducted CE-Chirps in neonates and adults. Method Thirty-two neonates with no physical or neurologic challenges and 20 adults with normal hearing participated. ABRs were acquired with a starting intensity of 30 dB normal hearing level…

  3. Visual Attention in Deaf and Normal Hearing Adults: Effects of Stimulus Compatibility

    ERIC Educational Resources Information Center

    Sladen, Douglas P.; Tharpe, Anne Marie; Ashmead, Daniel H.; Grantham, D. Wesley; Chun, Marvin M.

    2005-01-01

    Visual perceptual skills of deaf and normal hearing adults were measured using the Eriksen flanker task. Participants were seated in front of a computer screen while a series of target letters flanked by similar or dissimilar letters was flashed in front of them. Participants were instructed to press one button when they saw an "H," and another…

  4. Responses to Targets in the Visual Periphery in Deaf and Normal-Hearing Adults

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Ashmead, Daniel H.; Tharpe, Anne Marie

    2003-01-01

    The purpose of this study was to compare the response times of deaf and normal-hearing individuals to the onset of target events in the visual periphery in distracting and nondistracting conditions. Visual reaction times to peripheral targets placed at 3 eccentricities to the left and right of a center fixation point were measured in prelingually…

  5. Consonant Cluster Production in Children with Cochlear Implants: A Comparison with Normally Hearing Peers

    ERIC Educational Resources Information Center

    Faes, Jolien; Gillis, Steven

    2017-01-01

    In early word productions, the same types of errors are manifest in children with cochlear implants (CI) as in their normally hearing (NH) peers with respect to consonant clusters. However, the incidence of those types and their longitudinal development have not been examined or quantified in the literature thus far. Furthermore, studies on the…

  6. Response Errors in Females' and Males' Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy

    ERIC Educational Resources Information Center

    Bernstein, Lynne E.

    2018-01-01

    Lipreaders recognize words with phonetically impoverished stimuli, an ability that varies widely in normal-hearing adults. Lipreading accuracy for sentence stimuli was modeled with data from 339 normal-hearing adults. Models used measures of phonemic perceptual errors, insertion of text that was not in the stimulus, gender, and auditory speech…

  7. Voice gender identification by cochlear implant users: The role of spectral and temporal resolution

    NASA Astrophysics Data System (ADS)

    Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.

    2005-09-01

    The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.

  8. Modern Prescription Theory and Application: Realistic Expectations for Speech Recognition With Hearing Aids

    PubMed Central

    2013-01-01

    A major decision at the time of hearing aid fitting and dispensing is the amount of amplification to provide listeners (both adult and pediatric populations) for the appropriate compensation of sensorineural hearing impairment across a range of frequencies (e.g., 160–10000 Hz) and input levels (e.g., 50–75 dB sound pressure level). This article describes modern prescription theory for hearing aids within the context of a risk versus return trade-off and efficient frontier analyses. The expected return of amplification recommendations (i.e., generic prescriptions such as National Acoustic Laboratories—Non-Linear 2, NAL-NL2, and Desired Sensation Level Multiple Input/Output, DSL m[i/o]) for the Speech Intelligibility Index (SII) and high-frequency audibility were traded against a potential risk (i.e., loudness). The modeled performance of each prescription was compared one with another and with the efficient frontier of normal hearing sensitivity (i.e., a reference point for the most return with the least risk). For the pediatric population, NAL-NL2 was more efficient for SII, while DSL m[i/o] was more efficient for high-frequency audibility. For the adult population, NAL-NL2 was more efficient for SII, while the two prescriptions were similar with regard to high-frequency audibility. In terms of absolute return (i.e., not considering the risk of loudness), however, DSL m[i/o] prescribed more outright high-frequency audibility than NAL-NL2 for either aged population, particularly, as hearing loss increased. Given the principles and demonstrated accuracy of desensitization (reduced utility of audibility with increasing hearing loss) observed at the group level, additional high-frequency audibility beyond that of NAL-NL2 is not expected to make further contributions to speech intelligibility (recognition) for the average listener. PMID:24253361
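
    The risk-versus-return framing above can be illustrated with a generic efficient-frontier (Pareto) selection. The prescription names and the (loudness risk, SII return) pairs in the sketch below are invented placeholders, not modeled NAL-NL2 or DSL m[i/o] outputs; only the selection logic is shown.

        # Generic efficient-frontier (Pareto) selection over hypothetical prescriptions.
        # The (risk, return) pairs are invented placeholders, not actual NAL-NL2 or
        # DSL m[i/o] model outputs.
        candidates = {
            "rx_A": {"risk_loudness": 0.40, "return_sii": 0.62},
            "rx_B": {"risk_loudness": 0.55, "return_sii": 0.66},
            "rx_C": {"risk_loudness": 0.70, "return_sii": 0.65},  # dominated by rx_B
            "rx_D": {"risk_loudness": 0.80, "return_sii": 0.71},
        }

        def efficient_frontier(cands):
            """Keep prescriptions not dominated by another with lower risk AND higher return."""
            frontier = []
            for name, c in cands.items():
                dominated = any(
                    o["risk_loudness"] <= c["risk_loudness"]
                    and o["return_sii"] >= c["return_sii"]
                    and o != c
                    for o in cands.values()
                )
                if not dominated:
                    frontier.append(name)
            return sorted(frontier, key=lambda n: cands[n]["risk_loudness"])

        print(efficient_frontier(candidates))   # -> ['rx_A', 'rx_B', 'rx_D']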

  9. Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study

    PubMed Central

    Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (ie. 1 vs 2 sounds) from their bilateral implants and if this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423

  10. Functional morphology of the inner ear and underwater audiograms of Proteus anguinus (Amphibia, Urodela).

    PubMed

    Bulog, B; Schlegel, P

    2000-01-01

    Octavolateral sensory organs (auditory and lateral line organs) of the cave salamander Proteus anguinus are highly differentiated. In the saccular macula of the inner ear, the complex pattern of hair cell orientation and the large otoconial mass enable detection of the direction of particle displacement. Additionally, the same organ, through air cavities within the body, enables detection of underwater sound pressure changes, thus acting as a hearing organ. The cavities in the lungs and mouth of Proteus act as resonators that transmit underwater sound pressure to the inner ear. Behaviourally determined audiograms indicate hearing sensitivity of 60 dB (rel. 1 microPa) at frequencies between 1 and 10 kHz. The hearing frequency range was between 10 Hz and 10 kHz. The hearing sensitivities of depigmented Proteus and black Proteus were compared. The highest sensitivities of the depigmented animals (N=4) were at frequencies of 1.3-1.7 kHz, compared with 2 kHz in the black animal (N=1). The excellent underwater hearing abilities of Proteus are sensory adaptations to its cave habitat.

  11. Practical considerations for a second-order directional hearing aid microphone system

    NASA Astrophysics Data System (ADS)

    Thompson, Stephen C.

    2003-04-01

    First-order directional microphone systems for hearing aids have been available for several years. Such a system uses two microphones and has a theoretical maximum free-field directivity index (DI) of 6.0 dB. A second-order microphone system using three microphones could provide a theoretical increase in free-field DI to 9.5 dB. These theoretical maximum DI values assume that the microphones have exactly matched sensitivities at all frequencies of interest. In practice, the individual microphones in the hearing aid always have slightly different sensitivities. For the small microphone separation necessary to fit in a hearing aid, these sensitivity matching errors degrade the directivity from the theoretical values, especially at low frequencies. This paper shows that, for first-order systems, the directivity degradation due to sensitivity errors is relatively small. However, for second-order systems with practical microphone sensitivity matching specifications, the directivity degradation below 1 kHz is not tolerable. A hybrid-order directional system is proposed that uses first-order processing at low frequencies and second-order directional processing at higher frequencies. This hybrid system is suggested as an alternative that could provide an improved directivity index in the frequency regions that are important to speech intelligibility.
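
    The 6.0 dB and 9.5 dB figures quoted above are the theoretical free-field maxima for ideal first- and second-order patterns. The sketch below evaluates the directivity index numerically for the standard maximum-DI (hypercardioid-type) pattern of each order; the specific pattern coefficients are the textbook ideals, assumed here, and the paper's own processing is not reproduced.

        # Sketch: free-field directivity index (DI) of idealized axially symmetric
        # microphone patterns, evaluated numerically.  The two patterns below are the
        # standard maximum-DI ("hypercardioid") forms for first and second order; the
        # 6.0 dB and 9.5 dB figures in the abstract correspond to these ideals.
        import numpy as np

        def directivity_index_db(pattern, n=20001):
            """DI = on-axis power over power averaged over the sphere (axisymmetric pattern)."""
            theta = np.linspace(0.0, np.pi, n)
            h = pattern(theta)
            avg_power = 0.5 * np.trapz(h ** 2 * np.sin(theta), theta)
            return 10.0 * np.log10(pattern(np.array([0.0]))[0] ** 2 / avg_power)

        first_order = lambda th: 0.25 + 0.75 * np.cos(th)                        # 1st-order hypercardioid
        second_order = lambda th: (-1 + 2 * np.cos(th) + 5 * np.cos(th) ** 2) / 6  # 2nd-order max-DI pattern

        print(f"first-order DI  = {directivity_index_db(first_order):.1f} dB")   # ~6.0 dB
        print(f"second-order DI = {directivity_index_db(second_order):.1f} dB")  # ~9.5 dB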

  12. Variability of word discrimination scores in clinical practice and consequences on their sensitivity to hearing loss.

    PubMed

    Moulin, Annie; Bernard, André; Tordella, Laurent; Vergne, Judith; Gisbert, Annie; Martin, Christian; Richard, Céline

    2017-05-01

    Speech perception scores are widely used to assess a patient's functional hearing, yet most linguistic material used in these audiometric tests dates to before the availability of large computerized linguistic databases. In an ENT clinic population of 120 patients with a median hearing loss of 43 dB HL, we quantified the variability and the sensitivity to hearing loss of speech perception scores, measured using disyllabic word lists, as a function of both the number of ten-word lists and the type of scoring used (word, syllable, or phoneme). The mean word recognition scores varied significantly across lists, from 54 to 68%. The median of the variability of the word recognition score ranged from 30% for one ten-word list down to 20% for three ten-word lists. Syllabic and phonemic scores showed much less variability, with standard deviations decreasing by a factor of 1.15 with the use of syllabic scores and by 1.45 with phonemic scores. The sensitivity of each list to hearing loss and distortions varied significantly. There was an increase in the minimum effect size that could be seen for syllabic scores compared to word scores, with no significant further improvement with phonemic scores. The use of at least two ten-word lists, scored in syllables rather than in whole words, contributed to a large decrease in variability and an increase in sensitivity to hearing loss. However, those results emphasize the need for updated linguistic material for clinical speech score assessments.
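
    The reduction in score variability with finer scoring units is consistent with simple binomial statistics: more scored units per list means a smaller standard deviation of the percent-correct score. The Monte Carlo sketch below illustrates that intuition only; the unit counts per ten-word list and the per-unit probability are assumptions, not the study's disyllabic material.

        # Monte Carlo sketch of why finer scoring units reduce score variability.
        # Unit counts per ten-word list and the per-unit correct probability are
        # assumptions for illustration, not the study's word lists.
        import numpy as np

        rng = np.random.default_rng(0)
        p_correct = 0.6          # assumed average probability of a correct unit
        n_lists = 10000

        for label, units_per_list in [("word scoring", 10),
                                      ("syllable scoring", 20),
                                      ("phoneme scoring", 50)]:
            scores = rng.binomial(units_per_list, p_correct, n_lists) / units_per_list * 100
            print(f"{label:17s}: SD of list score = {scores.std():4.1f} %")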

  13. A user-operated test of suprathreshold acuity in noise for adult hearing screening: The SUN (Speech Understanding in Noise) test.

    PubMed

    Paglialonga, Alessia; Tognola, Gabriella; Grandori, Ferdinando

    2014-09-01

    A novel, user-operated test of suprathreshold acuity in noise for use in adult hearing screening (AHS) was developed. The Speech Understanding in Noise (SUN) test is a speech-in-noise test that uses a list of vowel-consonant-vowel (VCV) stimuli in background noise, presented in a three-alternative forced-choice (3AFC) paradigm by means of a touch-sensitive screen. The test is automated, easy to use, and provides self-explanatory results (i.e., 'no hearing difficulties', 'a hearing check would be advisable', or 'a hearing check is recommended'). The test was developed from its building blocks (VCVs and speech-shaped noise) through two main steps: (i) development of the test list through equalization of the intelligibility of test stimuli across the set, and (ii) optimization of the test results through maximization of the test sensitivity and specificity. The test had 82.9% sensitivity and 85.9% specificity compared to conventional pure-tone screening, and 83.8% sensitivity and 83.9% specificity to identify individuals with disabling hearing impairment. Results obtained so far showed that the test could be easily performed by adults and older adults in less than one minute per ear and that its results were not influenced by ambient noise (up to 65 dBA), suggesting that the test might be a viable method for AHS in clinical as well as non-clinical settings.
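
    The sensitivity and specificity figures above follow from a simple 2x2 comparison of screening outcomes against a reference test. The sketch below shows the computation; the counts are hypothetical and were merely chosen to land near the reported percentages.

        # Sensitivity/specificity of a screening outcome against a reference test,
        # with hypothetical counts (not the SUN validation data).
        def sens_spec(true_positive, false_negative, true_negative, false_positive):
            sensitivity = true_positive / (true_positive + false_negative)
            specificity = true_negative / (true_negative + false_positive)
            return sensitivity, specificity

        # e.g., 58 of 70 impaired ears flagged, 110 of 128 normal ears passed (hypothetical)
        se, sp = sens_spec(true_positive=58, false_negative=12,
                           true_negative=110, false_positive=18)
        print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")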

  14. Tinnitus and other auditory problems - occupational noise exposure below risk limits may cause inner ear dysfunction.

    PubMed

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds and could be included in one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise exposure (Industry). Ntotal = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were also significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group.

  15. Tinnitus and Other Auditory Problems – Occupational Noise Exposure below Risk Limits May Cause Inner Ear Dysfunction

    PubMed Central

    Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn

    2014-01-01

    The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds and could be included in one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise exposure (Industry). Ntotal = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was addressed to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were also significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in a normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group. PMID:24827149

  16. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss

    PubMed Central

    Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen

    2016-01-01

    Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning, but there was little generalization to untrained conditions. We suggest that most training-related changes occurred at higher-level, task-specific cognitive processes in both groups. However, these were enhanced by high-quality perceptual representations in the normal-hearing group. In contrast, some training-related changes also occurred at the level of phonemic representations in the ARHL group, consistent with an interaction between bottom-up and top-down processes. PMID:26869944

  17. High-frequency hearing impairment assessed with cochlear microphonics.

    PubMed

    Zhang, Ming

    2012-09-01

    Cochlear microphonic (CM) measurements may potentially become a supplementary approach to otoacoustic emission (OAE) measurements for assessing low-frequency cochlear functions in the clinic. The objective of this study was to investigate the measurement of CMs in subjects with high-frequency hearing loss. Currently, CMs can be measured using electrocochleography (ECochG or ECoG) techniques. Both CMs and OAEs are cochlear responses, while auditory brainstem responses (ABRs) are not. However, there are inherent limitations associated with OAE measurements such as acoustic noise, which can conceal low-frequency OAEs measured in the clinic. However, CM measurements may not have these limitations. CMs were measured in human subjects using an ear canal electrode. The CMs were compared between the high-frequency hearing loss group and the normal-hearing control group. Distortion product OAEs (DPOAEs) and audiogram were also measured. The DPOAE and audiogram measurements indicate that the subjects were correctly selected for the two groups. Low-frequency CM waveforms (CMWs) can be measured using ear canal electrodes in high-frequency hearing loss subjects. The difference in amplitudes of CMWs between the high-frequency hearing loss group and the normal-hearing group is insignificant at low frequencies but significant at high frequencies.

  18. Association of Hearing Impairment With Incident Frailty and Falls in Older Adults

    PubMed Central

    Kamil, Rebecca J.; Betz, Joshua; Powers, Becky Brott; Pratt, Sheila; Kritchevsky, Stephen; Ayonayon, Hilsa N.; Harris, Tammy B.; Helzner, Elizabeth; Deal, Jennifer A.; Martin, Kathryn; Peterson, Matthew; Satterfield, Suzanne; Simonsick, Eleanor M.; Lin, Frank R.

    2017-01-01

    Objective: We aimed to determine whether hearing impairment (HI) in older adults is associated with the development of frailty and falls. Method: Longitudinal analysis of observational data from the Health, Aging and Body Composition study of 2,000 participants aged 70 to 79 was conducted. Hearing was defined by the pure-tone average of hearing thresholds at 0.5, 1, 2, and 4 kHz in the better hearing ear. Frailty was defined as a gait speed of <0.60 m/s and/or inability to rise from a chair without using arms. Falls were assessed annually by self-report. Results: Older adults with moderate-or-greater HI had a 63% increased risk of developing frailty (adjusted hazard ratio [HR] = 1.63, 95% confidence interval [CI] = [1.26, 2.12]) compared with normal-hearing individuals. Moderate-or-greater HI was significantly associated with a greater annual percent increase in odds of falling over time (9.7%, 95% CI = [7.0, 12.4] compared with normal hearing, 4.4%, 95% CI = [2.6, 6.2]). Discussion: HI is independently associated with the risk of frailty in older adults and with greater odds of falling over time. PMID:26438083

  19. Field hearing measurements of the Atlantic sharpnose shark Rhizoprionodon terraenovae.

    PubMed

    Casper, B M; Mann, D A

    2009-12-01

    Field measurements of hearing thresholds were obtained from the Atlantic sharpnose shark Rhizoprionodon terraenovae using the auditory evoked potential method (AEP). The fish had most sensitive hearing at 20 Hz, the lowest frequency tested, with decreasing sensitivity at higher frequencies. Hearing thresholds were lower than AEP thresholds previously measured for the nurse shark Ginglymostoma cirratum and yellow stingray Urobatis jamaicensis at frequencies <200 Hz, and similar at 200 Hz and above. Rhizoprionodon terraenovae represents the closest comparison in terms of pelagic lifestyle to the sharks which have been observed in acoustic field attraction experiments. The sound pressure levels that would be equivalent to the particle acceleration thresholds of R. terraenovae were much higher than the sound levels which attracted closely related sharks suggesting a discrepancy between the hearing threshold experiments and the field attraction experiments.
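
    The comparison above between particle-acceleration thresholds and sound pressure levels rests on the plane-wave relation between pressure and particle motion. The sketch below shows that conversion under an assumed far-field plane wave in seawater; the density, sound speed, frequency, and threshold value are placeholders, not the measured shark data.

        # Sketch: converting a particle-acceleration threshold to the sound pressure
        # level of an equivalent plane wave (far-field assumption).  Density, sound
        # speed, frequency, and the threshold value are placeholders, not the
        # measured shark data.
        import math

        rho = 1026.0      # seawater density, kg/m^3 (assumed)
        c = 1500.0        # sound speed in seawater, m/s (assumed)
        p_ref = 1e-6      # underwater reference pressure, 1 micropascal

        def accel_threshold_to_spl(accel_ms2, freq_hz):
            """Plane-wave relation: a = 2*pi*f * p / (rho*c)  =>  p = a*rho*c / (2*pi*f)."""
            pressure = accel_ms2 * rho * c / (2 * math.pi * freq_hz)
            return 20 * math.log10(pressure / p_ref)

        print(f"{accel_threshold_to_spl(1e-3, 20):.1f} dB re 1 uPa")  # example threshold at 20 Hz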

  20. Investigation of Psychophysiological and Subjective Effects of Long Working Hours – Do Age and Hearing Impairment Matter?

    PubMed Central

    Wagner-Hartl, Verena; Kallus, K. Wolfgang

    2018-01-01

    According to current prognoses, demographic development will lead to an aging of the working population. Therefore, keeping employees healthy and strengthening their ability to work becomes more and more important. When employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that negative effects associated with reduced hearing can have a strong impact at work. Especially in exhausting working situations, such as working overtime hours, age and hearing impairment might influence employees’ well-being. Until now, neither the problem of aged workers and long working hours, nor the problem of hearing impairment and prolonged working time, has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age and hearing impairment on the intensity of the subjective consequences (perceived recovery and fatigue, subjective emotional well-being, and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that, from a psychophysiological point of view, long working hours were more demanding for normal-hearing employees. PMID:29379452

  1. Sensorineural Hearing Impairment and Subclinical Atherosclerosis in Rheumatoid Arthritis Patients Without Traditional Cardiovascular Risk Factors

    PubMed Central

    MACIAS-REYES, Hector; DURAN-BARRAGAN, Sergio; CARDENAS-CONTRERAS, Cynthia R.; CHAVEZ-MARTIN, Cesar G.; GOMEZ-BAÑUELOS, Eduardo; NAVARRO-HERNANDEZ, Rosa E.; YANOWSKY-GONZALEZ, Carlos O.; GONZALEZ-LOPEZ, Laura; GAMEZ-NAVA, Jorge I.

    2016-01-01

    Objectives: This study aims to evaluate the association of hearing impairment with carotid intima-media thickness and subclinical atherosclerosis in rheumatoid arthritis (RA) patients. Patients and methods: A total of 41 RA patients (2 males, 39 females; mean age 46.5±10.2 years; range 20 to 63 years) with no known traditional cardiovascular risk factors were included. Routine clinical and laboratory assessments for RA patients were performed. Pure tone air (250-8000 Hz) and bone conduction (250-6000 Hz) thresholds were obtained, and tympanometry and impedance audiometry were conducted. Sensorineural hearing impairment was defined as average thresholds ≥25 decibels. Carotid intima-media thickness was assessed and classified with a cut-off point of 0.6 mm. Results: Thirteen patients (31.7%) had normal hearing, while 28 (68.3%) had hearing impairment. Of these, 22 had bilateral sensorineural hearing impairment. Four patients had conductive hearing impairment (right-sided in three patients and left-sided in one patient). Patients with sensorineural hearing impairment had increased carotid intima-media thickness in the media segment of the common carotid artery compared to patients with normal hearing (right ear p=0.007; left ear p=0.075). Thickening of the carotid intima-media was associated with sensorineural hearing impairment in RA patients. Conclusion: In RA patients without cardiovascular risk factors, carotid intima-media thickness should be evaluated as a possible contributing factor to hearing impairment. PMID:29900940

  2. Investigation of Psychophysiological and Subjective Effects of Long Working Hours - Do Age and Hearing Impairment Matter?

    PubMed

    Wagner-Hartl, Verena; Kallus, K Wolfgang

    2017-01-01

    According to current prognoses, demographic development will lead to an aging of the working population. Therefore, keeping employees healthy and strengthening their ability to work becomes more and more important. When employees become older, dealing with age-related impairments of sensory functions, such as hearing impairment, is a central issue. Recent evidence suggests that negative effects associated with reduced hearing can have a strong impact at work. Especially in exhausting working situations, such as working overtime hours, age and hearing impairment might influence employees' well-being. Until now, neither the problem of aged workers and long working hours, nor the problem of hearing impairment and prolonged working time, has been addressed explicitly. Therefore, a laboratory study was conducted to answer the research question: do age and hearing impairment have an impact on the psychophysiological and subjective effects of long working hours? In total, 51 white-collar workers, aged between 24 and 63 years, participated in the laboratory study. The results show no significant effects of age and hearing impairment on the intensity of the subjective consequences (perceived recovery and fatigue, subjective emotional well-being, and physical symptoms) of long working hours. However, the psychophysiological response (the saliva cortisol level) to long working hours differs significantly between hearing-impaired and normal-hearing employees. Interestingly, the results suggest that, from a psychophysiological point of view, long working hours were more demanding for normal-hearing employees.

  3. Clinical Application and Psychometric Properties of a Norwegian Questionnaire for the Self-Assessment of Communication in Quiet and Adverse Conditions Using Two Revised APHAB Subscales.

    PubMed

    Heggdal, Peder O Laugen; Nordvik, Øyvind; Brännström, Jonas; Vassbotn, Flemming; Aarstad, Anne Kari; Aarstad, Hans Jørgen

    2018-01-01

    Difficulty in following and understanding conversation in different daily life situations is a common complaint among persons with hearing loss. To the best of our knowledge, there is currently no published validated Norwegian questionnaire available that allows for a self-assessment of unaided communication ability in a population with hearing loss. The aims of the present study were to investigate a questionnaire for the self-assessment of communication ability, examine the psychometric properties of this questionnaire, and explore how demographic variables such as degree of hearing loss, age, and sex influence response patterns. A questionnaire based on the subscales of the Norwegian translation of the Abbreviated Profile of Hearing Aid Benefit was applied to a group of hearing aid users and normal-hearing controls: 108 patients with bilateral hearing loss and 101 controls with self-reported normal hearing. The psychometric properties were evaluated. Associations and differences between outcome scores and descriptive variables were examined. A regression analysis was performed to investigate whether descriptive variables could predict outcome. The measures of reliability suggest that the questionnaire has satisfactory psychometric properties, with the outcome of the questionnaire correlating with hearing loss severity, indicating that the concurrent validity of the questionnaire is good. The findings indicate that the proposed questionnaire is a valid measure of self-assessed communication ability in both quiet and adverse listening conditions in participants with and without hearing loss.

  4. Role of Visual Speech in Phonological Processing by Children with Hearing Loss

    ERIC Educational Resources Information Center

    Jerger, Susan; Tye-Murray, Nancy; Abdi, Herve

    2009-01-01

    Purpose: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). Method: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in…

  5. Production of Sentence-Final Intonation Contours by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Allen, George D.; Arndorfer, Patricia M.

    2000-01-01

    This study compared the relationship between acoustic parameters and listeners' perceptions of intonation contours produced by 12 children (ages 7-14) either with severe-to-profound hearing impairments (HI) or normal-hearing (NH). The HI children's productions were generally similar to the NH children in that they used fundamental frequency,…

  6. 76 FR 66734 - National Institute on Deafness and Other Communication Disorders Draft 2012-2016 Strategic Plan

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-27

    ... areas of hearing and balance; smell and taste; and voice, speech, and language. The Strategic Plan... research training in the normal and disordered processes of hearing, balance, smell, taste, voice, speech... into three program areas: Hearing and balance; smell and taste; and voice, speech, and language. The...

  7. Vocabulary Facilitates Speech Perception in Children with Hearing Aids

    ERIC Educational Resources Information Center

    Klein, Kelsey E.; Walker, Elizabeth A.; Kirby, Benjamin; McCreery, Ryan W.

    2017-01-01

    Purpose: We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Method: Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5-12…

  8. Hearing Loss Severity: Impaired Processing of Formant Transition Duration

    ERIC Educational Resources Information Center

    Coez, A.; Belin, P.; Bizaguet, E.; Ferrary, E.; Zilbovicius, M.; Samson, Y.

    2010-01-01

    Normal hearing listeners exploit the formant transition (FT) detection to identify place of articulation for stop consonants. Neuro-imaging studies revealed that short FT induced less cortical activation than long FT. To determine the ability of hearing impaired listeners to distinguish short and long formant transitions (FT) from vowels of the…

  9. Fathers' Involvement in Preschool Programs for Children with and without Hearing Loss

    ERIC Educational Resources Information Center

    Ingber, Sara; Most, Tova

    2012-01-01

    The authors compared the involvement in children's development and education of 38 fathers of preschoolers with hearing loss to the involvement of a matched group of 36 fathers of preschoolers with normal hearing, examining correlations between child, father, and family characteristics. Fathers completed self-reports regarding their parental…

  10. Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss

    PubMed Central

    Alexander, Joshua M.; Kopun, Judy G.; Stelmachowicz, Patricia G.

    2014-01-01

    Summary: Listeners with normal hearing and mild to moderate loss identified fricatives and affricates that were recorded through hearing aids with frequency transposition (FT) or nonlinear frequency compression (NFC). FT significantly degraded performance for both groups. When frequencies up to ~9 kHz were lowered with NFC and with a novel frequency compression algorithm, spectral envelope decimation, performance significantly improved relative to conventional amplification (NFC-off) and was equivalent to wideband speech. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for /s/ and /z/. Objectives: Stelmachowicz and colleagues have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild-to-moderate sensorineural hearing loss and in normal-hearing controls. Design: Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant (VC) context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo, and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6-7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of the source stimuli. For the Inteo, the fourth condition consisted of recordings made with the same settings as the first, but with the noise reduction feature activated (FT-off). For the Naída, the fourth condition was the same as the first condition except that source stimuli were pre-processed by a novel frequency compression algorithm, spectral envelope decimation (SED), designed in MATLAB, which allowed for a more complete lowering of the 4-10 kHz input band. A follow-up experiment with NFC used Phonak’s Naída SP V BTE, which could also lower a greater range of input frequencies. Results: For normal-hearing (NH) and hearing-impaired (HI) listeners, performance with FT was significantly worse compared to the other conditions. Consistent with previous findings, performance for the HI listeners in the WB condition was significantly better than in the FT-off condition. In addition, performance in the SED and WB conditions was significantly better than in the NFC-off condition and the NFC condition with 6 kHz input bandwidth. There were no significant differences between SED and WB, indicating that improvements in fricative identification obtained by increasing bandwidth can also be obtained using this form of frequency compression. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for the phonemes /s/ and /z/. In the follow-up experiment, performance in the NFC condition with 10 kHz input bandwidth was significantly better than with NFC-off, replicating the results obtained with SED. Furthermore, listeners who performed poorly with NFC-off tended to show the most improvement with NFC. Conclusions: Improvements in the identification of stimuli chosen to be sensitive to the effects of frequency lowering have been demonstrated using two forms of frequency compression (NFC and SED) in individuals with mild to moderate high-frequency SNHL. However, negative results caution against using FT for this population. Results also indicate that the advantage of an extended bandwidth, as reported here and elsewhere, applies to the input bandwidth for frequency compression (NFC/SED) when the start frequency is ≥ 4 kHz. PMID:24699702
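
    A toy version of the frequency-lowering idea discussed above is sketched below: frequencies beyond a start frequency are compressed on a log-frequency scale by a fixed ratio so that high-frequency cues land inside the aided bandwidth. The 4 kHz start frequency follows the device programming described in the abstract, but the 2:1 ratio and the exact mapping are illustrative assumptions, not the commercial NFC implementation or the authors' SED algorithm.

        # Toy nonlinear frequency-compression map: frequencies above a start frequency
        # are compressed on a log scale by a fixed ratio.  The 4 kHz start frequency
        # follows the device programming described above; the 2:1 ratio and the exact
        # mapping form are illustrative assumptions, not the commercial NFC or the
        # authors' SED algorithm.
        import numpy as np

        def nfc_map(freq_hz, start_hz=4000.0, ratio=2.0):
            """Map input frequencies (Hz) to output frequencies (Hz)."""
            freq_hz = np.asarray(freq_hz, dtype=float)
            out = freq_hz.copy()
            above = freq_hz > start_hz
            # log-domain compression: octave distances above start_hz are divided by `ratio`
            out[above] = start_hz * (freq_hz[above] / start_hz) ** (1.0 / ratio)
            return out

        inputs = np.array([1000.0, 4000.0, 6000.0, 8000.0, 10000.0])
        print(dict(zip(inputs, nfc_map(inputs).round(0))))
        # e.g., a 10 kHz input lands near 6.3 kHz, inside a typical aided bandwidth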

  11. Hearing Screening of High-Risk Newborns with Brainstem Auditory Evoked Potentials: A Follow-Up Study.

    ERIC Educational Resources Information Center

    Shannon, Dorothy A.; And Others

    1984-01-01

    The brainstem auditory evoked potential (BAEP) was evaluated as a hearing screening test in 168 high-risk newborns. The BAEP was found to be a sensitive procedure for the early identification of hearing-impaired newborns. However, the yield of significant hearing abnormalities was less than predicted in other studies using BAEP. (Author/CL)

  12. Binaural Interference and the Effects of Age and Hearing Loss.

    PubMed

    Mussoi, Bruna S S; Bentler, Ruth A

    2017-01-01

    The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. To examine binaural interference through speech perception tests, in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated measures and one-way analysis of variances, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). On the CST, there was significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two signal-to-babble ratios (SBRs) tested. One participant had binaural interference at each SBR. Finally, on the DDT, a significant right-ear advantage was found with group data, and for at least some participants. Regarding age effects, more participants in the pooled elderly groups had binaural interference (33.3%) than in the younger group (16.7%), on the HINT. The presence of hearing loss yielded overall lower scores, but none of the comparisons between bilateral and unilateral performance were affected by hearing loss. Results of within-subject analyses on the HINT agree with previous findings of binaural interference in ≥17% of listeners. Across all groups, a significant right-ear advantage was also seen on the DDT. HINT results support the notion that the prevalence of binaural interference is likely higher in the elderly population. Hearing loss, however, did not affect the differences between bilateral and better unilateral scores. The possibility of binaural interference should be considered when fitting hearing aids to listeners with symmetric hearing loss. Comparing bilateral to unilateral (unaided) performance on tests such as the HINT may provide the clinician with objective data to support subjective preference for one hearing aid as opposed to two. American Academy of Audiology

  13. [Research on activity evolution of cerebral cortex and hearing rehabilitation of congenitally deaf children after cochlear implant].

    PubMed

    Wang, X J; Liang, M J; Zhang, J P; Huang, H; Zheng, Y Q

    2017-11-05

    Objective: Hearing rehabilitation outcomes differ significantly among congenitally deaf children after cochlear implantation (CI). The intrinsic mechanism that affects hearing rehabilitation in these patients was investigated from the perspective of evoked EEG source activity. Method: We collected ERP data from 23 patients and 10 control-group children at 0, 3, 6, 9, and 12 months after CI. According to hearing rehabilitation over the 12 months after CI, the patients were divided into two groups: good rehabilitation and poor rehabilitation. We then used sLORETA to show the changes in the patients' cerebral cortex and compared them with the control group. Result: Cross-modal reorganization of the cerebral cortex exists in congenitally deaf children. The cross-modal reorganization gradually diminished after CI, and the activity of the relevant cortex gradually normalized. After 12 months there was a statistically significant difference (P < 0.05) in the temporal lobe and the associated cortex around the parietal lobe between the good and poor rehabilitation groups. Conclusion: The normalization of cross-modal reorganization in patients reflects hearing rehabilitation after CI, especially the normalization of activity in the temporal lobe and the associated cortex around the parietal lobe, which influences the rehabilitation of auditory function to some extent. Detecting this mechanism is important for hearing recovery training and for the evaluation of hearing rehabilitation after CI.

  14. Development and Initial Validation of a Consumer Questionnaire to Predict the Presence of Ear Disease.

    PubMed

    Kleindienst, Samantha J; Zapala, David A; Nielsen, Donald W; Griffith, James W; Rishiq, Dania; Lundy, Larry; Dhar, Sumitrajit

    2017-10-01

    The already large population of individuals with age- or noise-related hearing loss in the United States is increasing, yet hearing aids remain largely inaccessible. The recent decision by the US Food and Drug Administration to not enforce the medical examination prior to hearing aid fitting highlights the need to reengineer consumer protections when increasing accessibility. A self-administered tool to estimate ear disease risk would provide disease surveillance without posing an unreasonable barrier to hearing aid procurement. To develop and validate a consumer questionnaire for the self-assessment of risk for ear diseases associated with hearing loss. The questionnaire was developed using established methods including expert opinion to validate and create questions, and cognitive interviews to ensure that questions were clear to respondents. Exploratory structural equation modeling, logistic regression, and receiver operating characteristic curve analysis were used to determine sensitivity and specificity with blinded neurotologist opinion as the criterion for evaluation. Patients 40 to 80 years old with ear or hearing complaints necessitating a neurotologic examination and a control group of participants with a diagnosis of age- or noise-related hearing loss participated at the Departments of Otorhinolaryngology and Audiology of Mayo Clinic Florida. Sensitivity and specificity of the prototype questionnaire to identify individuals with targeted diseases. Of 307 participants (mean [SD] age, 62.9 [9.8] years; 148 [48%] female), 75% (n = 231) were enrolled with targeted disease(s) identified on neurotologic assessment and 25% (n = 76) with age- or noise-related hearing loss. Participants were randomly divided into a training sample (80% [n = 246; 185 with disease, 61 controls]) and a test sample (20% [n = 61; 46 with disease, 15 controls]). Using a simple scoring method, a sensitivity of 94% (95% CI, 89%-97%) and specificity of 61% (95% CI, 47%-73%) were established in the training sample. Applying this cutoff to the test sample resulted in 85% (95% CI, 71%-93%) sensitivity and 47% (95% CI, 22%-73%) specificity. This is the first self-assessment tool designed to assess an individual's risk for ear disease. Our preliminary results demonstrate a high sensitivity to disease detection. A further validated and refined version of this questionnaire may serve as an efficacious tool for improving access to hearing health care while minimizing the risk for missed ear diseases.
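
    The validation logic described above (a training/test split with a score cutoff chosen on the training sample) can be sketched generically. The scores and labels below are synthetic and the scoring model is invented; only the cutoff-selection and evaluation steps mirror the approach reported in the abstract.

        # Minimal sketch of the cutoff-selection logic: pick the score threshold on a
        # training sample that achieves a target sensitivity, then report sensitivity
        # and specificity on a held-out test sample.  Scores and labels are synthetic,
        # not the questionnaire data.
        import numpy as np

        rng = np.random.default_rng(1)

        def simulate(n_disease, n_control):
            scores = np.concatenate([rng.normal(2.0, 1.0, n_disease),   # diseased score higher (assumed)
                                     rng.normal(0.0, 1.0, n_control)])
            labels = np.concatenate([np.ones(n_disease), np.zeros(n_control)])
            return scores, labels

        def sens_spec(scores, labels, cutoff):
            pred = scores >= cutoff
            sens = pred[labels == 1].mean()
            spec = (~pred)[labels == 0].mean()
            return sens, spec

        train_scores, train_labels = simulate(185, 61)   # split sizes taken from the abstract
        test_scores, test_labels = simulate(46, 15)

        # choose the highest cutoff whose training sensitivity is at least 94%
        candidates = np.sort(np.unique(train_scores))[::-1]
        cutoff = next(c for c in candidates if sens_spec(train_scores, train_labels, c)[0] >= 0.94)

        print("train:", sens_spec(train_scores, train_labels, cutoff))
        print("test: ", sens_spec(test_scores, test_labels, cutoff))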

  15. Hearing loss in the developing world: evaluating the iPhone mobile device as a screening tool.

    PubMed

    Peer, S; Fagan, J J

    2015-01-01

    Developing countries have the world's highest prevalence of hearing loss, and hearing screening programmes are scarce. Mobile devices such as smartphones have potential for audiometric testing. To evaluate the uHear app on an Apple iPhone as a possible hearing screening tool for the developing world, and to determine the accuracy of certain hearing thresholds that could prove useful in early detection of hearing loss for high-risk populations in resource-poor communities. This was a quasi-experimental study design. Participants recruited from the Otolaryngology Clinic, Groote Schuur Hospital, Cape Town, South Africa, completed a uHear test in three settings: a waiting room (WR), a quiet room (QR) and a soundproof room (SR). Thresholds were compared with formal audiograms. Twenty-five patients were tested (50 ears). The uHear test detected moderate or worse hearing loss (pure-tone average (PTA) > 40 dB) accurately, with a sensitivity of 100% in all three environments. Specificity was 88% (SR), 73% (QR) and 68% (WR). It was highly accurate in detecting high-frequency hearing loss (2 000, 4 000, 6 000 Hz) in the QR and SR, with 'good' and 'very good' kappa values and statistical significance (p < 0.05). It was moderately accurate for low-frequency hearing loss (250, 500, 1 000 Hz) in the SR, and poor in the QR and WR. Using the iPhone, uHear is a feasible screening test to rule out significant hearing loss (PTA > 40 dB). It is highly sensitive for detecting threshold changes at high frequencies, making it reasonably well suited to detect presbycusis and ototoxic hearing loss from HIV, tuberculosis therapy and chemotherapy. Portability and ease of use make it appropriate for developing world communities that lack screening programmes.
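
    The screening rule referenced above (flagging moderate or worse loss when the pure-tone average exceeds 40 dB HL) is simple to state in code. The frequency set used for the average in this sketch (0.5, 1, 2, and 4 kHz) is the conventional choice and is an assumption here; the audiogram values are invented.

        # Sketch of the screening rule referenced above: average a few pure-tone
        # thresholds and flag moderate-or-worse loss when the PTA exceeds 40 dB HL.
        # The frequency set (0.5, 1, 2, 4 kHz) is the conventional choice and is an
        # assumption here; the thresholds below are invented.
        def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000, 4000)):
            return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

        def screen(thresholds_db_hl, cutoff_db=40.0):
            pta = pure_tone_average(thresholds_db_hl)
            return pta, ("refer: moderate or worse loss" if pta > cutoff_db else "pass")

        ear = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 60, 6000: 65}  # invented audiogram
        print(screen(ear))   # -> (46.25, 'refer: moderate or worse loss')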

  16. Assessing the Importance of Lexical Tone Contour to Sentence Perception in Mandarin-Speaking Children with Normal Hearing

    ERIC Educational Resources Information Center

    Zhu, Shufeng; Wong, Lena L. N.; Wang, Bin; Chen, Fei

    2017-01-01

    Purpose: The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Method: Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal…

  17. Dichotic Listening and Otoacoustic Emissions: Shared Variance between Cochlear Function and Dichotic Listening Performance in Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Markevych, Vladlena; Asbjornsen, Arve E.; Lind, Ola; Plante, Elena; Cone, Barbara

    2011-01-01

    The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects with age range from 18 to 39, balanced for gender with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a…

  18. An Acoustic Analysis of the Vowel Space in Young and Old Cochlear-Implant Speakers

    ERIC Educational Resources Information Center

    Neumeyer, Veronika; Harrington, Jonathan; Draxler, Christoph

    2010-01-01

    The main purpose of this study was to compare acoustically the vowel spaces of two groups of cochlear implantees (CI) with two age-matched normal hearing groups. Five young test persons (15-25 years) and five older test persons (55-70 years) with CI and two control groups of the same age with normal hearing were recorded. The speech material…

  19. Perception of Speech Produced by Native and Nonnative Talkers by Listeners with Normal Hearing and Listeners with Cochlear Implants

    ERIC Educational Resources Information Center

    Ji, Caili; Galvin, John J.; Chang, Yi-ping; Xu, Anting; Fu, Qian-Jie

    2014-01-01

    Purpose: The aim of this study was to evaluate the understanding of English sentences produced by native (English) and nonnative (Spanish) talkers by listeners with normal hearing (NH) and listeners with cochlear implants (CIs). Method: Sentence recognition in noise was measured in adult subjects with CIs and subjects with NH, all of whom were…

  20. Central auditory processing effects induced by solvent exposure.

    PubMed

    Fuente, Adrian; McPherson, Bradley

    2007-01-01

    Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies conducted in workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers, and on the other hand, central auditory damage due to solvent exposure. Taking into account the potential auditory damage induced by solvent exposure due to the neurotoxic properties of such substances, the present research aimed at studying the possible auditory processing disorder (APD), and possible hearing difficulties in daily life listening situations that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers still presented normal hearing thresholds as a group (equal or better than 20 dB HL). Also, for the HINT, DD, PPS, FS and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily life listening situations than non-exposed workers. It is concluded that subjects exposed to solvents may acquire an APD and thus the sole use of pure-tone audiometry is insufficient to assess hearing in solvent-exposed populations.

  1. Vibrotactile Presentation of Musical Notes to the Glabrous Skin for Adults with Normal Hearing or a Hearing Impairment: Thresholds, Dynamic Range and High-Frequency Perception

    PubMed Central

    Maté-Cid, Saúl; Fulford, Robert; Seiffert, Gary; Ginsborg, Jane

    2016-01-01

    Presentation of music as vibration to the skin has the potential to facilitate interaction between musicians with hearing impairments and other musicians during group performance. Vibrotactile thresholds have been determined to assess the potential for vibrotactile presentation of music to the glabrous skin of the fingertip, forefoot and heel. No significant differences were found between the thresholds for sinusoids representing notes between C1 and C6 when presented to the fingertip of participants with normal hearing and with a severe or profound hearing loss. For participants with normal hearing, thresholds for notes between C1 and C6 showed the characteristic U-shape curve for the fingertip, but not for the forefoot and heel. Compared to the fingertip, the forefoot had lower thresholds between C1 and C3, and the heel had lower thresholds between C1 and G2; this is attributed to spatial summation from the Pacinian receptors over the larger contactor area used for the forefoot and heel. Participants with normal hearing assessed the perception of high-frequency vibration using 1-s sinusoids presented to the fingertip and were found to be more aware of transient vibration at the beginning and/or end of notes between G4 and C6 when stimuli were presented 10 dB above threshold, rather than at threshold. An average of 94% of these participants reported feeling continuous vibration between G4 and G5 with stimuli presented 10 dB above threshold. Based on the experimental findings and consideration of health effects relating to vibration exposure, a suitable range of notes for vibrotactile presentation of music is identified as being from C1 to G5. This is more limited than for human hearing but the fundamental frequencies of the human voice, and the notes played by many instruments, lie within it. However, the dynamic range might require compression to avoid the negative effects of amplitude on pitch perception. PMID:27191400
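
    For context on the C1-G5 range identified above, the standard equal-temperament relation f = 440 * 2**((m - 69)/12), with m the MIDI note number, gives the fundamental frequencies involved; this conversion is general knowledge, not a detail taken from the study.

    ```python
    # Equal-temperament note-to-frequency conversion for the notes named above.

    def note_to_hz(midi_number):
        return 440.0 * 2 ** ((midi_number - 69) / 12)

    C1, G5, C6 = 24, 79, 84          # MIDI numbers for C1, G5, and C6
    print(round(note_to_hz(C1), 1))  # ~32.7 Hz: lower end of the suggested vibrotactile range
    print(round(note_to_hz(G5), 1))  # ~784.0 Hz: upper end of the suggested vibrotactile range
    print(round(note_to_hz(C6), 1))  # ~1046.5 Hz: top of the range tested at the fingertip
    ```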

  2. Evaluation of family history of permanent hearing loss in childhood as a risk indicator in universal screening.

    PubMed

    Valido Quintana, Mercedes; Oviedo Santos, Ángeles; Borkoski Barreiro, Silvia; Santana Rodríguez, Alfredo; Ramos Macías, Ángel

    Sixty percent of prelingual hearing loss is of genetic origin. A family history of permanent childhood hearing loss is a risk factor. The objective of the study is to determine the relationship between this risk factor and hearing loss. We have evaluated clinical and epidemiological characteristics and related nonsyndromic genetic variation. This was a retrospective, descriptive and observational study of newborns between January 2007 and December 2010 with family history as risk factor for hearing loss using transient evoked otoacoustic emissions and auditory brainstem response. A total of 26,717 children were born. Eight hundred and fifty-seven (3.2%) had a family history. Fifty-seven (0.21%) failed to pass the second test. Of these, 29.1% (n=16) had another risk factor, and 17.8% (n=9) had no classical risk factor. No risk factor was related to the hearing loss except heart disease. 76.4% had normal hearing and 23.6% had hearing loss. The mean number of family members with hearing loss was 1.25. On genetic testing, 82.86% of homozygotes were normal, 11.43% showed heterozygosity in the Connexin 26 gene (35delG), 2.86% showed R143W heterozygosity in the same gene, and 2.86% were mutant homozygotes (35delG). We found no relationship between hearing loss and the mutated allele. The percentage of children with a family history and hearing loss is higher than expected in the general population. The genetic profile requires updating to clarify the relationship between hearing loss and heart disease, family history and the low prevalence in the mutations analyzed. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.

  3. Factors associated with Hearing Loss in a Normal-Hearing Guinea Pig Model of Hybrid Cochlear Implants

    PubMed Central

    Tanaka, Chiemi; Nguyen-Huynh, Anh; Loera, Katherine; Stark, Gemaine; Reiss, Lina

    2014-01-01

    The Hybrid cochlear implant (CI), also known as Electro-Acoustic Stimulation (EAS), is a new type of CI that preserves residual acoustic hearing and enables combined cochlear implant and hearing aid use in the same ear. However, 30-55% of patients experience acoustic hearing loss within days to months after activation, suggesting that both surgical trauma and electrical stimulation may cause hearing loss. The goals of this study were to: 1) determine the contributions of both implantation surgery and EAS to hearing loss in a normal-hearing guinea pig model; 2) determine which cochlear structural changes are associated with hearing loss after surgery and EAS. Two groups of animals were implanted (n=6 per group), with one group receiving chronic acoustic and electric stimulation for 10 weeks, and the other group receiving no direct acoustic or electric stimulation during this time frame. A third group (n=6) was not implanted, but received chronic acoustic stimulation. Auditory brainstem response thresholds were followed over time at 1, 2, 6, and 16 kHz. At the end of the study, the following cochlear measures were quantified: hair cells, spiral ganglion neuron density, fibrous tissue density, and stria vascularis blood vessel density; the presence or absence of ossification around the electrode entry was also noted. After surgery, implanted animals experienced a range of 0-55 dB of threshold shifts in the vicinity of the electrode at 6 and 16 kHz. The degree of hearing loss was significantly correlated with reduced stria vascularis vessel density and with the presence of ossification, but not with hair cell counts, spiral ganglion neuron density, or fibrosis area. After 10 weeks of stimulation, 67% of implanted, stimulated animals had more than 10 dB of additional threshold shift at 1 kHz, compared to 17% of implanted, non-stimulated animals and 0% of non-implanted animals. This 1-kHz hearing loss was not associated with changes in any of the cochlear measures quantified in this study. The variation in hearing loss after surgery and electrical stimulation in this animal model is consistent with the variation in human patients. Further, these findings illustrate an advantage of a normal-hearing animal model for quantification of hearing loss and damage to cochlear structures without the confounding effects of chemical- or noise-induced hearing loss. Finally, this study is the first to suggest a role of the stria vascularis and damage to the lateral wall in implantation-induced hearing loss. Further work is needed to determine the mechanisms of implantation- and electrical-stimulation-induced hearing loss. PMID:25128626

  4. Physiologic correlates to background noise acceptance

    NASA Astrophysics Data System (ADS)

    Tampas, Joanna; Harkrider, Ashley; Nabelek, Anna

    2004-05-01

    Acceptance of background noise can be evaluated by having listeners indicate the highest background noise level (BNL) they are willing to accept while following the words of a story presented at their most comfortable listening level (MCL). The difference between the selected MCL and BNL is termed the acceptable noise level (ANL). One of the consistent findings in previous studies of ANL is large intersubject variability in acceptance of background noise. This variability is not related to age, gender, hearing sensitivity, personality, type of background noise, or speech perception in noise performance. The purpose of the current experiment was to determine if individual differences in physiological activity measured from the peripheral and central auditory systems of young female adults with normal hearing can account for the variability observed in ANL. Correlations between ANL and various physiological responses, including spontaneous, click-evoked, and distortion-product otoacoustic emissions, auditory brainstem and middle latency evoked potentials, and electroencephalography will be presented. Results may increase understanding of the regions of the auditory system that contribute to individual noise acceptance.

  5. Molecular architecture underlying fluid absorption by the developing inner ear

    PubMed Central

    Honda, Keiji; Kim, Sung Huhn; Kelly, Michael C; Burns, Joseph C; Constance, Laura; Li, Xiangming; Zhou, Fei; Hoa, Michael; Kelley, Matthew W; Morell, Robert J

    2017-01-01

    Mutations of SLC26A4 are a common cause of hearing loss associated with enlargement of the endolymphatic sac (EES). Slc26a4 expression in the developing mouse endolymphatic sac is required for acquisition of normal inner ear structure and function. Here, we show that the mouse endolymphatic sac absorbs fluid in an SLC26A4-dependent fashion. Fluid absorption was sensitive to ouabain and gadolinium but insensitive to benzamil, bafilomycin and S3226. Single-cell RNA-seq analysis of pre- and postnatal endolymphatic sacs demonstrates two types of differentiated cells. Early ribosome-rich cells (RRCs) have a transcriptomic signature suggesting expression and secretion of extracellular proteins, while mature RRCs express genes implicated in innate immunity. The transcriptomic signature of mitochondria-rich cells (MRCs) indicates that they mediate vectorial ion transport. We propose a molecular mechanism for resorption of NaCl by MRCs during development, and conclude that disruption of this mechanism is the root cause of hearing loss associated with EES. PMID:28994389

  6. Electrically evoked reticular lamina and basilar membrane vibrations in mice with alpha tectorin C1509G mutation

    NASA Astrophysics Data System (ADS)

    Ren, Tianying; He, Wenxuan

    2015-12-01

    Mechanical coupling between the tectorial membrane and the hair bundles of outer hair cells is crucial for stimulating mechanoelectrical transduction channels, which convert sound-induced vibrations into electrical signals, and for transmitting outer hair cell-generated force back to the basilar membrane to boost hearing sensitivity. It has been demonstrated that the detached tectorial membrane in mice with the C1509G alpha tectorin mutation caused hearing loss, but enhanced electrically evoked otoacoustic emissions. To understand how the mutated cochlea emits sounds, the reticular lamina and basilar membrane vibrations were measured in the electrically stimulated cochlea in this study. The results showed that the electrically evoked basilar membrane vibration decreased dramatically while the reticular lamina vibration and otoacoustic emissions exhibited no significant change in C1509G mutation mice. This result indicates that a functional cochlear amplifier and a normal basilar membrane vibration are not required for the outer hair cell-generated sound to exit the cochlea.

  7. Interaural envelope correlation change discrimination in bilateral cochlear implantees: effects of mismatch, centering, and onset of deafness.

    PubMed

    Goupell, Matthew J

    2015-03-01

    Bilateral cochlear implant (CI) listeners can perform binaural tasks, but they are typically worse than normal-hearing (NH) listeners. To understand why this difference occurs and the mechanisms involved in processing dynamic binaural differences, interaural envelope correlation change discrimination sensitivity was measured in real and simulated CI users. In experiment 1, 11 CI (eight late deafened, three early deafened) and eight NH listeners were tested in an envelope correlation change discrimination task. Just noticeable differences (JNDs) were best for a matched place-of-stimulation and increased for an increasing mismatch. In experiment 2, attempts at intracranially centering stimuli did not produce lower JNDs. In experiment 3, the percentage of correct identifications of antiphasic carrier pulse trains modulated by correlated envelopes was measured as a function of mismatch and pulse rate. Sensitivity decreased for increasing mismatch and increasing pulse rate. The experiments led to two conclusions. First, envelope correlation change discrimination necessitates place-of-stimulation matched inputs. However, it is unclear if previous experience with acoustic hearing is necessary for envelope correlation change discrimination. Second, NH listeners presented with CI simulations demonstrated better performance than real CI listeners. If the simulations are realistic representations of electrical stimuli, real CI listeners appear to have difficulty processing interaural information in modulated signals.

  8. Auditory phonological priming in children and adults during word repetition

    NASA Astrophysics Data System (ADS)

    Cleary, Miranda; Schwartz, Richard G.

    2004-05-01

    Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, the direction of effects (facilitating versus slowing) varying with the prime-target relationship. In adults, it has repeatedly been demonstrated, however, that hearing a prime word that rhymes with the following target word (ISI=50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]

  9. Normal-hearing listener preferences of music as a function of signal-to-noise-ratio

    NASA Astrophysics Data System (ADS)

    Barrett, Jillian G.

    2005-04-01

    Optimal signal-to-noise ratios (SNR) for speech discrimination are well-known, well-documented phenomena. Discrimination preferences and functions have been studied for both normal-hearing and hard-of-hearing populations, and information from these studies has provided clearer indices on additional factors affecting speech discrimination ability and SNR preferences. This knowledge lends itself to improvements in hearing aids and amplification devices, telephones, television and radio transmissions, and a wide arena of recorded media such as movies and music. This investigation was designed to identify the preferred signal-to-background ratio (SBR) of normal-hearing listeners in a musical setting. The signal was the singer's voice, and music was considered the background. Subjects listened to an unfamiliar ballad with a female singer, and rated seven different SBR treatments. When listening to melodic motifs with linguistic content, results indicated subjects preferred SBRs similar to those in conventional speech discrimination applications. However, unlike traditional speech discrimination studies, subjects did not prefer increased levels of SBR. Additionally, subjects had a much larger acceptable range of SBR in melodic motifs where the singer's voice was not intended to communicate via linguistic means, but by the pseudo-paralinguistic means of vocal timbre and harmonic arrangements. Results indicate further studies investigating perception of singing are warranted.

  10. Auditory, visual, and auditory-visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing.

    PubMed

    Most, Tova; Aviner, Chen

    2009-01-01

    This study evaluated the benefits of cochlear implant (CI) with regard to emotion perception of participants differing in their age of implantation, in comparison to hearing aid users and adolescents with normal hearing (NH). Emotion perception was examined by having the participants identify happiness, anger, surprise, sadness, fear, and disgust. The emotional content was placed upon the same neutral sentence. The stimuli were presented in auditory, visual, and combined auditory-visual modes. The results revealed better auditory identification by the participants with NH in comparison to all groups of participants with hearing loss (HL). No differences were found among the groups with HL in each of the 3 modes. Although auditory-visual perception was better than visual-only perception for the participants with NH, no such differentiation was found among the participants with HL. The results question the efficiency of some currently used CIs in providing the acoustic cues required to identify the speaker's emotional state.

  11. Broadband noise exposure does not affect hearing sensitivity in big brown bats (Eptesicus fuscus).

    PubMed

    Simmons, Andrea Megela; Hom, Kelsey N; Warnecke, Michaela; Simmons, James A

    2016-04-01

    In many vertebrates, exposure to intense sounds under certain stimulus conditions can induce temporary threshold shifts that reduce hearing sensitivity. Susceptibility to these hearing losses may reflect the relatively quiet environments in which most of these species have evolved. Echolocating big brown bats (Eptesicus fuscus) live in extremely intense acoustic environments in which they navigate and forage successfully, both alone and in company with other bats. We hypothesized that bats may have evolved a mechanism to minimize noise-induced hearing losses that otherwise could impair natural echolocation behaviors. The hearing sensitivity of seven big brown bats was measured in active echolocation and passive hearing tasks, before and after exposure to broadband noise spanning their audiometric range (10-100 kHz, 116 dB SPL re. 20 µPa rms, 1 h duration; sound exposure level 152 dB). Detection thresholds measured 20 min, 2 h or 24 h after exposure did not vary significantly from pre-exposure thresholds or from thresholds in control (sham exposure) conditions. These results suggest that big brown bats may be less susceptible to temporary threshold shifts than are other terrestrial mammals after exposure to similarly intense broadband sounds. These experiments provide fertile ground for future research on possible mechanisms employed by echolocating bats to minimize hearing losses while orienting effectively in noisy biological soundscapes. © 2016. Published by The Company of Biologists Ltd.
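
    The 152 dB sound exposure level quoted above follows from the standard relation SEL = SPL + 10*log10(duration / 1 s) applied to 116 dB SPL for one hour; the quick check below is a general acoustics calculation, not a detail drawn from the paper's methods.

    ```python
    # Sound exposure level (SEL) for a constant-level exposure of known duration.
    import math

    def sound_exposure_level(spl_db, duration_s):
        return spl_db + 10 * math.log10(duration_s / 1.0)  # reference duration: 1 s

    print(round(sound_exposure_level(116, 3600), 1))  # -> 151.6, i.e. ~152 dB SEL
    ```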

  12. Light-induced vibration in the hearing organ

    PubMed Central

    Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders

    2014-01-01

    The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically-based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606

  13. Screening for hearing, visual and dual sensory impairment in older adults using behavioural cues: a validation study.

    PubMed

    Roets-Merken, Lieve M; Zuidema, Sytse U; Vernooij-Dassen, Myrra J F J; Kempen, Gertrudis I J M

    2014-11-01

    This study investigated the psychometric properties of the Severe Dual Sensory Loss screening tool, a tool designed to help nurses and care assistants to identify hearing, visual and dual sensory impairment in older adults. Construct validity of the Severe Dual Sensory Loss screening tool was evaluated using Cronbach's alpha and factor analysis. Interrater reliability was calculated using Kappa statistics. To evaluate the predictive validity, sensitivity and specificity were calculated by comparison with the criterion standard assessment for hearing and vision. The criterion used for hearing impairment was a hearing loss of ≥40 decibels measured by pure-tone audiometry, and the criterion for visual impairment was a visual acuity of ≤0.3 diopter or a visual field of ≤0.3°. Feasibility was evaluated by the time needed to fill in the screening tool and the clarity of the instruction and items. Prevalence of dual sensory impairment was calculated. A total of 56 older adults receiving aged care and 12 of their nurses and care assistants participated in the study. Cronbach's alpha was 0.81 for the hearing subscale and 0.84 for the visual subscale. Factor analysis showed two constructs for hearing and two for vision. Kappa was 0.71 for the hearing subscale and 0.74 for the visual subscale. The predictive validity showed a sensitivity of 0.71 and a specificity of 0.72 for the hearing subscale; and a sensitivity of 0.69 and a specificity of 0.78 for the visual subscale. The optimum cut-off point for each subscale was a score of 1. The nurses and care assistants reported that the Severe Dual Sensory Loss screening tool was easy to use. The prevalence of hearing and vision impairment was 55% and 29%, respectively, and that of dual sensory impairment was 20%. The Severe Dual Sensory Loss screening tool was compared with the criterion standards for hearing and visual impairment and was found to be a valid and reliable tool, enabling nurses and care assistants to identify hearing, visual and dual sensory impairment among older adults. Copyright © 2014 Elsevier Ltd. All rights reserved.
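
    The interrater agreement statistic reported above (kappa) can be computed as in the sketch below for a single yes/no screening judgement; the two rater vectors are hypothetical and are not data from the study.

    ```python
    # Cohen's kappa for two raters making binary (1 = impairment suspected) judgements.

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        p_observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
        # Chance agreement from each rater's marginal "yes" rate.
        pa, pb = sum(rater_a) / n, sum(rater_b) / n
        p_chance = pa * pb + (1 - pa) * (1 - pb)
        return (p_observed - p_chance) / (1 - p_chance)

    nurse     = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical nurse ratings
    assistant = [1, 0, 1, 0, 0, 0, 1, 1]  # hypothetical care-assistant ratings
    print(round(cohens_kappa(nurse, assistant), 2))  # -> 0.5
    ```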

  14. High-frequency tone burst-evoked ABR latency-intensity functions.

    PubMed

    Fausti, S A; Olson, D J; Frey, R H; Henry, J A; Schaffer, H I

    1993-01-01

    High-frequency tone burst stimuli (8, 10, 12, and 14 kHz) have been developed and demonstrated to provide reliable and valid auditory brainstem responses (ABRs) in normal-hearing subjects. In this study, latency-intensity functions (LIFs) were determined using these stimuli in 14 normal-hearing individuals. Significant shifts in response latency occurred as a function of stimulus intensity for all tone burst frequencies. For each 10 dB shift in intensity, latency shifts for waves I and V were statistically significant except for one isolated instance. LIF slopes were comparable between frequencies, ranging from 0.020 to 0.030 msec/dB. These normal LIFs for high-frequency tone burst-evoked ABRs suggest the degree of response latency change that might be expected from, for example, progressive hearing loss due to ototoxic insult, although these phenomena may not be directly related.
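
    A latency-intensity slope in msec/dB such as those reported above can be estimated by ordinary least squares over the wave latencies; the sketch below uses invented wave V latencies purely to illustrate the calculation.

    ```python
    # Least-squares slope of a latency-intensity function (LIF), in ms per dB.

    def lif_slope(intensities_db, latencies_ms):
        n = len(intensities_db)
        mx = sum(intensities_db) / n
        my = sum(latencies_ms) / n
        num = sum((x - mx) * (y - my) for x, y in zip(intensities_db, latencies_ms))
        den = sum((x - mx) ** 2 for x in intensities_db)
        return num / den  # negative: latency shortens as intensity increases

    levels = [40, 50, 60, 70, 80]        # stimulus intensity (dB)
    wave_v = [7.4, 7.1, 6.9, 6.6, 6.4]   # hypothetical wave V latencies (ms)
    print(round(lif_slope(levels, wave_v), 3))  # -> -0.025, i.e. 0.025 ms/dB in magnitude
    ```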

  15. Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies.

    PubMed

    Pearce, Wendy; Golding, Maryanne; Dillon, Harvey

    2007-05-01

    Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.

  16. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1 octave 4 kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060

  17. Effect of smoking on transiently evoked otoacoustic emission.

    PubMed

    Gegenava, Kh; Japaridze, Sh; Sharashenidze, N; Jalabadze, G; Kevanishvili, Z

    2016-01-01

    Evoked otoacoustic emissions (EOAEs) are sounds produced by the cochlear outer hair cells in response to an external acoustic stimulus. Transiently evoked otoacoustic emissions (TEOAEs) are the most clinically utilized EOAEs. TEOAEs are detectable in 98% of people with normal hearing, regardless of age or sex, and the two ears of any individual produce similar TEOAE waveforms. The objective of the present study was to compare TEOAE magnitudes in cigarette smokers and nonsmokers. TEOAE occurrence and characteristics were also specifically examined in individuals of both samples with audiometrically confirmed hearing loss and in those without. Thirty smokers and 30 nonsmokers within the age range of 30-59 years were involved in the present study after informed consent. OAEs were recorded for each subject with the Madsen Capella OAE/middle ear analyzer (GN Otometrics, Denmark). After OAE testing, each subject underwent routine pure-tone audiometry and tympanometry. The results were treated statistically using Student's t-distribution. According to our results, 76.6% of smokers and 3.33% of nonsmokers showed a marked decrease, of varying degree, in TEOAE amplitude. Audiometric measurements showed an altered audiogram in 6.7% of smokers and in 3.33% of nonsmokers. Based on these results, we suggest that smoking has a significant influence on hearing function, especially on the cochlear apparatus. At the same time, TEOAE testing, as a sensitive method, can be used for very early detection of hearing loss, even when there are neither subjective complaints nor changes on the audiogram.

  18. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

    It is generally well known that speech perception is often improved with integrated audiovisual input whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. To compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology

  19. A comprehensive noise survey of the S-70A-9 Black Hawk helicopter.

    PubMed

    King, R B; Saliba, A J; Brock, J R

    1999-02-01

    This paper reports the results of a comprehensive noise survey of the Sikorsky S-70A-9 Black Hawk helicopter environment and provides an assessment of the hearing protection devices worn by Australian Army personnel exposed to that environment. At-ear noise levels were measured at 4 positions in the cabin of the Black Hawk under various flight conditions and at 13 positions outside the Black Hawk under various ground running conditions using the Head Acoustic Measurement System (Head, GmbH). The attenuation properties of the hearing protection devices (HPDs) normally worn by aircrew and maintenance crews (the ALPHA helmet and the Roanwell MX-2507 Communications headset) were also assessed. At-ear sound pressure levels that would be experienced by personnel wearing their normal HPDs were determined at the positions they would normally occupy in and around the aircraft. Results indicate that HPDs do not provide adequate hearing protection to meet current hearing conservation regulations which allow a permissible noise exposure of 85 dB(A) for an 8-h day.

  20. Vocabulary Knowledge of Children With Cochlear Implants: A Meta-Analysis

    PubMed Central

    2016-01-01

    This article employs meta-analysis procedures to evaluate whether children with cochlear implants demonstrate lower spoken-language vocabulary knowledge than peers with normal hearing. Of the 754 articles screened and 52 articles coded, 12 articles met predetermined inclusion criteria (with an additional 5 included for one analysis). Effect sizes were calculated for relevant studies and forest plots were used to compare differences between groups of children with normal hearing and children with cochlear implants. Weighted effect size averages for expressive vocabulary measures (g = −11.99; p < .001) and for receptive vocabulary measures (g = −20.33; p < .001) indicated that children with cochlear implants demonstrate lower vocabulary knowledge than children with normal hearing. Additional analyses confirmed the value of comparing vocabulary knowledge of children with hearing loss to a tightly matched (e.g., socioeconomic status-matched) sample. Age of implantation, duration of implantation, and chronological age at testing were not significantly related to magnitude of weighted effect size. Findings from this analysis represent a first step toward resolving discrepancies in the vocabulary knowledge literature. PMID:26712811
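
    Hedges' g, the effect-size measure used in this meta-analysis, is a standardized mean difference with a small-sample correction; the sketch below illustrates the computation on hypothetical vocabulary scores, not on data from the included studies.

    ```python
    # Hedges' g: pooled-SD standardized mean difference with small-sample correction.
    import math

    def hedges_g(group1, group2):
        n1, n2 = len(group1), len(group2)
        m1, m2 = sum(group1) / n1, sum(group2) / n2
        v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
        pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' correction for small samples
        return correction * (m1 - m2) / pooled_sd

    ci_scores = [82, 90, 78, 85, 88]   # hypothetical scores, children with cochlear implants
    nh_scores = [95, 99, 92, 97, 101]  # hypothetical scores, children with normal hearing
    print(round(hedges_g(ci_scores, nh_scores), 2))  # negative: the CI group scores lower
    ```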

  1. Parental Support for Language Development during Joint Book Reading for Young Children with Hearing Loss

    ERIC Educational Resources Information Center

    DesJardin, Jean L.; Doll, Emily R.; Stika, Carren J.; Eisenberg, Laurie S.; Johnson, Karen J.; Ganguly, Dianne Hammes; Colson, Bethany G.; Henning, Shirley C.

    2014-01-01

    Parent and child joint book reading (JBR) characteristics and parent facilitative language techniques (FLTs) were investigated in two groups of parents and their young children; children with normal hearing (NH; "n" = 60) and children with hearing loss (HL; "n" = 45). Parent-child dyads were videotaped during JBR interactions,…

  2. Anxious Mothers and At-Risk Infants: The Influence of Mild Hearing Impairment on Early Interaction.

    ERIC Educational Resources Information Center

    Day, Pat Spencer; Prezioso, Carlene

    To examine the influence of imperfect audition in otherwise intact infants on early mother-infant interaction, three hard of hearing and three normally hearing infants were videotaped in interaction with their mothers. Interaction was coded, a narrative record of the mothers' nonverbal behavior was made, and transcripts of interviews with the…

  3. Purpose of Newborn Hearing Screening

    MedlinePlus

    ... services, babies will have trouble with speech and language development. For some babies, early intervention services may include ... baby will have the best chance for normal language development if any hearing loss is discovered and treatment ...

  4. Emotional and behavioural difficulties in children and adolescents with hearing impairment: a systematic review and meta-analysis.

    PubMed

    Stevenson, Jim; Kreppner, Jana; Pimperton, Hannah; Worsfold, Sarah; Kennedy, Colin

    2015-05-01

    The aim of this study is to estimate the extent to which children and adolescents with hearing impairment (HI) show higher rates of emotional and behavioural difficulties compared to normally hearing children. Studies of emotional and behavioural difficulties in children and adolescents were traced from computerized systematic searches supplemented, where appropriate, by studies referenced in previous narrative reviews. Effect sizes (Hedges' g) were calculated for all studies. Meta-analyses were conducted on the weighted effect sizes obtained for studies adopting the Strength and Difficulties Questionnaire (SDQ) and on the unweighted effect sizes for non-SDQ studies. 33 non-SDQ studies were identified in which emotional and behavioural difficulties in children with HI could be compared to normally hearing children. The unweighted average g for these studies was 0.36. The meta-analysis of the 12 SDQ studies gave estimated effect sizes of 0.23 (95% CI 0.07, 0.40), 0.34 (95% CI 0.19, 0.49) and -0.01 (95% CI -0.32, 0.13) for Parent, Teacher and Self-ratings of Total Difficulties, respectively. The SDQ sub-scale showing consistent differences across raters between groups with HI and those with normal hearing was Peer Problems. Children and adolescents with HI have scores on emotional and behavioural difficulties measures about a quarter to a third of a standard deviation higher than hearing children. Children and adolescents with HI are in need of support to help their social relationships particularly with their peers.

  5. Differences in interregional brain connectivity in children with unilateral hearing loss.

    PubMed

    Jung, Matthew E; Colletta, Miranda; Coalson, Rebecca; Schlaggar, Bradley L; Lieu, Judith E C

    2017-11-01

    To identify functional network architecture differences in the brains of children with unilateral hearing loss (UHL) using resting-state functional-connectivity magnetic resonance imaging (rs-fcMRI). Prospective observational study. Children (7 to 17 years of age) with severe to profound hearing loss in one ear, along with their normal hearing (NH) siblings, were recruited and imaged using rs-fcMRI. Eleven children had right UHL; nine had left UHL; and 13 had normal hearing. Forty-one brain regions of interest culled from established brain networks such as the default mode (DMN); cingulo-opercular (CON); and frontoparietal networks (FPN); as well as regions for language, phonological, and visual processing, were analyzed using regionwise correlations and conjunction analysis to determine differences in functional connectivity between the UHL and normal hearing children. When compared to the NH group, children with UHL showed increased connectivity patterns between multiple networks, such as between the CON and visual processing centers. However, there were decreased, as well as aberrant connectivity patterns with the coactivation of the DMN and FPN, a relationship that usually is negatively correlated. Children with UHL demonstrate multiple functional connectivity differences between brain networks involved with executive function, cognition, and language comprehension that may represent adaptive as well as maladaptive changes. These findings suggest that possible interventions or habilitation, beyond amplification, might be able to affect some children's requirement for additional help at school. 3b. Laryngoscope, 127:2636-2645, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  6. Emotional perception of music in children with unilateral cochlear implants.

    PubMed

    Shirvani, Sareh; Jafari, Zahra; Sheibanizadeh, Abdolreza; Motasaddi Zarandy, Masoud; Jalaie, Shohre

    2014-10-01

    Cochlear implantation (CI) improves language skills among children with hearing loss. However, children with CIs still fall short of fulfilling some other needs, including musical perception. This is often attributed to the biological, technological, and acoustic limitations of CIs. Emotions play a key role in the understanding and enjoyment of music. The present study aimed to investigate the emotional perception of music in children with bilaterally severe-to-profound hearing loss and unilateral CIs. Twenty-five children with congenital severe-to-profound hearing loss and unilateral CIs and 30 children with normal hearing participated in the study. The children's emotional perceptions of music, as defined by Peretz (1998), were measured. Children were instructed to indicate happy or sad feelings fostered in them by the music by pointing to pictures of faces showing these emotions. Children with CI obtained significantly lower scores than children with normal hearing, for both happy and sad items of music as well as in overall test scores (P<0.001). Furthermore, in both the CI group (P=0.49) and the control group (P<0.001), the happy items were more often recognized correctly than the sad items. Hearing-impaired children with CIs had poorer emotional perception of music than their normal-hearing peers. Due to the importance of music in the development of language, cognitive and social interaction skills, aural rehabilitation programs for children with CIs should focus particularly on music. Furthermore, it is essential to enhance the quality of musical perception by improving the quality of implant prostheses.

  7. Preferred Compression Speed for Speech and Music and Its Relationship to Sensitivity to Temporal Fine Structure.

    PubMed

    Moore, Brian C J; Sęk, Aleksander

    2016-09-07

    Multichannel amplitude compression is widely used in hearing aids. The preferred compression speed varies across individuals. Moore (2008) suggested that reduced sensitivity to temporal fine structure (TFS) may be associated with preference for slow compression. This idea was tested using a simulated hearing aid. It was also assessed whether preferences for compression speed depend on the type of stimulus: speech or music. Twenty-two hearing-impaired subjects were tested, and the simulated hearing aid was fitted individually using the CAM2A method. On each trial, a given segment of speech or music was presented twice. One segment was processed with fast compression and the other with slow compression, and the order was balanced across trials. The subject indicated which segment was preferred and by how much. On average, slow compression was preferred over fast compression, more so for music, but there were distinct individual differences, which were highly correlated for speech and music. Sensitivity to TFS was assessed using the difference limen for frequency at 2000 Hz and by two measures of sensitivity to interaural phase at low frequencies. The results for the difference limens for frequency, but not the measures of sensitivity to interaural phase, supported the suggestion that preference for compression speed is affected by sensitivity to TFS. © The Author(s) 2016.

  8. Gap Detection in School-Age Children and Adults: Effects of Inherent Envelope Modulation and the Availability of Cues across Frequency

    ERIC Educational Resources Information Center

    Buss, Emily; Hall, Joseph W., III; Porter, Heather; Grose, John H.

    2014-01-01

    Purpose: The present study evaluated the effects of inherent envelope modulation and the availability of cues across frequency on behavioral gap detection with noise-band stimuli in school-age children. Method: Listeners were 34 normal-hearing children (ages 5.2-15.6 years) and 12 normal-hearing adults (ages 18.5-28.8 years). Stimuli were…

  9. An initial study of voice characteristics of children using two different sound coding strategies in comparison to normal hearing children.

    PubMed

    Coelho, Ana Cristina; Brasolotto, Alcione Ghedini; Bevilacqua, Maria Cecília

    2015-06-01

    To compare some perceptual and acoustic characteristics of the voices of children who use the advanced combination encoder (ACE) or fine structure processing (FSP) speech coding strategies, and to investigate whether these characteristics differ from children with normal hearing. Acoustic analysis of the sustained vowel /a/ was performed using the multi-dimensional voice program (MDVP). Analyses of sequential and spontaneous speech were performed using the Real Time Pitch program. Perceptual analyses of these samples were performed using visual analog scales of pre-selected parameters. Seventy-six children from three years to five years and 11 months of age participated. Twenty-eight were users of ACE, 23 were users of FSP, and 25 were children with normal hearing. Although both groups with CI presented with some deviated vocal features, the users of ACE presented with voice quality more similar to that of children with normal hearing than did the users of FSP. Sound processing of ACE appeared to provide better conditions for auditory monitoring of the voice, and consequently, for better control of voice production. However, these findings need to be further investigated due to the lack of comparative studies published to understand exactly which attributes of sound processing are responsible for differences in performance.

  10. Comparison of auditory comprehension skills in children with cochlear implant and typically developing children.

    PubMed

    Mandal, Joyanta Chandra; Kumar, Suman; Roy, Sumit

    2016-12-01

    The main goal of this study was to assess the auditory comprehension skills of native Hindi-speaking children with cochlear implants and typically developing children aged 3-7 years and to compare the scores between the two groups. A total of sixty Hindi-speaking participants were selected for the study. They were divided into two groups: Group A consisted of thirty children with normal hearing and Group B of thirty children using cochlear implants. To assess auditory comprehension skills, the Test of Auditory Comprehension in Hindi (TACH) was used. The participant was required to point to one of three pictures which would best correspond to the stimulus presented. Correct answers were scored as 1 and incorrect answers as 0. TACH was administered to both groups. An independent t-test was applied and it was found that the auditory comprehension scores of children using cochlear implants were significantly poorer than the scores of children with normal hearing for all three subtests. Pearson's correlation coefficient revealed poor correlation between the scores of children with normal hearing and children using cochlear implants. The results of this study suggest that children using cochlear implants have poorer auditory comprehension skills than children with normal hearing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Killer whale (Orcinus orca) behavioral audiograms.

    PubMed

    Branstetter, Brian K; St Leger, Judy; Acton, Doug; Stewart, John; Houser, Dorian; Finneran, James J; Jenkins, Keith

    2017-04-01

    Killer whales (Orcinus orca) are one of the most cosmopolitan marine mammal species with potential widespread exposure to anthropogenic noise impacts. Previous audiometric data on this species were from two adult females [Szymanski, Bain, Kiehl, Pennington, Wong, and Henry (1999). J. Acoust. Soc. Am. 108, 1322-1326] and one sub-adult male [Hall and Johnson (1972). J. Acoust. Soc. Am. 51, 515-517] with apparent high-frequency hearing loss. All three killer whales had best sensitivity between 15 and 20 kHz, with thresholds lower than any odontocete tested to date, suggesting this species might be particularly sensitive to acoustic disturbance. The current study reports the behavioral audiograms of eight killer whales at two different facilities. Hearing sensitivity was measured from 100 Hz to 160 kHz in killer whales ranging in age from 12 to 52 years. Previously measured low thresholds at 20 kHz were not replicated in any individual. Hearing in the killer whales was generally similar to other delphinids, with lowest threshold (49 dB re 1 μPa) at approximately 34 kHz, good hearing (i.e., within 20 dB of best sensitivity) from 5 to 81 kHz, and low- and high-frequency hearing cutoffs (>100 dB re 1 μPa) of 600 Hz and 114 kHz, respectively.

  12. Deafness Simulation: A Model for Enhancing Awareness and Sensitivity among Hearing Educators.

    ERIC Educational Resources Information Center

    Sevigny-Skyer, Solange C.; Dagel, Delbert D.

    1990-01-01

    The National Technical Institute for the Deaf developed and implemented a school-based deafness simulation project for hearing faculty members called "Keeping in Touch." Faculty members wore tinnitus maskers which produced a moderate-to-severe hearing loss and subsequently discussed their experiences, feelings, and communication…

  13. Enhancing the Induction Skill of Deaf and Hard-of-Hearing Children with Virtual Reality Technology.

    PubMed

    Passig, D; Eden, S

    2000-01-01

    Many researchers have found that deaf and hard-of-hearing children have unusual difficulty with reasoning and reaching reasoned conclusions, particularly when the process of induction is required. The purpose of this study was to investigate whether practice in rotating virtual reality (VR) three-dimensional (3D) objects would have a positive effect on the ability of deaf and hard-of-hearing children to use inductive processes when dealing with shapes. Three groups were involved in the study: (1) an experimental group of 21 deaf and hard-of-hearing children, who played a VR 3D game; (2) control group I, 23 deaf and hard-of-hearing children, who played a similar two-dimensional (2D), non-VR game; and (3) control group II, 16 hearing children for whom no intervention was introduced. The results clearly indicate that practicing VR 3D spatial rotations significantly improved the inductive thinking applied to shapes by the experimental group, as compared with the first control group, whose performance did not improve significantly. Also, prior to the VR 3D experience, the deaf and hard-of-hearing children attained lower scores in inductive abilities than the children with normal hearing (control group II). After the VR 3D experience, the experimental group improved to the extent that there was no noticeable difference between them and the children with normal hearing.

  14. Reading skills in Persian deaf children with cochlear implants and hearing aids.

    PubMed

    Rezaei, Mohammad; Rashedi, Vahid; Morasae, Esmaeil Khedmati

    2016-10-01

    Reading skills are necessary for educational development in children. Many studies have shown that children with hearing loss often experience delays in reading. This study aimed to examine reading skills of Persian deaf children with cochlear implant and hearing aid and compare them with normal hearing counterparts. The sample consisted of 72 second- and third-grade Persian-speaking children aged 8-12 years. They were divided into three equal groups including 24 children with cochlear implant (CI), 24 children with hearing aid (HA), and 24 children with normal hearing (NH). Reading performance of participants was evaluated by the "Nama" reading test. "Nama" provides normative data for hearing and deaf children and consists of 10 subtests and the sum of the scores is regarded as reading performance score. Results of ANOVA on reading test showed that NH children had significantly better reading performance than deaf children with CI and HA in both grades (P < 0.001). Post-hoc analysis, using Tukey test, indicated that there was no significant difference between HA and CI groups in terms of non-word reading, word reading, and word comprehension skills (respectively, P = 0.976, P = 0.988, P = 0.998). Considering the findings, cochlear implantation is not significantly more effective than hearing aid use for improvement of reading abilities. It is clear that even with considerable advances in hearing aid technology, many deaf children continue to find literacy a challenging struggle. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Discrimination Task Reveals Differences in Neural Bases of Tinnitus and Hearing Impairment

    PubMed Central

    Husain, Fatima T.; Pajor, Nathan M.; Smith, Jason F.; Kim, H. Jeff; Rudy, Susan; Zalewski, Christopher; Brewer, Carmen; Horwitz, Barry

    2011-01-01

    We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone. PMID:22066003

  16. Maternal Interactions with a Hearing and Hearing-Impaired Twin: Similarities and Differences in Speech Input, Interaction Quality, and Word Production

    ERIC Educational Resources Information Center

    Lam, Christa; Kitamura, Christine

    2010-01-01

    Purpose: This study examined a mother's speech style and interactive behaviors with her twin sons: 1 with bilateral hearing impairment (HI) and the other with normal hearing (NH). Method: The mother was video-recorded interacting with her twin sons when the boys were 12.5 and 22 months of age. Mean F0, F0 range, duration, and F1/F2 vowel space of…

  17. Type 2 diabetes and hearing loss in personnel of the Self-Defense Forces.

    PubMed

    Sakuta, Hidenari; Suzuki, Takashi; Yasuda, Hiroko; Ito, Teizo

    2007-02-01

    The association of type 2 diabetes with hearing loss was evaluated in middle-aged male personnel of the Self-Defense Forces (SDFs). Hearing loss was defined as a pure-tone average (PTA) of the thresholds at 0.5, 1, 2, and 4 kHz greater than 25 dB hearing level (HL) in the worse ear. Diabetes status was determined by self-report of physician-diagnosed diabetes or by oral glucose tolerance test (OGTT). Of 699 subjects studied (age 52.9±1.0 years), 103 were classified as having type 2 diabetes; their fasting plasma glucose was 120±19 mg/dl. Hearing thresholds were higher (worse) among diabetic subjects than among subjects with normal glucose tolerance (NGT) (30.7±13.0 dB versus 27.4±12.3 dB, P=0.014), and hearing loss was more prevalent among diabetic subjects (60.2% versus 45.2%, P=0.006). The odds ratio (OR) of type 2 diabetes for the presence of hearing loss was 1.87 (95% confidence interval 1.20-2.91, P=0.006) in a logistic regression analysis adjusted for age, rank, cigarette smoking, and ethanol consumption. These results suggest that type 2 diabetes is associated with hearing loss independently of lifestyle factors in middle-aged men.
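    The hearing-loss criterion used in this study, a four-frequency pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz greater than 25 dB HL in the worse ear, can be illustrated with a small sketch. The threshold values below are hypothetical; only the averaging rule and the 25 dB HL cutoff come from the abstract.

    ```python
    # Four-frequency pure-tone average (PTA) in the worse ear, compared
    # against the 25 dB HL cutoff used in the study. Thresholds are made up.
    from statistics import mean

    def pta(thresholds_db_hl):
        """Average of the 0.5, 1, 2, and 4 kHz thresholds (dB HL)."""
        return mean(thresholds_db_hl)

    left_ear = [20, 25, 30, 45]    # dB HL at 0.5, 1, 2, 4 kHz
    right_ear = [15, 20, 25, 35]

    worse_ear_pta = max(pta(left_ear), pta(right_ear))
    has_hearing_loss = worse_ear_pta > 25
    print(f"worse-ear PTA = {worse_ear_pta:.1f} dB HL, hearing loss: {has_hearing_loss}")
    ```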

  18. The approximate number system and domain-general abilities as predictors of math ability in children with normal hearing and hearing loss.

    PubMed

    Bull, Rebecca; Marschark, Marc; Nordmann, Emily; Sapere, Patricia; Skene, Wendy A

    2018-06-01

    Many children with hearing loss (CHL) show a delay in mathematical achievement compared to children with normal hearing (CNH). This study examined whether there are differences in acuity of the approximate number system (ANS) between CHL and CNH, and whether ANS acuity is related to math achievement. Working memory (WM), short-term memory (STM), and inhibition were considered as mediators of any relationship between ANS acuity and math achievement. Seventy-five CHL were compared with 75 age- and gender-matched CNH. ANS acuity, mathematical reasoning, WM, and STM were significantly poorer in CHL than in CNH. Group differences in math ability were no longer significant when ANS acuity, WM, or STM was controlled. For CNH, WM and STM fully mediated the relationship of ANS acuity to math ability; for CHL, WM and STM only partially mediated this relationship. ANS acuity, WM, and STM are significant contributors to hearing status differences in math achievement, and to individual differences within the group of CHL. Statement of contribution What is already known on this subject? Children with hearing loss often perform poorly on measures of math achievement, although there have been few studies focusing on basic numerical cognition in these children. In typically developing children, the approximate number system predicts math skills concurrently and longitudinally, although there have been some contradictory findings. Recent studies suggest that domain-general skills, such as inhibition, may account for the relationship found between the approximate number system and math achievement. What does this study add? This is the first robust examination of the approximate number system in children with hearing loss, and the findings suggest poorer acuity of the approximate number system in these children compared to hearing children. The study addresses recent issues regarding the contradictory findings on the relationship of the approximate number system to math ability by examining how this relationship varies between children with normal hearing and children with hearing loss, and whether it is mediated by domain-general skills (working memory, short-term memory, and inhibition). © 2017 The British Psychological Society.
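    As a rough illustration of the mediation logic referenced above (does working memory carry part of the link from ANS acuity to math ability?), the sketch below estimates total, direct, and indirect effects from simple OLS regressions. The data are simulated placeholders; published analyses of this kind typically also bootstrap a confidence interval for the indirect effect.

    ```python
    # Regression-based mediation sketch: predictor = ANS acuity,
    # mediator = working memory, outcome = math ability. Simulated data only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 150
    ans_acuity = rng.normal(size=n)                              # predictor
    wm = 0.5 * ans_acuity + rng.normal(size=n)                   # mediator
    math_score = 0.4 * wm + 0.1 * ans_acuity + rng.normal(size=n)  # outcome

    c_total = sm.OLS(math_score, sm.add_constant(ans_acuity)).fit().params[1]  # total effect
    a = sm.OLS(wm, sm.add_constant(ans_acuity)).fit().params[1]                # ANS -> WM
    X = sm.add_constant(np.column_stack([ans_acuity, wm]))
    direct, b = sm.OLS(math_score, X).fit().params[1:3]                        # ANS, WM -> math
    print(f"total = {c_total:.2f}, direct = {direct:.2f}, indirect = {a * b:.2f}")
    ```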

  19. The Socioeconomic Impact of Hearing Loss in US Adults

    PubMed Central

    Emmett, Susan D.; Francis, Howard W.

    2014-01-01

    Objective To evaluate the associations between hearing loss and educational attainment, income, and unemployment/underemployment in US adults. Study design National cross-sectional survey. Setting Ambulatory examination centers. Patients Adults aged 20-69 years who participated in the 1999-2002 cycles of the National Health and Nutrition Examination Survey (NHANES) audiometric evaluation and income questionnaire (n = 3379). Intervention(s) Pure tone audiometry, with hearing loss defined by World Health Organization criteria of bilateral pure tone average >25 decibels (0.5, 1, 2, 4 kHz). Main outcome measure(s) Low educational attainment, defined as not completing high school; low income, defined as family income less than $20,000/year; and unemployment or underemployment, defined as not having a job or working less than 35 hours per week. Results Individuals with hearing loss had 3.21 times higher odds of low educational attainment (95% CI: 2.20-4.68) compared to normal-hearing individuals. Controlling for education, age, sex, and race, individuals with hearing loss had 1.58 times higher odds of low income (95% CI: 1.16-2.15) and 1.98 times higher odds of being unemployed or underemployed (95% CI: 1.38-2.85) compared to normal-hearing individuals. Conclusions Hearing loss is associated with low educational attainment in US adults. Even after controlling for education and important demographic factors, hearing loss is independently associated with economic hardship, including both low income and unemployment/underemployment. The societal impact of hearing loss is profound in this nationally representative study and should be further evaluated with longitudinal cohorts. PMID:25158616
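    The adjusted odds ratios reported here are the kind of quantity obtained by fitting a logistic regression of the outcome on hearing-loss status plus covariates and exponentiating the hearing-loss coefficient. The sketch below illustrates that procedure on simulated data; it is not the NHANES analysis, and the variable names and effect sizes are invented.

    ```python
    # Adjusted odds ratio from a logistic regression on simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 3000
    df = pd.DataFrame({
        "hearing_loss": rng.integers(0, 2, n),
        "age": rng.uniform(20, 69, n),
        "female": rng.integers(0, 2, n),
        "low_education": rng.integers(0, 2, n),
    })
    # Simulate the outcome with a built-in hearing-loss effect.
    logit = -2.0 + 0.46 * df.hearing_loss + 0.3 * df.low_education
    df["low_income"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = smf.logit("low_income ~ hearing_loss + low_education + age + female",
                      data=df).fit(disp=0)
    odds_ratio = np.exp(model.params["hearing_loss"])
    ci_low, ci_high = np.exp(model.conf_int().loc["hearing_loss"])
    print(f"adjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
    ```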

  20. Consequences of Early Conductive Hearing Loss on Long-Term Binaural Processing.

    PubMed

    Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram

    The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study: 82 children with a documented history of conductive hearing loss associated with otitis media and 36 controls whose documented histories showed no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise Sentence (LiSN-S) task and the masking level difference (MLD) task were used as two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly more poorly than controls on all LiSN-S conditions relying on binaural cues (DV90, p < 0.001; SV90, p = 0.003). No significant difference was found between the groups in listening conditions without binaural cues. Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.

  1. The use of transient evoked otoacoustic emissions as a hearing screen following grommet insertion.

    PubMed

    Dale, O T; McCann, L J; Thio, D; Wells, S C; Drysdale, A J

    2011-07-01

    This study aimed to evaluate the sensitivity of transient evoked otoacoustic emission testing as a screening tool for hearing loss in children, after grommet insertion. A prospective study was conducted of 48 children (91 ears) aged three to 16 years who had undergone grommet insertion for glue ear. At post-operative review, pure tone audiometry was performed followed by transient evoked otoacoustic emission testing. Outcomes for both tests, in each ear, were compared. The pure tone audiometry threshold was ≤ 20 dB in 85 ears (93.4 per cent), 25 dB in two ears (2.2 per cent) and ≥ 30 dB in four ears (4.4 per cent). Transient evoked otoacoustic emissions were detected in 69 ears (75.8 per cent). The sensitivity of transient evoked otoacoustic emission testing for detecting hearing loss was 100 per cent for ≥ 30 dB loss but only 66.7 per cent for ≥ 25 dB loss. Transient evoked otoacoustic emission testing offers a sensitive means of detecting hearing loss of ≥ 30 dB following grommet insertion in children. However, the use of such testing as a screening tool may miss some cases of mild hearing loss.
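    The sensitivity figures quoted above follow directly from the standard screening definition: the proportion of truly impaired ears (by pure tone audiometry) that the TEOAE screen flags as abnormal. The sketch below shows the computation; the counts are chosen to be consistent with the percentages in the abstract, but the exact cross-tabulation is inferred rather than reported.

    ```python
    # Screening sensitivity and specificity from a 2x2 table.
    def sensitivity(true_pos, false_neg):
        """Proportion of truly impaired ears that the screen correctly flags."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        """Proportion of normally hearing ears that the screen correctly passes."""
        return true_neg / (true_neg + false_pos)

    # e.g. 4 ears with >= 30 dB loss, all flagged by an absent TEOAE -> 1.0 (100 per cent)
    print(sensitivity(true_pos=4, false_neg=0))
    # e.g. 6 ears with >= 25 dB loss, 4 flagged and 2 missed -> ~0.667 (66.7 per cent)
    print(sensitivity(true_pos=4, false_neg=2))
    ```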

  2. Spectrotemporal Modulation Detection and Speech Perception by Cochlear Implant Users

    PubMed Central

    Won, Jong Ho; Moon, Il Joon; Jin, Sunhwa; Park, Heesung; Woo, Jihwan; Cho, Yang-Sun; Chung, Won-Ho; Hong, Sung Hwa

    2015-01-01

    Spectrotemporal modulation (STM) detection performance was examined for cochlear implant (CI) users. The test involved discriminating between an unmodulated steady noise and a modulated stimulus. The modulated stimulus presents frequency modulation patterns that change in frequency over time. In order to examine STM detection performance for different modulation conditions, two different temporal modulation rates (5 and 10 Hz) and three different spectral modulation densities (0.5, 1.0, and 2.0 cycles/octave) were employed, producing a total of six different STM stimulus conditions. In order to explore how electric hearing constrains STM sensitivity for CI users differently from acoustic hearing, normal-hearing (NH) and hearing-impaired (HI) listeners were also tested on the same tasks. STM detection performance was best in NH subjects, followed by HI subjects. On average, CI subjects showed the poorest performance, but some CI subjects showed high levels of STM detection performance comparable to acoustic hearing. Significant correlations were found between STM detection performance and speech identification performance in quiet and in noise. In order to understand the relative contribution of spectral and temporal modulation cues to speech perception abilities for CI users, spectral and temporal modulation detection was performed separately and related to STM detection and speech perception performance. The results suggest that slow spectral modulation, rather than slow temporal modulation, may be important for determining speech perception capabilities for CI users. Lastly, test–retest reliability for STM detection was good, with no learning effect. The present study demonstrates that STM detection may be a useful tool to evaluate the ability of CI sound processing strategies to deliver clinically pertinent acoustic modulation information. PMID:26485715

  3. Gender differences in myogenic regulation along the vascular tree of the gerbil cochlea.

    PubMed

    Reimann, Katrin; Krishnamoorthy, Gayathri; Wier, Withrow Gil; Wangemann, Philine

    2011-01-01

    Regulation of cochlear blood flow is critical for hearing due to its exquisite sensitivity to ischemia and oxidative stress. Many forms of hearing loss such as sensorineural hearing loss and presbyacusis may involve or be aggravated by blood flow disorders. Animal experiments and clinical outcomes further suggest that there is a gender preference in hearing loss, with males being more susceptible. Autoregulation of cochlear blood flow has been demonstrated in some animal models in vivo, suggesting that similar to the brain, blood vessels supplying the cochlea have the ability to control flow within normal limits, despite variations in systemic blood pressure. Here, we investigated myogenic regulation in the cochlear blood supply of the Mongolian gerbil, a widely used animal model in hearing research. The cochlear blood supply originates at the basilar artery, followed by the anterior inferior cerebellar artery, and inside the inner ear, by the spiral modiolar artery and the radiating arterioles that supply the capillary beds of the spiral ligament and stria vascularis. Arteries from male and female gerbils were isolated and pressurized using a concentric pipette system. Diameter changes in response to increasing luminal pressures were recorded by laser scanning microscopy. Our results show that cochlear vessels from male and female gerbils exhibit myogenic regulation but with important differences. Whereas in male gerbils, both spiral modiolar arteries and radiating arterioles exhibited pressure-dependent tone, in females, only radiating arterioles had this property. Male spiral modiolar arteries responded more to L-NNA than female spiral modiolar arteries, suggesting that NO-dependent mechanisms play a bigger role in the myogenic regulation of male than female gerbil cochlear vessels.

  4. Metalloproteinases and their associated genes contribute to the functional integrity and noise-induced damage in the cochlear sensory epithelium

    PubMed Central

    Hu, Bo Hua; Cai, Qunfeng; Hu, Zihua; Patel, Minal; Bard, Jonathan; Jamison, Jennifer; Coling, Donald

    2012-01-01

    Matrix metalloproteinases (MMPs) and their related gene products regulate essential cellular functions. An imbalance in MMPs has been implicated in various neurological disorders, including traumatic injuries. Here, we report a role for MMPs and their related gene products in the modulation of cochlear responses to acoustic trauma in rats. The normal cochlea was shown to be enriched in MMP enzymatic activity, and this activity was reduced in a time-dependent fashion after traumatic noise injury. The analysis of gene expression by RNA-seq and qRT-PCR revealed the differential expression of MMPs and their related genes between functionally specialized regions of the sensory epithelium. The expression of these genes was dynamically regulated between the acute and chronic phases of noise-induced hearing loss. Moreover, noise-induced expression changes in two endogenous MMP inhibitors, Timp1 and Timp2, in sensory cells were dependent upon the stage of nuclear condensation, suggesting a specific role for MMP activity in sensory cell apoptosis. A short-term application of doxycycline, a broad-spectrum inhibitor of MMPs, prior to noise exposure reduced noise-induced hearing loss and sensory cell death. By contrast, a 7-day treatment compromised hearing sensitivity and potentiated noise-induced hearing loss. This detrimental effect of the long-term inhibition of MMPs on noise-induced hearing loss was further confirmed using targeted Mmp7 knockout mice. Together, these observations suggest that MMPs and their related genes participate in the regulation of cochlear responses to acoustic overstimulation and that the modulation of MMP activity can serve as a novel therapeutic target for the reduction of noise-induced cochlear damage. PMID:23100416

  5. Development of auditory sensitivity in budgerigars (Melopsittacus undulatus)

    NASA Astrophysics Data System (ADS)

    Brittan-Powell, Elizabeth F.; Dooling, Robert J.

    2004-06-01

    Auditory feedback influences the development of vocalizations in songbirds and parrots; however, little is known about the development of hearing in these birds. The auditory brainstem response was used to track the development of auditory sensitivity in budgerigars from hatch to 6 weeks of age. Responses were first obtained from 1-week-old birds at high stimulation levels for frequencies at or below 2 kHz, showing that budgerigars do not hear well at hatch. Over the next week, thresholds improved markedly, and responses were obtained for almost all test frequencies throughout the range of hearing by 14 days. By 3 weeks posthatch, the birds' best sensitivity shifted from 2 to 2.86 kHz, and the shape of the auditory brainstem response (ABR) audiogram became similar to that of adult budgerigars. About a week before the birds leave the nest, their ABR audiograms are very similar to those of adult birds. These data complement what is known about vocal development in budgerigars and show that hearing is fully developed by the time that vocal learning begins.

  6. Temporal masking functions for listeners with real and simulated hearing loss

    PubMed Central

    Desloge, Joseph G.; Reed, Charlotte M.; Braida, Louis D.; Perez, Zachary D.; Delhorne, Lorraine A.

    2011-01-01

    A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal masking functions for eight listeners with mild to severe sensorineural hearing loss. Each audiometric loss was simulated in a group of age-matched normal-hearing listeners through a combination of spectrally-shaped masking noise and multi-band expansion. Temporal-masking functions were obtained in both groups of listeners using a forward-masking paradigm in which the level of a 110-ms masker required to just mask a 10-ms fixed-level probe (5-10 dB SL) was measured as a function of the time delay between the masker offset and probe onset. At each of four probe frequencies (500, 1000, 2000, and 4000 Hz), temporal-masking functions were obtained using maskers that were 0.55, 1.0, and 1.15 times the probe frequency. The slopes and y-intercepts of the masking functions were not significantly different for listeners with real and simulated hearing loss. The y-intercepts were positively correlated with level of hearing loss while the slopes were negatively correlated. The ratio of the slopes obtained with the low-frequency maskers relative to the on-frequency maskers was similar for both groups of listeners and indicated a smaller compressive effect than that observed in normal-hearing listeners. PMID:21877806
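    The slopes and y-intercepts compared in this study are parameters of a line fitted to each temporal masking function (the masker level needed to just mask the probe, as a function of masker-probe delay). The sketch below shows one such fit on hypothetical data; the actual study's fitting procedure, data shape, and units may differ.

    ```python
    # Linear fit to a hypothetical temporal masking function.
    import numpy as np

    delays_ms = np.array([5, 10, 20, 40, 80])          # masker offset to probe onset
    masker_level_db = np.array([45, 52, 61, 72, 85])   # level needed to just mask probe

    slope, intercept = np.polyfit(delays_ms, masker_level_db, deg=1)
    print(f"slope = {slope:.2f} dB/ms, y-intercept = {intercept:.1f} dB")
    ```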

  7. Abnormal neural activities of directional brain networks in patients with long-term bilateral hearing loss.

    PubMed

    Xu, Long-Chun; Zhang, Gang; Zou, Yue; Zhang, Min-Feng; Zhang, Dong-Sheng; Ma, Hua; Zhao, Wen-Bo; Zhang, Guang-Yu

    2017-10-13

    The objective of this study was to inform the rehabilitation of hearing impairment by investigating changes in the neural activity of directional brain networks in patients with long-term bilateral hearing loss. First, we administered neuropsychological tests to 21 subjects (11 patients with long-term bilateral hearing loss and 10 subjects with normal hearing); these tests revealed significant differences between the deaf group and the controls. We then constructed an individual-specific virtual brain for each participant from functional magnetic resonance data by utilizing effective connectivity and multivariate regression methods. We applied a stimulating signal to the primary auditory cortices of the virtual brain and observed the resulting brain region activations. Patients with long-term bilateral hearing loss presented weaker activations in the auditory and language networks but enhanced neural activity in the default mode network compared with normally hearing subjects, and the right cerebral hemisphere showed more changes than the left. Additionally, weaker neural activity in the primary auditory cortices was strongly associated with poorer cognitive performance. Finally, causal analysis revealed several interactional circuits among the activated brain regions, and these interregional causal interactions implied that abnormal neural activity of the directional brain networks in the deaf patients impacted cognitive function.

  8. Comparison of speech recognition with adaptive digital and FM remote microphone hearing assistance technology by listeners who use hearing aids.

    PubMed

    Thibodeau, Linda

    2014-06-01

    The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.

  9. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening

    PubMed Central

    Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur

    2013-01-01

    Objective The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall. PMID:22731919

  10. Interactions between amplitude modulation and frequency modulation processing: Effects of age and hearing loss.

    PubMed

    Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian

    2016-07-01

    Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli only contained the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.
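    As an illustration of the stimulus construction described above, the sketch below synthesizes a 500-Hz carrier with 5-Hz sinusoidal amplitude and frequency modulation superimposed. The modulation depth and frequency excursion are arbitrary illustration values, not those used in the study.

    ```python
    # Synthesize a tone carrying both AM and FM at the same 5-Hz rate.
    import numpy as np

    fs = 44100                       # sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)    # 1-s stimulus
    fc, fm = 500.0, 5.0              # carrier and modulation frequencies (Hz)
    m_am = 0.3                       # AM depth (0..1), illustrative
    df = 20.0                        # FM excursion (Hz), illustrative

    # FM phase: integral of the instantaneous frequency fc + df*sin(2*pi*fm*t)
    phase = 2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t)
    envelope = 1 + m_am * np.sin(2 * np.pi * fm * t)   # AM envelope
    stimulus = envelope * np.sin(phase)
    ```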

  11. A vestibular phenotype for Waardenburg syndrome?

    NASA Technical Reports Server (NTRS)

    Black, F. O.; Pesznecker, S. C.; Allen, K.; Gianna, C.

    2001-01-01

    OBJECTIVE: To investigate vestibular abnormalities in subjects with Waardenburg syndrome. STUDY DESIGN: Retrospective record review. SETTING: Tertiary referral neurotology clinic. SUBJECTS: Twenty-two adult white subjects with a clinical diagnosis of Waardenburg syndrome (10 type I and 12 type II). INTERVENTIONS: Evaluation for Waardenburg phenotype, history of vestibular and auditory symptoms, and tests of vestibular and auditory function. MAIN OUTCOME MEASURES: Results of phenotyping, results of vestibular and auditory symptom review (history), and results of vestibular and auditory function testing. RESULTS: Seventeen subjects were women, and 5 were men. Their ages ranged from 21 to 58 years (mean, 38 years). Sixteen of the 22 subjects sought treatment for vertigo, dizziness, or imbalance. For subjects with vestibular symptoms, the results of vestibuloocular tests (calorics, vestibular autorotation, and/or pseudorandom rotation) were abnormal in 77%, and the results of vestibulospinal function tests (computerized dynamic posturography, EquiTest) were abnormal in 57%, but there were no specific patterns of abnormality. Six had objective sensorineural hearing loss. Thirteen had an elevated summating potential/action potential ratio (>0.40) on electrocochleography. All subjects except those with severe hearing loss (n = 3) had normal auditory brainstem response results. CONCLUSION: Patients with Waardenburg syndrome may experience primarily vestibular symptoms without hearing loss. Electrocochleography and vestibular function tests appear to be the most sensitive measures of otologic abnormalities in such patients.

  12. Exploring the Relationship between Physiological Measures of Cochlear and Brainstem Function

    PubMed Central

    Dhar, S.; Abel, R.; Hornickel, J.; Nicol, T.; Skoe, E.; Zhao, W.; Kraus, N.

    2009-01-01

    Objective Otoacoustic emissions and the speech-evoked auditory brainstem response are objective indices of peripheral auditory physiology and are used clinically for assessing hearing function. While each measure has been extensively explored, their interdependence and the relationships between them remain relatively unexplored. Methods Distortion product otoacoustic emissions (DPOAE) and speech-evoked auditory brainstem responses (sABR) were recorded from 28 normal-hearing adults. Through correlational analyses, DPOAE characteristics were compared to measures of sABR timing and frequency encoding. Data were organized into two DPOAE (Strength and Structure) and five brainstem (Onset, Spectrotemporal, Harmonics, Envelope Boundary, Pitch) composite measures. Results DPOAE Strength shows significant relationships with sABR Spectrotemporal and Harmonics measures. DPOAE Structure shows significant relationships with sABR Envelope Boundary. Neither DPOAE Strength nor Structure is related to sABR Pitch. Conclusions The results of the present study show that certain aspects of the speech-evoked auditory brainstem responses are related to, or covary with, cochlear function as measured by distortion product otoacoustic emissions. Significance These results form a foundation for future work in clinical populations. Analyzing cochlear and brainstem function in parallel in different clinical populations will provide a more sensitive clinical battery for identifying the locus of different disorders (e.g., language based learning impairments, hearing impairment). PMID:19346159

  13. Vocabulary Facilitates Speech Perception in Children With Hearing Aids

    PubMed Central

    Walker, Elizabeth A.; Kirby, Benjamin; McCreery, Ryan W.

    2017-01-01

    Purpose We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Method Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5–12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli. Results Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs. Conclusions Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH. PMID:28738138

  14. Monaural Congenital Deafness Affects Aural Dominance and Degrades Binaural Processing

    PubMed Central

    Tillein, Jochen; Hubka, Peter; Kral, Andrej

    2016-01-01

    Cortical development extensively depends on sensory experience. Effects of congenital monaural and binaural deafness on cortical aural dominance and representation of binaural cues were investigated in the present study. We used an animal model that precisely mimics the clinical scenario of unilateral cochlear implantation in an individual with single-sided congenital deafness. Multiunit responses in cortical field A1 to cochlear implant stimulation were studied in normal-hearing cats, bilaterally congenitally deaf cats (CDCs), and unilaterally deaf cats (uCDCs). Binaural deafness reduced cortical responsiveness and decreased response thresholds and dynamic range. In contrast to CDCs, in uCDCs, cortical responsiveness was not reduced, but hemispheric-specific reorganization of aural dominance and binaural interactions were observed. Deafness led to a substantial drop in binaural facilitation in CDCs and uCDCs, demonstrating the inevitable role of experience for a binaural benefit. Sensitivity to interaural time differences was more reduced in uCDCs than in CDCs, particularly at the hemisphere ipsilateral to the hearing ear. Compared with binaural deafness, unilateral hearing prevented nonspecific reduction in cortical responsiveness, but extensively reorganized aural dominance and binaural responses. The deaf ear remained coupled with the cortex in uCDCs, demonstrating a significant difference to deprivation amblyopia in the visual system. PMID:26803166

  15. Monaural Congenital Deafness Affects Aural Dominance and Degrades Binaural Processing.

    PubMed

    Tillein, Jochen; Hubka, Peter; Kral, Andrej

    2016-04-01

    Cortical development extensively depends on sensory experience. Effects of congenital monaural and binaural deafness on cortical aural dominance and representation of binaural cues were investigated in the present study. We used an animal model that precisely mimics the clinical scenario of unilateral cochlear implantation in an individual with single-sided congenital deafness. Multiunit responses in cortical field A1 to cochlear implant stimulation were studied in normal-hearing cats, bilaterally congenitally deaf cats (CDCs), and unilaterally deaf cats (uCDCs). Binaural deafness reduced cortical responsiveness and decreased response thresholds and dynamic range. In contrast to CDCs, in uCDCs, cortical responsiveness was not reduced, but hemispheric-specific reorganization of aural dominance and binaural interactions were observed. Deafness led to a substantial drop in binaural facilitation in CDCs and uCDCs, demonstrating the inevitable role of experience for a binaural benefit. Sensitivity to interaural time differences was more reduced in uCDCs than in CDCs, particularly at the hemisphere ipsilateral to the hearing ear. Compared with binaural deafness, unilateral hearing prevented nonspecific reduction in cortical responsiveness, but extensively reorganized aural dominance and binaural responses. The deaf ear remained coupled with the cortex in uCDCs, demonstrating a significant difference to deprivation amblyopia in the visual system. © The Author 2016. Published by Oxford University Press.

  16. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    PubMed

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    The aim was to investigate the detailed auditory features of patients in whom auditory impairment was the first clinical symptom of childhood adrenoleukodystrophy (CSALD). The subjects were three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD. The clinical characteristics of hearing and auditory function were examined in detail, including pure tone audiometry, verbal sound discrimination, otoacoustic emissions (OAE), and the auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to being called was observed in all patients. Two patients were not aware of their hearing difficulty and had initially been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients, and all showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal and environmental sounds and in sound lateralization, as well as strong left-ear suppression on the dichotic listening test; however, once they discriminated verbal sounds, they correctly understood their meaning. Two patients showed prolongation of the I-V and III-V interwave intervals on ABR, but one showed no abnormality. MRI of the three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. When patients are suspected of having hearing impairment but show no abnormalities on pure tone audiometry and/or ABR, the condition should not be dismissed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  17. Are normally sighted, visually impaired, and blind pedestrians accurate and reliable at making street crossing decisions?

    PubMed

    Hassan, Shirin E

    2012-05-04

    The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times.

  18. Are Normally Sighted, Visually Impaired, and Blind Pedestrians Accurate and Reliable at Making Street Crossing Decisions?

    PubMed Central

    Hassan, Shirin E.

    2012-01-01

    Purpose. The purpose of this study is to measure the accuracy and reliability of normally sighted, visually impaired, and blind pedestrians at making street crossing decisions using visual and/or auditory information. Methods. Using a 5-point rating scale, safety ratings for vehicular gaps of different durations were measured along a two-lane street of one-way traffic without a traffic signal. Safety ratings were collected from 12 normally sighted, 10 visually impaired, and 10 blind subjects for eight different gap times under three sensory conditions: (1) visual plus auditory information, (2) visual information only, and (3) auditory information only. Accuracy and reliability in street crossing decision-making were calculated for each subject under each sensory condition. Results. We found that normally sighted and visually impaired pedestrians were accurate and reliable in their street crossing decision-making ability when using either vision plus hearing or vision only (P > 0.05). Under the hearing only condition, all subjects were reliable (P > 0.05) but inaccurate with their street crossing decisions (P < 0.05). Compared to either the normally sighted (P = 0.018) or visually impaired subjects (P = 0.019), blind subjects were the least accurate with their street crossing decisions under the hearing only condition. Conclusions. Our data suggested that visually impaired pedestrians can make accurate and reliable street crossing decisions like those of normally sighted pedestrians. When using auditory information only, all subjects significantly overestimated the vehicular gap time. Our finding that blind pedestrians performed significantly worse than either the normally sighted or visually impaired subjects under the hearing only condition suggested that they may benefit from training to improve their detection ability and/or interpretation of vehicular gap times. PMID:22427593

  19. Computed tomography demonstrates abnormalities of contralateral ear in subjects with unilateral sensorineural hearing loss.

    PubMed

    Marcus, Sonya; Whitlow, Christopher T; Koonce, James; Zapadka, Michael E; Chen, Michael Y; Williams, Daniel W; Lewis, Meagan; Evans, Adele K

    2014-02-01

    Prior studies using computed tomography (CT) have associated gross inner ear abnormalities with pediatric sensorineural hearing loss (SNHL), but no studies to date have specifically investigated morphologic inner ear abnormalities of the contralateral, unaffected ear in patients with unilateral SNHL. The purpose of this study was to evaluate the contralateral inner ear structures of subjects with unilateral SNHL but no grossly abnormal findings on CT. In an IRB-approved retrospective analysis, 97 pediatric temporal bone CT scans, previously interpreted as "normal" by board-certified neuroradiologists according to accepted guidelines, were assessed using 12 measurements of the semicircular canals, cochlea, and vestibule. The control group consisted of 72 "normal" temporal bone CTs from subjects without underlying SNHL. The study group consisted of 25 normal-hearing contralateral temporal bones from subjects with unilateral SNHL. Multivariate analysis of covariance (MANCOVA) was then conducted to evaluate differences between the study and control groups. In the audiometrically normal ears of subjects with unilateral SNHL, the cochlear basal turn lumen width was significantly greater and the central lucency of the lateral semicircular canal bony island was significantly lower in density compared with controls. Abnormalities of the inner ear were thus present in the contralateral, audiometrically normal ears of subjects with unilateral SNHL, suggesting that patients with unilateral SNHL may have a more pervasive disease process that produces abnormalities in both ears. A cochlear basal turn lumen width disparity of >5% from "normal" and/or a lateral semicircular canal bony island central lucency disparity of >5% from "normal" may indicate inherent risk to the contralateral, unaffected ear in pediatric patients with unilateral sensorineural hearing loss. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
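    The audiovisual benefit defined in this study is simply the auditory-only SRT minus the audiovisual SRT, both in dB SNR, so that a positive value means the subtitles helped. A minimal sketch with hypothetical SRT values:

    ```python
    # Audiovisual benefit = auditory-only SRT minus audiovisual SRT (dB SNR).
    import numpy as np

    srt_auditory = np.array([-2.1, 0.5, 1.3, -0.7])        # speech only, hypothetical
    srt_audiovisual = np.array([-4.0, -1.8, -0.6, -2.9])   # speech + subtitles, hypothetical

    benefit = srt_auditory - srt_audiovisual
    print(f"mean audiovisual benefit = {benefit.mean():.1f} dB SNR")
    ```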
