Sample records for young normal-hearing listeners

  1. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance in background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker could generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words spoken by the familiar talker was then compared with performance on those same words spoken by three novel talkers and on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list on which they were trained than on a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than on talker-specific acoustic…

  2. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study

    ERIC Educational Resources Information Center

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-01-01

    Purpose: The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Method: Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different…

  3. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study.

    PubMed

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-04-14

The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different listening conditions. A primary speech-recognition task and a secondary memory task were performed both separately and simultaneously. Furthermore, subjective listening effort was assessed by questionnaire for various listening situations. The Tinnitus Handicap Inventory was used to control for tinnitus handicap. Listening effort significantly increased in the tinnitus group across listening conditions. There was no significant difference in listening effort between listening conditions, nor was there an interaction between groups and listening conditions. Subjective listening effort did not significantly differ between the two groups. This study is a first exploration of listening effort in normal-hearing participants with chronic tinnitus, showing that listening effort is increased compared with a control group. There is a need to further investigate the cognitive functions important for speech understanding and their possible relation with the presence of tinnitus and listening effort.

  4. Vowel perception by noise masked normal-hearing young adults

    NASA Astrophysics Data System (ADS)

    Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen

    2005-08-01

This study examined vowel perception by young normal-hearing (YNH) adults in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create equal audibility with the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant differences between groups in vowel discrimination when audibility was matched by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI achieved a level of discrimination similar to that of the YNH within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.

  5. Effects of Varying Reverberation on Music Perception for Young Normal-Hearing and Old Hearing-Impaired Listeners.

    PubMed

    Reinhart, Paul N; Souza, Pamela E

    2018-01-01

Reverberation enhances music perception and is one of the most important acoustic factors in auditorium design. However, previous research on reverberant music perception has focused on young normal-hearing (YNH) listeners. Old hearing-impaired (OHI) listeners have degraded spatial auditory processing; therefore, they may perceive reverberant music differently. Two experiments were conducted examining the effects of varying reverberation on music perception for YNH and OHI listeners. Experiment 1 examined whether YNH listeners and OHI listeners prefer different amounts of reverberation for classical music listening. Symphonic excerpts were processed at a range of reverberation times using a point-source simulation. Listeners performed a paired-comparisons task in which they heard two excerpts with different reverberation times, and they indicated which they preferred. The YNH group preferred a reverberation time of 2.5 s; however, the OHI group did not demonstrate any significant preference. Experiment 2 examined whether OHI listeners are less sensitive to (i.e., less able to discriminate) differences in reverberation time than YNH listeners. YNH and OHI participants listened to pairs of music excerpts and indicated whether they perceived the same or different amount of reverberation. Results indicated that the ability of both groups to detect differences in reverberation time improved with increasing reverberation time difference. However, discrimination was poorer for the OHI group than for the YNH group. This suggests that OHI listeners are less sensitive than YNH listeners to differences in reverberation when listening to music, which might explain the OHI group's lack of a reverberation time preference.

  6. Judgments of Emotion in Clear and Conversational Speech by Young Adults with Normal Hearing and Older Adults with Hearing Impairment

    ERIC Educational Resources Information Center

    Morgan, Shae D.; Ferguson, Sarah Hargus

    2017-01-01

    Purpose: In this study, we investigated the emotion perceived by young listeners with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners) when listening to speech produced conversationally or in a clear speaking style. Method: The first experiment included 18 YNH listeners, and the second included 10 additional…

  7. Upward spread of informational masking in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Alexander, Joshua M.; Lutfi, Robert A.

    2003-04-01

Thresholds for pure-tone signals of 0.8, 2.0, and 5.0 kHz were measured in the presence of a simultaneous multitone masker in 15 normal-hearing and 8 hearing-impaired listeners. The masker consisted of fixed-frequency tones ranging from 522-8346 Hz at 1/3-octave intervals, excluding the 2/3-octave interval on either side of the signal. Masker uncertainty was manipulated by independently and randomly playing individual masker tones with probability p=0.5 or p=1.0 on each trial. Informational masking (IM) was estimated by the threshold difference (p=0.5 minus p=1.0). Decision weights were estimated from correlations of the listener's response with the occurrence of the signal and individual masker components on each trial. IM was greater for normal-hearing listeners than for hearing-impaired listeners, and most listeners had at least 10 dB of IM for one of the signal frequencies. For both groups, IM increased as the number of masker components below the signal frequency increased. Decision weights were also similar for both groups: masker frequencies below the signal were weighted more than those above. Implications are that normal-hearing and hearing-impaired individuals do not weight information differently in these masking conditions and that factors associated with listening may be partially responsible for the greater effectiveness of low-frequency maskers. [Work supported by NIDCD.]
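The two analyses this record describes, the IM estimate as a threshold difference and decision weights as trial-by-trial correlations, can be sketched as follows. This is a hypothetical illustration on simulated data; the trial counts, threshold values, and weight profile are assumptions, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trial data: on each trial the signal is present or absent,
# and each of 10 masker tones is either played (1) or not (0).
n_trials, n_maskers = 2000, 10
signal = rng.integers(0, 2, n_trials)
maskers = rng.integers(0, 2, (n_trials, n_maskers))

# Simulated listener: responses driven by the signal plus a bias toward
# low-frequency masker components, mimicking the reported over-weighting
# of components below the signal frequency.
true_weights = np.linspace(0.6, 0.1, n_maskers)
evidence = 1.5 * signal + maskers @ true_weights + rng.normal(0, 1, n_trials)
response = (evidence > evidence.mean()).astype(int)

# Informational masking: threshold in the uncertain (p = 0.5) condition
# minus threshold in the certain (p = 1.0) condition. Values are made up.
threshold_p05, threshold_p10 = 62.0, 50.0  # dB SPL, hypothetical
im_db = threshold_p05 - threshold_p10      # 12 dB of IM

# Decision weights: correlation of the binary response with the occurrence
# of the signal and of each masker component across trials.
def decision_weight(event, resp):
    return np.corrcoef(event, resp)[0, 1]

signal_weight = decision_weight(signal, response)
masker_weights = [decision_weight(maskers[:, k], response)
                  for k in range(n_maskers)]
```

With enough trials, the recovered `masker_weights` echo the simulated bias: components given larger true weights correlate more strongly with the response.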

  8. Intelligibility of foreign-accented speech: Effects of listening condition, listener age, and listener hearing status

    NASA Astrophysics Data System (ADS)

    Ferguson, Sarah Hargus

    2005-09-01

It is well known that, for listeners with normal hearing, speech produced by non-native speakers of the listener's first language is less intelligible than speech produced by native speakers. Intelligibility is well correlated with listeners' ratings of talker comprehensibility and accentedness, which have been shown to be related to several talker factors, including age of second language acquisition and level of similarity between the talker's native and second language phoneme inventories. Relatively few studies have focused on factors extrinsic to the talker. The current project explored the effects of listener and environmental factors on the intelligibility of foreign-accented speech. Specifically, monosyllabic English words previously recorded from two talkers, one a native speaker of American English and the other a native speaker of Spanish, were presented to three groups of listeners (young listeners with normal hearing, elderly listeners with normal hearing, and elderly listeners with hearing impairment; n=20 each) in three different listening conditions (undistorted words in quiet, undistorted words in 12-talker babble, and filtered words in quiet). Data analysis will focus on interactions between talker accent, listener age, listener hearing status, and listening condition. [Project supported by American Speech-Language-Hearing Association AARC Award.]

  9. The time course of learning during a vowel discrimination task by hearing-impaired and masked normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Davis, Carrie; Kewley-Port, Diane; Coughlin, Maureen

    2002-05-01

Vowel discrimination was compared between a group of young, well-trained listeners with mild-to-moderate sensorineural hearing impairment (YHI) and a matched group of noise-masked normal-hearing listeners (YNH). Unexpectedly, discrimination of F1 and F2 in the YHI listeners was equal to or better than that observed in YNH listeners in three conditions of similar audibility [Davis et al., J. Acoust. Soc. Am. 109, 2501 (2001)]. However, in the same time interval, the YHI subjects completed an average of 55% more blocks of testing than the YNH group. New analyses were undertaken to examine the time course of learning during the vowel discrimination task, to determine whether performance was affected by number of trials. Learning curves for a set of vowels in the F1 and F2 regions showed no significant differences between the YHI and YNH listeners. Thus, while the YHI subjects completed more trials overall, they achieved a level of discrimination similar to that of their normal-hearing peers within the same number of blocks. Implications of discrimination performance in relation to hearing status and listening strategies will be discussed. [Work supported by NIHDCD-02229.]

  10. An Evaluation of the BKB-SIN, HINT, QuickSIN, and WIN Materials on Listeners with Normal Hearing and Listeners with Hearing Loss

    ERIC Educational Resources Information Center

    Wilson, Richard H.; McArdle, Rachel A.; Smith, Sherri L.

    2007-01-01

    Purpose: The purpose of this study was to examine in listeners with normal hearing and listeners with sensorineural hearing loss the within- and between-group differences obtained with 4 commonly available speech-in-noise protocols. Method: Recognition performances by 24 listeners with normal hearing and 72 listeners with sensorineural hearing…

  11. Self-Monitoring of Listening Abilities in Normal-Hearing Children, Normal-Hearing Adults, and Children with Cochlear Implants

    PubMed Central

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

Background: Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one's speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose: The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design: Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech-spectrum noise were compared to their objective performance on the same task. Study Sample: Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis: On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners' threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared…
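The threshold and discrepancy analysis this record describes can be sketched in a few lines. This is a minimal illustration assuming a logistic psychometric function; the fitted midpoint and slope values below are hypothetical, not from the study.

```python
import math

# A logistic psychometric function relating target/masker ratio (T/M, dB)
# to proportion correct (Objective) or proportion judged intelligible
# (Subjective).
def logistic(tm_db, midpoint, slope):
    return 1.0 / (1.0 + math.exp(-slope * (tm_db - midpoint)))

# Invert the fitted function to find the T/M producing proportion p,
# here the study's 51% criterion.
def threshold(midpoint, slope, p=0.51):
    return midpoint + math.log(p / (1.0 - p)) / slope

# Hypothetical fitted parameters for one participant.
obj_mid, obj_slope = -6.0, 0.9    # Objective condition
subj_mid, subj_slope = -4.0, 0.9  # Subjective condition

t_obj = threshold(obj_mid, obj_slope)
t_subj = threshold(subj_mid, subj_slope)

# Discrepancy score: the index of self-monitoring ability. A value near
# zero means subjective judgments track actual performance; this listener
# overestimates the T/M they need by 2 dB.
discrepancy = t_subj - t_obj
```

At p = 0.51 the threshold sits essentially at the function's midpoint, so the discrepancy here reduces to the difference between the two midpoints.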

  12. Effect of duration and inter-stimulus interval on auditory temporal order discrimination in young normal-hearing and elderly hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Narendran, Mini M.; Humes, Larry E.

    2003-04-01

Increasing the rate of presentation can have a deleterious effect on auditory processing, especially among the elderly. Rate can be manipulated by changing the duration of individual components of a sequence of sounds, by changing the inter-stimulus interval (ISI) between components, or both. Consequently, when age-related deficits in performance appear to be attributable to rate of stimulus presentation, it is often the case that alternative explanations in terms of the effects of stimulus duration or ISI are also possible. In this study, the independent effects of duration and ISI on the discrimination of temporal order for four-tone sequences were investigated in a group of young normal-hearing and elderly hearing-impaired listeners. It was found that discrimination performance was driven by the rate of presentation, rather than stimulus duration or ISI alone, for both groups of listeners. The performance of the two groups of listeners differed significantly for the fastest presentation rates, but was similar for the slower rates. Slowing the rate of presentation seemed to improve performance, regardless of whether this was done by increasing stimulus duration or increasing ISI, and this was observed for both groups of listeners. [Work supported, in part, by NIA.]

  13. High-Level Psychophysical Tuning Curves: Forward Masking in Normal-Hearing and Hearing-Impaired Listeners.

    ERIC Educational Resources Information Center

    Nelson, David A.

    1991-01-01

    Forward-masked psychophysical tuning curves were obtained at multiple probe levels from 26 normal-hearing listeners and 24 ears of 21 hearing-impaired listeners with cochlear hearing loss. Results indicated that some cochlear hearing losses influence the sharp tuning capabilities usually associated with outer hair cell function. (Author/JDD)

  14. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners.

    PubMed

    Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh

    2018-03-01

Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.

  15. Examination of the neighborhood activation theory in normal and hearing-impaired listeners.

    PubMed

    Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A

    2001-02-01

Experiments were conducted to examine the effects of lexical information on word recognition among normal-hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high-frequency over a low-frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal-hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal-hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as…

  16. Talker Differences in Clear and Conversational Speech: Perceived Sentence Clarity for Young Adults with Normal Hearing and Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus; Morgan, Shae D.

    2018-01-01

    Purpose: The purpose of this study is to examine talker differences for subjectively rated speech clarity in clear versus conversational speech, to determine whether ratings differ for young adults with normal hearing (YNH listeners) and older adults with hearing impairment (OHI listeners), and to explore effects of certain talker characteristics…

  17. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  18. On The (Un)importance of Working Memory in Speech-in-Noise Processing for Listeners with Normal Hearing Thresholds.

    PubMed

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in the processing of speech in noise (SiN). The psychological construct that has received much interest in recent years is working memory (WM). Empirical evidence indeed confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. However, some theoretical models propose that variations in WMC are an important predictor for variations in speech processing abilities in adverse perceptual conditions for all listeners, and this notion has become widely accepted within the field. To assess whether WMC also plays a role when listeners without hearing loss process speech in adverse listening conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification, using sentence material routinely used in audiological and hearing research. A meta-analysis revealed that, for young listeners with audiometrically normal hearing, individual variations in WMC are estimated to account for, on average, less than 2% of the variance in SiN identification scores. This result cautions against the (intuitively appealing) assumption that individual variations in WMC are predictive of SiN identification independently of the age and hearing status of the listener.

  19. Effects of Age and Working Memory Capacity on Speech Recognition Performance in Noise Among Listeners With Normal Hearing.

    PubMed

    Gordon-Salant, Sandra; Cole, Stacey Samuels

    2016-01-01

…Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.

  20. The Influence of Noise Reduction on Speech Intelligibility, Response Times to Speech, and Perceived Listening Effort in Normal-Hearing Listeners.

    PubMed

    van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph

    2017-01-01

    Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
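The ideal binary mask (IBM) condition this record uses is a standard reference technique: each time-frequency cell of the noisy mixture is kept if its local signal-to-noise ratio exceeds a criterion and discarded otherwise, "ideal" because the clean speech and noise are known separately. A minimal sketch, with random spectrograms standing in for real signals and a 0 dB criterion assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in power spectrograms (frequency bins x time frames); in practice
# these would come from an STFT of the separately known speech and noise.
speech_power = rng.exponential(1.0, size=(64, 100))
noise_power = rng.exponential(1.0, size=(64, 100))

# Local SNR per time-frequency cell, and the binary mask: keep (1) cells
# where speech dominates, discard (0) the rest.
local_snr_db = 10 * np.log10(speech_power / noise_power)
ibm = (local_snr_db > 0.0).astype(float)

# Applying the mask to the mixture zeroes out the noise-dominated cells.
mixture_power = speech_power + noise_power
masked_power = ibm * mixture_power
```

The MMSE estimator the study compares against is the more realistic case: it must infer the noise statistics from the mixture alone rather than being given them.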

  1. Perception of a Sung Vowel as a Function of Frequency-Modulation Rate and Excursion in Listeners with Normal Hearing and Hearing Impairment

    ERIC Educational Resources Information Center

    Vatti, Marianna; Santurette, Sébastien; Pontoppidan, Niels Henrik; Dau, Torsten

    2014-01-01

    Purpose: Frequency fluctuations in human voices can usually be described as coherent frequency modulation (FM). As listeners with hearing impairment (HI listeners) are typically less sensitive to FM than listeners with normal hearing (NH listeners), this study investigated whether hearing loss affects the perception of a sung vowel based on FM…

  2. Cochlear Implants Special Issue Article: Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users

    PubMed Central

    Luo, Xin; Fu, Qian-Jie; Galvin, John J.

    2007-01-01

The present study investigated the ability of normal-hearing listeners and cochlear implant users to recognize vocal emotions. Sentences were produced by 1 male and 1 female talker according to 5 target emotions: angry, anxious, happy, sad, and neutral. Overall amplitude differences between the stimuli were either preserved or normalized. In experiment 1, vocal emotion recognition was measured in normal-hearing and cochlear implant listeners; cochlear implant subjects were tested using their clinically assigned processors. When overall amplitude cues were preserved, normal-hearing listeners achieved near-perfect performance, whereas cochlear implant listeners recognized less than half of the target emotions. Removing the overall amplitude cues significantly worsened mean normal-hearing and cochlear implant performance. In experiment 2, vocal emotion recognition was measured in cochlear implant listeners as a function of the number of channels (from 1 to 8) and envelope filter cutoff frequency (50 vs 400 Hz) in experimental speech processors. In experiment 3, vocal emotion recognition was measured in normal-hearing listeners as a function of the number of channels (from 1 to 16) and envelope filter cutoff frequency (50 vs 500 Hz) in acoustic cochlear implant simulations. Results from experiments 2 and 3 showed that both cochlear implant and normal-hearing performance significantly improved as the number of channels or the envelope filter cutoff frequency was increased. The results suggest that spectral, temporal, and overall amplitude cues each contribute to vocal emotion recognition. The poorer cochlear implant performance is most likely attributable to the lack of salient pitch cues and the limited functional spectral resolution. PMID:18003871

  3. Effects of modulation phase on profile analysis in normal-hearing and hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna; Lentz, Jennifer

    2003-04-01

The ability to discriminate between sounds with different spectral shapes in the presence of amplitude modulation was measured in normal-hearing and hearing-impaired listeners. The standard stimulus was the sum of equal-amplitude modulated tones, and the signal stimulus was generated by increasing the level of half the tones (up components) and decreasing the level of half the tones (down components). The down components had the same modulation phase, and a phase shift was applied to the up components to encourage segregation from the down tones. The same phase shift was used in both standard and signal stimuli. Profile-analysis thresholds were measured as a function of the phase shift between up and down components. The phase shifts were 0, 30, 45, 60, 90, and 180 deg. As expected, thresholds were lowest when all tones had the same modulation phase and increased somewhat with increasing phase disparity. This small increase in thresholds was similar for both groups. These results suggest that hearing-impaired listeners are able to use modulation phase to group sounds in a manner similar to that of normal-hearing listeners. [Work supported by NIH (DC 05835).]

  4. Sound localization in noise in hearing-impaired listeners.

    PubMed

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  5. Binaural pitch fusion: Comparison of normal-hearing and hearing-impaired listeners

    PubMed Central

    Reiss, Lina A. J.; Shayman, Corey S.; Walker, Emily P.; Bennett, Keri O.; Fowler, Jennifer R.; Hartling, Curtis L.; Glickman, Bess; Lasarev, Michael R.; Oh, Yonghee

    2017-01-01

    Binaural pitch fusion is the fusion of dichotically presented tones that evoke different pitches between the ears. In normal-hearing (NH) listeners, the frequency range over which binaural pitch fusion occurs is usually <0.2 octaves. Recently, broad fusion ranges of 1–4 octaves were demonstrated in bimodal cochlear implant users. In the current study, it was hypothesized that hearing aid (HA) users would also exhibit broad fusion. Fusion ranges were measured in both NH and hearing-impaired (HI) listeners with hearing losses ranging from mild-moderate to severe-profound, and relationships of fusion range with demographic factors and with diplacusis were examined. Fusion ranges of NH and HI listeners averaged 0.17 ± 0.13 octaves and 1.7 ± 1.5 octaves, respectively. In HI listeners, fusion ranges were positively correlated with a principal component measure of the covarying factors of young age, early age of hearing loss onset, and long durations of hearing loss and HA use, but not with hearing threshold, amplification level, or diplacusis. In NH listeners, no correlations were observed with age, hearing threshold, or diplacusis. The association of broad fusion with early-onset, long-duration hearing loss suggests a possible role of long-term experience with hearing loss and amplification in the development of broad fusion. PMID:28372056

  6. Decision strategies of hearing-impaired listeners in spectral shape discrimination

    NASA Astrophysics Data System (ADS)

    Lentz, Jennifer J.; Leek, Marjorie R.

    2002-03-01

    The ability to discriminate between sounds with different spectral shapes was evaluated for normal-hearing and hearing-impaired listeners. Listeners detected a 920-Hz tone added in phase to a single component of a standard consisting of the sum of five tones spaced equally on a logarithmic frequency scale ranging from 200 to 4200 Hz. An overall level randomization of 10 dB was either present or absent. In one subset of conditions, the no-perturbation conditions, the standard stimulus was the sum of equal-amplitude tones. In the perturbation conditions, the amplitudes of the components within a stimulus were randomly altered on every presentation. For both perturbation and no-perturbation conditions, thresholds for the detection of the 920-Hz tone were measured to compare sensitivity to changes in spectral shape between normal-hearing and hearing-impaired listeners. To assess whether hearing-impaired listeners relied on different regions of the spectrum to discriminate between sounds, spectral weights were estimated from the perturbed standards by correlating the listener's responses with the level differences per component across two intervals of a two-alternative forced-choice task. Results showed that hearing-impaired and normal-hearing listeners had similar sensitivity to changes in spectral shape. On average, across-frequency correlation functions also were similar for both groups of listeners, suggesting that as long as all components are audible and well separated in frequency, hearing-impaired listeners can use information across frequency as well as normal-hearing listeners. Analysis of the individual data revealed, however, that normal-hearing listeners may be better able to adopt optimal weighting schemes. This conclusion is only tentative, as differences in internal noise may need to be considered to interpret the results obtained from weighting studies between normal-hearing and hearing-impaired listeners.
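The weight-estimation step described here (correlating trial-by-trial responses with the per-component level differences between the two intervals of the forced-choice task) can be sketched in a few lines. The data shapes and the unit-sum normalization below are assumptions for illustration, not the study's exact analysis.

```python
import numpy as np

def spectral_weights(level_diffs, responses):
    """Estimate relative spectral weights from a perturbed 2AFC experiment.

    level_diffs: (n_trials, n_components) level difference in dB between
                 interval 2 and interval 1 for each component.
    responses:   (n_trials,) 1.0 if the listener chose interval 2, else 0.0.
    Returns per-component point-biserial correlations, normalized so the
    absolute weights sum to one.
    """
    r = np.array([np.corrcoef(level_diffs[:, k], responses)[0, 1]
                  for k in range(level_diffs.shape[1])])
    return r / np.sum(np.abs(r))
```

Components the listener relies on most correlate most strongly with the decisions, so they receive the largest weights; a flat weight profile indicates equal use of all audible components.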

  7. Noise-induced hearing loss in young adults: the role of personal listening devices and other sources of leisure noise.

    PubMed

    Mostafapour, S P; Lahargoue, K; Gates, G A

    1998-12-01

    No consensus exists regarding the magnitude of the risk of noise-induced hearing loss (NIHL) associated with leisure noise, in particular personal listening devices, in young adults. The objective was to examine the magnitude of hearing loss associated with personal listening devices and other sources of leisure noise in young adults. The study involved prospective auditory testing of college student volunteers with a retrospective history of exposure to home stereos, personal listening devices, firearms, and other sources of recreational noise. Subjects underwent audiologic examination consisting of estimation of pure-tone thresholds, speech reception thresholds, and word recognition at 45 dB HL. Fifty subjects aged 18 to 30 years were tested. All hearing thresholds of all subjects (save one: a unilateral 30 dB HL threshold at 6 kHz) were normal (i.e., 25 dB HL or better). A 10 dB threshold elevation (notch) in either ear at 3 to 6 kHz as compared with neighboring frequencies was noted in 11 (22%) subjects, and an unequivocal notch (15 dB or greater) in either ear was noted in 14 (28%) subjects. The presence or absence of any notch (small or large) did not correlate with any single or cumulative source of noise exposure. No difference in pure-tone threshold, speech reception threshold, or speech discrimination was found among subjects when segregated by noise exposure level. The majority of young users of personal listening devices are at low risk for substantive NIHL. Interpretation of the significance of these findings in relation to noise exposure must be made with caution: NIHL is an additive process, and even subtle deficits may contribute to unequivocal hearing loss with continued exposure. The low prevalence of measurable deficits in this study group may not exclude more substantive deficits in other populations with greater exposures. Continued education of young people about the risk to hearing from recreational noise exposure is warranted.

  8. Investigating the Role of Working Memory in Speech-in-noise Identification for Listeners with Normal Hearing.

    PubMed

    Füllgrabe, Christian; Rosen, Stuart

    2016-01-01

    With the advent of cognitive hearing science, increased attention has been given to individual differences in cognitive functioning and their explanatory power in accounting for inter-listener variability in understanding speech in noise (SiN). The psychological construct that has received most interest is working memory (WM), representing the ability to simultaneously store and process information. Common lore and theoretical models assume that WM-based processes subtend speech processing in adverse perceptual conditions, such as those associated with hearing loss or background noise. Empirical evidence confirms the association between WM capacity (WMC) and SiN identification in older hearing-impaired listeners. To assess whether WMC also plays a role when listeners without hearing loss process speech in acoustically adverse conditions, we surveyed published and unpublished studies in which the Reading-Span test (a widely used measure of WMC) was administered in conjunction with a measure of SiN identification. The survey revealed little or no evidence for an association between WMC and SiN performance. We also analysed new data from 132 normal-hearing participants sampled from across the adult lifespan (18-91 years), for a relationship between Reading-Span scores and identification of matrix sentences in noise. Performance on both tasks declined with age, and correlated weakly even after controlling for the effects of age and audibility (r = 0.39, p ≤ 0.001, one-tailed). However, separate analyses for different age groups revealed that the correlation was only significant for middle-aged and older groups but not for the young (< 40 years) participants.

  9. Perception of stochastic envelopes by normal-hearing and cochlear-implant listeners

    PubMed Central

    Gomersall, Philip A.; Turner, Richard E.; Baguley, David M.; Deeks, John M.; Gockel, Hedwig E.; Carlyon, Robert P.

    2016-01-01

    We assessed auditory sensitivity to three classes of temporal-envelope statistics (modulation depth, modulation rate, and comodulation) that are important for the perception of ‘sound textures’. The textures were generated by a probabilistic model that prescribes the temporal statistics of a selected number of modulation envelopes, superimposed onto noise carriers. Discrimination thresholds were measured for normal-hearing (NH) listeners and users of a MED-EL pulsar cochlear implant (CI), for separate manipulations of the average rate and modulation depth of the envelope in each frequency band of the stimulus, and of the co-modulation between bands. Normal-hearing (NH) listeners' discrimination of envelope rate was similar for baseline modulation rates of 5 and 34 Hz, and much poorer than previously reported for sinusoidally amplitude-modulated sounds. In contrast, discrimination of model parameters that controlled modulation depth was poorer at the lower baseline rate, consistent with the idea that, at the lower rate, subjects get fewer ‘looks’ at the relevant information when comparing stimuli differing in modulation depth. NH listeners could discriminate differences in co-modulation across bands; a multidimensional scaling study revealed that this was likely due to genuine across-frequency processing, rather than within-channel cues. CI users' discrimination performance was worse overall than for NH listeners, but showed a similar dependence on stimulus parameters. PMID:26706708

  10. Dichotic Listening and Otoacoustic Emissions: Shared Variance between Cochlear Function and Dichotic Listening Performance in Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Markevych, Vladlena; Asbjornsen, Arve E.; Lind, Ola; Plante, Elena; Cone, Barbara

    2011-01-01

    The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects (ages 18 to 39 years), balanced for gender, with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a…

  11. Perception of Speech Produced by Native and Nonnative Talkers by Listeners with Normal Hearing and Listeners with Cochlear Implants

    ERIC Educational Resources Information Center

    Ji, Caili; Galvin, John J.; Chang, Yi-ping; Xu, Anting; Fu, Qian-Jie

    2014-01-01

    Purpose: The aim of this study was to evaluate the understanding of English sentences produced by native (English) and nonnative (Spanish) talkers by listeners with normal hearing (NH) and listeners with cochlear implants (CIs). Method: Sentence recognition in noise was measured in adult subjects with CIs and subjects with NH, all of whom were…

  12. Aided speech recognition in single-talker competition by elderly hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Coughlin, Maureen; Humes, Larry

    2004-05-01

    This study examined the speech-identification performance in one-talker interference conditions that increased in complexity while audibility was ensured over a wide bandwidth (200-4000 Hz). Factorial combinations of three independent variables were used to vary the amount of informational masking. These variables were: (1) competition playback direction (forward or reverse); (2) gender match between target and competition talkers (same or different); and (3) target talker uncertainty (one of three possible talkers from trial to trial). Four groups of listeners, two elderly hearing-impaired groups differing in age (65-74 and 75-84 years) and two young normal-hearing groups, were tested. One of the groups of young normal-hearing listeners was tested under acoustically equivalent test conditions and one was tested under perceptually equivalent test conditions. The effect of each independent variable on speech-identification performance and informational masking was generally consistent with expectations. Group differences in the observed informational masking were most pronounced for the oldest group of hearing-impaired listeners. The eight measures of speech-identification performance were found to be strongly correlated with one another, and individual differences in speech understanding performance among the elderly were found to be associated with age and level of education. [Work supported, in part, by NIA.]

  13. Normal-hearing listener preferences of music as a function of signal-to-noise-ratio

    NASA Astrophysics Data System (ADS)

    Barrett, Jillian G.

    2005-04-01

    Optimal signal-to-noise ratios (SNR) for speech discrimination are well-known, well-documented phenomena. Discrimination preferences and functions have been studied for both normal-hearing and hard-of-hearing populations, and information from these studies has provided clearer indices on additional factors affecting speech discrimination ability and SNR preferences. This knowledge lends itself to improvements in hearing aids and amplification devices, telephones, television and radio transmissions, and a wide arena of recorded media such as movies and music. This investigation was designed to identify the preferred signal-to-background ratio (SBR) of normal-hearing listeners in a musical setting. The signal was the singer's voice, and music was considered the background. Subjects listened to an unfamiliar ballad with a female singer, and rated seven different SBR treatments. When listening to melodic motifs with linguistic content, results indicated subjects preferred SBRs similar to those in conventional speech discrimination applications. However, unlike traditional speech discrimination studies, subjects did not prefer increased levels of SBR. Additionally, subjects had a much larger acceptable range of SBR in melodic motifs where the singer's voice was not intended to communicate via linguistic means, but by the pseudo-paralinguistic means of vocal timbre and harmonic arrangements. Results indicate further studies investigating perception of singing are warranted.
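The signal-to-background ratio (SBR) being rated is simply a level difference in dB between the singer's voice and the accompaniment. A minimal sketch of computing an SBR from RMS levels and scaling a background track to a target SBR; the helper names are hypothetical, not from the study.

```python
import numpy as np

def sbr_db(voice, background):
    """Signal-to-background ratio in dB, from RMS levels (sketch)."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(voice) / rms(background))

def scale_background(voice, background, target_db):
    """Scale `background` so that sbr_db(voice, scaled) equals target_db."""
    gain = 10 ** ((sbr_db(voice, background) - target_db) / 20)
    return gain * background
```

Generating the seven SBR treatments described then amounts to mixing the voice with seven differently scaled copies of the accompaniment.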

  14. Categorical loudness scaling and equal-loudness contours in listeners with normal hearing and hearing loss

    PubMed Central

    Rasetshwane, Daniel M.; Trevino, Andrea C.; Gombert, Jessa N.; Liebig-Trehearn, Lauren; Kopun, Judy G.; Jesteadt, Walt; Neely, Stephen T.; Gorga, Michael P.

    2015-01-01

    This study describes procedures for constructing equal-loudness contours (ELCs) in units of phons from categorical loudness scaling (CLS) data and characterizes the impact of hearing loss on these estimates of loudness. Additionally, this study developed a metric, level-dependent loudness loss, which uses CLS data to specify the deviation from normal loudness perception at various loudness levels and as a function of frequency for an individual listener with hearing loss. CLS measurements were made in 87 participants with hearing loss and 61 participants with normal hearing. An assessment of the reliability of CLS measurements was conducted on a subset of the data. CLS measurements were reliable. There was a systematic increase in the slope of the low-level segment of the CLS functions with increasing degree of hearing loss. ELCs derived from CLS measurements were similar to standardized ELCs (International Organization for Standardization, ISO 226:2003). The presence of hearing loss decreased the vertical spacing of the ELCs, reflecting loudness recruitment and reduced cochlear compression. Representing CLS data in phons may lead to wider acceptance of CLS measurements. Like the audiogram that specifies hearing loss at threshold, level-dependent loudness loss describes the deficit for suprathreshold sounds. Such information may have implications for the fitting of hearing aids. PMID:25920842
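Reading one equal-loudness contour off CLS data amounts to finding, at each frequency, the level that yields the same categorical loudness as a 1000-Hz tone at the phon level in question. A minimal interpolation sketch, assuming monotonic CLS functions stored per frequency; the dict layout is an assumption, and the study's actual fitting procedure is more elaborate.

```python
import numpy as np

def elc_from_cls(cls_levels, cls_cu, freqs, phon):
    """Sketch: derive one equal-loudness contour from CLS data.

    cls_levels[f]: dB SPL test levels at frequency f (must include f=1000)
    cls_cu[f]:     categorical loudness units (CU) at those levels,
                   assumed monotonically increasing with level
    An N-phon contour connects, at each frequency, the level producing
    the same CU as a 1000-Hz tone at N dB SPL.
    """
    ref_cu = np.interp(phon, cls_levels[1000], cls_cu[1000])
    return {f: float(np.interp(ref_cu, cls_cu[f], cls_levels[f]))
            for f in freqs}
```

With loudness recruitment, a hearing-impaired CLS function rises more steeply from an elevated threshold, so successive contours derived this way sit closer together in level, matching the reduced vertical spacing the study reports.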

  15. The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children

    ERIC Educational Resources Information Center

    Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin

    2012-01-01

    Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…

  16. Talker differences in clear and conversational speech: Vowel intelligibility for normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Hargus Ferguson, Sarah

    2004-10-01

    Several studies have shown that when a talker is instructed to speak as though talking to a hearing-impaired person, the resulting "clear" speech is significantly more intelligible than typical conversational speech. While variability among talkers during speech production is well known, only one study to date [Gagné et al., J. Acad. Rehab. Audiol. 27, 135-158 (1994)] has directly examined differences among talkers producing clear and conversational speech. Data from that study, which utilized ten talkers, suggested that talkers vary in the extent to which they improve their intelligibility by speaking clearly. Similar variability can also be seen in studies using smaller groups of talkers [e.g., Picheny, Durlach, and Braida, J. Speech Hear. Res. 28, 96-103 (1985)]. In the current paper, clear and conversational speech materials were recorded from 41 male and female talkers aged 18 to 45 years. A listening experiment demonstrated that for normal-hearing listeners in noise, vowel intelligibility varied widely among the 41 talkers for both speaking styles, as did the magnitude of the speaking style effect. While female talkers showed a larger clear speech vowel intelligibility benefit than male talkers, neither talker age nor prior experience communicating with hearing-impaired listeners significantly affected the speaking style effect.

  17. Spatial release of cognitive load measured in a dual-task paradigm in normal-hearing and hearing-impaired listeners.

    PubMed

    Xia, Jing; Nooraei, Nazanin; Kalluri, Sridhar; Edwards, Brent

    2015-04-01

    This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location. Results showed that when gender cues were available, a 15° spatial separation between talkers reduced the cognitive load of listening even though it did not provide further improvement in speech recognition (Experiment I). Compared to normal-hearing listeners, large individual variability in spatial release of cognitive load was observed among hearing-impaired listeners. Cognitive load was lower when talkers were spatially separated by 60° than when talkers were of different genders, even though speech recognition was comparable in these two conditions (Experiment II). These results suggest that a measure of cognitive load might provide valuable insight into the benefit of spatial cues in multi-talker environments.

  18. Listening effort and perceived clarity for normal-hearing children with the use of digital noise reduction.

    PubMed

    Gustafson, Samantha; McCreery, Ryan; Hoover, Brenda; Kopun, Judy G; Stelmachowicz, Pat

    2014-01-01

    The goal of this study was to evaluate how digital noise reduction (DNR) impacts listening effort and judgment of sound clarity in children with normal hearing. It was hypothesized that when two DNR algorithms differing in signal-to-noise ratio (SNR) output are compared, the algorithm that provides the greatest improvement in overall output SNR will reduce listening effort and receive a better clarity rating from child listeners. A secondary goal was to evaluate the relation between the inversion method measurements and listening effort with DNR processing. Twenty-four children with normal hearing (ages 7 to 12 years) participated in a speech recognition task in which consonant-vowel-consonant nonwords were presented in broadband background noise. Test stimuli were recorded through two hearing aids with DNR off and DNR on at 0 dB and +5 dB input SNR. Stimuli were presented to listeners and verbal response time (VRT) and phoneme recognition scores were measured. The underlying assumption was that an increase in VRT reflects an increase in listening effort. Children rated the sound clarity for each condition. The two commercially available HAs were chosen based on: (1) an inversion technique, which was used to quantify the magnitude of change in SNR with the activation of DNR, and (2) a measure of magnitude-squared coherence, which was used to ensure that DNR in both devices preserved the spectrum. One device provided a greater improvement in overall output SNR than the other. Both DNR algorithms resulted in minimal spectral distortion as measured using coherence. For both devices, VRT decreased for the DNR-on condition, suggesting that listening effort decreased with DNR in both devices. Clarity ratings were also better in the DNR-on condition for both devices. The device showing the greatest improvement in output SNR with DNR engaged improved phoneme recognition scores. The magnitude of this improved phoneme recognition was not accurately predicted with
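The inversion technique used here to quantify a device's output SNR is commonly implemented as the phase-inversion method: record the device's output twice, once with the noise polarity inverted, so that summing the recordings isolates the processed speech and differencing isolates the processed noise. A sketch under the assumption of near-linear, time-invariant processing:

```python
import numpy as np

def output_snr_db(rec_sn, rec_s_minus_n):
    """Phase-inversion estimate of a device's output SNR (sketch).

    rec_sn:        device output recorded for speech + noise
    rec_s_minus_n: device output recorded for speech + inverted noise
    Adding the two recordings cancels the noise; subtracting cancels
    the speech (assuming near-linear processing and aligned recordings).
    """
    speech = 0.5 * (rec_sn + rec_s_minus_n)
    noise = 0.5 * (rec_sn - rec_s_minus_n)
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(speech) / rms(noise))
```

Comparing this estimate with the digital noise reduction engaged versus bypassed gives the SNR improvement used to rank the two devices.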

  19. Factors affecting speech understanding in gated interference: Cochlear implant users and normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Nelson, Peggy B.; Jin, Su-Hyun

    2004-05-01

    Previous work [Nelson, Jin, Carney, and Nelson (2003), J. Acoust. Soc. Am. 113, 961-968] suggested that cochlear implant users do not benefit from masking release when listening in modulated noise. The previous findings indicated that implant users experience little to no release from masking when identifying sentences in speech-shaped noise, regardless of the modulation frequency applied to the noise. The lack of masking release occurred for all implant subjects, who were using three different devices and speech processing strategies. In the present study, possible causes of this reduced masking release in implant listeners were investigated. Normal-hearing listeners, implant users, and normal-hearing listeners presented with a four-band simulation of a cochlear implant were tested for their understanding of sentences in gated noise (1-32 Hz gate frequencies) when the duty cycle of the noise was varied from 25% to 75%. No systematic effect of noise duty cycle on implant and simulation listeners' performance was noted, indicating that the masking caused by gated noise is not only energetic masking. Masking release significantly increased when the number of spectral channels was increased from 4 to 12 for simulation listeners, suggesting that spectral resolution is important for masking release. Listeners were also tested for their understanding of gated sentences (sentences in quiet interrupted by periods of silence ranging from 1 to 32 Hz) as a measure of auditory fusion, or the ability to integrate speech across temporal gaps. Implant and simulation listeners had significant difficulty understanding gated sentences at every gate frequency. When the number of spectral channels was increased for simulation listeners, their ability to understand gated sentences improved significantly.
Findings suggest that implant listeners' difficulty understanding speech in modulated conditions is related to at least two (possibly related) factors: degraded spectral information and

  20. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  1. Hearing in Noise Test Brazil: standardization for young adults with normal hearing.

    PubMed

    Sbompato, Andressa Forlevise; Corteletti, Lilian Cassia Bornia Jacob; Moret, Adriane de Lima Mortari; Jacob, Regina Tangerino de Souza

    2015-01-01

    Individuals with the same speech recognition ability in quiet can have extremely different results in noisy environments. The objective was to standardize speech perception in adults with normal hearing in the free field using the Brazilian Hearing in Noise Test. This was a contemporary, cross-sectional cohort study. Seventy-nine adults with normal hearing and without cognitive impairment participated in the study. Lists of Hearing in Noise Test sentences were presented randomly in quiet, noise front, noise right, and noise left conditions. There were no significant differences between right and left ears at all frequencies tested (paired t-test). Nor were significant differences observed when comparing gender and interaction between these conditions. A difference was observed among the free field positions tested, except between the noise right and noise left conditions. Results of speech perception in adults with normal hearing in the free field during different listening situations in noise indicated poorer performance during the condition with noise and speech in front, i.e., 0°/0°. The values found in the standardization of the Hearing in Noise Test free field can be used as a reference in the development of protocols for tests of speech perception in noise, and for monitoring individuals with hearing impairment. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  2. Effects of Noise on Speech Recognition and Listening Effort in Children with Normal Hearing and Children with Mild Bilateral or Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-01-01

    Purpose: This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Method Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL,…

  3. High-frequency amplification and sound quality in listeners with normal through moderate hearing loss.

    PubMed

    Ricketts, Todd A; Dittberner, Andrew B; Johnson, Earl E

    2008-02-01

    One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the magnitude of hearing loss in individual listeners. Ten participants with normal hearing and 20 participants with mild-to-moderate hearing loss completed the study. Signals were processed using hearing aid-style compression algorithms and filtered using two cutoff frequencies, 5.5 and 9 kHz, which were selected to represent bandwidths that are achievable in modern hearing aids. Round-robin paired comparisons based on the criterion of preferred sound quality were made for two different monaurally presented brief sound segments: music and a movie. Results revealed that preference for either the wider or narrower bandwidth (9- or 5.5-kHz cutoff frequency, respectively) was correlated with the slope of hearing loss from 4 to 12 kHz, with steep threshold slopes associated with preference for narrower bandwidths. Consistent preference for wider bandwidth is present in some listeners with mild-to-moderate hearing loss.

  4. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    PubMed

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations, this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
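The cochlear compression manipulated in the model above can be pictured as a broken-stick input-output function; a minimal sketch, assuming illustrative knee points and a compression exponent (these numbers are not the study's actual model parameters):

```python
def bm_output_db(input_db, knee_lo=30.0, knee_hi=80.0, exponent=0.25):
    """Broken-stick basilar-membrane I/O function (illustrative).

    Linear below knee_lo and above knee_hi; compressive (slope =
    exponent, in dB/dB) between the knees. Setting exponent = 1.0
    models the linearized cochlea often observed in HI listeners.
    """
    if input_db <= knee_lo:
        return input_db
    if input_db <= knee_hi:
        return knee_lo + exponent * (input_db - knee_lo)
    # resume linear growth above the upper knee
    return bm_output_db(knee_hi, knee_lo, knee_hi, exponent) + (input_db - knee_hi)

# Normal compression squeezes the 50 dB mid-level input range into
# 0.25 * 50 = 12.5 dB of output; a linearized cochlea (exponent = 1)
# passes the full 50 dB range through unchanged.
normal = bm_output_db(80.0) - bm_output_db(30.0)
linear = bm_output_db(80.0, exponent=1.0) - bm_output_db(30.0, exponent=1.0)
```

Keeping audibility constant while varying only the exponent, as the study describes, corresponds to holding the low-level segment fixed and changing the mid-level slope.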

  5. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability among Listeners with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Shinn-Cunningham, Barbara

    2017-01-01

    Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their…

  6. Internalized elevation perception of simple stimuli in cochlear-implant and normal-hearing listeners

    PubMed Central

    Thakkar, Tanvi; Goupell, Matthew J.

    2014-01-01

    In normal-hearing (NH) listeners, elevation perception is produced by the spectral cues imposed by the pinna, head, and torso. Elevation perception in cochlear-implant (CI) listeners appears to be non-existent; this may be a result of poorly encoded spectral cues. In this study, an analog of elevation perception was investigated by having 15 CI and 8 NH listeners report the intracranial location of spectrally simple signals (single-electrode or bandlimited acoustic stimuli, respectively) in both horizontal and vertical dimensions. Thirteen CI listeners and all of the NH listeners showed an association between place of stimulation (i.e., stimulus frequency) and perceived elevation, generally responding with higher elevations for more basal stimulation. This association persisted in the presence of a randomized temporal pitch, suggesting that listeners were not associating pitch with elevation. These data provide evidence that CI listeners might perceive changes in elevation if they were presented stimuli with sufficiently salient elevation cues. PMID:25096117

  7. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  8. Nonword Repetition by Children with Cochlear Implants: Accuracy Ratings from Normal-Hearing Listeners.

    ERIC Educational Resources Information Center

    Dillon, Caitlin M.; Burkholder, Rose A.; Cleary, Miranda; Pisoni, David B.

    2004-01-01

    Seventy-six children with cochlear implants completed a nonword repetition task. The children were presented with 20 nonword auditory patterns over a loudspeaker and were asked to repeat them aloud to the experimenter. The children's responses were recorded on digital audiotape and then played back to normal-hearing adult listeners to obtain…

  9. Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners

    ERIC Educational Resources Information Center

    Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie

    2012-01-01

    Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…

  10. Why middle-aged listeners have trouble hearing in everyday settings.

    PubMed

    Ruggles, Dorea; Bharadwaj, Hari; Shinn-Cunningham, Barbara G

    2012-08-07

Anecdotally, middle-aged listeners report difficulty conversing in social settings, even when they have normal audiometric thresholds [1-3]. Moreover, young adult listeners with "normal" hearing vary in their ability to selectively attend to speech amid similar streams of speech. Ignoring age, these individual differences correlate with physiological differences in temporal coding precision present in the auditory brainstem, suggesting that the fidelity of encoding of suprathreshold sound helps explain individual differences [4]. Here, we revisit the conundrum of whether early aging influences an individual's ability to communicate in everyday settings. Although absolute selective attention ability is not predicted by age, reverberant energy interferes more with selective attention as age increases. Breaking the brainstem response down into components corresponding to coding of stimulus fine structure and envelope, we find that age alters which brainstem component predicts performance. Specifically, middle-aged listeners appear to rely heavily on temporal fine structure, which is more disrupted by reverberant energy than temporal envelope structure is. In contrast, the fidelity of envelope cues predicts performance in younger adults. These results hint that temporal envelope cues influence spatial hearing in reverberant settings more than is commonly appreciated and help explain why middle-aged listeners have particular difficulty communicating in daily life.
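The decomposition of the brainstem response into envelope and fine-structure components is conventionally done by adding and subtracting averaged responses to opposite-polarity stimuli; a toy sketch of that arithmetic (made-up numbers, not real FFR data):

```python
# Responses to opposite-polarity stimuli share the envelope-following
# component but flip the fine-structure-following component, so adding
# isolates the envelope and subtracting isolates the fine structure.
def decompose(resp_pos, resp_neg):
    env = [(p + n) / 2 for p, n in zip(resp_pos, resp_neg)]
    tfs = [(p - n) / 2 for p, n in zip(resp_pos, resp_neg)]
    return env, tfs

# Hypothetical responses built from an envelope part e and a
# fine-structure part f, purely to illustrate the cancellation.
e = [0.0, 1.0, 2.0, 1.0]
f = [0.5, -0.5, 0.5, -0.5]
pos = [ei + fi for ei, fi in zip(e, f)]   # response to one polarity
neg = [ei - fi for ei, fi in zip(e, f)]   # response to inverted polarity
env, tfs = decompose(pos, neg)            # recovers e and f exactly
```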

  11. Informational Masking and Spatial Hearing in Listeners with and without Unilateral Hearing Loss

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Purpose: This study assessed selective listening for speech in individuals with and without unilateral hearing loss (UHL) and the potential relationship between spatial release from informational masking and localization ability in listeners with UHL. Method: Twelve adults with UHL and 12 normal-hearing controls completed a series of monaural and…

  12. Listening effort and perceived clarity for normal hearing children with the use of digital noise reduction

    PubMed Central

    Gustafson, Samantha; McCreery, Ryan; Hoover, Brenda; Kopun, Judy G; Stelmachowicz, Pat

    2013-01-01

Objectives The goal of this study was to evaluate how digital noise reduction (DNR) impacts listening effort and judgment of sound clarity in children with normal hearing. It was hypothesized that, when two DNR algorithms differing in signal-to-noise ratio (SNR) output are compared, the algorithm which provides the greatest improvement in overall output SNR will reduce listening effort and receive a better clarity rating from child listeners. A secondary goal was to evaluate the relation between the inversion method measurements and listening effort with DNR processing. Design Twenty-four children with normal hearing (ages 7-12 years) participated in a speech recognition task in which consonant-vowel-consonant nonwords were presented in broadband background noise. Test stimuli were recorded through two hearing aids with DNR-off and DNR-on at 0 dB and +5 dB input SNR. Stimuli were presented to listeners and verbal response time (VRT) and phoneme recognition scores were measured. The underlying assumption was that an increase in VRT reflects an increase in listening effort. Children rated the sound clarity for each condition. The two commercially available HAs were chosen based on: 1) an inversion technique which was used to quantify the magnitude of change in SNR with the activation of DNR, and 2) a measure of magnitude-squared coherence which was used to ensure that DNR in both devices preserved the spectrum. Results One device provided a greater improvement in overall output SNR than the other. Both DNR algorithms resulted in minimal spectral distortion as measured using coherence. For both devices, VRT decreased for the DNR-on condition, suggesting that listening effort decreased with DNR in both devices. Clarity ratings were also better in the DNR-on condition for both devices. The device showing the greatest improvement in output SNR with DNR engaged improved phoneme recognition scores. The magnitude of this improved phoneme recognition was not accurately…
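The inversion technique mentioned in the abstract is typically realized by recording the device output twice with the noise polarity flipped at the input; a minimal sketch assuming a linear, time-invariant device and perfectly repeatable recordings (illustrative samples, not the study's stimuli):

```python
import math

def output_snr_db(out_noise_normal, out_noise_inverted):
    """Estimate output SNR from two recordings that differ only in the
    polarity of the noise at the input (the phase-inversion method).
    Summing cancels the noise and leaves the speech; subtracting
    cancels the speech and leaves the noise."""
    speech = [(a + b) / 2 for a, b in zip(out_noise_normal, out_noise_inverted)]
    noise = [(a - b) / 2 for a, b in zip(out_noise_normal, out_noise_inverted)]
    p_s = sum(x * x for x in speech)
    p_n = sum(x * x for x in noise)
    return 10 * math.log10(p_s / p_n)

# Illustrative mixture: identical "speech" in both recordings, with the
# noise polarity inverted in the second one.
speech = [0.5, -0.3, 0.8, -0.1]
noise = [0.1, 0.2, -0.1, 0.05]
rec1 = [s + n for s, n in zip(speech, noise)]
rec2 = [s - n for s, n in zip(speech, noise)]
snr = output_snr_db(rec1, rec2)
```

Running the same pair of recordings through a device with DNR off and on, and differencing the two SNR estimates, yields the "magnitude of change in SNR" the study used to choose its hearing aids.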

  13. Comparison of single-microphone noise reduction schemes: can hearing impaired listeners tell the difference?

    PubMed

    Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger

    2018-06-01

The perceived qualities of nine different single-microphone noise reduction (SMNR) algorithms were evaluated and compared in subjective listening tests with normal hearing and hearing impaired (HI) listeners. Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortions, intrusiveness of background noise, listening effort, and overall quality using a simplified MUSHRA (ITU-R, 2003) assessment method. Eighteen normal hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal hearing subjects clearly differentiated between different SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal hearing subjects. It may be harder for HI listeners to distinguish between additive noise and speech distortions, and/or they may have a different understanding of the term "speech distortion" than normal hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.

  14. Virtual Auditory Space Training-Induced Changes of Auditory Spatial Processing in Listeners with Normal Hearing.

    PubMed

    Nisha, Kavassery Venkateswaran; Kumar, Ajith Uppunda

    2017-04-01

Localization involves processing of subtle yet highly enriched monaural and binaural spatial cues. Remediation programs aimed at resolving spatial deficits are surprisingly scarce in the literature. The present study was designed to explore the changes that occur in the spatial performance of normal-hearing listeners before and after subjecting them to a virtual acoustic space (VAS) training paradigm, using behavioral and electrophysiological measures. Ten normal-hearing listeners participated in the study, which was conducted in three phases: a pre-training, training, and post-training phase. At the pre- and post-training phases, both behavioral measures of spatial acuity and the electrophysiological P300 were administered. The spatial acuity of the participants in the free field and closed field was measured, apart from quantifying their binaural processing abilities. The training phase consisted of 5-8 sessions (20 min each) carried out using a hierarchy of graded VAS stimuli. The results obtained from descriptive statistics were indicative of an improvement in all the spatial acuity measures in the post-training phase. Statistically significant changes were noted in interaural time difference (ITD) and virtual acoustic space identification scores measured in the post-training phase. Effect sizes (r) for all of these measures were substantially large, indicating the clinical relevance of these measures in documenting the impact of training. However, the same was not reflected in P300. The training protocol used in the present study proves, on a preliminary basis, to be effective in normal-hearing listeners, and its implications can be extended to other clinical populations as well.

  15. Loudness judgment procedures for evaluating hearing aid preselection decisions for severely and profoundly hearing-impaired listeners.

    PubMed

    Gottermeier, L; De Filippo, C L; Block, M G

    1991-08-01

    Hearing aid fitting involves a two-phase process of preselection and evaluation (Seewald RC and Ross M. Amplification for the Hearing Impaired 1988:213-271). The purpose of the present study was to examine alternative procedures that clinicians might use in the evaluation phase to verify the adequacy of hearing aid preselection decisions for severely and profoundly hearing-impaired listeners. Bekesy tracking, loudness rating, and conventional bracketing procedures were used to determine threshold, most comfortable listening level, and uncomfortable listening level for 10 hearing-impaired young adults. Stimuli were pulsed pure tones of 500, 1000, and 2000 Hz and filtered words. Means and standard deviations of most comfortable listening levels and uncomfortable listening levels derived from loudness judgments of the 10 subjects showed only nominal differences across procedures. However, correlation analysis (Pearson r) indicated that individuals responded to the three procedures in varying ways, producing different loudness judgments and overall dynamic ranges. Thus, test procedure may influence the clinician's final evaluation of a preselected hearing aid. Initial work suggests that closed-set response categories such as loudness rating can limit measurement variability and potentially guide the clinician's evaluation of hearing aid preselection decisions.

  16. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    NASA Astrophysics Data System (ADS)

    Samardzic, Nikolina

The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown, and the metrics have not been compared in the literature against the subjective perception of speech intelligibility using, for example, the same speech material. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry, utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly,…
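At its core, the Speech Intelligibility Index referenced above is an importance-weighted sum of band audibilities (ANSI S3.5); a heavily simplified sketch with hypothetical band-importance weights, omitting the standard's level-distortion and masking corrections:

```python
def simplified_sii(band_snrs_db, band_importance):
    """Toy SII: each band's SNR is clipped to [-15, +15] dB, mapped to
    an audibility between 0 and 1, and weighted by band importance
    (weights must sum to 1). The real SII (ANSI S3.5) adds level
    distortion and masking terms omitted here."""
    assert abs(sum(band_importance) - 1.0) < 1e-9
    sii = 0.0
    for snr, imp in zip(band_snrs_db, band_importance):
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        sii += imp * audibility
    return sii

# Fully audible speech in every band gives SII = 1.0; speech 15 dB or
# more below the noise in every band gives 0.0.
weights = [0.2, 0.3, 0.3, 0.2]                        # hypothetical weights
good = simplified_sii([20, 18, 25, 16], weights)      # -> 1.0
poor = simplified_sii([-20, -16, -15, -18], weights)  # -> 0.0
```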

  17. Vowel Identification by Listeners with Hearing Impairment in Response to Variation in Formant Frequencies

    ERIC Educational Resources Information Center

    Molis, Michelle R.; Leek, Marjorie R.

    2011-01-01

    Purpose: This study examined the influence of presentation level and mild-to-moderate hearing loss on the identification of a set of vowel tokens systematically varying in the frequency locations of their second and third formants. Method: Five listeners with normal hearing (NH listeners) and five listeners with hearing impairment (HI listeners)…

  18. Temporal masking functions for listeners with real and simulated hearing loss

    PubMed Central

    Desloge, Joseph G.; Reed, Charlotte M.; Braida, Louis D.; Perez, Zachary D.; Delhorne, Lorraine A.

    2011-01-01

    A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal masking functions for eight listeners with mild to severe sensorineural hearing loss. Each audiometric loss was simulated in a group of age-matched normal-hearing listeners through a combination of spectrally-shaped masking noise and multi-band expansion. Temporal-masking functions were obtained in both groups of listeners using a forward-masking paradigm in which the level of a 110-ms masker required to just mask a 10-ms fixed-level probe (5-10 dB SL) was measured as a function of the time delay between the masker offset and probe onset. At each of four probe frequencies (500, 1000, 2000, and 4000 Hz), temporal-masking functions were obtained using maskers that were 0.55, 1.0, and 1.15 times the probe frequency. The slopes and y-intercepts of the masking functions were not significantly different for listeners with real and simulated hearing loss. The y-intercepts were positively correlated with level of hearing loss while the slopes were negatively correlated. The ratio of the slopes obtained with the low-frequency maskers relative to the on-frequency maskers was similar for both groups of listeners and indicated a smaller compressive effect than that observed in normal-hearing listeners. PMID:21877806

  19. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    PubMed

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  20. Assessment of a directional microphone array for hearing-impaired listeners.

    PubMed

    Soede, W; Bilsen, F A; Berkhout, A J

    1993-08-01

Hearing-impaired listeners often have great difficulty understanding speech in surroundings with background noise or reverberation. Based on array techniques, two microphone prototypes (broadside and endfire) have been developed with strongly directional characteristics [Soede et al., "Development of a new directional hearing instrument based on array technology," J. Acoust. Soc. Am. 94, 785-798 (1993)]. Physical measurements show that the arrays attenuate reverberant sound by 6 dB (free-field) and can improve the signal-to-noise ratio by 7 dB in a diffuse noise field (measured with a KEMAR manikin). For the clinical assessment of these microphones, an experimental setup was made in a sound-insulated listening room with one loudspeaker in front of the listener simulating the partner in a discussion and eight loudspeakers placed on the edges of a cube producing a diffuse background noise. The hearing-impaired subject, wearing his own (familiar) hearing aid, is placed in the center of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omnidirectional microphone and with one of the microphone arrays. The results of monaural listening tests with hearing-impaired subjects show that, in comparison with an omnidirectional hearing-aid microphone, the broadside and endfire microphone arrays give mean improvements of the speech-reception threshold in noise of 7.0 dB (26 subjects) and 6.8 dB (27 subjects), respectively. Binaural listening with two endfire microphone arrays gives a binaural improvement comparable to that obtained by listening with two normal ears or two conventional hearing aids.
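The roughly 7 dB array gain reported above is consistent with the idealized delay-and-sum picture, in which time-aligned speech adds coherently across microphones while diffuse noise adds incoherently; a toy simulation of that idealization (synthetic signals, not the prototype arrays):

```python
import math
import random

# Idealized delay-and-sum sketch: after time alignment, the target is
# identical on every microphone while diffuse noise is uncorrelated, so
# averaging M channels improves SNR by about 10*log10(M) dB. Real
# arrays fall short of this ideal in reverberant, correlated fields.
random.seed(0)

M, N = 5, 20000
speech = [math.sin(2 * math.pi * 200 * t / 16000) for t in range(N)]
mics = [[s + random.gauss(0, 1.0) for s in speech] for _ in range(M)]

def snr_db(mix, target):
    p_t = sum(x * x for x in target)
    p_n = sum((m - x) ** 2 for m, x in zip(mix, target))
    return 10 * math.log10(p_t / p_n)

beamformed = [sum(ch[i] for ch in mics) / M for i in range(N)]
gain = snr_db(beamformed, speech) - snr_db(mics[0], speech)
# gain comes out near 10*log10(5), i.e. about 7 dB, for this ideal case
```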

  1. Aided and Unaided Speech Perception by Older Hearing Impaired Listeners

    PubMed Central

    Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William

    2015-01-01

The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.

  2. Dichotic listening and otoacoustic emissions: shared variance between cochlear function and dichotic listening performance in adults with normal hearing.

    PubMed

    Markevych, Vladlena; Asbjørnsen, Arve E; Lind, Ola; Plante, Elena; Cone, Barbara

    2011-07-01

The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects aged 18 to 39, balanced for gender, with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV syllables in nonforced, attention-right, and attention-left conditions. Transient evoked otoacoustic emissions (TEOAEs) were recorded for both ears, with and without the presentation of contralateral broadband noise. The main finding was a strong negative correlation between language laterality as measured with the dichotic listening task and the laterality of the TEOAE responses. The findings support a hypothesis of shared variance between central and peripheral auditory lateralities, and contribute to the attentional theory of auditory lateralization. The results have implications for the understanding of the cortico-fugal efferent control of cochlear activity.

  3. Temporal modulation transfer functions for listeners with real and simulated hearing loss

    PubMed Central

    Desloge, Joseph G.; Reed, Charlotte M.; Braida, Louis D.; Perez, Zachary D.; Delhorne, Lorraine A.

    2011-01-01

    A functional simulation of hearing loss was evaluated in its ability to reproduce the temporal modulation transfer functions (TMTFs) for nine listeners with mild to profound sensorineural hearing loss. Each hearing loss was simulated in a group of three age-matched normal-hearing listeners through spectrally shaped masking noise or a combination of masking noise and multiband expansion. TMTFs were measured for both groups of listeners using a broadband noise carrier as a function of modulation rate in the range 2 to 1024 Hz. The TMTFs were fit with a lowpass filter function that provided estimates of overall modulation-depth sensitivity and modulation cutoff frequency. Although the simulations were capable of accurately reproducing the threshold elevations of the hearing-impaired listeners, they were not successful in reproducing the TMTFs. On average, the simulations resulted in lower sensitivity and higher cutoff frequency than were observed in the TMTFs of the hearing-impaired listeners. Discrepancies in performance between listeners with real and simulated hearing loss are possibly related to inaccuracies in the simulation of recruitment. PMID:21682411
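The lowpass-filter description of the TMTF used above has two free parameters, overall modulation-depth sensitivity and cutoff frequency; a minimal sketch that fits a first-order lowpass form to synthetic thresholds by grid search (illustrative parameter grids, not the study's fitting procedure):

```python
import math

def tmtf_threshold_db(fm, sensitivity_db, cutoff_hz):
    """Predicted modulation-detection threshold (20*log10 m, so more
    negative = better) from a first-order lowpass TMTF: flat at
    sensitivity_db for slow rates, rising 3 dB at the cutoff."""
    return sensitivity_db + 10 * math.log10(1 + (fm / cutoff_hz) ** 2)

def fit_tmtf(rates_hz, thresholds_db):
    """Crude grid search for the (sensitivity, cutoff) pair minimizing
    squared error; a stand-in for a proper least-squares fit."""
    best = None
    for sens10 in range(-300, 1, 5):          # -30.0 .. 0.0 dB, 0.5 dB steps
        sens = sens10 / 10
        for cutoff in range(2, 1025):         # 2 .. 1024 Hz, 1 Hz steps
            err = sum((tmtf_threshold_db(f, sens, cutoff) - t) ** 2
                      for f, t in zip(rates_hz, thresholds_db))
            if best is None or err < best[0]:
                best = (err, sens, cutoff)
    return best[1], best[2]

# A synthetic TMTF generated from known parameters is recovered exactly.
rates = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
true_sens, true_cutoff = -25.0, 64
data = [tmtf_threshold_db(f, true_sens, true_cutoff) for f in rates]
sens, cutoff = fit_tmtf(rates, data)
```

In these terms, the study's finding is that the simulations produced a higher fitted sensitivity value (less negative, i.e. poorer sensitivity) and a higher fitted cutoff than the hearing-impaired listeners' own TMTFs.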

  4. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users.

    PubMed

    Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J

    2004-09-01

    The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.

  5. Signal-to-background-ratio preferences of normal-hearing listeners as a function of music

    NASA Astrophysics Data System (ADS)

    Barrett, Jillian G.

    2005-04-01

The primary purpose of speech is to convey a message. Many factors affect the listener's overall reception, several of which have little to do with the linguistic content itself, but rather with the delivery (e.g., prosody, intonation patterns, pragmatics, paralinguistic cues). Music, however, may convey a message either with or without linguistic content; in instances in which music has lyrics, one cannot assume the verbal content will take precedence over sonic properties. Singing introduces distortion of the vowel-consonant temporal ratio of speech, emphasizing vowels and de-emphasizing consonants. The phonemic production alterations of singing make it difficult for even those with normal hearing to understand the singer. This investigation was designed to identify singer-to-background-ratio (SBR) preferences for normal-hearing adult listeners (as opposed to SBR levels maximizing speech discrimination ability). Stimuli were derived from three different original songs, each produced in two different genres and sung by six different singers. Singer and genre were the two primary contributors to significant differences in SBR preferences, though results clearly indicate genre, style, and singer interact in different combinations for each song, each singer, and each subject in an unpredictable manner.

  6. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

    PubMed Central

    2017-01-01

    Purpose This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method The results from neuroscience and psychoacoustics are reviewed. Results In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.” Conclusions How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601617 PMID:29049598

  7. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    PubMed Central

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise. PMID:25566159
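Conditions like the nine used here are constructed by scaling the background relative to the speech to hit a nominal SNR. The following is a generic RMS-based numpy recipe, not the study's own calibration procedure, which is not specified in the abstract.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise
```

The same recipe works for any masker type (steady-state noise, modulated noise, or competing speech); only the noise waveform changes across conditions.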

  9. Effects of Hearing Impairment and Hearing Aid Amplification on Listening Effort: A Systematic Review.

    PubMed

    Ohlenforst, Barbara; Zekveld, Adriana A; Jansma, Elise P; Wang, Yang; Naylor, Graham; Lorens, Artur; Lunner, Thomas; Kramer, Sophia E

The evidence was evaluated according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1) as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. In summary, we could only identify scientific evidence from physiological measurement methods suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study populations, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.

  11. Rate discrimination at low pulse rates in normal-hearing and cochlear implant listeners: Influence of intracochlear stimulation site.

    PubMed

    Stahl, Pierre; Macherey, Olivier; Meunier, Sabine; Roman, Stéphane

    2016-04-01

Temporal pitch perception in cochlear implantees remains weaker than in normal-hearing listeners and is usually limited to rates below about 300 pulses per second (pps). Recent studies have suggested that stimulating the apical part of the cochlea may improve the temporal coding of pitch by cochlear implants (CIs), compared to stimulating other sites. The present study focuses on rate discrimination at low pulse rates (ranging from 20 to 104 pps). Two experiments measured and compared pulse rate difference limens (DLs) at four fundamental frequencies (ranging from 20 to 104 Hz) in both CI and normal-hearing (NH) listeners. Experiment 1 measured DLs in users of the Med-El device (Med-El, Innsbruck, Austria) for two electrodes (one apical and one basal). In experiment 2, DLs for NH listeners were compared for unresolved harmonic complex tones filtered in two frequency regions (lower cut-off frequencies of 1200 and 3600 Hz, respectively) and for different bandwidths. Pulse rate discrimination performance was significantly better when stimulation was provided by the apical electrode in CI users and by the lower-frequency tone complexes in NH listeners. This set of data appears consistent with better temporal coding when stimulation originates from apical regions of the cochlea.

  12. An algorithm that improves speech intelligibility in noise for normal-hearing listeners.

    PubMed

    Kim, Gibak; Lu, Yang; Hu, Yi; Loizou, Philipos C

    2009-09-01

Traditional noise-suppression algorithms have been shown to improve speech quality, but not speech intelligibility. Motivated by prior intelligibility studies of speech synthesized using the ideal binary mask, an algorithm is proposed that decomposes the input signal into time-frequency (T-F) units and makes binary decisions, based on a Bayesian classifier, as to whether each T-F unit is dominated by the target or the masker. Speech corrupted at low signal-to-noise ratio (SNR) levels (-5 and 0 dB) using different types of maskers is synthesized by this algorithm and presented to normal-hearing listeners for identification. Results indicated substantial improvements in intelligibility (over 60 percentage points in -5 dB babble) over that attained by human listeners with unprocessed stimuli. The findings from this study suggest that algorithms that can estimate reliably the SNR in each T-F unit can improve speech intelligibility.

  13. Melodic interval perception by normal-hearing listeners and cochlear implant users

    PubMed Central

    Luo, Xin; Masterson, Megan E.; Wu, Ching-Chih

    2014-01-01

    The perception of melodic intervals (sequential pitch differences) is essential to music perception. This study tested melodic interval perception in normal-hearing (NH) listeners and cochlear implant (CI) users. Melodic interval ranking was tested using an adaptive procedure. CI users had slightly higher interval ranking thresholds than NH listeners. Both groups' interval ranking thresholds, although not affected by root note, significantly increased with standard interval size and were higher for descending intervals than for ascending intervals. The pitch direction effect may be due to a procedural artifact or a difference in central processing. In another test, familiar melodies were played with all the intervals scaled by a single factor. Subjects rated how in tune the melodies were and adjusted the scaling factor until the melodies sounded the most in tune. CI users had lower final interval ratings and less change in interval rating as a function of scaling factor than NH listeners. For CI users, the root-mean-square error of the final scaling factors and the width of the interval rating function were significantly correlated with the average ranking threshold for ascending rather than descending intervals, suggesting that CI users may have focused on ascending intervals when rating and adjusting the melodies. PMID:25324084

  14. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

Kelly, R R; Tomlinson-Keasey, C

    1976-12-01

Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired and normal-hearing groups performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing-impaired group performed equally well with both modes (P/P and W/W), while the normal-hearing group did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.

  15. Influence of Hearing Risk Information on the Motivation and Modification of Personal Listening Device Use.

    PubMed

    Serpanos, Yula C; Berg, Abbey L; Renne, Brittany

    2016-12-01

    The purpose of this study was (a) to investigate the behaviors, knowledge, and motivators associated with personal listening device (PLD) use and (b) to determine the influence of different types of hearing health risk education information (text with or without visual images) on motivation to modify PLD listening use behaviors in young adults. College-age students (N = 523) completed a paper-and-pencil survey tapping their behaviors, knowledge, and motivation regarding listening to music or media at high volume using PLDs. Participants rated their motivation to listen to PLDs at lower volume levels following each of three information sets: text only, behind-the-ear hearing aid image with text, and inner ear hair cell damage image with text. Acoustically pleasing and emotional motives were the most frequently cited (38%-45%) reasons for listening to music or media using a PLD at high volume levels. The behind-the-ear hearing aid image with text information was significantly (p < .0001) more motivating to participants than text alone or the inner ear hair cell damage image with text. Evocative imagery using hearing aids may be an effective approach in hearing protective health campaigns for motivating safer listening practices with PLDs in young adults.

  16. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults with Normal Hearing but Not Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-01-01

    Purpose: Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Method: Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13…

  17. Human Frequency Following Response: Neural Representation of Envelope and Temporal Fine Structure in Listeners with Normal Hearing and Sensorineural Hearing Loss

    PubMed Central

    Ananthakrishnan, Saradha; Krishnan, Ananthanarayan; Bartlett, Edward

    2015-01-01

    Objective Listeners with sensorineural hearing loss (SNHL) typically experience reduced speech perception, which is not completely restored with amplification. This likely occurs because cochlear damage, in addition to elevating audiometric thresholds, alters the neural representation of speech transmitted to higher centers along the auditory neuroaxis. While the deleterious effects of SNHL on speech perception in humans have been well-documented using behavioral paradigms, our understanding of the neural correlates underlying these perceptual deficits remains limited. Using the scalp-recorded Frequency Following Response (FFR), the authors examine the effects of SNHL and aging on subcortical neural representation of acoustic features important for pitch and speech perception, namely the periodicity envelope (F0) and temporal fine structure (TFS) (formant structure), as reflected in the phase-locked neural activity generating the FFR. Design FFRs were obtained from 10 listeners with normal hearing (NH) and 9 listeners with mild-moderate SNHL in response to a steady-state English back vowel /u/ presented at multiple intensity levels. Use of multiple presentation levels facilitated comparisons at equal sound pressure level (SPL) and equal sensation level (SL). In a second follow-up experiment to address the effect of age on envelope and TFS representation, FFRs were obtained from 25 NH and 19 listeners with mild to moderately-severe SNHL to the same vowel stimulus presented at 80 dB SPL. Temporal waveforms, Fast Fourier Transform (FFT) and spectrograms were used to evaluate the magnitude of the phase-locked activity at F0 (periodicity envelope) and F1 (TFS). Results Neural representation of both envelope (F0) and TFS (F1) at equal SPLs was stronger in NH listeners compared to listeners with SNHL. 
Also, comparison of neural representation of F0 and F1 across stimulus levels expressed in SPL and SL (accounting for audibility) revealed that level-related changes in F0
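The FFT-based quantification described in this record amounts to reading off spectral magnitude at F0 and F1 from the response waveform. Below is a toy numpy version with a synthetic two-component "FFR"; the frequencies, duration, and analysis window are illustrative, and real FFR analysis operates on averaged scalp recordings.

```python
import numpy as np

def component_magnitude(x, fs, freq, bw_hz=10.0):
    """Peak windowed-FFT magnitude within +/- bw_hz of freq."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (f >= freq - bw_hz) & (f <= freq + bw_hz)
    return spectrum[sel].max()

fs = 8000
t = np.arange(int(0.2 * fs)) / fs
# synthetic response: strong envelope-rate component at F0 = 100 Hz,
# weaker fine-structure component at F1 = 350 Hz
ffr = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)
f0_mag = component_magnitude(ffr, fs, 100.0)
f1_mag = component_magnitude(ffr, fs, 350.0)
```

Comparing these magnitudes across groups or presentation levels is the kind of contrast the study reports for envelope (F0) versus TFS (F1) coding.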

  18. Postural control assessment in students with normal hearing and sensorineural hearing loss.

    PubMed

    Melo, Renato de Souza; Lemos, Andrea; Macky, Carla Fabiana da Silva Toscano; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2015-01-01

Children with sensorineural hearing loss can present with instabilities in postural control, possibly as a consequence of hypoactivity of their vestibular system due to internal ear injury. To assess postural control stability in students with normal hearing (i.e., listeners) and with sensorineural hearing loss, and to compare data between groups, considering gender and age. This cross-sectional study evaluated the postural control of 96 students, 48 listeners and 48 with sensorineural hearing loss, aged between 7 and 18 years, of both genders, through the Balance Error Scoring System scale. This tool assesses postural control in two sensory conditions: stable surface and unstable surface. For statistical data analysis between groups, the Wilcoxon test for paired samples was used. Students with hearing loss showed more instability in postural control than those with normal hearing, with significant differences between groups (stable surface, unstable surface) (p<0.001). Students with sensorineural hearing loss showed greater instability in postural control than normal-hearing students of the same gender and age. Copyright © 2014 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. Individual Sensitivity to Spectral and Temporal Cues in Listeners With Hearing Impairment

    PubMed Central

    Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal (amplitude envelope). Spectral and/or temporal dimensions of synthetic speech were systematically manipulated along a continuum, and recognition was measured using the manipulated stimuli. Level was controlled to ensure cue audibility. Discriminant function analysis was used to determine to what degree spectral and temporal information contributed to the identification of each stimulus. Results Listeners with normal hearing were influenced to a greater extent by spectral cues for all stimuli. Listeners with hearing impairment generally utilized spectral cues when the information was static (spectral shape) but used temporal cues when the information was dynamic (formant transition). The relative use of spectral and temporal dimensions varied among individuals, especially among listeners with hearing loss. Conclusion Information about spectral and temporal cue use may aid in identifying listeners who rely to a greater extent on particular acoustic cues and applying that information toward therapeutic interventions. PMID:25629388

  20. Cortisol, Chromogranin A, and Pupillary Responses Evoked by Speech Recognition Tasks in Normally Hearing and Hard-of-Hearing Listeners: A Pilot Study.

    PubMed

    Kramer, Sophia E; Teunissen, Charlotte E; Zekveld, Adriana A

    2016-01-01

Pupillometry is one method that has been used to measure processing load expended during speech understanding. Notably, speech perception (in noise) tasks can evoke a pupil response. It is not known if there is concurrent activation of the sympathetic nervous system as indexed by salivary cortisol and chromogranin A (CgA) and whether such activation differs between normally hearing (NH) and hard-of-hearing (HH) adults. Ten NH adults and 10 adults with mild-to-moderate hearing loss (mean age 52 years) participated. Two speech perception tests were administered in random order: one in quiet targeting 100% correct performance and one in noise targeting 50% correct performance. Pupil responses and salivary samples for cortisol and CgA analyses were collected four times: before testing, after the two speech perception tests, and at the end of the session. Participants rated their perceived accuracy, effort, and motivation. Effects were examined using repeated-measures analyses of variance. Correlations between outcomes were calculated. HH listeners had smaller peak pupil dilations (PPDs) than NH listeners in the speech in noise condition only. No group or condition effects were observed for the cortisol data, but HH listeners tended to have higher cortisol levels across conditions. CgA levels were larger at the pretesting time than at the three other test times. Hearing impairment did not affect CgA. Self-rated motivation correlated most often with cortisol or PPD values. The three physiological indicators of cognitive load and stress (PPD, cortisol, and CgA) are not equally affected by speech testing or hearing impairment. Each of them seems to capture a different dimension of sympathetic nervous system activity.

  1. Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds

    PubMed Central

    Mehraei, Golbarg; Gallardo, Andreu Paredes; Shinn-Cunningham, Barbara G.; Dau, Torsten

    2017-01-01

    In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-SR fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments. PMID:28159652

  2. From fragments to the whole: a comparison between cochlear implant users and normal-hearing listeners in music perception and enjoyment.

    PubMed

    Alexander, Ashlin J; Bartel, Lee; Friesen, Lendra; Shipp, David; Chen, Joseph

    2011-02-01

    Cochlear implants (CIs) allow many profoundly deaf individuals to regain speech understanding. However, the ability to understand speech does not necessarily guarantee music enjoyment. Enabling a CI user to recover the ability to perceive and enjoy the complexity of music remains a challenge determined by many factors. (1) To construct a novel, attention-based, diagnostic software tool (Music EAR) for the assessment of music enjoyment and perception and (2) to compare the results among three listener groups. Thirty-six subjects completed the Music EAR assessment tool: 12 normal-hearing musicians (NHMs), 12 normal-hearing nonmusicians (NHnMs), and 12 CI listeners. Subjects were required to (1) rate enjoyment of musical excerpts at three complexity levels; (2) differentiate five instrumental timbres; (3) recognize pitch pattern variation; and (4) identify target musical patterns embedded holistically in a melody. Enjoyment scores for CI users were comparable to those for NHMs and superior to those for NHnMs and revealed that implantees enjoyed classical music most. CI users performed significantly poorer in all categories of music perception compared to normal-hearing listeners. Overall CI user scores were lowest in those tasks requiring increased attention. Two high-performing subjects matched or outperformed NHnMs in pitch and timbre perception tasks. The Music EAR assessment tool provides a unique approach to the measurement of music perception and enjoyment in CI users. Together with auditory training evidence, the results provide considerable hope for further recovery of music appreciation through methodical rehabilitation.

  3. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    PubMed

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

The purpose was to analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.

  4. Auditory stream segregation with multi-tonal complexes in hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna S.; Lentz, Jennifer J.

    2004-05-01

The ability to segregate sounds into different streams was investigated in normal-hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used, in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitonal complexes were 100 ms in duration with 20-ms ramps, and "-" represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each progressive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. When measuring the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet; listeners then pressed a button when the "galloping" rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.

  5. Spectral and binaural loudness summation for hearing-impaired listeners.

    PubMed

    Oetting, Dirk; Hohmann, Volker; Appell, Jens-E; Kollmeier, Birger; Ewert, Stephan D

    2016-05-01

    Sensorineural hearing loss typically results in a steepened loudness function and a reduced dynamic range from elevated thresholds to uncomfortably loud levels for narrowband and broadband signals. Restoring narrowband loudness perception for hearing-impaired (HI) listeners can lead to overly loud perception of broadband signals and it is unclear how binaural presentation affects loudness perception in this case. Here, loudness perception quantified by categorical loudness scaling for nine normal-hearing (NH) and ten HI listeners was compared for signals with different bandwidth and different spectral shape in monaural and in binaural conditions. For the HI listeners, frequency- and level-dependent amplification was used to match the narrowband monaural loudness functions of the NH listeners. The average loudness functions for NH and HI listeners showed good agreement for monaural broadband signals. However, HI listeners showed substantially greater loudness for binaural broadband signals than NH listeners: on average a 14.1 dB lower level was required to reach "very loud" (range 30.8 to -3.7 dB). Overall, with narrowband loudness compensation, a given binaural loudness for broadband signals above "medium loud" was reached at systematically lower levels for HI than for NH listeners. Such increased binaural loudness summation was not found for loudness categories below "medium loud" or for narrowband signals. Large individual variations in the increased loudness summation were observed and could not be explained by the audiogram or the narrowband loudness functions. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    PubMed

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  7. The Effects of Hearing Aid Directional Microphone and Noise Reduction Processing on Listening Effort in Older Adults with Hearing Loss.

    PubMed

    Desjardins, Jamie L

    2016-01-01

    Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing. Correlation analysis between objective and self

  8. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss.

    PubMed

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-09-18

    The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances. Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performances of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.

  9. Effects of frequency compression and frequency transposition on fricative and affricate perception in listeners with normal hearing and mild to moderate hearing loss

    PubMed Central

    Alexander, Joshua M.; Kopun, Judy G.; Stelmachowicz, Patricia G.

    2014-01-01

    Summary: Listeners with normal hearing and mild to moderate loss identified fricatives and affricates that were recorded through hearing aids with frequency transposition (FT) or nonlinear frequency compression (NFC). FT significantly degraded performance for both groups. When frequencies up to ~9 kHz were lowered with NFC and with a novel frequency compression algorithm, spectral envelope decimation, performance significantly improved relative to conventional amplification (NFC-off) and was equivalent to wideband speech. Significant differences between most conditions could be largely attributed to an increase or decrease in confusions for /s/ and /z/. Objectives: Stelmachowicz and colleagues have demonstrated that the limited bandwidth associated with conventional hearing aid amplification prevents useful high-frequency speech information from being transmitted. The purpose of this study was to examine the efficacy of two popular frequency-lowering algorithms and one novel algorithm (spectral envelope decimation) in adults with mild-to-moderate sensorineural hearing loss and in normal-hearing controls. Design: Participants listened monaurally through headphones to recordings of nine fricatives and affricates spoken by three women in a vowel-consonant (VC) context. Stimuli were mixed with speech-shaped noise at 10 dB SNR and recorded through a Widex Inteo IN-9 and a Phonak Naída UP V behind-the-ear (BTE) hearing aid. Frequency transposition (FT) is used in the Inteo and nonlinear frequency compression (NFC) is used in the Naída. Both devices were programmed to lower frequencies above 4 kHz, but neither device could lower frequencies above 6-7 kHz. Each device was tested under four conditions: frequency lowering deactivated (FT-off and NFC-off), frequency lowering activated (FT and NFC), wideband (WB), and a fourth condition unique to each hearing aid. The WB condition was constructed by mixing recordings from the first condition with high-pass filtered versions of
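
    As an illustration of the kind of input-output frequency map that nonlinear frequency compression applies (a sketch only; the 4-kHz cutoff, the 2:1 log-domain compression ratio, and the function name are assumed values for illustration, not the Naída's actual parameters):

```python
def nfc_map(f_in, f_cut=4000.0, ratio=2.0):
    """Nonlinear frequency compression input->output map (sketch).
    Frequencies at or below the cutoff pass unchanged; above it, the
    distance from the cutoff in log frequency is divided by the ratio."""
    if f_in <= f_cut:
        return f_in
    return f_cut * (f_in / f_cut) ** (1.0 / ratio)
```

    With these assumed settings, a 9-kHz input component lands at 6 kHz, consistent with the idea of bringing energy near 9 kHz back into the device's usable output bandwidth.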

  10. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    PubMed

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) eliminated much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
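
    The contrast between independent and linked compression can be seen in a toy level calculation (a sketch under assumed compressor settings; `comp_gain_db`, the 50-dB threshold, and the 3:1 ratio are hypothetical, not taken from the study):

```python
def comp_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Gain (dB) of a simple static compressor: unity gain below the
    threshold, then output grows only 1/ratio dB per input dB."""
    if level_db <= threshold_db:
        return 0.0
    return (level_db - threshold_db) * (1.0 / ratio - 1.0)

# A source off to one side: louder at the near ear.
left_db, right_db = 70.0, 60.0          # 10-dB interaural level difference
ild_in = left_db - right_db

# Independent compression: each ear computes its own gain, so the louder
# ear receives more gain reduction and the ILD shrinks.
ild_indep = ((left_db + comp_gain_db(left_db))
             - (right_db + comp_gain_db(right_db)))

# Linked compression: both ears receive the gain of the louder ear,
# so the ILD is preserved exactly.
g = comp_gain_db(max(left_db, right_db))
ild_linked = (left_db + g) - (right_db + g)
```

    With these assumed settings the independent compressors shrink the 10-dB ILD to about 3.3 dB, while linking restores the full 10 dB, which is the mechanism the study points to.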

  11. Perception of dissonance by people with normal hearing and sensorineural hearing loss

    NASA Astrophysics Data System (ADS)

    Tufts, Jennifer B.; Molis, Michelle R.; Leek, Marjorie R.

    2005-08-01

    The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered due to sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.

  12. Beyond the hearing aid: Assistive listening devices

    NASA Astrophysics Data System (ADS)

    Holmes, Alice E.

    2003-04-01

    Persons with hearing loss can obtain great benefit from hearing aids, but there are many situations in which traditional amplification devices will not provide enough help to ensure optimal communication. Assistive listening and signaling devices are designed to improve the communication of the hearing impaired in instances where traditional hearing aids are not sufficient. These devices are designed to help with problems created by listening in noise or against a competing message, improve distance listening, facilitate group conversation (help with problems created by rapidly changing speakers), and allow independence from friends and family. With the passage of the Americans with Disabilities Act in 1990, assistive listening devices (ALDs) are becoming more accessible to the public with hearing loss. Employers and public facilities must provide auxiliary aids and services when necessary to ensure effective communication for persons who are deaf or hard of hearing. However, many professionals and persons with hearing loss are unaware of the various types and availability of ALDs. An overview of ALDs along with a discussion of their advantages and disadvantages will be given.

  13. Self-masking: Listening during vocalization. Normal hearing.

    PubMed

    Borg, Erik; Bergkvist, Christina; Gustafsson, Dan

    2009-06-01

    What underlying mechanisms are involved in the ability to talk and listen simultaneously and what role does self-masking play under conditions of hearing impairment? The purpose of the present series of studies is to describe a technique for assessment of masked thresholds during vocalization, to describe normative data for males and females, and to focus on hearing impairment. The masking effect of vocalized [a:] on narrow-band noise pulses (250-8000 Hz) was studied using the maximum vocalization method. An amplitude-modulated series of sound pulses, which sounded like a steam engine, was masked until the criterion of halving the perceived pulse rate was reached. For masking of continuous reading, a just-follow-conversation criterion was applied. Intra-session test-retest reproducibility and inter-session variability were calculated. The results showed that female voices were more efficient in masking high frequency noise bursts than male voices and more efficient in masking both a male and a female test reading. The male had to vocalize 4 dBA louder than the female to produce the same masking effect on the test reading. It is concluded that the method is relatively simple to apply and has small intra-session and fair inter-session variability. Interesting gender differences were observed.

  14. Enjoyment of music by elderly hearing-impaired listeners.

    PubMed

    Leek, Marjorie R; Molis, Michelle R; Kubli, Lina R; Tufts, Jennifer B

    2008-06-01

    Anecdotal evidence suggests that hearing loss interferes with the enjoyment of music, although it is not known how widespread this problem currently is. To estimate the prevalence of music-listening difficulties among a group of elderly hearing aid wearers. Interview. Telephone interviews were conducted with patients who wore hearing aids. Questions regarding several aspects of music listening were included. Sixty-eight hearing-impaired people served as subjects. They had all been seen in the audiology clinic for hearing aid evaluation during the previous year. Subjects were asked questions concerning their use of hearing aids, the importance of listening to music in their lives, their habits and practices concerning music, and difficulties they experienced in listening to music. Almost 30% of the respondents reported that their hearing losses affected their enjoyment of music. About half of the respondents indicated that music was either too loud or too soft, although only about one-third reported difficulties with level contrasts within musical pieces. In contrast to a similar survey carried out 20 years ago, there were many fewer complaints about listening to music. This result may be due in large part to improvements in hearing aids, especially with regard to nonlinear compression. Although new hearing aid technologies have somewhat reduced problems of music enjoyment experienced by hearing-impaired people, audiologists should be aware that some 25-30% of patients may have difficulties with listening to music and may require extra attention to minimize those problems.

  15. Some considerations in evaluating spoken word recognition by normal-hearing, noise-masked normal-hearing, and cochlear implant listeners. I: The effects of response format.

    PubMed

    Sommers, M S; Kirk, K I; Pisoni, D B

    1997-04-01

    The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words: perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH

  16. Normal-Hearing Listeners’ and Cochlear Implant Users’ Perception of Pitch Cues in Emotional Speech

    PubMed Central

    Fuller, Christina; Gilbers, Dicky; Broersma, Mirjam; Goudbeek, Martijn; Free, Rolien; Başkent, Deniz

    2015-01-01

    In cochlear implants (CIs), acoustic speech cues, especially for pitch, are delivered in a degraded form. This study’s aim is to assess whether due to degraded pitch cues, normal-hearing listeners and CI users employ different perceptual strategies to recognize vocal emotions, and, if so, how these differ. Voice actors were recorded pronouncing a nonce word in four different emotions: anger, sadness, joy, and relief. These recordings’ pitch cues were phonetically analyzed. The recordings were used to test 20 normal-hearing listeners’ and 20 CI users’ emotion recognition. In congruence with previous studies, high-arousal emotions had a higher mean pitch, wider pitch range, and more dominant pitches than low-arousal emotions. Regarding pitch, speakers did not differentiate emotions based on valence but on arousal. Normal-hearing listeners outperformed CI users in emotion recognition, even when presented with CI simulated stimuli. However, only normal-hearing listeners recognized one particular actor’s emotions worse than the other actors’. The groups behaved differently when presented with similar input, showing that they had to employ differing strategies. Considering the respective speaker’s deviating pronunciation, it appears that for normal-hearing listeners, mean pitch is a more salient cue than pitch range, whereas CI users are biased toward pitch range cues. PMID:27648210

  17. Identification and Treatment of Very Young Children with Hearing Loss.

    ERIC Educational Resources Information Center

    Madell, Jane R.

    1988-01-01

    Hearing loss in infants and young children can be identified through behavioral observation audiometry, visual reinforcement audiometry, or auditory brainstem response testing. Habilitation may involve amplification with hearing aids, other assistive listening devices, or cochlear implants. Expectations for children with different degrees of…

  18. Auditory and visual orienting responses in listeners with and without hearing-impairment

    PubMed Central

    Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.

    2015-01-01

    Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing-impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing-impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly-changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266
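
    The stated equivalence between head velocity and the rate of interaural-time-difference change can be checked with a one-line calculation, assuming a low-frequency ITD slope of roughly 7.5 microseconds per degree of azimuth near the midline (an assumed approximate value; the actual slope depends on head size and frequency):

```python
ITD_SLOPE_US_PER_DEG = 7.5   # assumed ITD change per degree of azimuth (us)
peak_velocity_deg_s = 140.0  # peak head velocity reported in the abstract

# Convert: (deg/s) * (us/deg) gives us/s; divide by 1000 for us/ms.
itd_rate_us_per_ms = peak_velocity_deg_s * ITD_SLOPE_US_PER_DEG / 1000.0
```

    Under that assumption the result is about 1.05 microseconds of ITD change per millisecond, in line with the abstract's figure of approximately 1 us/ms.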

  19. Comparison of speech recognition with adaptive digital and FM remote microphone hearing assistance technology by listeners who use hearing aids.

    PubMed

    Thibodeau, Linda

    2014-06-01

    The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.

  20. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  1. Auditory, Visual, and Auditory-Visual Perceptions of Emotions by Young Children with Hearing Loss versus Children with Normal Hearing

    ERIC Educational Resources Information Center

    Most, Tova; Michaelis, Hilit

    2012-01-01

    Purpose: This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. Method: A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify…

  2. Vowels in clear and conversational speech: Talker differences in acoustic characteristics and intelligibility for normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Hargus Ferguson, Sarah; Kewley-Port, Diane

    2002-05-01

    Several studies have shown that when a talker is instructed to speak as though talking to a hearing-impaired person, the resulting "clear" speech is significantly more intelligible than typical conversational speech. Recent work in this lab suggests that talkers vary in how much their intelligibility improves when they are instructed to speak clearly. The few studies examining acoustic characteristics of clear and conversational speech suggest that these differing clear speech effects result from different acoustic strategies on the part of individual talkers. However, only two studies to date have directly examined differences among talkers producing clear versus conversational speech, and neither included acoustic analysis. In this project, clear and conversational speech was recorded from 41 male and female talkers aged 18-45 years. A listening experiment demonstrated that for normal-hearing listeners in noise, vowel intelligibility varied widely among the 41 talkers for both speaking styles, as did the magnitude of the speaking style effect. Acoustic analyses using stimuli from a subgroup of talkers shown to have a range of speaking style effects will be used to assess specific acoustic correlates of vowel intelligibility in clear and conversational speech. [Work supported by NIHDCD-02229.]

  3. Targeting hearing health messages for users of personal listening devices.

    PubMed

    Punch, Jerry L; Elfenbein, Jill L; James, Richard R

    2011-06-01

    To summarize the literature on patterns and risks of personal listening device (PLD) use, which is ubiquitous among teenagers and young adults. The review emphasizes risk awareness, health concerns of PLD users, inclination to take actions to prevent hearing loss from exposure to loud music, and specific instructional messages that are likely to motivate such preventive actions. We conducted a systematic, critical review of the English-language scholarly literature on the topic of PLDs and their potential effects on human hearing. We used popular database search engines to locate relevant professional journals, books, recent conference papers, and other reference sources. Adolescents and young adults appear to have somewhat different perspectives on risks to hearing posed by PLD use. Messages designed to suggest actions they might take in avoiding or reducing these risks, therefore, need to be targeted to achieve optimal outcomes. We offer specific recommendations regarding the framing and content of educational messages that are most likely to be effective in reducing the potentially harmful effects of loud music on hearing in these populations, and we note future research needs.

  4. Measuring the Effects of Reverberation and Noise on Sentence Intelligibility for Hearing-Impaired Listeners

    ERIC Educational Resources Information Center

    George, Erwin L. J.; Goverts, S. Theo; Festen, Joost M.; Houtgast, Tammo

    2010-01-01

    Purpose: The Speech Transmission Index (STI; Houtgast, Steeneken, & Plomp, 1980; Steeneken & Houtgast, 1980) is commonly used to quantify the adverse effects of reverberation and stationary noise on speech intelligibility for normal-hearing listeners. Duquesnoy and Plomp (1980) showed that the STI can be applied for presbycusic listeners, relating…

  5. Comparison of Psychophysiological and Dual-Task Measures of Listening Effort

    ERIC Educational Resources Information Center

    Seeman, Scott; Sims, Rebecca

    2015-01-01

    Purpose: We wished to make a comparison of psychophysiological measures of listening effort with subjective and dual-task measures of listening effort for a diotic-dichotic-digits and a sentences-in-noise task. Method: Three groups of young adults (18-38 years old) with normal hearing participated in three experiments: two psychophysiological…

  6. Auditory and tactile gap discrimination by observers with normal and impaired hearing.

    PubMed

    Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J

    2014-02-01

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.

  7. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening

    PubMed Central

    Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur

    2013-01-01

    Objective The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall. PMID:22731919

  8. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
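The ideal binary mask contrasted with the estimated masks in this record is straightforward to compute when the premixed signals are available. Below is a minimal sketch, assuming numpy, frame-based power spectra, and a 0-dB local criterion; the function and parameter names are illustrative, and the study's own masks were estimated by a trained algorithm rather than computed this way.

```python
import numpy as np

def ideal_binary_mask(speech, noise, frame=256, lc_db=0.0):
    """Compute an ideal binary mask from premixed speech and noise.

    Time-frequency units where the local speech-to-noise ratio exceeds
    lc_db (the local criterion) are kept (1); all others are discarded (0).
    """
    def stft_power(x):
        n = len(x) // frame
        frames = x[:n * frame].reshape(n, frame) * np.hanning(frame)
        return np.abs(np.fft.rfft(frames, axis=1)) ** 2

    s_pow = stft_power(speech)
    n_pow = stft_power(noise)
    local_snr_db = 10 * np.log10((s_pow + 1e-12) / (n_pow + 1e-12))
    return (local_snr_db > lc_db).astype(float)
```

Applying the mask to the mixture's magnitude spectrogram and resynthesizing retains only the time-frequency units dominated by speech, which is what gives the large intelligibility gains reported here.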

  9. Free Field Word recognition test in the presence of noise in normal hearing adults.

    PubMed

    Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge

    In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The aim was to present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure-tone audiometry, a speech recognition test was applied in free-field conditions with monosyllables and disyllables, using standardized material in three listening situations: optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free-field environment was arranged in which speech was presented to the subject from two speakers located at 45°, and noise from a third speaker located at 180°. All participants had free-field speech audiometry results between 88% and 100% in the three listening situations. The Word Recognition Test in a Free Field in the Presence of Noise proved easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should identify between 88% and 100% of the stimuli correctly. The test can be an important tool for measuring the interference of noise with speech perception abilities.
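Presenting speech at a fixed signal-to-noise ratio, as in the 0 dB and -10 dB conditions of this test, amounts to scaling the noise relative to the speech level. A minimal sketch, assuming numpy and RMS-based SNR; calibrated free-field presentation obviously involves measured sound levels rather than digital scaling, so this only illustrates the arithmetic.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so the speech-to-noise RMS ratio equals snr_db, then mix.

    Assumes noise has nonzero RMS.
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise
```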

  10. Effects of Noise on Speech Recognition and Listening Effort in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss.

    PubMed

    Lewis, Dawna; Schmid, Kendra; O'Leary, Samantha; Spalding, Jody; Heinrichs-Graham, Elizabeth; High, Robin

    2016-10-01

    This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT) were assessed (onset time, total duration). In general, speech recognition improved as signal-to-noise ratio (SNR) increased both for children with NH and children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. Findings indicate poorer sentence recognition in children with NH and MBHL or UHL as SNR decreases. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for children in this study.

  11. Individual differences reveal correlates of hidden hearing deficits.

    PubMed

    Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G

    2015-02-04

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing."

  12. Individual Differences Reveal Correlates of Hidden Hearing Deficits

    PubMed Central

    Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.

    2015-01-01

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371

  13. Audibility of reverse alarms under hearing protectors for normal and hearing-impaired listeners.

    PubMed

    Robinson, G S; Casali, J G

    1995-11-01

    The question of whether or not an individual suffering from a hearing loss is capable of hearing an auditory alarm or warning is an extremely important industrial safety issue. The ISO Standard that addresses auditory warnings for workplaces requires that any auditory alarm or warning be audible to all individuals in the workplace including those suffering from a hearing loss and/or wearing hearing protection devices (HPDs). Research was undertaken to determine how the ability to detect an alarm or warning signal changed for individuals with normal hearing and two levels of hearing loss as the levels of masking noise and alarm were manipulated. Pink noise was used as the masker and a heavy-equipment reverse alarm was used as the signal. The rating method paradigm of signal detection theory was used as the experimental procedure to separate the subjects' absolute sensitivities to the alarm from their individual criteria for deciding to respond in an affirmative manner. Results indicated that even at a fairly low signal-to-noise ratio (0 dB), subjects with a substantial hearing loss [a pure-tone average (PTA) hearing level of 45-50 dBHL in both ears] were capable of hearing the reverse alarm while wearing a high-attenuation earmuff in the pink noise used in the study.
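The signal detection theory analysis described here separates absolute sensitivity from the listener's response criterion. A minimal sketch of the standard yes/no computation, assuming Python's statistics.NormalDist for the inverse normal CDF; the rating-method ROC analysis used in the study generalizes this to multiple confidence criteria.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from a yes/no detection table.

    A log-linear correction (add 0.5 to each cell) keeps the z-scores
    finite when a hit or false-alarm rate would otherwise be 0 or 1.
    """
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -0.5 * (z(h) + z(f))
```

A listener who says "yes" liberally inflates both hits and false alarms; d' is unchanged while c goes negative, which is exactly the separation the rating paradigm exploits.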

  14. Effects of fundamental frequency and vocal-tract length cues on sentence segregation by listeners with hearing loss

    PubMed Central

    Mackersie, Carol L.; Dewey, James; Guthrie, Lesli A.

    2011-01-01

    The purpose was to determine the effect of hearing loss on the ability to separate competing talkers using talker differences in fundamental frequency (F0) and apparent vocal-tract length (VTL). Performance of 13 adults with hearing loss and 6 adults with normal hearing was measured using the Coordinate Response Measure. For listeners with hearing loss, the speech was amplified and filtered according to the NAL-RP hearing aid prescription. Target-to-competition ratios varied from 0 to 9 dB. The target sentence was randomly assigned to the higher or lower values of F0 or VTL on each trial. Performance improved for F0 differences up to 9 and 6 semitones for people with normal hearing and hearing loss, respectively, but only when the target talker had the higher F0. Recognition for the lower F0 target improved when trial-to-trial uncertainty was removed (9-semitone condition). Scores improved with increasing differences in VTL for the normal-hearing group. On average, hearing-impaired listeners did not benefit from VTL cues, but substantial inter-subject variability was observed. The amount of benefit from VTL cues was related to the average hearing loss in the 1–3-kHz region when the target talker had the shorter VTL. PMID:21877813
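The F0 separations in this record are given in semitones, where each semitone multiplies F0 by 2^(1/12). A short sketch of the conversion in both directions:

```python
import math

def shift_f0(f0_hz, semitones):
    """F0 shifted by a signed number of semitones (12 per octave)."""
    return f0_hz * 2 ** (semitones / 12)

def semitone_difference(f1_hz, f2_hz):
    """Signed separation between two F0 values in semitones."""
    return 12 * math.log2(f2_hz / f1_hz)
```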

  15. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers

    PubMed Central

    Chatterjee, Monita; Zion, Danielle; Deroche, Mickael L.; Burianek, Brooke; Limb, Charles; Goren, Alison; Kulkarni, Aditya M.; Christensen, Julie A.

    2014-01-01

    Despite their remarkable success in bringing spoken language to hearing impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information, such as voice emotion, is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide for the first time a comparison of their performance to that of cNH and NH adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, on average, cCI perform similarly to their adult counterparts; that both groups' mean performance is similar to aNHs' performance with 8-channel noise-vocoded speech; that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech, but on average, show significantly poorer scores than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit obtained by cochlear-implanted children from their devices, but also underscore the need for further research and development in this important and neglected area. PMID:25448167
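The 8-channel noise-vocoded speech used as a CI simulation here replaces the fine structure in each analysis band with noise while preserving the band envelope. A rough sketch, assuming numpy, FFT-based band filtering, and envelope extraction by rectification and smoothing; published CI simulations typically use dedicated filter banks and specified envelope low-pass cutoffs, so the band edges and smoothing window below are illustrative only.

```python
import numpy as np

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Illustrative n-channel noise vocoder (CI simulation sketch).

    Splits the input into log-spaced bands, extracts each band's envelope,
    and uses it to modulate same-band noise; the bands are then summed.
    """
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced band edges
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X = np.fft.rfft(x)
    rng = np.random.default_rng(0)
    N = np.fft.rfft(rng.standard_normal(len(x)))       # broadband noise carrier
    k = max(1, int(fs * 0.01))                         # ~10-ms smoothing window
    out = np.zeros(len(x))
    for i in range(n_channels):
        band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        xb = np.fft.irfft(np.where(band, X, 0), len(x))
        env = np.convolve(np.abs(xb), np.ones(k) / k, 'same')  # rectify + smooth
        nb = np.fft.irfft(np.where(band, N, 0), len(x))
        out += env * nb
    return out
```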

  16. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers.

    PubMed

    Chatterjee, Monita; Zion, Danielle J; Deroche, Mickael L; Burianek, Brooke A; Limb, Charles J; Goren, Alison P; Kulkarni, Aditya M; Christensen, Julie A

    2015-04-01

    Despite their remarkable success in bringing spoken language to hearing impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information, such as voice emotion, is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide for the first time a comparison of their performance to that of cNH and NH adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, on average, cCI perform similarly to their adult counterparts; that both groups' mean performance is similar to aNHs' performance with 8-channel noise-vocoded speech; that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech, but on average, show significantly poorer scores than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit obtained by cochlear-implanted children from their devices, but also underscore the need for further research and development in this important and neglected area.

  17. Children's Performance in Complex Listening Conditions: Effects of Hearing Loss and Digital Noise Reduction

    ERIC Educational Resources Information Center

    Pittman, Andrea

    2011-01-01

    Purpose: To determine the effect of hearing loss (HL) on children's performance for an auditory task under demanding listening conditions and to determine the effect of digital noise reduction (DNR) on that performance. Method: Fifty children with normal hearing (NH) and 30 children with HL (8-12 years of age) categorized words in the presence of…

  18. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. 
    Hearing impairment adds perceptual processing load during sentence processing.

  19. Interrupted Monosyllabic Words: The Effects of Ten Interruption Locations on Recognition Performance by Older Listeners with Sensorineural Hearing Loss.

    PubMed

    Wilson, Richard H; Sharrett, Kadie C

    2017-01-01

    Two previous experiments from our laboratory with 70 interrupted monosyllabic words demonstrated that recognition performance was influenced by the temporal location of the interruption pattern. The interruption pattern (10 interruptions/sec, 50% duty cycle) was always the same and referenced word onset; the only difference between the patterns was the temporal location of the on- and off-segments of the interruption cycle. In the first study, both young and older listeners obtained better recognition performances when the initial on-segment coincided with word onset than when the initial on-segment was delayed by 50 msec. The second experiment with 24 young listeners detailed recognition performance as the interruption pattern was incremented in 10-msec steps through the 0- to 90-msec onset range. Across the onset conditions, 95% of the functions were either flat or U-shaped. The aim was to define the effects that interruption pattern location had on word recognition by older listeners with sensorineural hearing loss as the interruption pattern was incremented, re: word onset, from 0 to 90 msec in 10-msec steps. A repeated-measures design with ten interruption patterns (onset conditions) and one uninterrupted condition was used. Twenty-four older males (mean age = 69.6 yr) with sensorineural hearing loss participated in two 1-hour sessions. The three-frequency pure-tone average was 24.0 dB HL and word recognition was ≥80% correct. Seventy consonant-vowel nucleus-consonant words formed the corpus of materials with 25 additional words used for practice. For each participant, the 700 interrupted stimuli (70 words by 10 onset conditions), the 70 words uninterrupted, and two practice lists each were randomized and recorded on compact disc in 33 tracks of 25 words each. The data were analyzed at the participant and word levels and compared to the results obtained earlier on 24 young listeners with normal hearing. The mean recognition performance on the 70 words uninterrupted was 91.0%.
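The interruption pattern used in these experiments (10 interruptions/sec, 50% duty cycle, shifted re: word onset in 10-msec steps) can be modeled as a delayed square-wave gate multiplied against the waveform. A minimal sketch, assuming numpy and abrupt on/off switching with no onset/offset ramps:

```python
import numpy as np

def interruption_gate(n_samples, fs, rate_hz=10, duty=0.5, onset_ms=0.0):
    """Square-wave gate: `duty` of each cycle is on, delayed by onset_ms.

    With onset_ms=50 at 10 Hz, the signal is off for the first 50 ms,
    matching the delayed-onset conditions described in the record.
    """
    t = np.arange(n_samples) / fs - onset_ms / 1000.0
    phase = (t * rate_hz) % 1.0            # position within the cycle, [0, 1)
    return (phase < duty).astype(float)    # on during the first `duty` fraction
```

Multiplying a recorded word by `interruption_gate(len(word), fs, onset_ms=...)` reproduces each onset condition; in practice brief ramps would be applied at the gate edges to avoid audible clicks.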

  20. Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures

    PubMed Central

    Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald

    2017-01-01

    In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587
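A glimpsing model of the kind applied here keeps only the time-frequency units where the target exceeds the maskers by some local criterion, computed separately at each ear. A minimal sketch of the glimpse-proportion computation, assuming numpy power spectrograms and an illustrative 3-dB criterion; the study's model additionally resynthesized the glimpsed audio so intelligibility could be measured directly.

```python
import numpy as np

def glimpse_proportion(target_pow, masker_pow, lc_db=3.0):
    """Fraction of time-frequency units where the target exceeds the
    masker by the local criterion lc_db ('glimpses'), given power
    spectrograms of equal shape."""
    snr_db = 10 * np.log10((target_pow + 1e-12) / (masker_pow + 1e-12))
    return float(np.mean(snr_db > lc_db))

def better_ear_glimpses(left_t, left_m, right_t, right_m, lc_db=3.0):
    """Apply the model to each ear separately and report the better ear,
    a simple summary of the binaural advantage in spatialized mixtures."""
    return max(glimpse_proportion(left_t, left_m, lc_db),
               glimpse_proportion(right_t, right_m, lc_db))
```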

  1. Music to whose ears? The effect of social norms on young people's risk perceptions of hearing damage resulting from their music listening behavior.

    PubMed

    Gilliver, Megan; Carter, Lyndal; Macoun, Denise; Rosen, Jenny; Williams, Warwick

    2012-01-01

    Professional and community concerns about the potentially dangerous noise levels of common leisure activities have led to increased interest in providing hearing health information to participants. However, noise reduction programmes aimed at leisure activities (such as music listening) face a unique difficulty. The noise source that is earmarked for reduction by hearing health professionals is often the same one that is viewed as pleasurable by participants. Furthermore, these activities often exist within a social setting, with additional peer influences that may affect behavior. The current study aimed to gain a better understanding of social-based factors that may influence an individual's motivation to engage in positive hearing health behaviors. Four hundred and eighty-four participants completed questionnaires examining their perceptions of the hearing risk associated with music listening and asking for estimates of their own and their peers' music listening behaviors. Participants were generally aware of the potential risk posed by listening to personal stereo players (PSPs) and the volumes likely to be most dangerous. Approximately one in five participants reported using listening volumes at levels perceived to be dangerous, an incidence rate in keeping with other studies measuring actual PSP use. However, participants showed less awareness of peers' behavior, consistently overestimating the volumes at which they believed their friends listened. Misperceptions of social norms relating to listening behavior may decrease individuals' perceptions of susceptibility to hearing damage. The consequences for hearing health promotion are discussed, along with suggestions relating to the development of new programs.

  2. Hear here: children with hearing loss learn words by listening.

    PubMed

    Lew, Joyce; Purcell, Alison A; Doble, Maree; Lim, Lynne H

    2014-10-01

    Early use of hearing devices and family participation in auditory-verbal therapy has been associated with age-appropriate verbal communication outcomes for children with hearing loss. However, there continues to be great variability in outcomes across different oral intervention programmes and little consensus on how therapists should prioritise goals at each therapy session for positive clinical outcomes. This pilot intervention study aimed to determine whether therapy goals that concentrate on teaching preschool children with hearing loss to distinguish between words in a structured listening programme are effective, and whether gains in speech perception skills have an impact on vocabulary and speech development without them having to be worked on directly in therapy. A multiple-baseline-across-subjects design was used in this within-subject controlled study. Three children aged between 2;6 and 3;1 (years;months) with moderate-severe to severe-profound hearing loss were recruited for a 6-week intervention programme. Each participant commenced at a different stage of the 10-stage listening programme depending on their individual listening skills at recruitment. Speech development and vocabulary assessments were conducted before and after the training programme, in addition to speech perception assessments and probes conducted throughout the intervention programme. All participants made gains in speech perception skills as well as vocabulary and speech development. Speech perception skills acquired were noted to be maintained a week after intervention. In addition, all participants were able to generalise the speech perception skills learnt to words that had not been used in the intervention programme. This pilot study found that therapy directed at listening alone is promising and that it may have a positive impact on speech and vocabulary development without these goals having to be incorporated into a therapy programme.
Although a larger study is necessary for more conclusive findings.

  3. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers.

  4. The effects of listening environment and earphone style on preferred listening levels of normal hearing adults using an MP3 player.

    PubMed

    Hodgetts, William E; Rieger, Jana M; Szarko, Ryan A

    2007-06-01

    The main objective of this study was to determine the influence of listening environment and earphone style on the preferred listening levels (PLLs) measured in users' ear canals with a commercially available MP3 player. It was hypothesized that listeners would prefer higher levels with earbud headphones as opposed to over-the-ear headphones, and that the effects would depend on the environment in which the user was listening. A secondary objective was to use the measured PLLs to determine the permissible listening duration to reach 100% daily noise dose. There were two independent variables in this study. The first, headphone style, had three levels: earbud, over-the-ear, and over-the-ear with noise reduction (the same headphones with a noise reduction circuit). The second, environment, also had three levels: quiet, street noise, and multi-talker babble. The dependent variable was ear canal A-weighted sound pressure level. A 3 × 3 within-subjects repeated-measures ANOVA was used to analyze the data. Thirty-eight normal hearing adults were recruited from the Faculty of Rehabilitation Medicine at the University of Alberta. Each subject listened to the same song and adjusted the level until it "sounded best" to them in each of the nine conditions. Significant main effects were found for both the headphone style and environment factors. On average, listeners had higher preferred listening levels with the earbud headphones than with the over-the-ear headphones. When the noise reduction circuit was used with the over-the-ear headphones, the average PLL was even lower. On average, listeners had higher PLLs in street noise than in multi-talker babble, and both of these were higher than the PLL for the quiet condition. The interaction between headphone style and environment was also significant. Details of individual contrasts are explored. Overall, PLLs were quite conservative, which would theoretically allow for extended permissible listening durations.
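The "permissible listening duration to reach 100% daily noise dose" mentioned here follows directly from a damage-risk criterion. A minimal sketch assuming the NIOSH-style criterion (85 dBA for 8 hours with a 3-dB exchange rate); OSHA instead uses 90 dBA with a 5-dB exchange rate, so the constants are left as parameters.

```python
def permissible_hours(level_dba, criterion=85.0, exchange=3.0, base_hours=8.0):
    """Hours of exposure at level_dba to reach 100% daily noise dose.

    Each `exchange` dB above the criterion level halves the allowed time.
    """
    return base_hours / 2 ** ((level_dba - criterion) / exchange)

def daily_dose_percent(level_dba, hours, **kw):
    """Dose as a percentage of the daily allowance for a single exposure."""
    return 100.0 * hours / permissible_hours(level_dba, **kw)
```

For example, under the 3-dB exchange rate an in-ear level of 94 dBA allows only one hour before the daily allowance is spent, which is why conservative PLLs translate into extended permissible durations.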
Finally, we investigated

  5. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners

    PubMed Central

    Kirchberger, Martin

    2016-01-01

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. PMID:26868955
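Dynamic range compression, in its simplest static form, reduces the signal level above a threshold by a fixed ratio. A minimal sketch, assuming numpy, a short-term RMS envelope, and illustrative threshold/ratio values; hearing-aid prescriptions such as NAL-NL2 apply frequency-dependent, multiband compression with attack and release time constants, all of which this deliberately omits.

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, win_ms=10.0):
    """Static dynamic range compression above a threshold.

    Levels above threshold_db are reduced by the given ratio; the gain is
    derived from a short-term RMS envelope of the input.
    """
    n = max(1, int(fs * win_ms / 1000))
    env = np.sqrt(np.convolve(x ** 2, np.ones(n) / n, mode="same") + 1e-12)
    level_db = 20 * np.log10(env)
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over * (1 - 1 / ratio)                 # reduce the overshoot
    return x * 10 ** (gain_db / 20)
```

Cascading two such stages (one in mastering, one in the hearing aid) illustrates the "dual dose" problem the study describes: the second stage compresses what little dynamic range the first left behind.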

  7. Speech perception in older listeners with normal hearing: conditions of time alteration, selective word stress, and length of sentences.

    PubMed

    Cho, Soojin; Yu, Jyaehyoung; Chun, Hyungi; Seo, Hyekyung; Han, Woojae

    2014-04-01

    Deficits of the aging auditory system negatively affect older listeners in terms of speech communication, resulting in limitations to their social lives. To improve their perceptual skills, the goal of this study was to investigate the effects of time alteration, selective word stress, and varying sentence lengths on the speech perception of older listeners. Seventeen older people with normal hearing were tested under seven conditions of time-altered sentences (i.e., ±60%, ±40%, ±20%, 0%), two conditions of selective word stress (i.e., no-stress and stress), and three different lengths of sentences (i.e., short, medium, and long) at each individual's most comfortable level in quiet. As time compression increased, sentence perception scores decreased significantly. Compared to a natural (no-stress) condition, selectively stressed words significantly improved the perceptual scores of these older listeners. Long sentences yielded the worst scores under all time-altered conditions. Interestingly, there was a noticeable positive effect of selective word stress at 20% time compression. This pattern of results suggests that a combination of time compression and selective word stress is more effective for understanding speech in older listeners than using the time-expanded condition alone.

  8. Pulse-rate discrimination by cochlear-implant and normal-hearing listeners with and without binaural cues

    PubMed Central

    Carlyon, Robert P.; Long, Christopher J.; Deeks, John M.

    2008-01-01

    Experiment 1 measured rate discrimination of electric pulse trains by bilateral cochlear implant (CI) users, for standard rates of 100, 200, and 300 pps. In the diotic condition the pulses were presented simultaneously to the two ears. Consistent with previous results with unilateral stimulation, performance deteriorated at higher standard rates. In the signal interval of each trial in the dichotic condition, the standard rate was presented to the left ear and the (higher) signal rate was presented to the right ear; the non-signal intervals were the same as in the diotic condition. Performance in the dichotic condition was better for some listeners than in the diotic condition for standard rates of 100 and 200 pps, but not at 300 pps. It is concluded that the deterioration in rate discrimination observed for CI users at high rates cannot be alleviated by the introduction of a binaural cue, and is unlikely to be limited solely by central pitch processes. Experiment 2 performed an analogous experiment in which 300-pps acoustic pulse trains were bandpass filtered (3900-5400 Hz) and presented in a noise background to normal-hearing listeners. Unlike the results of experiment 1, performance was superior in the dichotic than in the diotic condition. PMID:18397032

  9. Sentence intelligibility during segmental interruption and masking by speech-modulated noise: Effects of age and hearing loss

    PubMed Central

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2015-01-01

    This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group who received spectral shaping that matched the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls who received identical spectral shaping suggests that this procedure may reduce wideband temporal modulation cues due to frequency-specific amplification that affected high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436

  10. Spectral Ripple Discrimination in Normal-Hearing Infants.

    PubMed

    Horn, David L; Won, Jong Ho; Rubinstein, Jay T; Werner, Lynne A

    Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally-modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. However, in experiment 2 infant performance was significantly poorer than adults at 20 d
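
    The rippled-spectrum stimulus described above can be written down directly: the level at each frequency follows a sinusoid on a log-frequency axis, and the listener must detect a 90° shift of its phase. A sketch, where the reference frequency f_ref is a hypothetical band edge rather than a value from the study:

```python
import math

def ripple_level_db(freq_hz, ripples_per_octave, depth_db,
                    phase_rad=0.0, f_ref=350.0):
    """Level (dB re the flat carrier) of a sinusoidally rippled amplitude
    spectrum: depth_db peak-to-trough, sinusoidal in log2(frequency)."""
    octaves = math.log2(freq_hz / f_ref)
    return (depth_db / 2.0) * math.sin(
        2.0 * math.pi * ripples_per_octave * octaves + phase_rad)
```

    At higher ripple densities the peaks sit closer together on the log-frequency axis, so detecting the 90° phase shift demands finer frequency resolution, which is what makes the highest discriminable density a useful threshold.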

  11. Hearing thresholds, tinnitus, and headphone listening habits in nine-year-old children.

    PubMed

    Båsjö, Sara; Möller, Claes; Widén, Stephen; Jutengren, Göran; Kähäri, Kim

    2016-10-01

    Investigate hearing function and headphone listening habits in nine-year-old Swedish children. A cross-sectional study was conducted and included otoscopy, tympanometry, pure-tone audiometry, and spontaneous otoacoustic emissions (SOAE). A questionnaire was used to evaluate headphone listening habits, tinnitus, and hyperacusis. A total of 415 children aged nine years. The prevalence of a hearing threshold ≥20 dB HL at one or several frequencies was 53%, and the hearing thresholds at 6 and 8 kHz were higher than those at the low and mid frequencies. SOAEs were observed in 35% of the children, and the prevalence of tinnitus was 5.3%. No significant relationship between SOAE and tinnitus was found. Pure-tone audiometry showed poorer hearing thresholds in children with tinnitus and in children who regularly listened with headphones. The present study of hearing, listening habits, and tinnitus in nine-year-old children is, to our knowledge, the largest study so far. The main findings were that hearing thresholds in the right ear were poorer in children who used headphones than in children who did not, which could be interpreted to mean that headphone listening may have negative consequences for children's hearing. Children with tinnitus showed poorer hearing thresholds than children without tinnitus.

  12. [The discrimination of mono-syllable words in noise in listeners with normal hearing].

    PubMed

    Yoshida, M; Sagara, T; Nagano, M; Korenaga, K; Makishima, K

    1992-02-01

    The discrimination of mono-syllable words (67S word list) pronounced by a male and a female speaker was investigated in noise in 39 normal-hearing subjects. The subjects listened to the test words at a constant level of 62 dB together with white or weighted noise in four S/N conditions. By processing the data with a logit transformation, S/N-discrimination curves were estimated for each combination of speech material and noise. Regardless of the type of noise, the discrimination scores for the female voice started to decrease gradually at a S/N ratio of +10 dB, and reached 10 to 20% at -10 dB. For the male voice in white noise, the discrimination curve was similar to those for the female voice. In contrast, the discrimination score for the male voice in weighted noise declined rapidly from a S/N ratio of +5 dB, and fell below 10% at -5 dB. The discrimination curves seem to be shaped by the interrelations between the spectrum of the speech material and that of the noise.
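
    The logit-transformation step can be reproduced in a few lines: transform proportion-correct scores to log odds, fit a straight line against S/N ratio, and back-transform to obtain the discrimination curve. A minimal sketch using ordinary least squares (the record does not specify the authors' exact weighting):

```python
import math

def logit(p):
    """Log odds of a proportion p in (0, 1)."""
    return math.log(p / (1.0 - p))

def fit_logit_line(snrs, scores):
    """Least-squares fit of logit(score) = a * snr + b; returns (a, b)."""
    ys = [logit(p) for p in scores]
    n = len(snrs)
    mx = sum(snrs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(snrs, ys))
         / sum((x - mx) ** 2 for x in snrs))
    return a, my - a * mx

def predicted_score(snr, a, b):
    """Back-transform the fitted line to a proportion-correct curve."""
    return 1.0 / (1.0 + math.exp(-(a * snr + b)))
```

    The S/N ratio at which predicted_score crosses 0.5 is simply -b/a, a convenient one-number summary of each speech-in-noise curve.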

  13. Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training

    PubMed Central

    Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William

    2015-01-01

    Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing
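
    The d' thresholds used to adjust noise levels during training come from standard signal detection theory: d' is the separation between the z-transformed hit and false-alarm rates. A minimal sketch (in practice, rates of exactly 0 or 1 must be clamped before the transform):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

    For example, 84% hits against 16% false alarms gives a d' close to 2, a comfortably detectable contrast; tracking a fixed d' lets noise level adapt to each listener.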

  14. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    PubMed

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors: two monaural subvectors to model better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. The model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r^2) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
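
    The two reported fit statistics can be recomputed from paired model predictions and measured SRTs; a sketch of both, taking r^2 as the square of the Pearson correlation as the abstract states:

```python
def r_squared(predictions, observations):
    """Square of the Pearson correlation between predictions and data."""
    n = len(predictions)
    mp = sum(predictions) / n
    mo = sum(observations) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(predictions, observations))
    var_p = sum((p - mp) ** 2 for p in predictions)
    var_o = sum((o - mo) ** 2 for o in observations)
    return cov * cov / (var_p * var_o)

def mean_absolute_error(predictions, observations):
    """Mean absolute deviation, in the units of the data (here dB)."""
    return (sum(abs(p - o) for p, o in zip(predictions, observations))
            / len(predictions))
```

    Reporting both is informative because r^2 rewards predictions that track the pattern across conditions, while the mean absolute error penalizes any constant offset that r^2 ignores.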

  15. The role of auditory and cognitive factors in understanding speech in noise by normal-hearing older listeners

    PubMed Central

    Schoof, Tim; Rosen, Stuart

    2014-01-01

    Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266

  16. Sequential stream segregation in normally-hearing and cochlear-implant listeners

    PubMed Central

    Tejani, Viral D.; Schvartz-Leyzac, Kara C.; Chatterjee, Monita

    2017-01-01

    Sequential stream segregation by normal hearing (NH) and cochlear implant (CI) listeners was investigated using an irregular rhythm detection (IRD) task. Pure tones and narrowband noises of different bandwidths were presented monaurally to older and younger NH listeners via headphones. For CI users, stimuli were delivered as pure tones via soundfield and via direct electrical stimulation. Results confirmed that tonal pitch is not essential for stream segregation by NH listeners and that aging does not reduce NH listeners' stream segregation. CI listeners' stream segregation was significantly poorer than NH listeners' with pure tone stimuli. With direct stimulation, however, CI listeners showed significantly stronger stream segregation, with a mean normalized pattern similar to NH listeners, implying that the CI speech processors possibly degraded acoustic cues. CI listeners' performance on an electrode discrimination task indicated that cues that are salient enough to make two electrodes highly discriminable may not be sufficiently salient for stream segregation, and that gap detection/discrimination, which must depend on perceptual electrode differences, did not play a role in the IRD task. Although the IRD task does not encompass all aspects of full stream segregation, these results suggest that some CI listeners may demonstrate aspects of stream segregation. PMID:28147600

  17. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners.

    PubMed

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information, and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g. intensity properties) may also contribute. This study examined the effects of cooperating or conflicting acoustic cues on speech intonation recognition by adult CI and normal hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation that signifies question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words, where F0 and intensity properties were systematically manipulated. The stimulus set was comprised of tokens whose acoustic cues (i.e. F0 contour and intensity patterns) were either cooperating or conflicting. Subjects identified if each stimulus is a 'statement' or a 'question' in a single-interval, 2-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared under cooperating and conflicting conditions, between the subject groups (CI vs. NH), and under full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners' intonation recognition was enhanced by cooperating F0 contour and intensity cues, but was adversely affected by these cues being conflicting. On the other hand, with full-spectrum stimuli, NH listeners' intonation recognition was not affected by cues being cooperating or conflicting. The effects of cues being cooperating or conflicting were comparable between the CI group and NH listeners with spectrally degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. Copyright (C) 2009 S. Karger AG, Basel.

  18. Cochlear compression: perceptual measures and implications for normal and impaired hearing.

    PubMed

    Oxenham, Andrew J; Bacon, Sid P

    2003-10-01

    This article provides a review of recent developments in our understanding of how cochlear nonlinearity affects sound perception and how a loss of the nonlinearity associated with cochlear hearing impairment changes the way sounds are perceived. The response of the healthy mammalian basilar membrane (BM) to sound is sharply tuned, highly nonlinear, and compressive. Damage to the outer hair cells (OHCs) results in changes to all three attributes: in the case of total OHC loss, the response of the BM becomes broadly tuned and linear. Many of the differences in auditory perception and performance between normal-hearing and hearing-impaired listeners can be explained in terms of these changes in BM response. Effects that can be accounted for in this way include poorer audiometric thresholds, loudness recruitment, reduced frequency selectivity, and changes in apparent temporal processing. All these effects can influence the ability of hearing-impaired listeners to perceive speech, especially in complex acoustic backgrounds. A number of behavioral methods have been proposed to estimate cochlear nonlinearity in individual listeners. By separating the effects of cochlear nonlinearity from other aspects of hearing impairment, such methods may contribute towards identifying the different physiological mechanisms responsible for hearing loss in individual patients. This in turn may lead to more accurate diagnoses and more effective hearing-aid fitting for individual patients. A remaining challenge is to devise a behavioral measure that is sufficiently accurate and efficient to be used in a clinical setting.
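
    The compressive basilar-membrane response described here is often summarized as a piecewise-linear input-output function on dB axes: linear at low levels, then growing at a shallow slope above a knee point. A sketch with illustrative values (a compressive slope near 0.2 dB/dB is commonly cited; the knee level here is an assumption):

```python
def bm_output_db(input_db, knee_db=30.0, compressive_slope=0.2):
    """Piecewise-linear sketch of a healthy basilar-membrane I/O function:
    linear growth below knee_db, compressive growth (dB per dB) above it.
    Total outer-hair-cell loss corresponds to slope 1.0 (a linear response)."""
    if input_db <= knee_db:
        return input_db
    return knee_db + compressive_slope * (input_db - knee_db)
```

    On this sketch, a 50-dB rise in input above the knee raises the output by only 10 dB; losing that compression is what produces the abnormally steep loudness growth (recruitment) described in the review.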

  19. Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.

    PubMed

    Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I

    2017-06-01

    In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data from experienced hearing aid users with moderate hearing loss were recorded, wearing hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort seems to be a useful tool to map the exerted effort of the participants. In addition, the results indicate that a directional processing mode can reduce the listening effort in multitalker listening situations.

  20. Listening Skills Handbook.

    ERIC Educational Resources Information Center

    Decatur Public Schools District 61, IL.

    Defining listening as the active and conscious process of hearing, recognizing, and interpreting or comprehending language, this guide provides numerous activities to promote the listening skills of primary and intermediate grade students. Specifically, the activities described seek to develop (1) the ability of young students to listen…

  1. Methods of Improving Speech Intelligibility for Listeners with Hearing Resolution Deficit

    PubMed Central

    2012-01-01

    Abstract Methods developed for real-time time-scale modification (TSM) of the speech signal are presented. They are based on the non-uniform, speech-rate-dependent SOLA (Synchronous Overlap and Add) algorithm. The influence of the proposed methods on speech intelligibility was investigated for two separate groups of listeners, i.e. hearing-impaired children and elderly listeners. It was shown that for speech with an average rate equal to or higher than 6.48 vowels/s, all of the proposed methods have a statistically significant impact on the improvement of speech intelligibility for hearing-impaired children with reduced hearing resolution, and one of the proposed methods significantly improves comprehension of speech in the group of elderly listeners with reduced hearing resolution. Virtual slides http://www.diagnosticpathology.diagnomx.eu/vs/2065486371761991 PMID:23009662

  2. [Relationship between the Mandarin acceptable noise level and the personality traits in normal hearing adults].

    PubMed

    Wu, Dan; Chen, Jian-yong; Wang, Shuo; Zhang, Man-hua; Chen, Jing; Li, Yu-ling; Zhang, Hua

    2013-03-01

    To evaluate the relationship between the Mandarin acceptable noise level (ANL) and personality traits in normal-hearing adults. Eighty-five Mandarin speakers, aged 21 to 27, participated in this study. ANL materials and the Eysenck Personality Questionnaire (EPQ) were used to measure the acceptable noise level and the personality traits of normal-hearing subjects. SPSS 17.0 was used to analyze the results. ANL was (7.8 ± 2.9) dB in normal-hearing participants. The P and N scores of the EPQ were significantly correlated with ANL (r = 0.284 and 0.318, P < 0.01). No significant correlations were found between ANL and the E and L scores (r = -0.036 and -0.167, P > 0.05). Listeners with higher ANLs were more likely to be eccentric, hostile, aggressive, and unstable; no ANL differences were found between listeners differing in introversion-extraversion or lie scores.

  3. Talker Differences in Clear and Conversational Speech: Vowel Intelligibility for Older Adults with Hearing Loss

    ERIC Educational Resources Information Center

    Ferguson, Sarah Hargus

    2012-01-01

    Purpose: To establish the range of talker variability for vowel intelligibility in clear versus conversational speech for older adults with hearing loss and to determine whether talkers who produced a clear speech benefit for young listeners with normal hearing also did so for older adults with hearing loss. Method: Clear and conversational vowels…

  4. Hearing Handicap and Speech Recognition Correlate With Self-Reported Listening Effort and Fatigue.

    PubMed

    Alhanbali, Sara; Dawes, Piers; Lloyd, Simon; Munro, Kevin J

    To investigate the correlations between hearing handicap, speech recognition, listening effort, and fatigue. Eighty-four adults with hearing loss (65 to 85 years) completed three self-report questionnaires: the Fatigue Assessment Scale, the Effort Assessment Scale, and the Hearing Handicap Inventory for the Elderly. Audiometric assessment included pure-tone audiometry and speech recognition in noise. There was a significant positive correlation between handicap and fatigue (r = 0.39, p < 0.05) and between handicap and effort (r = 0.73, p < 0.05). There were significant (but lower) correlations between speech recognition and fatigue (r = 0.22, p < 0.05) or effort (r = 0.32, p < 0.05). There was no significant correlation between hearing level and fatigue or effort. Hearing handicap and speech recognition both correlate with self-reported listening effort and fatigue, which is consistent with a model of listening effort and fatigue where perceived difficulty is related to sustained effort and fatigue for unrewarding tasks over which the listener has low control. A clinical implication is that encouraging clients to recognize and focus on the pleasure and positive experiences of listening may result in greater satisfaction and benefit from hearing aid use.

  5. Social inclusion for children with hearing loss in listening and spoken Language early intervention: an exploratory study.

    PubMed

    Constantinescu-Sharpe, Gabriella; Phillips, Rebecca L; Davis, Aleisha; Dornan, Dimity; Hogan, Anthony

    2017-03-14

    Social inclusion is a common focus of listening and spoken language (LSL) early intervention for children with hearing loss. This exploratory study compared the social inclusion of young children with hearing loss educated using a listening and spoken language approach with population data. A framework for understanding the scope of social inclusion is presented in the Background. This framework guided the use of a shortened, modified version of the Longitudinal Study of Australian Children (LSAC) to measure two of the five facets of social inclusion ('education' and 'interacting with society and fulfilling social goals'). The survey was completed by parents of children with hearing loss aged 4-5 years who were educated using a LSL approach (n = 78; 37% who responded). These responses were compared to those obtained for typical hearing children in the LSAC dataset (n = 3265). Analyses revealed that most children with hearing loss had comparable outcomes to those with typical hearing on the 'education' and 'interacting with society and fulfilling social roles' facets of social inclusion. These exploratory findings are positive and warrant further investigation across all five facets of the framework to identify which factors influence social inclusion.

  6. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to

  7. Evaluation of Extended-wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2017-07-01

    for a transparent hearing protection device that could protect the hearing of normal-hearing listeners without degrading auditory situational...method, suggest that continuous noise protection is also comparable to conventional earplug devices. Behavioral testing on listeners with normal...associated with the extended-wear hearing aid could be adapted to provide long-term hearing protection for listeners with normal hearing with minimal

  8. Binaural Advantage for Younger and Older Adults with Normal Hearing

    ERIC Educational Resources Information Center

    Dubno, Judy R.; Ahlstrom, Jayne B.; Horwitz, Amy R.

    2008-01-01

    Purpose: Three experiments measured benefit of spatial separation, benefit of binaural listening, and masking-level differences (MLDs) to assess age-related differences in binaural advantage. Method: Participants were younger and older adults with normal hearing through 4.0 kHz. Experiment 1 compared spatial benefit with and without head shadow.…

  9. Effectiveness of alternative listening devices to conventional hearing aids for adults with hearing loss: a systematic review protocol

    PubMed Central

    Barker, Alex B; Xia, Jun

    2016-01-01

    Introduction Hearing loss is a major public health concern, affecting over 11 million people in the UK. While hearing aids are the most common clinical intervention for hearing loss, the majority of people that would benefit from using hearing aids do not take them up. Recent technological advances have led to a rapid increase of alternative listening devices to conventional hearing aids. These include hearing aids that can be customised using a smartphone, smartphone-based ‘hearing aid’ apps, personal sound amplification products and wireless hearing products. However, no systematic review has been published evaluating whether alternative listening devices are an effective management strategy for people with hearing loss. Methods and analysis The objective of this systematic review is to assess whether alternative listening devices are an effective intervention for adults with hearing loss. Methods are reported according to the Preferred Reporting Items for Systematic reviews and Meta-analyses Protocols (PRISMA-P) 2015 checklist. Retrospective or prospective studies, randomised controlled trials, non-randomised controlled trials, and before-after comparison studies will be eligible for inclusion. We will include studies with adult participants (≥18 years) with a mild or moderate hearing loss. The intervention should be an alternative listening device to a conventional hearing aid (comparison). Studies will be restricted to outcomes associated with the consequences of hearing loss. We will search relevant databases to identify published, completed but unpublished and ongoing trials. The overall quality of included evidence will be evaluated using the GRADE system, and meta-analysis performed if appropriate. Ethics and dissemination No ethical issues are foreseen. The findings will be reported at national and international conferences, primarily audiology, and ear, nose and throat, and in a peer-reviewed journal using the PRISMA guidelines. Review

  10. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized to a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm affected the impaired hearing profile of the model to approximate a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  11. The effect of hearing aid technologies on listening in an automobile.

    PubMed

    Wu, Yu-Hsiang; Stangl, Elizabeth; Bentler, Ruth A; Stanziola, Rachel W

    2013-06-01

    Communication while traveling in an automobile often is very difficult for hearing aid users. This is because the automobile/road noise level is usually high, and listeners/drivers often do not have access to visual cues. Since the talker of interest usually is not located in front of the listener/driver, conventional directional processing that places the directivity beam toward the listener's front may not be helpful and, in fact, could have a negative impact on speech recognition (when compared to omnidirectional processing). Recently, technologies have become available in commercial hearing aids that are designed to improve speech recognition and/or listening effort in noisy conditions where talkers are located behind or beside the listener. These technologies include (1) a directional microphone system that uses a backward-facing directivity pattern (Back-DIR processing), (2) a technology that transmits audio signals from the ear with the better signal-to-noise ratio (SNR) to the ear with the poorer SNR (Side-Transmission processing), and (3) a signal processing scheme that suppresses the noise at the ear with the poorer SNR (Side-Suppression processing). The purpose of the current study was to determine the effect of (1) conventional directional microphones and (2) newer signal processing schemes (Back-DIR, Side-Transmission, and Side-Suppression) on listener's speech recognition performance and preference for communication in a traveling automobile. A single-blinded, repeated-measures design was used. Twenty-five adults with bilateral symmetrical sensorineural hearing loss aged 44 through 84 yr participated in the study. The automobile/road noise and sentences of the Connected Speech Test (CST) were recorded through hearing aids in a standard van moving at a speed of 70 mph on a paved highway. The hearing aids were programmed to omnidirectional microphone, conventional adaptive directional microphone, and the three newer schemes. 
CST sentences were presented

  12. Hearing risk associated with the usage of personal listening devices among urban high school students in Malaysia.

    PubMed

    Sulaiman, A H; Seluakumaran, K; Husain, R

    2013-08-01

    To investigate listening habits and hearing risks associated with the use of personal listening devices among urban high school students in Malaysia. Cross-sectional, descriptive study. In total, 177 personal listening device users (13-16 years old) were interviewed to elicit their listening habits (e.g. listening duration, volume setting) and symptoms of hearing loss. Their listening levels were also determined by asking them to set their usual listening volume on an Apple iPod™ playing a pre-selected song. The iPod's sound output was measured with an artificial ear connected to a sound level meter. Subjects also underwent pure tone audiometry to ascertain their hearing thresholds at standard frequencies (0.5-8 kHz) and extended high frequencies (9-16 kHz). The mean measured listening level and listening duration for all subjects were 72.2 dBA and 1.2 h/day, respectively. Their self-reported listening levels were highly correlated with the measured levels (P < 0.001). Subjects who listened at higher volumes also tended to listen for longer durations (P = 0.012). Male subjects listened at a significantly higher volume than female subjects (P = 0.008). When sound exposure levels were compared with the recommended occupational noise exposure limit, 4.5% of subjects were found to be listening at levels which require mandatory hearing protection in the occupational setting. Hearing loss (≥25 dB hearing level at one or more standard test frequencies) was detected in 7.3% of subjects. Subjects' sound exposure levels from the devices were positively correlated with their hearing thresholds at two of the extended high frequencies (11.2 and 14 kHz), which could indicate an early stage of noise-induced hearing loss. Although the average high school student listened at safe levels, a small percentage of listeners were exposed to harmful sound levels. Preventive measures are needed to avoid permanent hearing damage in high-risk listeners. Copyright © 2013 The Royal Society
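    The comparison with the occupational exposure limit above can be made concrete with a small dose calculation. The sketch below assumes an 85 dBA / 8 h criterion with a 3 dB exchange rate (a common occupational-style rule; the study's exact criterion is not stated here) and applies it to the reported mean listening habits:

```python
def permissible_hours(level_dba, criterion=85.0, exchange_db=3.0, base_hours=8.0):
    """Permissible daily exposure time for a given level in dBA.

    Assumes an 85 dBA / 8 h criterion with a 3 dB exchange rate
    (NIOSH-style); every 3 dB above the criterion halves the
    permissible time. These defaults are illustrative, not the
    study's own parameters.
    """
    return base_hours / (2.0 ** ((level_dba - criterion) / exchange_db))

def daily_dose_percent(level_dba, hours):
    """Actual daily exposure as a percentage of the permissible dose."""
    return 100.0 * hours / permissible_hours(level_dba)

# Mean listening habits reported in the study: 72.2 dBA for 1.2 h/day.
# Under these assumptions the average listener is well below a 100% dose,
# consistent with the study's conclusion that average use was safe.
mean_dose = daily_dose_percent(72.2, 1.2)
```

    The same function shows why a small minority can still be at risk: at 100 dBA the permissible time drops to a quarter of an hour per day.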

  13. Continuous multiword recognition performance of young and elderly listeners in ambient noise

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi

    2005-09-01

    Hearing threshold shift due to aging is known to be a dominant factor in degrading speech recognition performance in noisy conditions. On the other hand, the cognitive factors of aging that relate to speech recognition performance under various speech-to-noise conditions are not well established. In this study, two kinds of speech tests were performed to examine how working memory load relates to speech recognition performance. One was a word-recognition test with high-familiarity, four-syllable Japanese words (single-word test). In this test, each word was presented to listeners, who were asked to write the word down on paper with ample time to answer. In the other test, five words were presented in succession, and listeners were asked to write them down only after all five words had been presented (multiword test). Both tests were conducted at various speech-to-noise ratios under 50-dBA Hoth spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to scores on both tests; (2) scores on the single-word test are well correlated with those on the multiword test; and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in conditions where scores on the single-word test reach their ceiling.

  14. Impact of stimulus-related factors and hearing impairment on listening effort as indicated by pupil dilation.

    PubMed

    Ohlenforst, Barbara; Zekveld, Adriana A; Lunner, Thomas; Wendt, Dorothea; Naylor, Graham; Wang, Yang; Versfeld, Niek J; Kramer, Sophia E

    2017-08-01

    Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing the SNR generally results in declining PPDs, indicating reduced effort. However, the decline in PPD across SNRs has been observed to be less pronounced for hearing-impaired (HI) than for normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs than the presence of a fluctuating or stationary background noise. The aim of the present study was to examine the interplay between hearing status, a broad range of SNRs corresponding to sentence recognition performance varying from 0 to 100% correct, and different masker types (stationary noise and a single-talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB). Correct sentence recognition scores and pupil responses were recorded during stimulus presentation. With a stationary masker, NH listeners showed maximum PPD across a relatively narrow range of low SNRs, while HI listeners showed relatively large PPDs across a wide range of ecological SNRs. With the single-talker masker, maximum PPD was observed in the mid-range of SNRs, around 50% correct sentence recognition performance, while smaller PPDs were observed at lower and higher SNRs. Mixed-model ANOVAs revealed significant interactions between hearing status and SNR on the PPD for both masker types. Our data show a different pattern of PPDs across SNRs between groups, which indicates that listening, and the allocation of effort during listening in daily-life environments, may differ between NH and HI listeners. Copyright © 2017
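    The SNR values quoted in abstracts like the one above follow the standard decibel definition. As a reminder, a minimal sketch (the waveforms here are illustrative, not the study's stimuli):

```python
import math

def rms(x):
    """Root-mean-square level of a waveform given as a sequence of samples."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, from the RMS levels of the two waveforms."""
    return 20.0 * math.log10(rms(signal) / rms(noise))

# Illustrative example: a noise waveform with one tenth of the signal's
# RMS level corresponds to an SNR of +20 dB; swapping the roles gives -20 dB.
signal = [math.sin(2.0 * math.pi * 0.01 * i) for i in range(1000)]
noise = [0.1 * v for v in signal]
```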

  15. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  16. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  17. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to tax binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  18. Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments

    PubMed Central

    Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.

    2012-01-01

    threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients; (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing; and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225

  19. Fitting and verification of frequency modulation systems on children with normal hearing.

    PubMed

    Schafer, Erin C; Bryant, Danielle; Sanders, Katie; Baldus, Nicole; Algier, Katherine; Lewis, Audrey; Traber, Jordan; Layden, Paige; Amin, Aneeqa

    2014-06-01

    Several recent investigations support the use of frequency modulation (FM) systems in children with normal hearing and auditory processing or listening disorders such as those diagnosed with auditory processing disorders, autism spectrum disorders, attention-deficit hyperactivity disorder, Friedreich ataxia, and dyslexia. The American Academy of Audiology (AAA) published suggested procedures, but these guidelines do not cite research evidence to support the validity of the recommended procedures for fitting and verifying nonoccluding open-ear FM systems on children with normal hearing. Documenting the validity of these fitting procedures is critical to maximize the potential FM-system benefit in the above-mentioned populations of children with normal hearing and those with auditory-listening problems. The primary goal of this investigation was to determine the validity of the AAA real-ear approach to fitting FM systems on children with normal hearing. The secondary goal of this study was to examine speech-recognition performance in noise and loudness ratings without and with FM systems in children with normal hearing sensitivity. A two-group, cross-sectional design was used in the present study. Twenty-six typically functioning children, ages 5-12 yr, with normal hearing sensitivity participated in the study. Participants used a nonoccluding open-ear FM receiver during laboratory-based testing. Participants completed three laboratory tests: (1) real-ear measures, (2) speech recognition performance in noise, and (3) loudness ratings. Four real-ear measures were conducted to (1) verify that measured output met prescribed-gain targets across the 1000-4000 Hz frequency range for speech stimuli, (2) confirm that the FM-receiver volume did not exceed predicted uncomfortable loudness levels, and (3 and 4) measure changes to the real-ear unaided response when placing the FM receiver in the child's ear. After completion of the fitting, speech recognition in noise at a -5

  20. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants.

    PubMed

    Spitzer, Stephanie; Liss, Julie; Spahr, Tony; Dorman, Michael; Lansford, Kaitlin

    2009-06-01

    Fundamental frequency (F0) variation is one of a number of acoustic cues normal hearing listeners use for guiding lexical segmentation of degraded speech. This study examined whether F0 contour facilitates lexical segmentation by listeners fitted with cochlear implants (CIs). Lexical boundary error patterns elicited under unaltered and flattened F0 conditions were compared across three groups: listeners with conventional CI, listeners with CI and preserved low-frequency acoustic hearing, and normal hearing listeners subjected to CI simulations. Results indicate that all groups attended to syllabic stress cues to guide lexical segmentation, and that F0 contours facilitated performance for listeners with low-frequency hearing.

  1. Effectiveness of alternative listening devices to conventional hearing aids for adults with hearing loss: a systematic review protocol.

    PubMed

    Maidment, David W; Barker, Alex B; Xia, Jun; Ferguson, Melanie A

    2016-10-27

    Hearing loss is a major public health concern, affecting over 11 million people in the UK. While hearing aids are the most common clinical intervention for hearing loss, the majority of people that would benefit from using hearing aids do not take them up. Recent technological advances have led to a rapid increase of alternative listening devices to conventional hearing aids. These include hearing aids that can be customised using a smartphone, smartphone-based 'hearing aid' apps, personal sound amplification products and wireless hearing products. However, no systematic review has been published evaluating whether alternative listening devices are an effective management strategy for people with hearing loss. The objective of this systematic review is to assess whether alternative listening devices are an effective intervention for adults with hearing loss. Methods are reported according to the Preferred Reporting Items for Systematic reviews and Meta-analyses Protocols (PRISMA-P) 2015 checklist. Retrospective or prospective studies, randomised controlled trials, non-randomised controlled trials, and before-after comparison studies will be eligible for inclusion. We will include studies with adult participants (≥18 years) with a mild or moderate hearing loss. The intervention should be an alternative listening device to a conventional hearing aid (comparison). Studies will be restricted to outcomes associated with the consequences of hearing loss. We will search relevant databases to identify published, completed but unpublished and ongoing trials. The overall quality of included evidence will be evaluated using the GRADE system, and meta-analysis performed if appropriate. No ethical issues are foreseen. The findings will be reported at national and international conferences, primarily audiology, and ear, nose and throat, and in a peer-reviewed journal using the PRISMA guidelines. PROSPERO CRD4201502958. Published by the BMJ Publishing Group Limited. For

  2. Interactions between amplitude modulation and frequency modulation processing: Effects of age and hearing loss.

    PubMed

    Paraouty, Nihaad; Ewert, Stephan D; Wallaert, Nicolas; Lorenzi, Christian

    2016-07-01

    Frequency modulation (FM) and amplitude modulation (AM) detection thresholds were measured for a 500-Hz carrier frequency and a 5-Hz modulation rate. For AM detection, FM at the same rate as the AM was superimposed with varying FM depth. For FM detection, AM at the same rate was superimposed with varying AM depth. The target stimuli always contained both amplitude and frequency modulations, while the standard stimuli only contained the interfering modulation. Young and older normal-hearing listeners, as well as older listeners with mild-to-moderate sensorineural hearing loss were tested. For all groups, AM and FM detection thresholds were degraded in the presence of the interfering modulation. AM detection with and without interfering FM was hardly affected by either age or hearing loss. While aging had an overall detrimental effect on FM detection with and without interfering AM, there was a trend that hearing loss further impaired FM detection in the presence of AM. Several models using optimal combination of temporal-envelope cues at the outputs of off-frequency filters were tested. The interfering effects could only be predicted for hearing-impaired listeners. This indirectly supports the idea that, in addition to envelope cues resulting from FM-to-AM conversion, normal-hearing listeners use temporal fine-structure cues for FM detection.
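    The combined target stimuli described above, a single carrier bearing both AM and FM at the same rate, can be sketched in a few lines. The parameter names and default values here are illustrative, not the study's exact settings (the study used a 500 Hz carrier and a 5 Hz modulation rate, which the defaults mirror):

```python
import math

def am_fm_tone(fc=500.0, fm=5.0, m_am=0.2, beta=1.0, fs=16000, dur=0.2):
    """Sinusoidal carrier with superimposed AM and FM at the same rate.

    fc   : carrier frequency (Hz)
    fm   : shared AM/FM modulation rate (Hz)
    m_am : AM depth (0..1)
    beta : FM modulation index (phase deviation in radians)
    Returns a list of samples at sampling rate fs lasting dur seconds.
    """
    n = int(fs * dur)
    samples = []
    for i in range(n):
        t = i / fs
        # AM: slow sinusoidal envelope around unity gain.
        env = 1.0 + m_am * math.sin(2.0 * math.pi * fm * t)
        # FM: sinusoidal phase modulation of the carrier.
        phase = 2.0 * math.pi * fc * t + beta * math.sin(2.0 * math.pi * fm * t)
        samples.append(env * math.sin(phase))
    return samples

tone = am_fm_tone()
```

    Setting m_am or beta to zero recovers a pure FM or pure AM target, matching the standard/target contrast used in such detection tasks.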

  3. Analysis of subtle auditory dysfunctions in young normal-hearing subjects affected by Williams syndrome.

    PubMed

    Paglialonga, Alessia; Barozzi, Stefania; Brambilla, Daniele; Soi, Daniela; Cesarani, Antonio; Spreafico, Emanuela; Tognola, Gabriella

    2014-11-01

    To assess if young subjects affected by Williams syndrome (WS) with normal middle ear functionality and normal hearing thresholds might have subtle auditory dysfunctions that could be detected by using clinically available measurements. Otoscopy, acoustic reflexes, tympanometry, pure-tone audiometry, and distortion product otoacoustic emissions (DPOAEs) were measured in a group of 13 WS subjects and in 13 age-matched, typically developing control subjects. Participants were required to have normal otoscopy, A-type tympanogram, normal acoustic reflex thresholds, and pure-tone thresholds≤15 dB HL at 0.5, 1, and 2 kHz bilaterally. To limit the possible influence of middle ear status on DPOAE recordings, we analyzed only data from ears with pure-tone thresholds≤15 dB HL across all octave frequencies in the range 0.25-8 kHz, middle ear pressure (MEP)>-50 daPa, static compliance (SC) in the range 0.3-1.2 cm3, and ear canal volume (ECV) in the range 0.2-2 ml, and we performed analysis of covariance to remove the possible effects of middle ear variables on DPOAEs. No differences in mean hearing thresholds, SC, ECV, and gradient were observed between the two groups, whereas significantly lower MEP values were found in WS subjects as well as significantly decreased DPOAEs up to 3.2 kHz after adjusting for differences in middle ear status. Results revealed that WS subjects with normal hearing thresholds (≤15 dB HL) and normal middle ear functionality (MEP>-50 daPa, SC in the range 0.3-1.2 cm3, ECV in the range 0.2-2 ml) might have subtle auditory dysfunctions that can be detected by using clinically available methods. Overall, this study points out the importance of using otoacoustic emissions as a complement to routine audiological examinations in individuals with WS to detect, before the onset of hearing loss, possible subtle auditory dysfunctions so that patients can be early identified, better monitored, and promptly treated. Copyright © 2014 Elsevier Ireland Ltd

  4. Inherent envelope fluctuations in forward maskers: Effects of masker-probe delay for listeners with normal and impaired hearing

    PubMed Central

    Svec, Adam; Dubno, Judy R.; Nelson, Peggy B.

    2016-01-01

    Forward-masked thresholds increase as the magnitude of inherent masker envelope fluctuations increases for both normal-hearing (NH) and hearing-impaired (HI) adults for a short masker-probe delay (25 ms). The slope of the recovery from forward masking is shallower for HI than for NH listeners due to reduced cochlear nonlinearities. However, effects of hearing loss on additional masking due to inherent envelope fluctuations across masker-probe delays remain unknown. The current study assessed effects of hearing loss on the slope and amount of recovery from forward maskers that varied in inherent envelope fluctuations. Forward-masked thresholds were measured at 2000 and 4000 Hz, for masker-probe delays of 25, 50, and 75 ms, for NH and HI adults. Four maskers at each center frequency varied in inherent envelope fluctuations: Gaussian noise (GN) or low-fluctuation noise (LFN), with 1 or 1/3 equivalent rectangular bandwidths (ERBs). Results suggested that slopes of recovery from forward masking were shallower for HI than for NH listeners regardless of masker fluctuations. Additional masking due to inherent envelope fluctuations was greater for HI than for NH listeners at longer masker-probe delays, suggesting that inherent envelope fluctuations are more disruptive for HI than for NH listeners for a longer time course. PMID:27036255

  5. Effects of cooperating and conflicting cues on speech intonation recognition by cochlear implant users and normal hearing listeners

    PubMed Central

    Peng, Shu-Chen; Lu, Nelson; Chatterjee, Monita

    2009-01-01

    Cochlear implant (CI) recipients have only limited access to fundamental frequency (F0) information and thus exhibit deficits in speech intonation recognition. For speech intonation, F0 serves as the primary cue, and other potential acoustic cues (e.g., intensity properties) may also contribute. This study examined the effects of cooperating versus conflicting acoustic cues on speech intonation recognition by adult cochlear implant (CI) and normal-hearing (NH) listeners with full-spectrum and spectrally degraded speech stimuli. Identification of speech intonation signifying question and statement contrasts was measured in 13 CI recipients and 4 NH listeners, using resynthesized bi-syllabic words in which F0 and intensity properties were systematically manipulated. The stimulus set comprised tokens whose acoustic cues, i.e., F0 contour and intensity patterns, were either cooperating or conflicting. Subjects identified whether each stimulus was a “statement” or a “question” in a single-interval, two-alternative forced-choice (2AFC) paradigm. Logistic models were fitted to the data, and estimated coefficients were compared between cooperating and conflicting conditions, between the subject groups (CI vs. NH), and between full-spectrum and spectrally degraded conditions for NH listeners. The results indicated that CI listeners’ intonation recognition was enhanced when F0 contour and intensity cues were cooperating but was adversely affected when these cues were conflicting. With full-spectrum stimuli, by contrast, NH listeners’ intonation recognition was not affected by whether cues were cooperating or conflicting. The effects of cooperating versus conflicting cues were comparable between the CI group and NH listeners with spectrally degraded stimuli. These findings suggest the importance of taking multiple acoustic sources for speech recognition into consideration in aural rehabilitation for CI recipients. PMID:19372651
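    The logistic modeling mentioned above can be illustrated with a minimal sketch. The response labels and coefficient values below are hypothetical, chosen for illustration rather than taken from the study's fitted models:

```python
import math

def logistic(x, b0, b1):
    """Probability of a 'question' response under a two-coefficient
    logistic model: p = 1 / (1 + exp(-(b0 + b1 * x))).

    x is a cue value (e.g., a scaled F0-contour measure); b0 is the
    intercept and b1 the slope. All values here are hypothetical.
    """
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# A steeper slope coefficient (b1) corresponds to sharper, more reliable
# categorization along the cue dimension, as might be expected when F0 and
# intensity cues cooperate rather than conflict.
p_cooperating = logistic(1.0, 0.0, 3.0)   # steep slope
p_conflicting = logistic(1.0, 0.0, 0.5)   # shallow slope
```

    Comparing estimated slopes between conditions, as the study does, amounts to comparing how quickly these curves transition from "statement" to "question" responses.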

  6. The effect of hearing aid technologies on listening in an automobile

    PubMed Central

    Wu, Yu-Hsiang; Stangl, Elizabeth; Bentler, Ruth A.; Stanziola, Rachel W.

    2014-01-01

    Background Communication while traveling in an automobile is often very difficult for hearing aid users. This is because the automobile/road noise level is usually high, and listeners/drivers often do not have access to visual cues. Since the talker of interest is usually not located in front of the driver/listener, conventional directional processing that places the directivity beam toward the listener’s front may not be helpful and, in fact, could have a negative impact on speech recognition (compared to omnidirectional processing). Recently, technologies have become available in commercial hearing aids that are designed to improve speech recognition and/or listening effort in noisy conditions where talkers are located behind or beside the listener. These technologies include (1) a directional microphone system that uses a backward-facing directivity pattern (Back-DIR processing), (2) a technology that transmits audio signals from the ear with the better signal-to-noise ratio (SNR) to the ear with the poorer SNR (Side-Transmission processing), and (3) a signal processing scheme that suppresses the noise at the ear with the poorer SNR (Side-Suppression processing). Purpose The purpose of the current study was to determine the effect of (1) conventional directional microphones and (2) newer signal processing schemes (Back-DIR, Side-Transmission, and Side-Suppression) on listeners’ speech recognition performance and preference for communication in a traveling automobile. Research Design A single-blinded, repeated-measures design was used. Study Sample Twenty-five adults with bilateral symmetrical sensorineural hearing loss, aged 44 through 84 years, participated in the study. Data Collection and Analysis The automobile/road noise and sentences of the Connected Speech Test (CST) were recorded through hearing aids in a standard van moving at a speed of 70 miles/hour on a paved highway. The hearing aids were programmed to omnidirectional microphone

  7. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    PubMed

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CIs, 12 to 33 months old, implanted under the age of 2.5 years (median age at implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed only in their stress pattern (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure of discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress

  8. Adaptation to novel foreign-accented speech and retention of benefit following training: Influence of aging and hearing loss

    PubMed Central

    Bieber, Rebecca E.; Gordon-Salant, Sandra

    2017-01-01

    Adaptation to speech with a foreign accent is possible through prior exposure to talkers with that same accent. For young listeners with normal hearing, short-term, accent-independent adaptation to a novel foreign accent is also facilitated through exposure training with multiple foreign accents. In the present study, accent-independent adaptation was examined in younger and older listeners with normal hearing and older listeners with hearing loss. Retention of training benefit was additionally explored. Stimuli for testing and training were HINT sentences recorded by talkers with nine distinctly different accents. Following two training sessions, all listener groups showed a similar increase in speech perception for a novel foreign accent. While no group retained this benefit at one week post-training, results of a secondary reaction time task revealed a decrease in reaction time following training, suggesting reduced listening effort. Examination of listeners' cognitive skills revealed a positive relationship between working memory and speech recognition ability. The present findings indicate that, while this no-feedback training paradigm for foreign-accented English successfully promotes short-term adaptation, it is not sufficient to facilitate perceptual learning with lasting benefits for younger or older listeners. PMID:28464671

  9. Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments.

    PubMed

    Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A

    2013-01-01

    difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.

  10. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast

  11. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    NASA Astrophysics Data System (ADS)

    Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica

    2005-12-01

    This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.

  12. Seeing the Talker's Face Improves Free Recall of Speech for Young Adults With Normal Hearing but Not Older Adults With Hearing Loss.

    PubMed

    Rudner, Mary; Mishra, Sushmit; Stenfelt, Stefan; Lunner, Thomas; Rönnberg, Jerker

    2016-06-01

    Seeing the talker's face improves speech understanding in noise, possibly releasing resources for cognitive processing. We investigated whether it improves free recall of spoken two-digit numbers. Twenty younger adults with normal hearing and 24 older adults with hearing loss listened to and subsequently recalled lists of 13 two-digit numbers, with alternating male and female talkers. Lists were presented in quiet as well as in stationary and speech-like noise at a signal-to-noise ratio giving approximately 90% intelligibility. Amplification compensated for loss of audibility. Seeing the talker's face improved free recall performance for the younger but not the older group. Poorer performance in background noise was contingent on individual differences in working memory capacity. The effect of seeing the talker's face did not differ in quiet and noise. We have argued that the absence of an effect of seeing the talker's face for older adults with hearing loss may be due to modulation of audiovisual integration mechanisms caused by an interaction between task demands and participant characteristics. In particular, we suggest that executive task demands and interindividual executive skills may play a key role in determining the benefit of seeing the talker's face during a speech-based cognitive task.

  13. Consonant-recognition patterns and self-assessment of hearing handicap.

    PubMed

    Hustedde, C G; Wiley, T L

    1991-12-01

    Two companion experiments were conducted with normal-hearing subjects and subjects with high-frequency sensorineural hearing loss. In Experiment 1, the validity of a self-assessment device of hearing handicap was evaluated in two groups of hearing-impaired listeners with significantly different consonant-recognition ability. Data for the Hearing Performance Inventory--Revised (Lamb, Owens, & Schubert, 1983) did not reveal differences in self-perceived handicap between the two groups of hearing-impaired listeners; the inventory was, however, sensitive to perceived differences in hearing abilities between listeners who did and did not have a hearing loss. Experiment 2 aimed to evaluate the consonant error patterns that accounted for the observed group differences in consonant-recognition ability. Error patterns on the Nonsense-Syllable Test (NST) across the two subject groups differed in both degree and type of error. Listeners in the group with poorer NST performance consistently demonstrated greater difficulty with selected low-frequency and high-frequency syllables than did listeners in the group with better NST performance. Overall, the NST was sensitive to differences in consonant-recognition ability between normal-hearing and hearing-impaired listeners.

  14. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained

  15. Chosen Listening Levels for Music With and Without the Use of Hearing Aids.

    PubMed

    Croghan, Naomi B H; Swanberg, Anne M; Anderson, Melinda C; Arehart, Kathryn H

    2016-09-01

    The objective of this study was to describe chosen listening levels (CLLs) for recorded music for listeners with hearing loss in aided and unaided conditions. The study used a within-subject, repeated-measures design with 13 adult hearing-aid users. The music included rock and classical samples with different amounts of audio-industry compression limiting. CLL measurements were taken at ear level (i.e., at input to the hearing aid) and at the tympanic membrane. For aided listening, average CLLs were 69.3 dBA at the input to the hearing aid and 80.3 dBA at the tympanic membrane. For unaided listening, average CLLs were 76.9 dBA at the entrance to the ear canal and 77.1 dBA at the tympanic membrane. Although wide intersubject variability was observed, CLLs were not associated with audiometric thresholds. CLLs for rock music were higher than for classical music at the tympanic membrane, but no differences were observed between genres for ear-level CLLs. The amount of audio-industry compression had no significant effect on CLLs. By describing the levels of recorded music chosen by hearing-aid users, this study provides a basis for ecologically valid testing conditions in clinical and laboratory settings.

  16. [Subjective difficulties in young people related to extensive loud music listening].

    PubMed

    Budimcić, Milenko; Ignatović, Snezana; Zivić, Ljubica

    2010-01-01

    To the human ear, noise is every undesirable and valueless sound. In disco clubs, as in other places with loud music mostly attended by young people, the noise level sometimes exceeds 100 dB. As reported by numerous studies, a high noise level can induce subjective difficulties (ear buzzing, hearing loss, vertigo, palpitations, anxiety, high blood pressure, decreased concentration, impaired memory storage). The aim was to assess subjective difficulties occurring in young people when staying in places with a high noise level due to loud music (cafes, disco clubs, rock concerts), which can produce health problems, in association with demographic data, addictions, and personal lifestyle data. One goal was to find factors leading to subjective difficulties, which would be objectively studied in the second stage of the research and marked as early predictors of possible health problems. The study was conducted among 780 students of the Higher Healthcare School of Professional Studies in Belgrade. We used a questionnaire with 20 questions, divided into four categories: demographic data, case-history data, subjective problems, and addictions of the subjects. In the statistical data processing we used methods of descriptive and exploratory analysis, chi-square tests, correlation tests, and the Mantel-Haenszel odds ratio. After listening to loud music, 54.0% of the examined subjects felt ear buzzing, and 4.6% had hearing damage. 80.4% of the subjects habitually visited places with loud music, mostly once a week, staying 2-3 hours per visit. The presence of subjective complaints was associated with loud-music listening and disco club visits. The major subjective difficulties could be predicted by loud-music listening and club visits (r = 0.918 and r = 0.857). The relative risk for presenting subjective difficulties was 1.599. According to the results of our study, over half of children

  17. Music perception by cochlear implant and normal hearing listeners as measured by the Montreal Battery for Evaluation of Amusia.

    PubMed

    Cooper, William B; Tobey, Emily; Loizou, Philipos C

    2008-08-01

    The purpose of this study was to explore the utility of using the Montreal Battery for Evaluation of Amusia (MBEA) test (Peretz et al., Ann N Y Acad Sci, 999, 58-75) to assess the music perception abilities of cochlear implant (CI) users. The MBEA was used to measure six different aspects of music perception (Scale, Contour, Interval, Rhythm, Meter, and Melody Memory) by CI users and by normal-hearing (NH) listeners presented with stimuli processed via CI simulations. The spectral resolution (number of channels) was varied in the CI simulations to determine (a) the number of channels (4, 6, 8, 12, and 16) needed to achieve the highest levels of music perception and (b) the number of channels needed to produce music perception performance comparable with that of CI users. CI users and NH listeners performed better on temporal-based tests (Rhythm and Meter) than on pitch-based tests (Scale, Contour, and Interval), a finding consistent with previous research. The CI users' scores on pitch-based tests were near chance. The CI users' (but not the NH listeners') scores for the Memory test, which integrates both temporal-based and pitch-based aspects of music, were significantly higher than their scores on the pitch-based Scale test and significantly lower than their scores on the temporal-based Rhythm and Meter tests. The data from NH listeners indicated that 16 channels of stimulation did not provide the highest music perception scores; performance was as good as that obtained with 12 channels. This outcome is consistent with other studies showing that NH listeners listening to vocoded speech are unable to use effectively the F0 cues present in the envelopes, even when the stimuli are processed with a large number (16) of channels. The CI user data most closely matched the 4- and 6-channel NH listener conditions for the pitch-based tasks. Consistent with previous studies, both CI users and NH listeners

  18. Listening to Sentences in Noise: Revealing Binaural Hearing Challenges in Patients with Schizophrenia.

    PubMed

    Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily

    2017-11-01

    This case-control study investigated binaural hearing performance in schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions using the Malay version of the hearing in noise test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and the spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate a statistically significant difference between the groups; the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking for the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
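    The group comparisons above are reported as Cohen's d values. A minimal sketch of that effect-size computation, using the pooled standard deviation for two independent groups, is shown below; the RTS values are hypothetical illustrations, not the study's data.

    ```python
    # Hedged sketch: Cohen's d for two independent groups with pooled SD.
    # The example threshold values (dB) are hypothetical, not the study's data.
    import math

    def cohens_d(group_a, group_b):
        """Cohen's d with pooled SD (Bessel-corrected group variances)."""
        na, nb = len(group_a), len(group_b)
        ma = sum(group_a) / na
        mb = sum(group_b) / nb
        va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
        pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
        return (ma - mb) / pooled_sd

    # Hypothetical reception thresholds for speech: patients vs. controls.
    patients = [-2.0, -1.5, -3.0, -1.0, -2.5, -0.5]
    controls = [-4.0, -3.5, -5.0, -4.5, -3.0, -4.2]
    d = cohens_d(patients, controls)
    print(f"Cohen's d = {d:.2f}")  # exceeds the d >= 0.8 "large" convention
    ```

    By the usual convention, d around 0.2 is small, 0.5 medium, and 0.8 or above large, which is how the abstract's d values map to its effect-size labels.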

  19. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms.

    PubMed

    Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger

    2018-01-01

    The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing, with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independently of any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better-ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge of the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing-talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.

  20. Listening Comprehension in Middle-Aged Adults.

    PubMed

    Sommers, Mitchell S

    2015-06-01

    The purpose of this summary is to examine changes in listening comprehension across the adult lifespan and to identify factors associated with individual differences in listening comprehension. In this article, the author reports on both cross-sectional and longitudinal changes in listening comprehension. Despite significant declines in both sensory and cognitive abilities, listening comprehension remains relatively unchanged in middle-aged listeners (between the ages of 40 and 60 years) compared with young listeners. These results are discussed with respect to possible compensatory factors that maintain listening comprehension despite impaired hearing and reduced cognitive capacities.

  1. Advantages of binaural amplification to acceptable noise level of directional hearing aid users.

    PubMed

    Kim, Ja-Hee; Lee, Jae Hee; Lee, Ho-Ki

    2014-06-01

    The goal of the present study was to examine whether Acceptable Noise Levels (ANLs) would be lower (indicating greater acceptance of noise) in the binaural than in the monaural listening condition, and whether the meaningfulness of background speech noise would affect ANLs for directional microphone hearing aid users. In addition, relationships between individual binaural benefits on ANLs and the individuals' demographic information were investigated. Fourteen hearing aid users (mean age, 64 years) participated in experimental testing. For the ANL calculation, listeners' most comfortable listening levels and background noise levels were measured. Using Korean ANL material, ANLs of all participants were evaluated under monaural and binaural amplification in a counterbalanced order. The ANLs were also compared across five types of competing speech noise, consisting of 1- through 8-talker background speech maskers. Seven young normal-hearing listeners (mean age, 27 years) completed the same measurements as pilot testing. The results demonstrated that directional hearing aid users accepted more noise (lower ANLs) with binaural amplification than with monaural amplification, regardless of the type of competing speech. When the background speech noise became more meaningful, hearing-impaired listeners accepted less noise (higher ANLs), revealing that the ANL depends on the intelligibility of the competing speech. Individual binaural advantages in ANLs were significantly greater for listeners with longer experience of hearing aids, but were not related to age or hearing thresholds. Binaural directional microphone processing allowed hearing aid users to accept more background noise, which may in turn improve listeners' hearing aid success. Informational masking substantially influenced background noise acceptance.
Given the significant association between ANLs and duration of hearing aid use, ANL measurement can be useful for
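    The ANL metric above is conventionally computed as the most comfortable listening level (MCL) minus the highest background noise level (BNL) the listener will accept, both in dB. A minimal sketch follows; the example levels are hypothetical, not the study's measurements.

    ```python
    # Hedged sketch of the ANL computation: ANL = MCL - BNL (dB).
    # The example levels below are hypothetical, not the study's data.
    def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
        """Lower ANL = greater acceptance of background noise."""
        return mcl_db - bnl_db

    # Hypothetical listener under monaural vs. binaural amplification:
    # binaural listening lets the listener tolerate a higher noise level.
    anl_monaural = acceptable_noise_level(mcl_db=65.0, bnl_db=55.0)  # 10 dB
    anl_binaural = acceptable_noise_level(mcl_db=65.0, bnl_db=58.0)  # 7 dB
    print(anl_monaural, anl_binaural)
    ```

    A lower binaural ANL, as in this toy example, corresponds to the study's finding that listeners accepted more noise with binaural amplification.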

  2. Hearing versus Listening: Attention to Speech and Its Role in Language Acquisition in Deaf Infants with Cochlear Implants

    PubMed Central

    Houston, Derek M.; Bergeson, Tonya R.

    2013-01-01

    The advent of cochlear implantation has provided thousands of deaf infants and children access to speech and the opportunity to learn spoken language. Whether or not deaf infants successfully learn spoken language after implantation may depend in part on the extent to which they listen to speech rather than just hear it. We explore this question by examining the role that attention to speech plays in early language development according to a prominent model of infant speech perception – Jusczyk’s WRAPSA model – and by reviewing the kinds of speech input that maintains normal-hearing infants’ attention. We then review recent findings suggesting that cochlear-implanted infants’ attention to speech is reduced compared to normal-hearing infants and that speech input to these infants differs from input to infants with normal hearing. Finally, we discuss possible roles attention to speech may play on deaf children’s language acquisition after cochlear implantation in light of these findings and predictions from Jusczyk’s WRAPSA model. PMID:24729634

  3. Effect of Energy Equalization on the Intelligibility of Speech in Fluctuating Background Interference for Listeners With Hearing Impairment

    PubMed Central

    D’Aquila, Laura A.; Desloge, Joseph G.; Braida, Louis D.

    2017-01-01

    The masking release (MR; i.e., better speech recognition in fluctuating compared with continuous noise backgrounds) that is evident for listeners with normal hearing (NH) is generally reduced or absent for listeners with sensorineural hearing impairment (HI). In this study, a real-time signal-processing technique was developed to improve MR in listeners with HI and offer insight into the mechanisms influencing the size of MR. This technique compares short-term and long-term estimates of energy, increases the level of short-term segments whose energy is below the average energy, and normalizes the overall energy of the processed signal to be equivalent to that of the original long-term estimate. This signal-processing algorithm was used to create two types of energy-equalized (EEQ) signals: EEQ1, which operated on the wideband speech plus noise signal, and EEQ4, which operated independently on each of four bands with equal logarithmic width. Consonant identification was tested in backgrounds of continuous and various types of fluctuating speech-shaped Gaussian noise including those with both regularly and irregularly spaced temporal fluctuations. Listeners with HI achieved similar scores for EEQ and the original (unprocessed) stimuli in continuous-noise backgrounds, while superior performance was obtained for the EEQ signals in fluctuating background noises that had regular temporal gaps but not for those with irregularly spaced fluctuations. Thus, in noise backgrounds with regularly spaced temporal fluctuations, the energy-normalized signals led to larger values of MR and higher intelligibility than obtained with unprocessed signals. PMID:28602128
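    The energy-equalization idea described above (boost short-term segments whose energy falls below the long-term average, then renormalize overall energy) can be sketched for the wideband case (EEQ1) as follows. The frame length, gain rule, and toy signal are illustrative assumptions, not the paper's exact parameters.

    ```python
    # Hedged sketch of wideband energy equalization (EEQ1-style processing).
    # Frame length and gain rule are illustrative, not the paper's parameters.
    import numpy as np

    def energy_equalize(x: np.ndarray, frame_len: int = 160) -> np.ndarray:
        n_frames = len(x) // frame_len
        x = x[: n_frames * frame_len].copy()
        frames = x.reshape(n_frames, frame_len)
        short_e = (frames ** 2).mean(axis=1)   # short-term energy per frame
        long_e = (x ** 2).mean()               # long-term average energy
        # Raise frames below the long-term average up to that average;
        # leave louder frames untouched.
        gains = np.where(short_e < long_e,
                         np.sqrt(long_e / np.maximum(short_e, 1e-12)),
                         1.0)
        y = (frames * gains[:, None]).reshape(-1)
        # Normalize so the processed signal keeps the input's overall energy.
        y *= np.sqrt((x ** 2).sum() / (y ** 2).sum())
        return y

    # Toy input: a loud burst followed by a quiet segment (a crude stand-in
    # for speech in a fluctuating masker's temporal gap).
    rng = np.random.default_rng(1)
    sig = np.concatenate([rng.normal(0, 1.0, 1600), rng.normal(0, 0.1, 1600)])
    out = energy_equalize(sig)
    ```

    After processing, the quiet segment is amplified relative to the loud one while total energy is unchanged, which is the property the abstract describes.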

  4. Teaching Listening Skills to Young Learners through "Listen and Do" Songs

    ERIC Educational Resources Information Center

    Sevik, Mustafa

    2012-01-01

    In this article, the author examines the use of songs to improve the listening skills of young learners. He first provides a theoretical discussion about listening skills and YLs, and about songs and YLs in general; second, he provides a sample lesson for what can be called "Listen and Do" songs for YLs at the beginning level. These are the songs…

  5. The detection of differences in the cues to distance by elderly hearing-impaired listeners

    PubMed Central

    Akeroyd, Michael A.; Blaschke, Julia; Gatehouse, Stuart

    2013-01-01

This experiment measured the capability of hearing-impaired individuals to discriminate differences in the cues to the distance of spoken sentences. The stimuli were generated synthetically, using a room-image procedure to calculate the direct sound and first 74 reflections for a source placed in a 7 × 9 m room, and then presenting each of those sounds individually through a circular array of 24 loudspeakers. Seventy-seven listeners participated, aged 22-83 years and with hearing levels from −5 to 59 dB HL. In conditions where a substantial change in overall level due to the inverse-square law was available as a cue, the elderly hearing-impaired listeners did not perform differently from the control groups. In other conditions where that cue was unavailable (so leaving the direct-to-reverberant relationship as a cue), either because the reverberant field dominated the direct sound or because the overall level had been artificially equalized, hearing-impaired listeners performed worse than controls. There were significant correlations with listeners’ self-reported distance capabilities as measured by the “SSQ” questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)]. The results demonstrate that hearing-impaired listeners show deficits in the ability to use some of the cues that signal auditory distance. PMID:17348530
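The overall-level cue mentioned here follows directly from the inverse-square law: the direct sound drops about 6 dB per doubling of distance. A minimal free-field illustration (not part of the study's stimulus generation):

```python
import math

def level_change_db(d_ref, d_new):
    """Inverse-square-law change in direct-sound level when a source moves
    from d_ref to d_new in a free field: doubling distance costs ~6 dB."""
    return -20 * math.log10(d_new / d_ref)
```

When this level change is artificially equalized out, only the direct-to-reverberant ratio remains as a distance cue, which is the condition where the hearing-impaired listeners fell behind.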

  6. Can a hearing education campaign for adolescents change their music listening behavior?

    PubMed

    Weichbold, Viktor; Zorowka, Patrick

    2007-03-01

This study looked at whether a hearing education campaign would have behavioral effects on the music listening practices of high school students. A total of 1757 students participated in a hearing education campaign. Before the campaign and one year afterward, they completed a survey asking about: (1) average frequency of discotheque attendance, (2) average duration of stay in the discotheque, (3) use of earplugs in discotheques, (4) frequency of regeneration breaks while at a discotheque, and (5) mean time per week spent listening to music through headphones. On questions (2), (3) and (5) no relevant post-campaign changes were reported. On question (1), students' answers indicated that the frequency of discotheque attendance had actually increased after the campaign. The only change in keeping with the purpose of the campaign was an increase in the number of regeneration breaks when at a discotheque. The effect of hearing education campaigns on music listening behavior is questioned. Additional efforts are suggested to encourage adolescents to adopt protective behaviors.

  7. Commentary: Listening Can Be Exhausting—Fatigue in Children and Adults With Hearing Loss

    PubMed Central

    Bess, Fred H.; Hornsby, Benjamin W. Y.

    2017-01-01

    Anecdotal reports of fatigue after sustained speech-processing demands are common among adults with hearing loss; however, systematic research examining hearing loss–related fatigue is limited, particularly with regard to fatigue among children with hearing loss (CHL). Many audiologists, educators, and parents have long suspected that CHL experience stress and fatigue as a result of the difficult listening demands they encounter throughout the day at school. Recent research in this area provides support for these intuitive suggestions. In this article, the authors provide a framework for understanding the construct of fatigue and its relation to hearing loss, particularly in children. Although empirical evidence is limited, preliminary data from recent studies suggest that some CHL experience significant fatigue—and such fatigue has the potential to compromise a child’s performance in the classroom. In this commentary, the authors discuss several aspects of fatigue including its importance, definitions, prevalence, consequences, and potential linkage to increased listening effort in persons with hearing loss. The authors also provide a brief synopsis of subjective and objective methods to quantify listening effort and fatigue. Finally, the authors suggest a common-sense approach for identification of fatigue in CHL; and, the authors briefly comment on the use of amplification as a management strategy for reducing hearing-related fatigue. PMID:25255399

  8. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise-Reduction Algorithms

    PubMed Central

    Schädler, Marc R.; Warzybok, Anna; Kollmeier, Birger

    2018-01-01

    The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than −20 dB could not be predicted. PMID:29692200

  9. Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort.

    PubMed

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    2016-01-01

    Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to result in further improvements to daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. For this participant-blinded, repeated, crossover trial 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants' speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings. 
The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily

  10. Measuring listening effort: driving simulator vs. simple dual-task paradigm

    PubMed Central

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

listening effort was not consistent with literature that evaluated younger (approximately 20-year-old) adults with normal hearing. Because of this, a follow-up study was conducted, in which the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. Contrary to findings with older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured in younger adults with normal hearing may not fully translate to older listeners with hearing impairment. PMID:25083599

  11. Microscopic prediction of speech recognition for listeners with normal hearing in noise using an auditory model.

    PubMed

    Jürgens, Tim; Brand, Thomas

    2009-11-01

This study compares the phoneme recognition performance in speech-shaped noise of a microscopic model for speech recognition with the performance of normal-hearing listeners. "Microscopic" is defined for this model in two ways. First, the speech recognition rate is predicted on a phoneme-by-phoneme basis. Second, microscopic modeling means that the signal waveforms to be recognized are processed by mimicking elementary parts of human auditory processing. The model is based on an approach by Holube and Kollmeier [J. Acoust. Soc. Am. 100, 1703-1716 (1996)] and consists of a psychoacoustically and physiologically motivated preprocessing stage and a simple dynamic-time-warp speech recognizer. The model is evaluated by presenting nonsense speech in a closed-set paradigm. Averaged phoneme recognition rates, specific phoneme recognition rates, and phoneme confusions are analyzed. The influence of different perceptual distance measures and of the model's a priori knowledge is investigated. The results show that human performance can be predicted by this model using an optimal detector, i.e., identical speech waveforms for both training of the recognizer and testing. The best model performance is obtained with distance measures that focus mainly on small perceptual distances and neglect outliers.
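The dynamic-time-warp recognizer mentioned above can be illustrated with a toy template matcher. This is a generic DTW sketch, not the model's actual feature set or perceptual distance measure:

```python
import numpy as np

def dtw_distance(a, b):
    """Minimal dynamic-time-warp distance between two feature sequences
    (one frame per row), using Euclidean frame distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # stretch template
                                 cost[i, j - 1],       # stretch token
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def recognize(token, templates):
    """Pick the template (e.g., a phoneme exemplar) with the smallest DTW
    distance -- the 'optimal detector' case arises when a template's
    features are identical to the test token's."""
    return min(templates, key=lambda k: dtw_distance(token, templates[k]))
```

In the optimal-detector condition the correct template yields distance zero by construction, which is why the model then matches human performance.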

  12. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  13. The music listening preferences and habits of youths in Singapore and its relation to leisure noise-induced hearing loss.

    PubMed

    Lee, Gary Jek Chong; Lim, Ming Yann; Kuan, Angeline Yi Wei; Teo, Joshua Han Wei; Tan, Hui Guang; Low, Wong Kein

    2014-02-01

Noise-induced hearing loss (NIHL) is a preventable condition, and much has been done to protect workers from it. However, thus far, little attention has been given to leisure NIHL. The purpose of this study is to determine the music listening preferences and habits among young people in Singapore that may put them at risk of developing leisure NIHL. In our study, the proportion of participants exposed to > 85 dBA for eight hours a day (time-weighted average) was calculated by taking into account the daily number of hours spent listening to music and by determining the average sound pressure level at which music was listened to. A total of 1,928 students were recruited from Temasek Polytechnic, Singapore. Of these, 16.4% listened to portable music players with a time-weighted average of > 85 dBA for 8 hours. On average, we found that male students were more likely to listen to music at louder volumes than female students (p < 0.001). We also found that the Malay students in our study listened to louder music than the Chinese students (p < 0.001). We found that up to one in six young persons in Singapore is at risk of developing leisure NIHL from music delivered via earphones. As additional risks due to exposure to leisure noise from other sources were not taken into account, the extent of the problem of leisure NIHL may be even greater. There is a compelling need for an effective leisure noise prevention program among young people in Singapore.
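The exposure criterion used here (an 8-hour time-weighted average above 85 dBA) can be computed with the standard equal-energy (3-dB exchange rate) rule. This is a generic dosimetry sketch, not the study's exact procedure:

```python
import math

def twa_8h(level_dba, hours):
    """Eight-hour equivalent exposure with a 3-dB exchange rate:
    halving the listening duration offsets a 3-dB level increase."""
    return level_dba + 10 * math.log10(hours / 8.0)

def at_risk(level_dba, hours, limit=85.0):
    """Flag listening habits whose 8-h TWA exceeds the 85-dBA criterion."""
    return twa_8h(level_dba, hours) > limit
```

For example, 3 hours per day at 90 dBA already exceeds the criterion, while 8 hours at 80 dBA does not.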

  14. Spectrotemporal modulation sensitivity for hearing-impaired listeners: dependence on carrier center frequency and the relationship to speech intelligibility.

    PubMed

    Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W

    2014-07-01

    Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.

  15. Spectrotemporal modulation sensitivity for hearing-impaired listeners: Dependence on carrier center frequency and the relationship to speech intelligibility

    PubMed Central

    Mehraei, Golbarg; Gallun, Frederick J.; Leek, Marjorie R.; Bernstein, Joshua G. W.

    2014-01-01

    Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4–32 Hz), spectral ripple density [0.5–4 cycles/octave (c/o)] and carrier center frequency (500–4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4–12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements. PMID:24993215

  16. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    PubMed

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  17. Language skills and phonological awareness in children with cochlear implants and normal hearing.

    PubMed

    Soleymani, Zahra; Mahmoodabadi, Najmeh; Nouri, Mina Mohammadi

    2016-04-01

Early auditory experience plays a major role in language acquisition. Linguistic and metalinguistic abilities of children aged 5-5.5 years with cochlear implants (CIs) were compared to age-matched children with normal hearing (NH) to investigate the effect of hearing on the development of these two skills. Eighteen children with NH and 18 children with CIs took part in the study. The Test of Language Development-Primary, third edition, was used to assess language skills; metalinguistic skill was assessed through phonological awareness (PA). Language skills and PA were then compared between groups. Hierarchical linear regression was conducted to determine whether the language skills explained the unique variance in PA. There were significant differences between children with NH and those with CIs for language skills and PA (p≤0.001). All language skills (semantics, syntax, listening, spoken language, organizing, and speaking) were uniquely predictive of PA outcome in the CI children. Linear combinations of listening and semantics and of listening, semantics, and syntax correlated significantly with PA. The results show that children with CIs may have trouble with language skills and PA. Listening, semantics, and syntax, among other skills, are significant indicators of the variance in PA for children with CIs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Evaluation of the olivocochlear efferent reflex strength in the susceptibility to temporary hearing deterioration after music exposure in young adults.

    PubMed

    Hannah, Keppler; Ingeborg, Dhooge; Leen, Maes; Annelies, Bockstael; Birgit, Philips; Freya, Swinnen; Bart, Vinck

    2014-01-01

The objective of the current study was to evaluate the predictive role of the olivocochlear efferent reflex strength in temporary hearing deterioration in young adults exposed to music. This was based on the fact that a noise-protective role of the medial olivocochlear (MOC) system was observed in animals and that efferent suppression (ES) measured using contralateral acoustic stimulation (CAS) of otoacoustic emissions (OAEs) is capable of probing the MOC system. Knowing an individual's susceptibility to cochlear damage after noise exposure would enhance preventive strategies for noise-induced hearing loss. The hearing status of 28 young adults was evaluated using pure-tone audiometry, transient evoked OAEs (TEOAEs) and distortion product OAEs (DPOAEs) before and after listening to music on an MP3 player for 1 h at an individually determined loud listening level. CAS of TEOAEs was measured before music exposure to determine the amount of ES. Regression analysis showed a distinct positive correlation between temporary hearing deterioration and the preferred gain setting of the MP3 player. However, no clear relationship between temporary hearing deterioration and the amount of ES was found. In conclusion, clinical measurement of ES, using CAS of TEOAEs, is not correlated with the amount of temporary hearing deterioration after 1 h of music exposure in young adults. However, it is possible that the temporary hearing deterioration in the current study was insufficient to activate the MOC system. More research regarding ES might provide more insight into the olivocochlear efferent pathways and their role in auditory functioning.

  19. Improving Mobile Phone Speech Recognition by Personalized Amplification: Application in People with Normal Hearing and Mild-to-Moderate Hearing Loss.

    PubMed

    Kam, Anna Chi Shan; Sung, John Ka Keung; Lee, Tan; Wong, Terence Ka Cheong; van Hasselt, Andrew

    In this study, the authors evaluated the effect of personalized amplification on mobile phone speech recognition in people with and without hearing loss. This prospective study used double-blind, within-subjects, repeated measures, controlled trials to evaluate the effectiveness of applying personalized amplification based on the hearing level captured on the mobile device. The personalized amplification settings were created using modified one-third gain targets. The participants in this study included 100 adults of age between 20 and 78 years (60 with age-adjusted normal hearing and 40 with hearing loss). The performance of the participants with personalized amplification and standard settings was compared using both subjective and speech-perception measures. Speech recognition was measured in quiet and in noise using Cantonese disyllabic words. Subjective ratings on the quality, clarity, and comfortableness of the mobile signals were measured with an 11-point visual analog scale. Subjective preferences of the settings were also obtained by a paired-comparison procedure. The personalized amplification application provided better speech recognition via the mobile phone both in quiet and in noise for people with hearing impairment (improved 8 to 10%) and people with normal hearing (improved 1 to 4%). The improvement in speech recognition was significantly better for people with hearing impairment. When the average device output level was matched, more participants preferred to have the individualized gain than not to have it. The personalized amplification application has the potential to improve speech recognition for people with mild-to-moderate hearing loss, as well as people with normal hearing, in particular when listening in noisy environments.

  20. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby

  1. Perception of Spectral Contrast by Hearing-Impaired Listeners

    ERIC Educational Resources Information Center

    Dreisbach, Laura E.; Leek, Marjorie R.; Lentz, Jennifer J.

    2005-01-01

    The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and…

  2. Analysis of Output Levels of an MP3 Player: Effects of Earphone Type, Music Genre, and Listening Duration.

    PubMed

    Shim, Hyunyong; Lee, Seungwan; Koo, Miseung; Kim, Jinsook

    2018-02-26

To help prevent noise-induced hearing loss in young adults caused by listening to music with personal listening devices, this study aimed to measure the output levels of an MP3 player and to identify preferred listening levels (PLLs) depending on earphone type, music genre, and listening duration. Twenty-two normal-hearing young adults (mean=18.82, standard deviation=0.57) participated. Each participant was asked to select his or her most preferred listening level when listening to Korean ballad or dance music with an earbud or an over-the-ear earphone for 30 or 60 minutes. One side of the earphone was connected to the participant's better ear and the other side was connected to a sound level meter via a 2- or 6-cc coupler. Depending on earphone type, music genre, and listening duration, A-weighted equivalent continuous sound levels (LAeq) and A-weighted maximum time-weighted sound levels were measured in dBA. Neither main nor interaction effects of the PLLs among the three factors were significant. Overall output levels of earbuds were about 10-12 dBA greater than those of over-the-ear earphones. The PLLs were 1.73 dBA greater for earbuds than for over-the-ear earphones. The average PLL for ballad was higher than for dance music. The PLLs at LAeq for both music genres were greatest at 0.5 kHz, followed by 1, 0.25, 2, 4, 0.125, and 8 kHz, in that order. The PLLs did not differ significantly when listening to Korean ballad or dance music as functions of earphone type, music genre, or listening duration. However, over-the-ear earphones seemed more suitable for preventing noise-induced hearing loss when listening to music, showing lower PLLs, possibly due to isolation from the background noise by covering the ears.
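LAeq is an energy average, so loud moments dominate it. A minimal illustration of the calculation over a series of short-term A-weighted levels (illustrative only; the study measured LAeq with a sound level meter):

```python
import math

def laeq(levels_dba):
    """Energy-average (LAeq) of short-term A-weighted levels in dBA:
    average the relative sound energies (10^(L/10)), not the dB values."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_dba) / len(levels_dba)
    return 10 * math.log10(mean_energy)
```

Note that laeq([80, 90]) is about 87.4 dBA, well above the 85-dBA arithmetic mean, because the 90-dBA half contributes ten times the energy of the 80-dBA half.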

  3. Characterizing Physician Listening Behavior During Hospitalist Handoffs using the HEAR Checklist

    PubMed Central

    Greenstein, Elizabeth A.; Arora, Vineet M.; Staisiunas, Paul G.; Banerjee, Stacy S.; Farnan, Jeanne M.

    2015-01-01

Background: The increasing fragmentation of healthcare has resulted in more patient handoffs. Many professional groups, including the Accreditation Council for Graduate Medical Education and the Society of Hospital Medicine, have made recommendations for safe and effective handoffs. Despite the two-way nature of handoff communication, the focus of these efforts has largely been on the person giving information. Objective: To observe and characterize the listening behaviors of handoff receivers during hospitalist handoffs. Design: Prospective observational study of shift change and service change handoffs on a non-teaching hospitalist service at a single academic tertiary care institution. Measurements: The “HEAR Checklist”, a novel tool created based on a review of effective listening behaviors, was used by third-party observers to characterize active and passive listening behaviors and interruptions during handoffs. Results: In 48 handoffs (25 shift change, 23 service change), active listening behaviors (e.g., read-back (17%), note-taking (23%), and reading own copy of the written signout (27%)) occurred less frequently than passive listening behaviors (e.g., affirmatory statements (56%), nodding (50%), and eye contact (58%)) (p < 0.01). Read-back occurred only 8 times (17%). In 11 handoffs (23%) receivers took notes. Almost all (98%) handoffs were interrupted at least once, most often by side conversations, pagers going off, or clinicians arriving. Handoffs with more patients, such as service change, were associated with more interruptions (r = 0.46, p < 0.01). Conclusions: Using the “HEAR Checklist”, we can characterize hospitalist handoff listening behaviors. While passive listening behaviors are common, active listening behaviors that promote memory retention are rare. Handoffs are often interrupted, most commonly by side conversations. Future handoff improvement efforts should focus on augmenting listening and minimizing interruptions. PMID:23258389

  4. Functional hearing in the classroom: assistive listening devices for students with hearing impairment in a mainstream school setting.

    PubMed

    Zanin, Julien; Rance, Gary

    2016-12-01

    To assess the benefit of assistive listening devices (ALDs) for students with hearing impairment in mainstream schools. Speech recognition (CNC words) in background noise was assessed in a typical classroom. Participants underwent testing using four device configurations: (1) HA(s)/CI(s) alone, (2) soundfield amplification, (3) remote microphone (Roger Pen) on desk and (4) remote microphone at the loudspeaker. A sub-group of students subsequently underwent a 2-week classroom trial of each ALD. Degree of improvement from baseline [HA(s)/CI(s) alone] was assessed using teacher and student Listening Inventory for Education-Revised (LIFE-R) questionnaires. In all, 20 students, aged 12.5-18.9 years, underwent speech recognition assessment. In total, 10 of these participated in the classroom trial. Hearing loss ranged from mild-to-profound levels. Performance in each ALD configuration was higher than for HAs/CIs alone (p < 0.001). Teacher and student LIFE-R results indicated significant improvement in listening/communication when using the remote microphone in conjunction with HAs/CIs (p < 0.05). There was no difference between the soundfield system and the baseline measurement (p > 0.05). Speech recognition improvements were demonstrated with the implementation of both remote microphones and soundfield systems. Both students and teachers reported functional hearing advantages in the classroom when using the remote microphone in concert with their standard hearing devices.

  5. How Hearing Loss and Age Affect Emotional Responses to Nonspeech Sounds

    ERIC Educational Resources Information Center

    Picou, Erin M.

    2016-01-01

    Purpose: The purpose of this study was to evaluate the effects of hearing loss and age on subjective ratings of emotional valence and arousal in response to nonspeech sounds. Method: Three groups of adults participated: 20 younger listeners with normal hearing (M = 24.8 years), 20 older listeners with normal hearing (M = 55.8 years), and 20 older…

  6. Binaural speech discrimination under noise in hearing-impaired listeners

    NASA Technical Reports Server (NTRS)

    Kumar, K. V.; Rao, A. B.

    1988-01-01

    This paper presents the results of an assessment of speech discrimination by hearing-impaired listeners (sensori-neural, conductive, and mixed groups) under binaural free-field listening in the presence of background noise. Subjects with pure-tone thresholds greater than 20 dB at 0.5, 1.0, and 2.0 kHz were presented with a version of the W-22 list of phonetically balanced words under three conditions: (1) 'quiet', with the chamber noise below 28 dB and speech at 60 dB; (2) at a constant S/N ratio of +10 dB, and with a background white noise at 70 dB; and (3) same as condition (2), but with the background noise at 80 dB. The mean speech discrimination scores decreased significantly with noise in all groups. However, the decrease in binaural speech discrimination scores with an increase in hearing impairment was less for material presented under the noise conditions than for the material presented in quiet.
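
    The three listening conditions reduce to simple level arithmetic: with both levels expressed in dB SPL, the signal-to-noise ratio is the speech level minus the noise level. A minimal sketch follows; the 80 and 90 dB speech levels implied by the constant +10 dB S/N ratio are inferred here, not stated in the abstract:

```python
def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """Signal-to-noise ratio in dB: the difference of the two levels (dB SPL)."""
    return speech_level_db - noise_level_db

# Condition (1): speech at 60 dB over chamber noise below 28 dB -> at least +32 dB S/N
# Conditions (2) and (3): a constant +10 dB S/N implies speech 10 dB above the noise
print(snr_db(60, 28))  # 32
print(snr_db(80, 70))  # 10
print(snr_db(90, 80))  # 10
```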

  7. Cognitive load during speech perception in noise: the influence of age, hearing loss, and cognition on the pupil response.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M

    2011-01-01

    The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman

  8. Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.

    PubMed

    Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn

    2017-01-01

    Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.

  9. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic

  10. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-06-01

    To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
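
    The odds ratios reported above compare, between listener groups, the odds that a verbal event is accompanied by gaze-to-partner. A minimal sketch of that computation on a 2x2 contingency table; the counts below are hypothetical, not taken from the study:

```python
def odds_ratio(a: float, b: float, c: float, d: float) -> float:
    """Odds ratio for a 2x2 contingency table:
                   gaze   no gaze
        HI group     a       b
        NH group     c       d
    """
    return (a / b) / (c / d)

# Hypothetical counts: HI listeners gazed during 30 of 50 verbal events,
# NH listeners during 20 of 50.
print(round(odds_ratio(30, 20, 20, 30), 2))  # 2.25
```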

  11. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions.

    PubMed

    Wang, M D; Reed, C M; Bilger, R C

    1978-03-01

    It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for the hearing-impaired listener. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave a result comparable to that of patients with flat or rising audiograms. And, mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum, the level of speech, and the configuration of the individual listener's audiogram is given.

  12. Audiometric Testing With Pulsed, Steady, and Warble Tones in Listeners With Tinnitus and Hearing Loss

    PubMed Central

    Walker, Matthew A.; Short, Ciara E.; Skinner, Kimberly G.

    2017-01-01

    Purpose This study evaluated the American Speech-Language-Hearing Association's recommendation that audiometric testing for patients with tinnitus should use pulsed or warble tones. Using listeners with varied audiometric configurations and tinnitus statuses, we asked whether steady, pulsed, and warble tones yielded similar audiometric thresholds, and which tone type was preferred. Method Audiometric thresholds (octave frequencies from 0.25–16 kHz) were measured using steady, pulsed, and warble tones in 61 listeners, who were divided into 4 groups on the basis of hearing and tinnitus status. Participants rated the appeal and difficulty of each tone type on a 1–5 scale and selected a preferred type. Results For all groups, thresholds were lower for warble than for pulsed and steady tones, with the largest effects above 4 kHz. Appeal ratings did not differ across tone type, but the steady tone was rated as more difficult than the warble and pulsed tones. Participants generally preferred pulsed and warble tones. Conclusions Pulsed tones provide advantages over steady and warble tones for patients regardless of hearing or tinnitus status. Although listeners preferred pulsed and warble tones to steady tones, pulsed tones are not susceptible to the effects of off-frequency listening, a consideration when testing listeners with sloping audiograms. PMID:28892822

  13. Audiometric Testing With Pulsed, Steady, and Warble Tones in Listeners With Tinnitus and Hearing Loss.

    PubMed

    Lentz, Jennifer J; Walker, Matthew A; Short, Ciara E; Skinner, Kimberly G

    2017-09-18

    This study evaluated the American Speech-Language-Hearing Association's recommendation that audiometric testing for patients with tinnitus should use pulsed or warble tones. Using listeners with varied audiometric configurations and tinnitus statuses, we asked whether steady, pulsed, and warble tones yielded similar audiometric thresholds, and which tone type was preferred. Audiometric thresholds (octave frequencies from 0.25-16 kHz) were measured using steady, pulsed, and warble tones in 61 listeners, who were divided into 4 groups on the basis of hearing and tinnitus status. Participants rated the appeal and difficulty of each tone type on a 1-5 scale and selected a preferred type. For all groups, thresholds were lower for warble than for pulsed and steady tones, with the largest effects above 4 kHz. Appeal ratings did not differ across tone type, but the steady tone was rated as more difficult than the warble and pulsed tones. Participants generally preferred pulsed and warble tones. Pulsed tones provide advantages over steady and warble tones for patients regardless of hearing or tinnitus status. Although listeners preferred pulsed and warble tones to steady tones, pulsed tones are not susceptible to the effects of off-frequency listening, a consideration when testing listeners with sloping audiograms.

  14. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  15. How age affects memory task performance in clinically normal hearing persons.

    PubMed

    Vercammen, Charlotte; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-05-01

    The main objective of this study is to investigate memory task performance in different age groups, irrespective of hearing status. Data are collected on a short-term memory task (WAIS-III Digit Span forward) and two working memory tasks (WAIS-III Digit Span backward and the Reading Span Test). The tasks are administered to young (20-30 years, n = 56), middle-aged (50-60 years, n = 47), and older participants (70-80 years, n = 16) with normal hearing thresholds. All participants have passed a cognitive screening task (Montreal Cognitive Assessment (MoCA)). Young participants perform significantly better than middle-aged participants, while middle-aged and older participants perform similarly on the three memory tasks. Our data show that older clinically normal hearing persons perform equally well on the memory tasks as middle-aged persons. However, even under optimal conditions of preserved sensory processing, changes in memory performance occur. Based on our data, these changes set in before middle age.

  16. Acoustic Analysis of Persian Vowels in Cochlear Implant Users: A Comparison With Hearing-impaired Children Using Hearing Aid and Normal-hearing Children.

    PubMed

    Jafari, Narges; Yadegari, Fariba; Jalaie, Shohreh

    2016-11-01

    Vowel production is in essence auditorily controlled; hence, the role of auditory feedback in vowel production is very important. The purpose of this study was to compare formant frequencies and vowel space in Persian-speaking deaf children with cochlear implantation (CI), hearing-impaired children with hearing aids (HA), and their normal-hearing (NH) peers. A total of 40 prelingually hearing-impaired children and 20 NH children participated in this study. Participants were native Persian speakers. The averages of the first formant frequency (F1) and second formant frequency (F2) of the six vowels were measured using Praat software (version 5.1.44). One-way analysis of variance (ANOVA) was used to analyze the differences between the three groups. The mean value of F1 for vowel /i/ was significantly different (between CI and NH children and also between HA and NH groups) (F(2, 57) = 9.229, P < 0.001). For vowel /a/, the mean value of F1 was significantly different (between HA and NH groups) (F(2, 57) = 3.707, P < 0.05). Regarding the second formant frequency, a post hoc Tukey test revealed that the differences were between HA and NH children (P < 0.05). F2 for vowel /o/ was significantly different (F(2, 57) = 4.572, P < 0.05). Also, the mean value of F2 for vowel /a/ was significantly different (F(2, 57) = 3.184, P < 0.05). About 1 year after implantation, the formants shift closer to those of the NH listeners, who tend to have more expanded vowel spaces than hearing-impaired listeners with hearing aids. This is probably because CI has a subtly positive impact on the place of articulation of vowels.
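
    The group comparisons above rest on a one-way ANOVA with F(2, 57), i.e., three groups and 60 children in total. The same analysis can be sketched with hypothetical formant values; the means, standard deviations, and SciPy usage below are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical F1 values (Hz) for vowel /i/, 20 children per group
f1_ci = rng.normal(400, 40, 20)  # cochlear implant
f1_ha = rng.normal(420, 40, 20)  # hearing aid
f1_nh = rng.normal(350, 30, 20)  # normal hearing

# Three groups of 20: between-groups df = 2, within-groups df = 57
f_stat, p_value = f_oneway(f1_ci, f1_ha, f1_nh)
print(f"F(2, 57) = {f_stat:.3f}, p = {p_value:.4g}")
```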

  17. Effects of a cochlear implant simulation on immediate memory in normal-hearing adults

    PubMed Central

    Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.

    2012-01-01

    This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants’ normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than memory processing errors related to cognitive load. These findings provide complementary information to earlier research on auditory memory span of listeners exposed to degraded speech either experimentally or as a consequence of a hearing impairment. PMID:16317807

  18. Consonant identification in noise using Hilbert-transform temporal fine-structure speech and recovered-envelope speech for listeners with normal and impaired hearing

    PubMed Central

    Léger, Agnès C.; Reed, Charlotte M.; Desloge, Joseph G.; Swaminathan, Jayaganesh; Braida, Louis D.

    2015-01-01

    Consonant-identification ability was examined in normal-hearing (NH) and hearing-impaired (HI) listeners in the presence of steady-state and 10-Hz square-wave interrupted speech-shaped noise. The Hilbert transform was used to process speech stimuli (16 consonants in a-C-a syllables) to present envelope cues, temporal fine-structure (TFS) cues, or envelope cues recovered from TFS speech. The performance of the HI listeners was inferior to that of the NH listeners both in terms of lower levels of performance in the baseline condition and in the need for higher signal-to-noise ratio to yield a given level of performance. For NH listeners, scores were higher in interrupted noise than in steady-state noise for all speech types (indicating substantial masking release). For HI listeners, masking release was typically observed for TFS and recovered-envelope speech but not for unprocessed and envelope speech. For both groups of listeners, TFS and recovered-envelope speech yielded similar levels of performance and consonant confusion patterns. The masking release observed for TFS and recovered-envelope speech may be related to level effects associated with the manner in which the TFS processing interacts with the interrupted noise signal, rather than to the contributions of TFS cues per se. PMID:26233038

  19. Looking Behavior and Audiovisual Speech Understanding in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss.

    PubMed

    Lewis, Dawna E; Smith, Nicholas A; Spalding, Jody L; Valente, Daniel L

    Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children

  20. Hearing Impairment and Cognitive Energy: The Framework for Understanding Effortful Listening (FUEL).

    PubMed

    Pichora-Fuller, M Kathleen; Kramer, Sophia E; Eckert, Mark A; Edwards, Brent; Hornsby, Benjamin W Y; Humes, Larry E; Lemke, Ulrike; Lunner, Thomas; Matthen, Mohan; Mackersie, Carol L; Naylor, Graham; Phillips, Natalie A; Richter, Michael; Rudner, Mary; Sommers, Mitchell S; Tremblay, Kelly L; Wingfield, Arthur

    2016-01-01

    The Fifth Eriksholm Workshop on "Hearing Impairment and Cognitive Energy" was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. The concept goes back to early accounts of the effects of attention on perception, in which the term psychic energy was used for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in the challenging situations of everyday life.

  1. Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

    PubMed Central

    Dietz, Mathias; Hohmann, Volker; Jürgens, Tim

    2015-01-01

    For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
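
    The simulation described above can be sketched as a standard noise vocoder: split the signal into frequency bands, keep only each band's temporal envelope, and use it to modulate band-limited noise before summing the bands. The channel count, filter order, and band edges below are illustrative choices, not the study's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Replace temporal fine structure in each band with noise,
    preserving only the band envelopes (a crude CI simulation)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))        # band envelope
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += env * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# An amplitude-modulated tone stands in for speech in this sketch
speech_like = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech_like, fs)
```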

  2. Spatial hearing benefits demonstrated with presentation of acoustic temporal fine structure cues in bilateral cochlear implant listeners.

    PubMed

    Churchill, Tyler H; Kan, Alan; Goupell, Matthew J; Litovsky, Ruth Y

    2014-09-01

    Most contemporary cochlear implant (CI) processing strategies discard acoustic temporal fine structure (TFS) information, and this may contribute to the observed deficits in bilateral CI listeners' ability to localize sounds when compared to normal hearing listeners. Additionally, for best speech envelope representation, most contemporary speech processing strategies use high-rate carriers (≥900 Hz) that exceed the limit for interaural pulse timing to provide useful binaural information. Many bilateral CI listeners are sensitive to interaural time differences (ITDs) in low-rate (<300 Hz) constant-amplitude pulse trains. This study explored the trade-off between superior speech temporal envelope representation with high-rate carriers and binaural pulse timing sensitivity with low-rate carriers. The effects of carrier pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition in quiet were examined in eight bilateral CI listeners. Stimuli consisted of speech tokens processed at different electrical stimulation rates, and pulse timings that either preserved or did not preserve acoustic TFS cues. Results showed that CI listeners were able to use low-rate pulse timing cues derived from acoustic TFS when presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli.

  3. Visual Cues and Listening Effort: Individual Variability

    ERIC Educational Resources Information Center

    Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.

    2011-01-01

    Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…

  4. Musician effect on perception of spectro-temporally degraded speech, vocal emotion, and music in young adolescents.

    PubMed

    Başkent, Deniz; Fuller, Christina D; Galvin, John J; Schepel, Like; Gaudrain, Etienne; Free, Rolien H

    2018-05-01

    In adult normal-hearing musicians, perception of music, vocal emotion, and speech in noise has previously been shown to be better than in non-musicians, sometimes even with spectro-temporally degraded stimuli. In this study, melodic contour identification, vocal emotion identification, and speech understanding in noise were measured in young adolescent normal-hearing musicians and non-musicians listening to unprocessed or degraded signals. Different from adults, there was no musician effect for vocal emotion identification or speech in noise. Melodic contour identification with degraded signals was significantly better in musicians, suggesting potential benefits from music training for young cochlear-implant users, who experience similar spectro-temporal signal degradations.

  5. Audio reproduction for personal ambient home assistance: concepts and evaluations for normal-hearing and hearing-impaired persons.

    PubMed

    Huber, Rainer; Meis, Markus; Klink, Karin; Bartsch, Christian; Bitzer, Joerg

    2014-01-01

    Within the Lower Saxony Research Network Design of Environments for Ageing (GAL), a personal activity and household assistant (PAHA), an ambient reminder system, has been developed. One of its central output modalities for interacting with the user is sound. The study presented here evaluated three different system technologies for sound reproduction using up to five loudspeakers, including the "phantom source" concept. Moreover, a technology for hearing-loss compensation for the mostly older users of the PAHA was implemented and evaluated. Evaluation experiments were carried out with 21 normal-hearing and hearing-impaired test subjects. The results show that in direct comparison of the sound-presentation concepts, presentation by the single TV speaker was preferred most, whereas the phantom-source concept received the highest acceptance ratings as far as the general concept is concerned. The localization accuracy of the phantom-source concept was good as long as the exact listening position was known to the algorithm and speech stimuli were used. Most subjects preferred the original signals over the pre-processed, dynamically compressed signals, although the processed speech was often described as being clearer.

  6. Hearing Loss Severity: Impaired Processing of Formant Transition Duration

    ERIC Educational Resources Information Center

    Coez, A.; Belin, P.; Bizaguet, E.; Ferrary, E.; Zilbovicius, M.; Samson, Y.

    2010-01-01

    Normal hearing listeners exploit the formant transition (FT) detection to identify place of articulation for stop consonants. Neuro-imaging studies revealed that short FT induced less cortical activation than long FT. To determine the ability of hearing impaired listeners to distinguish short and long formant transitions (FT) from vowels of the…

  7. Effects of attention on the speech reception threshold and pupil response of people with impaired and normal hearing.

    PubMed

    Koelewijn, Thomas; Versfeld, Niek J; Kramer, Sophia E

    2017-10-01

    For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. 
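
    The pupil-response parameters named in the record above (peak dilation, peak latency) can be sketched on a synthetic trace. The sampling rate, baseline window, and the trace itself are assumptions for illustration, not the study's data.

```python
import numpy as np

fs = 60                                         # eye-tracker sampling rate (Hz), assumed
t = np.arange(0, 3, 1 / fs)                     # 3 s of trace after sentence onset
trace = 0.2 * np.exp(-((t - 1.2) ** 2) / 0.3)   # synthetic pupil dilation (mm)

baseline = trace[t < 0.1].mean()    # pre-stimulus baseline, assumed 100 ms window
corrected = trace - baseline
peak_dilation = corrected.max()     # larger peak => higher cognitive processing load
peak_latency = t[corrected.argmax()]  # shorter latency => shorter processing time
```

    On this toy trace the peak lands at 1.2 s; in the study, HI participants' shorter peak latencies were read as shorter processing time.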

  8. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    PubMed Central

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  9. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.

    2017-01-01

    Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This…

  10. Do Older Listeners With Hearing Loss Benefit From Dynamic Pitch for Speech Recognition in Noise?

    PubMed

    Shen, Jing; Souza, Pamela E

    2017-10-12

    Dynamic pitch, the variation in the fundamental frequency of speech, aids older listeners' speech perception in noise. It is unclear, however, whether some older listeners with hearing loss benefit from strengthened dynamic pitch cues for recognizing speech in certain noise scenarios, and how this relative benefit may be associated with individual factors. We first examined older individuals' relative benefit from natural versus strong dynamic pitch for speech recognition in noise. Further, we report the individual factors of the 2 groups of listeners who benefited differently from natural and strong dynamic pitch. Speech reception thresholds of 13 older listeners with mild-moderate hearing loss were measured using target speech with 3 levels of dynamic pitch strength. An individual's ability to benefit from dynamic pitch was defined as the difference in speech reception threshold between speech with and without dynamic pitch cues. The relative benefit of natural versus strong dynamic pitch varied across individuals. However, this relative benefit remained consistent for the same individuals across background noises with temporal modulation. Those listeners who benefited more from strong dynamic pitch reported better subjective speech perception abilities. Strong dynamic pitch may be more beneficial than natural dynamic pitch for some older listeners recognizing speech in noise, particularly when the noise has temporal modulation.
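
    One common way "strong" dynamic pitch stimuli are generated, assumed here rather than taken from the paper, is to expand the F0 contour around its mean; a factor of 0 flattens the pitch and a factor above 1 exaggerates it.

```python
import numpy as np

def scale_f0_contour(f0_hz, factor):
    """factor=0 flattens pitch, 1 keeps it natural, >1 strengthens it."""
    f0 = np.asarray(f0_hz, dtype=float)
    return f0.mean() + factor * (f0 - f0.mean())

contour = np.array([180.0, 200.0, 220.0, 190.0])   # toy F0 track (Hz)
flat = scale_f0_contour(contour, 0.0)    # monotone: no dynamic pitch cue
strong = scale_f0_contour(contour, 2.0)  # doubled excursions around the mean
```

    Scaling around the mean keeps the average pitch constant, so only the dynamic cue strength changes between conditions.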

  11. The Impact of Frequency Modulation (FM) System Use and Caregiver Training on Young Children with Hearing Impairment in a Noisy Listening Environment

    ERIC Educational Resources Information Center

    Nguyen, Huong Thi Thien

    2011-01-01

    The two objectives of this single-subject study were to assess how FM system use impacts parent-child interaction in a noisy listening environment, and how parent/caregiver training affects the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parents/caregivers participated. Experiment 1 was…

  12. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person regardless of environment, it is necessary to adjust the settings of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in the environmental situation. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type with high accuracy (92.8-96.4% and 90.9-99.4%, respectively), and that the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in the environmental situation. Relative to the conventional fixed-parameter multiband spectral subtraction (MBSS) algorithm, the adaptive MBSS algorithm improved the average signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality score, and mean opinion test score of 10 normal-hearing volunteers by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments.
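
    A minimal sketch of the kind of rule-based management the record describes. The situation labels, feature names, and gain values below are hypothetical assumptions, not taken from the paper.

```python
def manage_features(situation: str, noise_type: str) -> dict:
    """Return a feature configuration for a hearing-support device."""
    # defaults: minimal processing, feedback cancellation always on (assumed)
    config = {"beamforming": False, "noise_reduction": False,
              "feedback_cancellation": True, "nr_strength_db": 0}
    if situation == "speech_in_noise":
        config["beamforming"] = True
        config["noise_reduction"] = True
        # stronger suppression for stationary noise than for babble (assumed)
        config["nr_strength_db"] = 12 if noise_type == "stationary" else 6
    elif situation == "noise_only":
        config["noise_reduction"] = True
        config["nr_strength_db"] = 12
    # "speech_in_quiet" and "quiet" keep the defaults
    return config

print(manage_features("speech_in_noise", "babble"))
```

    The two classifiers in the paper would supply the `situation` and `noise_type` arguments; the returned dictionary stands in for the per-feature switches and WDRC/NR parameter updates.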

  13. Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey.

    PubMed

    Keidser, Gitte; Hartley, David; Carter, Lyndal

    2008-12-01

    To investigate the long-term benefit of multichannel wide dynamic range compression (WDRC) alone and in combination with directional microphones and noise reduction/speech enhancement for listeners with severe or profound hearing loss. At the conclusion of a research project, 39 participants with severe or profound hearing loss were fitted with WDRC in one program and WDRC with directional microphones and speech enhancement enabled in a 2nd program. More than 2 years after the 1st participants exited the project, a retrospective survey was conducted to determine the participants' use of, and satisfaction with, the 2 programs. From the 30 returned questionnaires, it seems that WDRC is used with a high degree of satisfaction in general everyday listening situations. The reported benefit from the addition of a directional microphone and speech enhancement for listening in noisy environments was lower and varied among the users. This variable was significantly correlated with how much the program was used. The less frequent and more varied use of the program with directional microphones and speech enhancement activated in combination suggests that these features may be best offered in a 2nd listening program for listeners with severe or profound hearing loss.

  14. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing for combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants who were participating in an auditory-emphasized therapy approach with those of the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only, and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity affected modality preferences only in adults, who showed a strong visual preference for unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.

  15. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    PubMed Central

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
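
    The isolation-point measure can be sketched with a simple scoring rule: the shortest gate from which the listener is correct on that gate and every longer one. This rule is an assumption consistent with gating paradigms generally, not a description of this study's exact scoring.

```python
def isolation_point(gate_ms, correct):
    """gate_ms: increasing gate durations (ms); correct: bool per gate."""
    ip = None
    for dur, ok in zip(gate_ms, correct):
        if ok and ip is None:
            ip = dur        # candidate IP: first correct gate in a correct run
        elif not ok:
            ip = None       # a later error invalidates the candidate
    return ip               # None if the word was never stably identified

gates = [100, 150, 200, 250, 300]
print(isolation_point(gates, [False, True, False, True, True]))  # 250
```

    A longer IP, as found for the EHA group on consonants and words, means more of the signal was needed before identification stabilized.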

  16. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    PubMed

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

    Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independent of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  17. Experience Changes How Emotion in Music Is Judged: Evidence from Children Listening with Bilateral Cochlear Implants, Bimodal Devices, and Normal Hearing

    PubMed Central

    Giannantonio, Sara; Polonenko, Melissa J.; Papsin, Blake C.; Paludetti, Gaetano; Gordon, Karen A.

    2015-01-01

    Children using unilateral cochlear implants abnormally rely on tempo rather than mode cues to distinguish whether a musical piece is happy or sad. This led us to question how this judgment is affected by the type of experience in early auditory development. We hypothesized that judgments of the emotional content of music would vary by the type and duration of access to sound in early life due to deafness, altered perception of musical cues through new ways of using auditory prostheses bilaterally, and formal music training during childhood. Seventy-five participants completed the Montreal Emotion Identification Test. Thirty-three had normal hearing (aged 6.6 to 40.0 years) and 42 children had hearing loss and used bilateral auditory prostheses (31 bilaterally implanted and 11 unilaterally implanted with contralateral hearing aid use). Reaction time and accuracy were measured. Accurate judgment of emotion in music was achieved across ages and musical experience. Musical training accentuated the reliance on mode cues which developed with age in the normal hearing group. Degrading pitch cues through cochlear implant-mediated hearing induced greater reliance on tempo cues, but mode cues grew in salience when at least partial acoustic information was available through some residual hearing in the contralateral ear. Finally, when pitch cues were experimentally distorted to represent cochlear implant hearing, individuals with normal hearing (including those with musical training) switched to an abnormal dependence on tempo cues. The data indicate that, in a western culture, access to acoustic hearing in early life promotes a preference for mode rather than tempo cues which is enhanced by musical training. The challenge to these preferred strategies during cochlear implant hearing (simulated and real), regardless of musical training, suggests that access to pitch cues for children with hearing loss must be improved by preservation of residual hearing and improvements in

  19. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene", that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information, is one of the most important and complex skills utilized by normal-hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and these binaural cues aid speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation of the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine-structure cues and coordination between stimulation at the two ears, binaural cues may be either absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) investigate source-segregation abilities in children with NH and with BiCIs; (2) examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) examine source-segregation abilities in NH listeners, from school-age to adults.

  20. Objective Quality and Intelligibility Prediction for Users of Assistive Listening Devices

    PubMed Central

    Falk, Tiago H.; Parsa, Vijay; Santos, João F.; Arehart, Kathryn; Hazrati, Oldooz; Huber, Rainer; Kates, James M.; Scollie, Susan

    2015-01-01

    This article presents an overview of twelve existing objective speech quality and intelligibility prediction tools. Two classes of algorithms are presented, namely intrusive and non-intrusive, with the former requiring the use of a reference signal, while the latter does not. Investigated metrics include both those developed for normal hearing listeners, as well as those tailored particularly for hearing impaired (HI) listeners who are users of assistive listening devices (i.e., hearing aids, HAs, and cochlear implants, CIs). Representative examples of those optimized for HI listeners include the speech-to-reverberation modulation energy ratio, tailored to hearing aids (SRMR-HA) and to cochlear implants (SRMR-CI); the modulation spectrum area (ModA); the hearing aid speech quality (HASQI) and perception indices (HASPI); and the PErception MOdel - hearing impairment quality (PEMO-Q-HI). The objective metrics are tested on three subjectively-rated speech datasets covering reverberation-alone, noise-alone, and reverberation-plus-noise degradation conditions, as well as degradations resultant from nonlinear frequency compression and different speech enhancement strategies. The advantages and limitations of each measure are highlighted and recommendations are given for suggested uses of the different tools under specific environmental and processing conditions. PMID:26052190
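
    As a toy instance of the *intrusive* class of metrics surveyed above, a frame-based segmental SNR can be computed against the clean reference. The frame length and clamping limits below are conventional assumptions, and the signals are synthetic stand-ins, not any of the article's datasets.

```python
import numpy as np

def segmental_snr(clean, degraded, frame=256, lo=-10.0, hi=35.0):
    """Average per-frame SNR (dB) of `degraded` against the clean reference."""
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        s = clean[i:i + frame]
        e = s - degraded[i:i + frame]                       # per-frame error
        snr = 10 * np.log10(np.sum(s ** 2) / (np.sum(e ** 2) + 1e-12))
        snrs.append(np.clip(snr, lo, hi))                   # clamp outlier frames
    return float(np.mean(snrs))

rng = np.random.default_rng(0)
clean = rng.standard_normal(2048)
noisy = clean + 0.1 * rng.standard_normal(2048)             # light additive noise
print(round(segmental_snr(clean, noisy), 1))
```

    A non-intrusive metric would have to estimate this kind of quantity from `noisy` alone, which is what makes that class harder and more broadly applicable.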

  1. Evaluation of Extended-Wear Hearing Aid Technology for Operational Military Use

    DTIC Science & Technology

    2016-07-01

    listeners without degrading auditory situational awareness. To this point, significant progress has been made in this evaluation process. The devices...provide long-term hearing protection for listeners with normal hearing with minimal impact on auditory situational awareness and minimal annoyance due to...Test Plan: A comprehensive test plan is complete for the measurements at AFRL, which will incorporate goals 1-2 and 4-5 above using a normal

  2. Effects of Hearing Loss on Dual-Task Performance in an Audiovisual Virtual Reality Simulation of Listening While Walking.

    PubMed

    Lau, Sin Tung; Pichora-Fuller, M Kathleen; Li, Karen Z H; Singh, Gurjit; Campos, Jennifer L

    2016-07-01

    Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges. The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking. In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth). The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments. Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analyses of variance with group as a between-subjects factor. Task condition (single versus dual) and probability (100% versus 60%) were within-subject factors.

  3. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    PubMed

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of the Lexical Neighborhood Test (LNT) and to examine lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from a database of daily speech materials for normal-hearing (NH) children aged 3-5 years. The lists were further divided into "easy" and "hard" halves according to word frequency and neighborhood density in the database, based on the Neighborhood Activation Model (NAM). Ninety-six NH children (aged between 4.0 and 7.0 years) were divided into three age groups at 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. Inter-list performance was found to be equivalent, and inter-rater reliability was high, with 92.5-95% consistency. Word-recognition scores showed that the lexical effects were all significant: children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words. Word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency made increasingly large contributions to Chinese word recognition. The results of the present study indicated that performance in Chinese word recognition was influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of the NAM to the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database for children aged 4-6 years provide a reliable means of testing spoken-word recognition in children with hearing impairment.
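
    The neighborhood-density bookkeeping behind the NAM "easy"/"hard" split can be sketched with toy phoneme strings; the study's Standard-Chinese lexicon and corpus frequency counts are not reproduced here, so both the lexicon and the split rule below are illustrative assumptions.

```python
def neighbors(word, lexicon):
    """Count lexicon entries differing from `word` by exactly one phoneme."""
    return sum(
        1 for w in lexicon
        if len(w) == len(word) and sum(a != b for a, b in zip(w, word)) == 1
    )

lexicon = ["kat", "bat", "kot", "kap", "dog", "dig"]
density = {w: neighbors(w, lexicon) for w in lexicon}
# Under the NAM, dense-neighborhood, low-frequency words are "hard";
# sparse-neighborhood, high-frequency words are "easy".
print(density["kat"])  # neighbors "bat", "kot", "kap" -> 3
```

    A real implementation would count substitutions, additions, and deletions against a phonemically transcribed lexicon and weight candidates by corpus frequency; only the substitution case is shown.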

  4. Reaction times of normal listeners to laryngeal, alaryngeal, and synthetic speech.

    PubMed

    Evitts, Paul M; Searl, Jeff

    2006-12-01

    The purpose of this study was to compare listener processing demands when decoding alaryngeal compared to laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from each of 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS). Cognitive processing load was indexed by listener reaction time (RT). To account for significant durational differences among the modes of speech, an RT ratio was calculated (stimulus duration divided by RT). Results indicated that the cognitive processing load was greater for ES and EL relative to normal speech. TE and normal speech did not differ in terms of RT ratio, suggesting fairly comparable cognitive demands placed on the listener. SS required a greater cognitive processing load than normal and alaryngeal speech. The results are discussed relative to alaryngeal speech intelligibility and the role of the listener. Potential clinical applications and directions for future research are also presented.
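
    The abstract's duration correction reduces to simple arithmetic: an RT ratio of stimulus duration divided by reaction time. The millisecond values below are made up for illustration, not the study's data.

```python
def rt_ratio(stimulus_ms, reaction_ms):
    """Stimulus duration divided by reaction time; smaller => heavier load."""
    return stimulus_ms / reaction_ms

# A longer token with a proportionally longer RT yields the same ratio as a
# short token, so the ratio isolates processing load from stimulus duration.
normal = rt_ratio(450, 900)        # hypothetical normal-speech token
esophageal = rt_ratio(700, 1750)   # hypothetical ES token: lower ratio
```

    On these toy numbers the ES ratio (0.4) falls below the normal-speech ratio (0.5), mirroring the direction of the study's finding for ES and EL modes.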

  5. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.

    PubMed

    Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V

    2012-01-01

    Prolonged exposure to high levels of occupational noise can damage hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. The consequences of cochlear hearing loss for speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA but have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers, aged 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years, and their non-noise-exposed counterparts (n = 30 in each age group). Participants in all groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-samples t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.
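
    Presenting speech in babble at a fixed SNR, as in the test above, amounts to scaling the masker relative to the target's level. A minimal sketch under the assumption of RMS-based SNR; the signals here are synthetic sinusoids, not the study's materials:

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Return a scaled copy of `noise` such that
    20*log10(rms(speech)/rms(scaled_noise)) equals target_snr_db."""
    gain = rms(speech) / (rms(noise) * 10 ** (target_snr_db / 20))
    return [gain * s for s in noise]

# Synthetic 1-second signals at a 16 kHz sample rate (illustration only)
speech = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
babble = [0.3 * math.sin(2 * math.pi * 180 * t / 16000) for t in range(16000)]

scaled = scale_noise_to_snr(speech, babble, -5.0)
snr_db = 20 * math.log10(rms(speech) / rms(scaled))  # ≈ -5.0 by construction
```

    At -5 dB SNR the masker RMS exceeds the target RMS, which is what makes this a demanding recognition condition.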

  6. Descending projections from the inferior colliculus to medial olivocochlear efferents: Mice with normal hearing, early onset hearing loss, and congenital deafness.

    PubMed

    Suthakar, Kirupa; Ryugo, David K

    2017-01-01

    Auditory efferent neurons reside in the brain and innervate the sensory hair cells of the cochlea to modulate incoming acoustic signals. Two groups of efferents have been described in mouse and this report will focus on the medial olivocochlear (MOC) system. Electrophysiological data suggest the MOC efferents function in selective listening by differentially attenuating auditory nerve fiber activity in quiet and noisy conditions. Because speech understanding in noise is impaired in age-related hearing loss, we asked whether pathologic changes in input to MOC neurons from higher centers could be involved. The present study investigated the anatomical nature of descending projections from the inferior colliculus (IC) to MOCs in 3-month-old mice with normal hearing, and in 6-month-old mice with normal hearing (CBA/CaH), early onset progressive hearing loss (DBA/2), and congenital deafness (homozygous Shaker-2). Anterograde tracers were injected into the IC and retrograde tracers into the cochlea. Electron microscopic analysis of double-labelled tissue confirmed direct synaptic contact from the IC onto MOCs in all cohorts. These labelled terminals are indicative of excitatory neurotransmission because they contain round synaptic vesicles, exhibit asymmetric membrane specializations, and are co-labelled with antibodies against VGlut2, a glutamate transporter. 3D reconstructions of the terminal fields indicate that in normal hearing mice, descending projections from the IC are arranged tonotopically, with low frequencies projecting laterally and progressively higher frequencies projecting more medially. Along the mediolateral axis, the projections of DBA/2 mice with acquired high-frequency hearing loss were shifted medially towards expected higher-frequency projecting regions. Shaker-2 mice with congenital deafness had a much broader spatial projection, revealing abnormalities in the topography of connections. 
These data suggest that loss in precision of IC directed MOC

  7. A Review of Assistive Listening Device and Digital Wireless Technology for Hearing Instruments

    PubMed Central

    Kim, Chun Hyeok

    2014-01-01

    Assistive listening devices (ALDs) refer to various types of amplification equipment designed to improve communication for individuals who are hard of hearing by enhancing access to the speech signal when individual hearing instruments are not sufficient. There are many types of ALDs to overcome the triangle of speech-to-noise ratio (SNR) problems: noise, distance, and reverberation. ALDs vary in their internal electronic mechanisms, ranging from simple hard-wire microphone-amplifier units to more sophisticated broadcasting systems. They usually use microphones to capture an audio source and broadcast it wirelessly over frequency modulation (FM), infra-red, induction loop, or other transmission techniques. Seven types of ALDs are introduced: hardwire devices, FM sound systems, infra-red sound systems, induction loop systems, telephone listening devices, television devices, and alert/alarm systems. Further development of digital wireless technology in hearing instruments will make direct communication with ALDs, without any accessories, possible in the near future. There are two technology solutions for digital wireless hearing instruments that improve SNR and convenience: one is near-field magnetic induction combined with Bluetooth radio frequency (RF) transmission or proprietary RF transmission, and the other is proprietary RF transmission alone. A recently launched digital wireless hearing aid applying this new technology can communicate from the hearing instrument to personal computers, phones, Wi-Fi, alert systems, and ALDs via iPhone, iPad, and iPod. However, it comes with its own iOS application offering a range of features, and there is no option for Android users as of this moment. PMID:25566400

  8. A review of assistive listening device and digital wireless technology for hearing instruments.

    PubMed

    Kim, Jin Sook; Kim, Chun Hyeok

    2014-12-01

    Assistive listening devices (ALDs) refer to various types of amplification equipment designed to improve communication for individuals who are hard of hearing by enhancing access to the speech signal when individual hearing instruments are not sufficient. There are many types of ALDs to overcome the triangle of speech-to-noise ratio (SNR) problems: noise, distance, and reverberation. ALDs vary in their internal electronic mechanisms, ranging from simple hard-wire microphone-amplifier units to more sophisticated broadcasting systems. They usually use microphones to capture an audio source and broadcast it wirelessly over frequency modulation (FM), infra-red, induction loop, or other transmission techniques. Seven types of ALDs are introduced: hardwire devices, FM sound systems, infra-red sound systems, induction loop systems, telephone listening devices, television devices, and alert/alarm systems. Further development of digital wireless technology in hearing instruments will make direct communication with ALDs, without any accessories, possible in the near future. There are two technology solutions for digital wireless hearing instruments that improve SNR and convenience: one is near-field magnetic induction combined with Bluetooth radio frequency (RF) transmission or proprietary RF transmission, and the other is proprietary RF transmission alone. A recently launched digital wireless hearing aid applying this new technology can communicate from the hearing instrument to personal computers, phones, Wi-Fi, alert systems, and ALDs via iPhone, iPad, and iPod. However, it comes with its own iOS application offering a range of features, and there is no option for Android users as of this moment.

  9. Young children's preferences for listening rates.

    PubMed

    Leeper, H A; Thomas, C L

    1978-12-01

    A paired-comparison paradigm was utilized to determine the preferences of 20 young children for listening rate for prose speech. An electronic expansion/compression technique yielded nine rates of speech ranging from 100 wpm to 200 wpm, with intervals of 25 wpm. The results indicated that the children most preferred a listening rate of 200 wpm and least preferred a rate of 100 wpm. Comparisons of the present findings with preference rates of older, post-adolescent children and adults are discussed. Directions for further research on temporal alteration and linguistic constraints on the message are considered.

  10. Binaural fusion and listening effort in children who use bilateral cochlear implants: a psychoacoustic and pupillometric study.

    PubMed

    Steel, Morrison M; Papsin, Blake C; Gordon, Karen A

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., 1 vs. 2 sounds) from their bilateral implants and whether this "binaural fusion" reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.

  11. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sound sources with 60° spacing were amplitude modulated with consecutively larger phase delay. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.

  12. Assistive Listening Devices: A Report of the National Task Force on Quality of Services in the Postsecondary Education of Deaf and Hard of Hearing Students.

    ERIC Educational Resources Information Center

    Warick, Ruth; Clark, Catherine; Dancer, Jesse; Sinclair, Stephen

    This report examines the use of auditory assistive listening devices by students who are hard of hearing or deaf in the postsecondary educational setting. Individual sections address the following topics: (1) distinctions between hearing aids and assistive listening devices; (2) assistive listening devices and the college student; (3) types of…

  13. Unique Auditory Language-Learning Needs of Hearing-Impaired Children: Implications for Intervention.

    ERIC Educational Resources Information Center

    Johnson, Barbara Ann; Paterson, Marietta M.

    Twenty-seven hearing-impaired young adults with hearing potentially usable for language comprehension and a history of speech language therapy participated in this study of training in using residual hearing for the purpose of learning spoken language. Evaluation of their recalled therapy experiences indicated that listening to spoken language did…

  14. Parental Support for Language Development during Joint Book Reading for Young Children with Hearing Loss

    ERIC Educational Resources Information Center

    DesJardin, Jean L.; Doll, Emily R.; Stika, Carren J.; Eisenberg, Laurie S.; Johnson, Karen J.; Ganguly, Dianne Hammes; Colson, Bethany G.; Henning, Shirley C.

    2014-01-01

    Parent and child joint book reading (JBR) characteristics and parent facilitative language techniques (FLTs) were investigated in two groups of parents and their young children: children with normal hearing (NH; "n" = 60) and children with hearing loss (HL; "n" = 45). Parent-child dyads were videotaped during JBR interactions,…

  15. Low empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls.

    PubMed

    Netten, Anouk P; Rieffe, Carolien; Theunissen, Stephanie C P M; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J; Frijns, Johan H M

    2015-01-01

    The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children's level of empathy, their attendance to others' emotions, emotion recognition, and supportive behavior. Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships.

  16. Listening Effort and Speech Recognition with Frequency Compression Amplification for Children and Adults with Hearing Loss.

    PubMed

    Brennan, Marc A; Lewis, Dawna; McCreery, Ryan; Kopun, Judy; Alexander, Joshua M

    2017-10-01

    Nonlinear frequency compression (NFC) can improve the audibility of high-frequency sounds by lowering them to a frequency where audibility is better; however, this lowering results in spectral distortion. Consequently, performance is a combination of the effects of increased access to high-frequency sounds and the detrimental effects of spectral distortion. Previous work has demonstrated positive benefits of NFC on speech recognition when NFC is set to improve audibility while minimizing distortion. However, the extent to which NFC impacts listening effort is not well understood, especially for children with sensorineural hearing loss (SNHL). To examine the impact of NFC on recognition and listening effort for speech in adults and children with SNHL. Within-subject, quasi-experimental study. Participants listened to amplified nonsense words that were (1) frequency-lowered using NFC, (2) low-pass filtered at 5 kHz to simulate the restricted bandwidth (RBW) of conventional hearing aid processing, or (3) low-pass filtered at 10 kHz to simulate extended bandwidth (EBW) amplification. Fourteen children (8-16 yr) and 14 adults (19-65 yr) with mild-to-severe SNHL. Participants listened to speech processed by a hearing aid simulator that amplified input signals to match a prescriptive fitting target. Participants were blinded to the type of processing. Participants' responses to each nonsense word were analyzed for accuracy and verbal-response time (VRT; listening effort). A multivariate analysis of variance and linear mixed model were used to determine the effect of hearing-aid signal processing on nonsense word recognition and VRT. Both children and adults identified the nonsense words and initial consonants better with EBW and NFC than with RBW. The type of processing did not affect the identification of the vowels or final consonants. There was no effect of age on recognition of the nonsense words, initial consonants, medial vowels, or final consonants. 
VRT did

  17. Spatial Release From Masking in 2-Year-Olds With Normal Hearing and With Bilateral Cochlear Implants

    PubMed Central

    Hess, Christi L.; Misurelli, Sara M.; Litovsky, Ruth Y.

    2018-01-01

    This study evaluated spatial release from masking (SRM) in 2- to 3-year-old children who are deaf and were implanted with bilateral cochlear implants (BiCIs), and in age-matched normal-hearing (NH) toddlers. Here, we examined whether early activation of bilateral hearing has the potential to promote SRM that is similar to age-matched NH children. Listeners were 13 NH toddlers and 13 toddlers with BiCIs, ages 27 to 36 months. Speech reception thresholds (SRTs) were measured for target speech in front (0°) and for competitors that were either Colocated in front (0°) or Separated toward the right (+90°). SRM was computed as the difference between SRTs in the front versus in the asymmetrical condition. Results show that SRTs were higher in the BiCI than NH group in all conditions. Both groups had higher SRTs in the Colocated and Separated conditions compared with Quiet, indicating masking. SRM was significant only in the NH group. In the BiCI group, the group effect of SRM was not significant, likely limited by the small sample size; however, all but two children had SRM values within the NH range. This work shows that to some extent, the ability to use spatial cues for source segregation develops by age 2 to 3 in NH children and is attainable in most of the children in the BiCI group. There is potential for the paradigm used here to be used in clinical settings to evaluate outcomes of bilateral hearing in very young children. PMID:29761735
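
    The SRM computation described above is a simple difference of thresholds; a minimal sketch with hypothetical SRT values (not the study's data):

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM in dB: the colocated-condition SRT minus the separated-condition SRT.
    A positive value means spatial separation lowered the threshold (helped)."""
    return srt_colocated_db - srt_separated_db

# Hypothetical listeners (dB values invented for illustration)
srm_nh = spatial_release_from_masking(-2.0, -6.0)   # → 4.0 dB of release
srm_bici = spatial_release_from_masking(3.0, 2.5)   # → 0.5 dB of release
```

    Because SRTs are thresholds (lower is better), a larger positive difference indicates greater benefit from spatial separation, which is the pattern the study reports for the NH group.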

  18. Cortical Auditory Evoked Potentials in (Un)aided Normal-Hearing and Hearing-Impaired Adults

    PubMed Central

    Van Dun, Bram; Kania, Anna; Dillon, Harvey

    2016-01-01

    Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with a hearing loss participated in this study. Three speech sounds—/m/, /g/, and /t/—were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal hearers. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users. PMID:27587919

  19. Low Empathy in Deaf and Hard of Hearing (Pre)Adolescents Compared to Normal Hearing Controls

    PubMed Central

    Netten, Anouk P.; Rieffe, Carolien; Theunissen, Stephanie C. P. M.; Soede, Wim; Dirks, Evelien; Briaire, Jeroen J.; Frijns, Johan H. M.

    2015-01-01

    Objective The purpose of this study was to examine the level of empathy in deaf and hard of hearing (pre)adolescents compared to normal hearing controls and to define the influence of language and various hearing loss characteristics on the development of empathy. Methods The study group (mean age 11.9 years) consisted of 122 deaf and hard of hearing children (52 children with cochlear implants and 70 children with conventional hearing aids) and 162 normal hearing children. The two groups were compared using self-reports, a parent-report and observation tasks to rate the children’s level of empathy, their attendance to others’ emotions, emotion recognition, and supportive behavior. Results Deaf and hard of hearing children reported lower levels of cognitive empathy and prosocial motivation than normal hearing children, regardless of their type of hearing device. The level of emotion recognition was equal in both groups. During observations, deaf and hard of hearing children showed more attention to the emotion evoking events but less supportive behavior compared to their normal hearing peers. Deaf and hard of hearing children attending mainstream education or using oral language show higher levels of cognitive empathy and prosocial motivation than deaf and hard of hearing children who use sign (supported) language or attend special education. However, they are still outperformed by normal hearing children. Conclusions Deaf and hard of hearing children, especially those in special education, show lower levels of empathy than normal hearing children, which can have consequences for initiating and maintaining relationships. PMID:25906365

  20. Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study

    PubMed Central

    Steel, Morrison M.; Papsin, Blake C.; Gordon, Karen A.

    2015-01-01

    Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally-driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., 1 vs. 2 sounds) from their bilateral implants and whether this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing. PMID:25668423

  1. Speech Recognition in Fluctuating and Continuous Maskers: Effects of Hearing Loss and Presentation Level.

    ERIC Educational Resources Information Center

    Summers, Van; Molis, Michelle R.

    2004-01-01

    Listeners with normal-hearing sensitivity recognize speech more accurately in the presence of fluctuating background sounds, such as a single competing voice, than in unmodulated noise at the same overall level. These performance differences are greatly reduced in listeners with hearing impairment, who generally receive little benefit from…

  2. Effects of Hearing Loss on Heart-Rate Variability and Skin Conductance Measured During Sentence Recognition in Noise

    PubMed Central

    Mackersie, Carol L.; MacPhee, Imola X.; Heldt, Emily W.

    2014-01-01

    SHORT SUMMARY (précis) Sentence recognition by participants with and without hearing loss was measured in quiet and in babble noise while monitoring two autonomic nervous system measures: heart-rate variability and skin conductance. Heart-rate variability decreased under difficult listening conditions for participants with hearing loss, but not for participants with normal hearing. Skin conductance noise reactivity was greater for those with hearing loss than for those with normal hearing, but did not vary with the signal-to-noise ratio. Subjective ratings of workload/stress obtained after each listening condition were similar for the two participant groups. PMID:25170782

  3. Sensory-motor relationships in speech production in post-lingually deaf cochlear-implanted adults and normal-hearing seniors: Evidence from phonetic convergence and speech imitation.

    PubMed

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2017-07-01

    Speech communication can be viewed as an interactive process involving a functional coupling between sensory and motor systems. One striking example comes from phonetic convergence, when speakers automatically tend to mimic their interlocutor's speech during communicative interaction. The goal of this study was to investigate sensory-motor linkage in speech production in post-lingually deaf cochlear-implanted participants and normal-hearing elderly adults through phonetic convergence and imitation. To this aim, two vowel production tasks, with or without instruction to imitate an acoustic vowel, were proposed to three groups: young adults with normal hearing, elderly adults with normal hearing, and post-lingually deaf cochlear-implanted patients. The deviation of each participant's f0 from their own mean f0 was measured to evaluate the ability to converge to each acoustic target. Results showed that cochlear-implanted participants have the ability to converge to an acoustic target, both intentionally and unintentionally, albeit to a lesser degree than young and elderly participants with normal hearing. By providing evidence for phonetic convergence and speech imitation, these results suggest that, as in young adults, perceptuo-motor relationships are efficient in elderly adults with normal hearing and that cochlear-implanted adults recovered significant perceptuo-motor abilities following cochlear implantation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Hearing in young adults. Part I: The effects of attitudes and beliefs toward noise, hearing loss, and hearing protector devices.

    PubMed

    Keppler, Hannah; Dhooge, Ingeborg; Vinck, Bart

    2015-01-01

    There is great concern regarding the development of noise-induced hearing loss (NIHL) in youth caused by high sound levels during various leisure activities. Health-orientated behavior of young adults might be linked to the beliefs and attitudes toward noise, hearing loss, and hearing protector devices (HPDs). The objective of the current study was to evaluate the effects of attitudes and beliefs toward noise, hearing loss, and HPDs on young adults' hearing status. A questionnaire and an audiological test battery were completed by 163 subjects (aged 18-30 years). The questionnaire contained the Youth Attitude to Noise Scale (YANS) and Beliefs about Hearing Protection and Hearing Loss (BAHPHL). A more positive attitude or belief represented an attitude in which noise or hearing loss is seen as unproblematic and attitudes and beliefs regarding HPDs are worse. Hearing was evaluated using (high frequency) pure tone audiometry (PTA), transient evoked and distortion product otoacoustic emissions. First, mean differences in hearing between the groups with different attitudes and beliefs were evaluated using one-way analysis of variance (ANOVA). Second, a χ² test was used to examine the usage of HPDs by the groups with different attitudes and beliefs. Young adults with a positive attitude had significantly more deteriorated hearing and used HPDs less than the other subjects. Hearing conservation programs (HCPs) for young adults should provide information and knowledge regarding noise, hearing loss, and HPDs. Barriers to wearing HPDs should especially be discussed. Further, those campaigns should focus on self-experienced hearing-related symptoms that might serve as triggers for attitudinal and behavioral changes.

  5. Hearing in young adults. Part I: The effects of attitudes and beliefs toward noise, hearing loss, and hearing protector devices

    PubMed Central

    Keppler, Hannah; Dhooge, Ingeborg; Vinck, Bart

    2015-01-01

    There is great concern regarding the development of noise-induced hearing loss (NIHL) in youth caused by high sound levels during various leisure activities. Health-orientated behavior of young adults might be linked to the beliefs and attitudes toward noise, hearing loss, and hearing protector devices (HPDs). The objective of the current study was to evaluate the effects of attitudes and beliefs toward noise, hearing loss, and HPDs on young adults’ hearing status. A questionnaire and an audiological test battery were completed by 163 subjects (aged 18-30 years). The questionnaire contained the Youth Attitude to Noise Scale (YANS) and Beliefs about Hearing Protection and Hearing Loss (BAHPHL). A more positive attitude or belief represented an attitude in which noise or hearing loss is seen as unproblematic and attitudes and beliefs regarding HPDs are worse. Hearing was evaluated using (high frequency) pure tone audiometry (PTA), transient evoked and distortion product otoacoustic emissions. First, mean differences in hearing between the groups with different attitudes and beliefs were evaluated using one-way analysis of variance (ANOVA). Second, a χ² test was used to examine the usage of HPDs by the groups with different attitudes and beliefs. Young adults with a positive attitude had significantly more deteriorated hearing and used HPDs less than the other subjects. Hearing conservation programs (HCPs) for young adults should provide information and knowledge regarding noise, hearing loss, and HPDs. Barriers to wearing HPDs should especially be discussed. Further, those campaigns should focus on self-experienced hearing-related symptoms that might serve as triggers for attitudinal and behavioral changes. PMID:26356365

  6. What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?

    PubMed

    Bamiou, Doris-Eva; Iliadou, Vasiliki Vivian; Zanchetta, Sthella; Spyridakou, Chrysa

    2015-01-01

    Questionnaires addressing auditory disability may identify and quantify specific symptoms in adult patients with listening difficulties. (1) To assess validity of the Speech, Spatial, and Qualities of Hearing Scale (SSQ), the (Modified) Amsterdam Inventory for Auditory Disability (mAIAD), and the Hyperacusis Questionnaire (HYP) in adult patients experiencing listening difficulties in the presence of a normal audiogram. (2) To examine which individual questionnaire items give the worst scores in clinical participants with an auditory processing disorder (APD). A prospective correlational analysis study. Clinical participants (N = 58) referred for assessment because of listening difficulties in the presence of normal audiometric thresholds to audiology/ear, nose, and throat or audiovestibular medicine clinics. Normal control participants (N = 30). The mAIAD, HYP, and the SSQ were administered to a clinical population of nonneurological adults who were referred for auditory processing (AP) assessment because of hearing complaints, in the presence of normal audiogram and cochlear function, and to a sample of age-matched normal-hearing controls, before the AP testing. Clinical participants with abnormal results in at least one ear and in at least two tests of AP (at least one of which was nonspeech) were classified as clinical APD (N = 39), and the remaining (16 of whom had a single test abnormality) as clinical non-APD (N = 19). The SSQ correlated strongly with the mAIAD and the HYP, and correlation was similar within the clinical group and the normal controls. All questionnaire total scores and subscores (except sound distinction of mAIAD) were significantly worse in the clinical APD versus the normal group, while questionnaire total scores and most subscores indicated greater listening difficulties for the clinical non-APD versus the normal subgroups. Overall, the clinical non-APD group tended to give better scores than the APD in all questionnaires

  7. From Listening to Understanding: Interpreting Young Children's Perspectives

    ERIC Educational Resources Information Center

    Colliver, Yeshe

    2017-01-01

    As young children's perspectives are increasingly "taken seriously" across disciplines, the pursuit of authentic and ethical research with young children has become the subject of recent discussion. Much of this relates to listening "authentically" to (or understanding) young children, focusing on research design, ethics,…

  8. Glimpsing Speech in the Presence of Nonsimultaneous Amplitude Modulations from a Competing Talker: Effect of Modulation Rate, Age, and Hearing Loss

    ERIC Educational Resources Information Center

    Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.

    2016-01-01

    Purpose: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker. Method: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences with consonants or vowels replaced with noise…

  9. Speech intelligibility index predictions for young and old listeners in automobile noise: Can the index be improved by incorporating factors other than absolute threshold?

    NASA Astrophysics Data System (ADS)

    Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don

    2003-10-01

    While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, the older subjects have more difficulty than the younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than an absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics, including the Speech Intelligibility Index (SII), used to measure speech intelligibility, only consider an absolute threshold when accounting for age-related hearing loss. Therefore these metrics tend to overestimate the performance of elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of the automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
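    A minimal sketch of the band-audibility core of the SII may clarify what the metric captures: each frequency band's audibility, derived from its signal-to-noise ratio, is weighted by a band-importance function and summed. The band SNRs and weights below are hypothetical, and this simplified form omits the standard's level-distortion and hearing-threshold terms:

```python
# Simplified core of the Speech Intelligibility Index (ANSI S3.5):
# SII ~= sum_i I_i * A_i, with band audibility
# A_i = clip((SNR_i + 15) / 30, 0, 1). Inputs are hypothetical.

def sii(band_snr_db, band_importance):
    """Sum of importance-weighted band audibilities."""
    total = 0.0
    for snr, imp in zip(band_snr_db, band_importance):
        audibility = min(max((snr + 15.0) / 30.0, 0.0), 1.0)
        total += imp * audibility
    return total

snrs = [12.0, 6.0, 0.0, -6.0]   # per-band speech-to-noise ratios (dB)
weights = [0.2, 0.3, 0.3, 0.2]  # band-importance weights, summing to 1
print(round(sii(snrs, weights), 3))
```

    The studies' question amounts to whether additional terms (beyond audibility relative to absolute threshold) are needed in such a model for older listeners.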

  10. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
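    The root mean square error measure used above for localization can be sketched as follows; the loudspeaker azimuths and responses are hypothetical:

```python
# Sketch of quantifying localization accuracy as root-mean-square (RMS)
# error between response and source azimuths, as in a loudspeaker-array
# task. All angles below are hypothetical, in degrees.

import math

def rms_error(responses_deg, targets_deg):
    """RMS of the response-minus-target azimuth differences."""
    sq = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(sq) / len(sq))

targets = [-90, -60, -30, 0, 30, 60, 90]     # loudspeaker azimuths
responses = [-75, -60, -45, 0, 15, 75, 90]   # listener responses
print(round(rms_error(responses, targets), 1))
```

    Lower RMS error indicates better localization; chance performance (random responding across the arc) yields a much larger value than normal-hearing listeners produce.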

  11. Semantic priming, not repetition priming, is to blame for false hearing.

    PubMed

    Rogers, Chad S

    2017-08-01

    Contextual and sensory information are combined in speech perception. Conflict between the two can lead to false hearing, defined as a high-confidence misidentification of a spoken word. Rogers, Jacoby, and Sommers (Psychology and Aging, 27(1), 33-45, 2012) found that older adults are more susceptible to false hearing than are young adults, using a combination of semantic priming and repetition priming to create context. In this study, the type of context (repetition vs. semantic priming) responsible for false hearing was examined. Older and young adult participants read and listened to a list of paired associates (e.g., ROW-BOAT) and were told to remember the pairs for a later memory test. Following the memory test, participants identified words masked in noise that were preceded by a cue word in the clear. Targets were semantically associated to the cue (e.g., ROW-BOAT), unrelated to the cue (e.g., JAW-PASS), or phonologically related to a semantic associate of the cue (e.g., ROW-GOAT). How often each cue word and its paired associate were presented prior to the memory test was manipulated (0, 3, or 5 times) to test effects of repetition priming. Results showed repetitions had no effect on rates of context-based listening or false hearing. However, repetition did significantly increase sensory information as a basis for metacognitive judgments in young and older adults. This pattern suggests that semantic priming dominates as the basis for false hearing and highlights context and sensory information operating as qualitatively different bases for listening and metacognition.

  12. A correlational method to concurrently measure envelope and temporal fine structure weights: effects of age, cochlear pathology, and spectral shaping.

    PubMed

    Fogerty, Daniel; Humes, Larry E

    2012-09-01

    The speech signal may be divided into spectral frequency-bands, each band containing temporal properties of the envelope and fine structure. This study measured the perceptual weights for the envelope and fine structure in each of three frequency bands for sentence materials in young normal-hearing listeners, older normal-hearing listeners, aided older hearing-impaired listeners, and spectrally matched young normal-hearing listeners. The availability of each acoustic property was independently varied through noisy signal extraction. Thus, the full speech stimulus was presented with noise used to mask six different auditory channels. Perceptual weights were determined by correlating a listener's performance with the signal-to-noise ratio of each acoustic property on a trial-by-trial basis. Results demonstrate that temporal fine structure perceptual weights remain stable across the four listener groups. However, a different weighting typography was observed across the listener groups for envelope cues. Results suggest that spectral shaping used to preserve the audibility of the speech stimulus may alter the allocation of perceptual resources. The relative perceptual weighting of envelope cues may also change with age. Concurrent testing of sentences repeated once on a previous day demonstrated that weighting strategies for all listener groups can change, suggesting an initial stabilization period or susceptibility to auditory training.
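    The trial-by-trial correlational weighting method described above can be sketched as a Pearson correlation between an acoustic property's per-trial SNR and the listener's per-trial score; all data below are hypothetical:

```python
# Sketch of the correlational weighting method: a cue's perceptual weight
# is the correlation between its trial-by-trial SNR and the listener's
# trial-by-trial score. Data are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Per-trial SNR (dB) of one acoustic property, and per-trial scores (0-1):
snr_trials = [-4, 0, 2, -2, 4, 6, -6, 2]
scores = [0.2, 0.5, 0.6, 0.4, 0.7, 0.9, 0.1, 0.5]
weight = pearson(snr_trials, scores)
print(round(weight, 2))
```

    In the full method this correlation is computed separately for each frequency band and temporal property (envelope vs. fine structure), and the resulting coefficients are compared as relative weights.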

  13. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss.

    PubMed

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-10-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim To analyze the head alignment of students with normal hearing and students with sensorineural hearing loss and compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. The analysis of head alignment occurred through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normally hearing students (p < 0.001). Forward head posture was the type of postural change observed most, occurring in greater proportion in children with hearing loss (p < 0.001), followed by the side slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in head posture compared with children with normal hearing.

  14. Head Position Comparison between Students with Normal Hearing and Students with Sensorineural Hearing Loss

    PubMed Central

    Melo, Renato de Souza; Amorim da Silva, Polyanna Waleska; Souza, Robson Arruda; Raposo, Maria Cristina Falcão; Ferraz, Karla Mônica

    2013-01-01

    Introduction The sense of head position is coordinated by sensory activity of the vestibular system, located in the inner ear. Children with sensorineural hearing loss may show changes in the vestibular system as a result of injury to the inner ear, which can alter the sense of head position in this population. Aim To analyze the head alignment of students with normal hearing and students with sensorineural hearing loss and compare the data between groups. Methods This prospective cross-sectional study examined the head alignment of 96 students, 48 with normal hearing and 48 with sensorineural hearing loss, aged between 7 and 18 years. The analysis of head alignment occurred through postural assessment performed according to the criteria proposed by Kendall et al. For data analysis we used the chi-square test or Fisher exact test. Results The students with hearing loss had a higher occurrence of changes in the alignment of the head than normally hearing students (p < 0.001). Forward head posture was the type of postural change observed most, occurring in greater proportion in children with hearing loss (p < 0.001), followed by the side slope head posture (p < 0.001). Conclusion Children with sensorineural hearing loss showed more changes in head posture compared with children with normal hearing. PMID:25992037

  15. Older Adults Expend More Listening Effort than Young Adults Recognizing Speech in Noise

    ERIC Educational Resources Information Center

    Gosselin, Penny Anderson; Gagne, Jean-Pierre

    2011-01-01

    Purpose: Listening in noisy situations is a challenging experience for many older adults. The authors hypothesized that older adults exert more listening effort compared with young adults. Listening effort involves the attention and cognitive resources required to understand speech. The purpose was (a) to quantify the amount of listening effort…

  16. The Hearing Environment

    ERIC Educational Resources Information Center

    Capewell, Carmel

    2014-01-01

    Glue ear, a condition resulting in intermittent hearing loss in young children, affects about 80% of children under seven years old. About 60% of children will spend a third of their time unable to hear within normal thresholds. Teachers are unlikely to consider the sound quality in classrooms. In my research young people provided…

  17. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners with Hearing Impairment Using Hearing Aids

    ERIC Educational Resources Information Center

    Moradi, Shahram; Lidestam, Bjorn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Ronnberg, Jerker

    2017-01-01

    Purpose: We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels--in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands--in listeners with hearing impairment using hearing aids. Method: The study comprised 199…

  18. Evidence of noise-induced hearing loss in young people studying popular music.

    PubMed

    Barlow, Christopher

    2011-06-01

    The number of students studying popular music, music technology, and sound engineering courses at both school and university has increased rapidly in the last few years. These students are generally involved in music-making/recording and listening to a high level, usually in environments with amplified music. Recent studies have shown that these students are potentially exposed to a high risk of noise-induced hearing loss (NIHL) and are not covered by the same regulatory framework as employees. This study examined the pure tone air conduction hearing thresholds of 50 undergraduate students, including recent school leavers, on a range of popular music courses, to assess if there was evidence of hearing loss. Forty-four percent of students showed evidence of audiometric notch at 4-6 kHz, and 16% were classified under the UK Occupational Health and Safety guidelines as exhibiting mild hearing loss. The incidence of audiometric notch was considerably higher than reported from studies of the general population but was around the same level or lower than that reported from studies of "traditional" music courses and conservatoires, suggesting no higher risk for popular music students than for "classical" music students. No relationship with age was present, suggesting that younger students were as likely to exhibit audiometric notch as mature students. This indicates that these students may be damaging their hearing through leisure activities while still at school, suggesting a need for robust education measures to focus on noise exposure of young people.

  19. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers.

    PubMed

    Meister, Hartmut; Schreitmüller, Stefan; Ortmann, Magdalene; Rählmann, Sebastian; Walger, Martin

    2016-01-01

    Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall and that the difficulties experienced by the hearing impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups closely matched for their age and neuropsychological profile but differing in hearing acuity were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences either with low or high context were used in order to consider linguistic properties. Compared to their normal hearing peers, listeners with hearing loss revealed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words were investigated. Misunderstanding and omitting words were more frequently observed in the hearing impaired than in the normal hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post-hoc analysis, the listeners were further

  20. The hearing-impaired child in the hearing society.

    PubMed

    Burton, M H

    1983-11-01

    This paper sets out to describe a method of educating the hearing-impaired which has been operating successfully for the past 18 years. The underlying tenet of our approach is that considerable communicative skills can be developed in children who have marked hearing loss. Even if the child is profoundly deaf he or she has some sensory input which can be used as the basis for training in language development. The attempt to make the most of the minimal hearing of the hearing-impaired child has proved to be successful in the vast majority of cases. The profoundly hearing-impaired child can learn to listen and to produce the spoken word. This is demonstrated by use of video-tape. The interaction of teacher with child is heard and the regional accent can be identified. The prosodic features of the speech are retained although articulation may be incomplete. Intelligibility of utterance is shown to be a combination of rhythm, stress, and intonation based on previously heard patterns rather than on perfectly articulated sounds. The social consequence of this approach is that the child is not relegated to a minority subculture where only the deaf can communicate with the deaf but is allowed to enter into the world of normal relationships and expectations. Deaf children can be taught to listen and to use imperfectly heard patterns in order to interpret the meaning of language. This input of speech follows the natural language normally used by the child who is not deaf.

  1. Increased medial olivocochlear reflex strength in normal-hearing, noise-exposed humans

    PubMed Central

    2017-01-01

    Research suggests that college-aged adults are vulnerable to tinnitus and hearing loss due to exposure to traumatic levels of noise on a regular basis. Recent human studies have associated high noise exposure background (NEB, i.e., routine noise exposure) with reduced cochlear output and impaired speech processing ability in subjects with clinically normal hearing sensitivity. While the relationship between NEB and the function of the auditory afferent neurons has been studied in the literature, little is known about the effects of NEB on the functioning of the auditory efferent system. The objective of the present study was to investigate the relationship between medial olivocochlear reflex (MOCR) strength and NEB in subjects with clinically normal hearing sensitivity. It was hypothesized that subjects with high NEB would exhibit reduced afferent input to the MOCR circuit, which would subsequently lead to reduced strength of the MOCR. In normal-hearing listeners, the study examined (1) the association between NEB and baseline click-evoked otoacoustic emissions (CEOAEs) and (2) the association between NEB and MOCR strength. The MOCR was measured using CEOAEs evoked by 60 dB pSPL linear clicks in a contralateral acoustic stimulation (CAS)-off and CAS-on (a broadband noise at 60 dB SPL) condition. Participants with at least 6 dB signal-to-noise ratio (SNR) in the CAS-off and CAS-on conditions were included for analysis. A normalized CEOAE inhibition index was calculated to express MOCR strength as a percentage value. NEB was estimated using a validated questionnaire. The results showed that NEB was not associated with the baseline CEOAE amplitude (r = -0.112, p = 0.586). Contrary to the hypothesis, MOCR strength was positively correlated with NEB (r = 0.557, p = 0.003). NEB remained a significant predictor of MOCR strength (β = 2.98, t(19) = 3.474, p = 0.003) after the unstandardized coefficient was adjusted to control for effects of smoking, sound level
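    One common way to express such a normalized CEOAE inhibition index is as the percentage reduction of CEOAE amplitude with contralateral noise on; the study's exact normalization may differ, and the amplitudes below are hypothetical linear-pressure values:

```python
# Sketch of a normalized inhibition index expressing MOCR strength as a
# percentage reduction of CEOAE amplitude under contralateral acoustic
# stimulation (CAS). Amplitudes are hypothetical, in linear units (not dB).

def mocr_inhibition_percent(ceoae_cas_off, ceoae_cas_on):
    """Percent reduction of CEOAE amplitude when the CAS noise is on."""
    return 100.0 * (ceoae_cas_off - ceoae_cas_on) / ceoae_cas_off

print(round(mocr_inhibition_percent(0.40, 0.36), 1))
```

    A larger index indicates stronger efferent suppression of cochlear output, i.e., a stronger MOCR.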

  2. An Estimation of the Whole-of-Life Noise Exposure of Adolescent and Young Adult Australians with Hearing Impairment.

    PubMed

    Carter, Lyndal; Black, Deborah; Bundy, Anita; Williams, Warwick

    2016-10-01

    Since amplified music gained widespread popularity, there has been community concern that leisure-noise exposure may cause hearing loss in adolescents and young adults who would otherwise be free from hearing impairment. Repeated exposure to personal stereo players and music events (e.g., nightclubbing, rock concerts, and music festivals) are of particular concern. The same attention has not been paid to leisure-noise exposure risks for young people with hearing impairment (either present from birth or acquired before adulthood). This article reports on the analysis of a subset of data (leisure participation measures) collected during a large, two-phase study of the hearing health, attitudes, and behaviors of 11- to 35-yr-old Australians conducted by the National Acoustic Laboratories (n = 1,667 hearing threshold level datasets analyzed). The overall aim of the two-phase study was to determine whether a relationship between leisure-noise exposure and hearing loss exists. In the current study, the leisure activity profiles and accumulated ("whole-of-life") noise exposures of young people with (1) hearing impairment and (2) with normal hearing were compared. Cross-sectional cohort study. Hearing impaired (HI) group, n = 125; normal (nonimpaired) hearing (NH) group, n = 296, analyzed in two age-based subsets: adolescents (13- to 17-yr-olds) and young adults (18- to 24-yr-olds). Participant survey. The χ² test was used to identify systematic differences between the leisure profiles and exposure estimates of the HI and NH groups. Whole-of-life noise exposure was estimated by adapting techniques described in ISO 1999. For adolescents, leisure profiles were similar for the two groups and few individuals exceeded the stated risk criterion. For young adults, participation was significantly lower for the HI group for 7 out of 18 leisure activities surveyed. Activity diversity and whole-of-life exposure were also significantly lower for the HI group young adults. A
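    Whole-of-life exposure estimation in the ISO 1999 style can be sketched by converting each leisure activity's A-weighted level and accumulated hours to sound exposure in Pa²·h and summing; the activities, levels, and hours below are hypothetical:

```python
# Sketch of accumulating "whole-of-life" noise exposure, ISO 1999 style:
# each activity's A-weighted level and lifetime hours are converted to
# sound exposure (Pa^2*h) and summed. All inputs are hypothetical.

P0 = 20e-6  # reference sound pressure, 20 micropascals

def exposure_pa2h(level_dba, hours):
    """Sound exposure in Pa^2*h for a given level and duration."""
    return (P0 ** 2) * 10 ** (level_dba / 10.0) * hours

activities = [
    ("personal stereo", 80.0, 2000.0),   # (name, dB(A), lifetime hours)
    ("nightclubs",      97.0,  300.0),
    ("concerts",       100.0,   50.0),
]
total = sum(exposure_pa2h(lvl, hrs) for _, lvl, hrs in activities)
print(round(total, 1))
```

    For reference, 85 dB(A) for one 8-hour working day corresponds to roughly 1 Pa²·h, which is why summed lifetime values in the hundreds indicate substantial accumulated exposure.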

  3. The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex.

    PubMed

    Nuttall, Helen E; Kennedy-Higgins, Daniel; Devlin, Joseph T; Adank, Patti

    2017-01-08

    Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent of or dependent on the form of distortion in the speech signal. It is also unknown if speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test if lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and if lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, which were significantly less severe noise levels than used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, which indicated that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not quality, of degradation in the speech signal. Data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role

  4. L2 Learners' Engagement with High Stakes Listening Tests: Does Technology Have a Beneficial Role to Play?

    ERIC Educational Resources Information Center

    East, Martin; King, Chris

    2012-01-01

    In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…

  5. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    PubMed

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
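    The regression step described above can be illustrated with a single-predictor ordinary least-squares fit relating a spectral modulation measure to a speech score; the data points are hypothetical, and the actual study used stepwise multiple regression with several predictors:

```python
# Sketch of an ordinary least-squares fit predicting a speech-recognition
# score from a spectral modulation detection (SMD) measure. Data points
# are hypothetical.

def ols_fit(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

smd = [4.0, 6.0, 8.0, 10.0, 12.0]       # SMD sensitivity (dB)
score = [40.0, 52.0, 58.0, 66.0, 74.0]  # speech score (% correct)
slope, intercept = ols_fit(smd, score)
print(round(slope, 2), round(intercept, 2))
```

    Stepwise regression extends this by adding or dropping predictors (here, SMD, SRD, and filter-bandwidth measures) based on how much variance each explains beyond the others.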

  6. A correlational method to concurrently measure envelope and temporal fine structure weights: Effects of age, cochlear pathology, and spectral shaping1

    PubMed Central

    Fogerty, Daniel; Humes, Larry E.

    2012-01-01

    The speech signal may be divided into spectral frequency-bands, each band containing temporal properties of the envelope and fine structure. This study measured the perceptual weights for the envelope and fine structure in each of three frequency bands for sentence materials in young normal-hearing listeners, older normal-hearing listeners, aided older hearing-impaired listeners, and spectrally matched young normal-hearing listeners. The availability of each acoustic property was independently varied through noisy signal extraction. Thus, the full speech stimulus was presented with noise used to mask six different auditory channels. Perceptual weights were determined by correlating a listener’s performance with the signal-to-noise ratio of each acoustic property on a trial-by-trial basis. Results demonstrate that temporal fine structure perceptual weights remain stable across the four listener groups. However, a different weighting typography was observed across the listener groups for envelope cues. Results suggest that spectral shaping used to preserve the audibility of the speech stimulus may alter the allocation of perceptual resources. The relative perceptual weighting of envelope cues may also change with age. Concurrent testing of sentences repeated once on a previous day demonstrated that weighting strategies for all listener groups can change, suggesting an initial stabilization period or susceptibility to auditory training. PMID:22978896

  7. Consequences of Early Conductive Hearing Loss on Long-Term Binaural Processing.

    PubMed

    Graydon, Kelley; Rance, Gary; Dowell, Richard; Van Dun, Bram

    The aim of the study was to investigate the long-term effects of early conductive hearing loss on binaural processing in school-age children. One hundred and eighteen children participated in the study, 82 children with a documented history of conductive hearing loss associated with otitis media and 36 controls who had documented histories showing no evidence of otitis media or conductive hearing loss. All children were demonstrated to have normal hearing acuity and middle ear function at the time of assessment. The Listening in Spatialized Noise Sentence (LiSN-S) task and the masking level difference (MLD) task were used as the two different measures of binaural interaction ability. Children with a history of conductive hearing loss performed significantly poorer than controls on all LiSN-S conditions relying on binaural cues (DV90, p < 0.001, and SV90, p = 0.003). No significant difference was found between the groups in listening conditions without binaural cues. Fifteen children with a conductive hearing loss history (18%) showed results consistent with a spatial processing disorder. No significant difference was observed between the conductive hearing loss group and the controls on the MLD task. Furthermore, no correlations were found between LiSN-S and MLD. Results show a relationship between early conductive hearing loss and listening deficits that persist once hearing has returned to normal. Results also suggest that the two binaural interaction tasks (LiSN-S and MLD) may be measuring binaural processing at different levels. Findings highlight the need for a screening measure of functional listening ability in children with a history of early otitis media.

  8. Auditory Perceptual Learning in Adults with and without Age-Related Hearing Loss

    PubMed Central

    Karawani, Hanin; Bitan, Tali; Attias, Joseph; Banai, Karen

    2016-01-01

Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60–72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech-in-noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to

  9. Relating binaural pitch perception to the individual listener's auditory profile.

    PubMed

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  10. Ensuring financial access to hearing aids for infants and young children.

    PubMed

    Limb, Stephanie J; McManus, Margaret A; Fox, Harriette B; White, Karl R; Forsman, Irene

    2010-08-01

Many young children with permanent hearing loss do not receive hearing aids and related professional services, in part because of public and private financing limitations. In 2006 the Children's Audiology Financing Workgroup was convened by the National Center for Hearing Assessment and Management to evaluate and make recommendations about public and private financing of hearing aids and related professional services for 0- to 3-year-old children. The workgroup recommended 4 possible strategies for ensuring that all infants and young children with hearing loss have access to appropriate hearing aids and professional services: (1) clarify that the definition of assistive technology, which is a required service under Part C of the Individuals With Disabilities Education Act (IDEA), includes not only analog hearing aids but also digital hearing aids with appropriate features as needed by young children with hearing loss; (2) clarify for both state Medicaid and Children's Health Insurance Programs that digital hearing aids are almost always the medically necessary type of hearing aid required for infants and young children and should be covered under the Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) program; (3) encourage the passage of private health insurance legislative mandates to require coverage of appropriate digital hearing aids and related professional services for infants and young children; and (4) establish hearing-aid loaner programs in every state. The costs of providing hearing aids to all 0- to 3-year-old children in the United States are estimated here.

  11. Can We Teach Effective Listening? An Exploratory Study

    ERIC Educational Resources Information Center

    Caspersz, Donella; Stasinska, Ania

    2015-01-01

    Listening is not the same as hearing. While hearing is a physiological process, listening is a conscious process that requires us to be mentally attentive (Low & Sonntag, 2013). The obvious place for scholarship about listening is in communication studies. While interested in listening, the focus of this study is on effective listening.…

  12. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not

  14. Speech Perception in Noise in Normally Hearing Children: Does Binaural Frequency Modulated Fitting Provide More Benefit than Monaural Frequency Modulated Fitting?

    PubMed

    Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul

    2011-07-01

The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT) with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide benefit similar to binaural use. Individual variation in binaural FM benefit over monaural FM suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see if similar findings are obtained.
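The reception threshold for sentences (RTS) reported above is typically measured with an adaptive up-down procedure that converges on the SNR yielding about 50% sentence intelligibility. The following is a minimal sketch with a simulated listener; the starting SNR, step sizes, and psychometric-function parameters are assumptions for illustration, not the P-MyHINT's published procedure.

```python
import numpy as np

def simulate_rts(true_srt=-2.0, slope=1.0, n_sentences=20, seed=1):
    """One adaptive 1-down/1-up track: SNR drops after a correct
    response and rises after an error, homing in on ~50% correct."""
    rng = np.random.default_rng(seed)
    snr, step = 0.0, 4.0              # assumed start level and initial step
    track = []
    for i in range(n_sentences):
        # Simulated listener: logistic psychometric function.
        p_correct = 1.0 / (1.0 + np.exp(-slope * (snr - true_srt)))
        correct = rng.random() < p_correct
        track.append(snr)
        snr += -step if correct else step
        if i == 3:
            step = 2.0                # finer step after the first sentences
    return float(np.mean(track[4:]))  # RTS: mean SNR over fine-step trials

rts = simulate_rts()
```

A lower (more negative) RTS indicates better speech perception in noise, which is why the FM conditions above are compared in terms of mean RTS.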

  15. Early Radiosurgery Improves Hearing Preservation in Vestibular Schwannoma Patients With Normal Hearing at the Time of Diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akpinar, Berkcan; Mousavi, Seyed H., E-mail: mousavish@upmc.edu; McDowell, Michael M.

Purpose: Vestibular schwannomas (VS) are increasingly diagnosed in patients with normal hearing because of advances in magnetic resonance imaging. We sought to evaluate whether stereotactic radiosurgery (SRS) performed earlier after diagnosis improved long-term hearing preservation in this population. Methods and Materials: We queried our quality assessment registry and found the records of 1134 acoustic neuroma patients who underwent SRS during a 15-year period (1997-2011). We identified 88 patients who had VS but normal hearing with no subjective hearing loss at the time of diagnosis. All patients were Gardner-Robertson (GR) class I at the time of SRS. Fifty-seven patients underwent early (≤2 years from diagnosis) SRS and 31 patients underwent late (>2 years after diagnosis) SRS. At a median follow-up time of 75 months, we evaluated patient outcomes. Results: Tumor control rates (decreased or stable in size) were similar in the early (95%) and late (90%) treatment groups (P=.73). Patients in the early treatment group retained serviceable (GR class I/II) hearing and normal (GR class I) hearing longer than did patients in the late treatment group (serviceable hearing, P=.006; normal hearing, P<.0001). At 5 years after SRS, an estimated 88% of the early treatment group retained serviceable hearing and 77% retained normal hearing, compared with 55% with serviceable hearing and 33% with normal hearing in the late treatment group. Conclusions: SRS within 2 years after diagnosis of VS in normal-hearing patients resulted in improved retention of all hearing measures compared with later SRS.

  16. Low-frequency pitch perception in children with cochlear implants in comparison to normal hearing peers.

    PubMed

    Dincer D'Alessandro, Hilal; Filipo, Roberto; Ballantyne, Deborah; Attanasio, Giuseppe; Bosco, Ersilia; Nicastri, Maria; Mancini, Patrizia

    2015-11-01

The aim of the present study was to investigate the application of two new pitch perception tests in children with cochlear implants (CI), to compare CI outcomes to those of normal-hearing (NH) children, and to investigate the effect of chronological age on performance. The tests were believed to be linked to the availability of temporal fine structure (TFS) cues. Twenty profoundly deaf children with CI (5-17 years) and 31 NH peers participated in the study. The Harmonic Intonation (HI) and Disharmonic Intonation (DI) tests were used to measure low-frequency pitch perception. HI/DI outcomes were poorer in children with CI; the CI and NH groups showed a statistically significant difference (p < 0.001). HI scores were better than DI scores (p < 0.001). Chronological age had a significant effect on DI performance in the NH group (p < 0.05); children under the age of 8.5 years showed larger inter-subject variability, but the majority of NH children showed outcomes considered normal at adult level. For the DI test, bimodal listeners had better performance than when listening with the CI alone. The HI/DI tests were applicable as clinical tools in the pediatric population. The majority of CI users showed abnormal outcomes on both tests, confirming poor TFS processing in the hearing-impaired population. Findings indicated that the DI test provided more differential low-frequency pitch perception outcomes in that it reflected the phase-locking and TFS processing capacities of the ear, whereas the HI test provided information about its place-coding capacity as well.

  17. A comparison of CIC and BTE hearing aids for three-dimensional localization of speech.

    PubMed

    Best, Virginia; Kalluri, Sridhar; McLachlan, Sara; Valentine, Susie; Edwards, Brent; Carlile, Simon

    2010-10-01

    Three-dimensional sound localization of speech in anechoic space was examined for eleven listeners with sensorineural hearing loss. The listeners were fitted bilaterally with CIC and BTE hearing aids having similar bandwidth capabilities. The goal was to determine whether differences in microphone placement for these two styles (CICs at the ear canal entrance; BTEs above the pinna) would influence the availability of pinna-related spectral cues and hence localization performance. While lateral and polar angle localization was unaffected by the hearing aid style, the rate of front-back reversals was lower with CICs. This pattern persisted after listeners accommodated to each set of aids for a six week period, although the overall rate of reversals declined. Performance on all measures in all conditions was considerably poorer than in a control group of listeners with normal hearing.

  18. Epidemiology and Risk Factors for Leisure Noise-Induced Hearing Damage in Flemish Young Adults

    PubMed Central

    Degeest, Sofie; Clays, Els; Corthals, Paul; Keppler, Hannah

    2017-01-01

Context: Young people regularly expose themselves to leisure noise and are at risk of acquiring hearing damage. Aims: The objective of this study was to compare young adults' hearing status in relation to sociodemographic variables, leisure noise exposure, and attitudes and beliefs towards noise. Settings and Design: A self-administered questionnaire regarding hearing, the amount of leisure noise exposure, and attitudes towards noise and hearing protection, as well as an audiological test battery, were completed. Five hundred and seventeen subjects between 18 and 30 years were included. Subjects and Methods: Hearing was evaluated using conventional audiometry and transient evoked and distortion product otoacoustic emissions. On the basis of their hearing status, participants were categorised into normal hearing, sub-clinical, or clinical hearing loss. Statistical Analysis Used: Independent samples t-tests, chi-square tests, and multiple regression models were used to evaluate the relation between groups based on hearing status, sociodemographics, leisure noise, and attitudes towards noise. Results: Age was significantly related to hearing status. Although the subjects in this study frequently participated in leisure activities, no significant associations between leisure noise exposure and hearing status could be detected. No relation with subjects' attitudes or the use of hearing protection devices was found. Conclusions: This study could not demonstrate clinically significant leisure noise-induced hearing damage, which may lead to more non-protective behaviour. However, the effects of leisure noise may become noticeable over the long term, since age was found to be related to sub-clinical hearing loss. Longitudinal studies are needed to evaluate the long-term effects of noise exposure. PMID:28164934

  19. A comparison of speech intonation production and perception abilities of Farsi speaking cochlear implanted and normal hearing children.

    PubMed

    Moein, Narges; Khoddami, Seyyedeh Maryam; Shahbodaghi, Mohammad Rahim

    2017-10-01

Cochlear implant prostheses facilitate spoken language development and speech comprehension in children with severe-profound hearing loss. However, these prostheses are limited in encoding information about fundamental frequency and pitch, which is essential for recognition of speech prosody. The purpose of the present study was to investigate the perception and production of intonation in cochlear implanted children in comparison with normal-hearing children. The study was carried out on 25 cochlear implanted children and 50 children with normal hearing. First, statement and question sentences were elicited using 10 action pictures. Fundamental frequency and pitch changes were identified using Praat software. Then, these sentences were judged by 7 adult listeners. In the second stage, 20 sentences were played for each child, who determined whether each was a question or a statement. Performance of cochlear implanted children in perception and production of intonation was significantly lower than that of children with normal hearing. The difference between fundamental frequency and pitch changes in cochlear implanted children and children with normal hearing was significant (P < 0.05). Cochlear implanted children's performance in perception and production of intonation correlated significantly with the child's age at surgery and duration of prosthesis use (P < 0.05). The findings of the current study show that cochlear prostheses have limited capacity to facilitate the perception and production of intonation in cochlear implanted children. It should be noted that the child's age at surgery and duration of prosthesis use are important in reducing this limitation. According to these findings, speech and language pathologists should consider intonation in the treatment programs of cochlear implanted children.

  20. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions

    PubMed Central

    Cardin, Velia

    2016-01-01

Hearing loss is a common feature of human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss in older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control, and re-allocation of attention. These cortical mechanisms are engaged during listening in effortful conditions in normal-hearing individuals. Therefore, as a consequence of aging and hearing loss, all listening becomes effortful and cognitive load is constantly high, reducing the amount of available cognitive resources. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss. PMID:27242405

  1. Preferred listening levels of mobile phone programs when considering subway interior noise

    PubMed Central

    Yu, Jyaehyoung; Lee, Donguk; Han, Woojae

    2016-01-01

Today, people listen to loud music through personal listening devices. Although a majority of studies have reported that the high volumes played on these devices produce a latent risk of hearing problems, there is a lack of studies on "double noise exposures" such as environmental noise plus recreational noise. The present study measured the preferred listening levels of mobile phone programs combined with subway interior noise for 74 normal-hearing participants in five age groups (20s to 60s). Speakers presented the subway interior noise at 73.45 dB while each subject listened to three application programs [Digital Multimedia Broadcasting (DMB), music, game] for 30 min using a tablet personal computer with an earphone. The participants' earphone volume levels were analyzed using a sound level meter and a 2cc coupler. Overall, the results showed that those in their 20s listened to the three programs significantly louder, with DMB set at significantly higher volume levels than the other programs. Higher volume levels were needed for the middle frequencies compared to the lower and higher frequencies. We concluded that the potential risk of noise-induced hearing loss should be communicated to mobile phone users who listen regularly, even though the volume levels were not high enough to make users feel uncomfortable. When considering individual listening habits on mobile phones, further study to predict total accumulated environmental noise is still needed. PMID:26780960

  3. Motivation to Address Self-Reported Hearing Problems in Adults with Normal Hearing Thresholds

    ERIC Educational Resources Information Center

    Alicea, Carly C. M.; Doherty, Karen A.

    2017-01-01

    Purpose: The purpose of this study was to compare the motivation to change in relation to hearing problems in adults with normal hearing thresholds but who report hearing problems and that of adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. Method: The motivation to change in…

  4. Combined Electric and Acoustic Stimulation With Hearing Preservation: Effect of Cochlear Implant Low-Frequency Cutoff on Speech Understanding and Perceived Listening Difficulty.

    PubMed

    Gifford, René H; Davis, Timothy J; Sunderhaus, Linsey W; Menapace, Christine; Buck, Barbara; Crosson, Jillian; O'Neill, Lori; Beiter, Anne; Segel, Phil

    The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically. Assessments were completed in the R-SPACE sound-simulation system which includes a semidiffuse restaurant noise originating from eight loudspeakers placed circumferentially about the subject's head. AzBio sentences were presented at 67 dBA with signal to noise ratio varying between +10 and 0 dB determined individually to yield approximately 50 to 60% correct for the CI-alone condition with full CI bandwidth. Listening conditions for all subjects included CI alone, bimodal (CI + contralateral hearing aid), and bilateral-aided electric and acoustic stimulation (EAS; CI + bilateral hearing aid). Low-frequency cutoffs both below and above the original "clinical software recommendation" frequency were tested for all patients, in all conditions. Subjects estimated listening difficulty for all conditions using listener ratings based on a visual analog scale. Three primary findings were that (1) there was statistically significant benefit of preserved acoustic hearing in the implanted ear for most overlap conditions, (2) the default clinical software recommendation rarely yielded the highest level of speech recognition (1 of 13 ears), and (3) greater EAS overlap than that provided by the clinical recommendation yielded significant improvements in speech understanding. For standard-electrode CI recipients with preserved hearing, spectral overlap of acoustic and electric stimuli yielded significantly better speech understanding and less listening effort in a laboratory-based, restaurant

  5. Adolescents' risky MP3-player listening and its psychosocial correlates.

    PubMed

    Vogel, Ineke; Brug, Johannes; Van der Ploeg, Catharina P B; Raat, Hein

    2011-04-01

Analogous to occupational noise-induced hearing loss, MP3-induced hearing loss may be evolving into a significant social and public health problem. To inform prevention strategies and interventions, this study investigated correlates of adolescents' risky MP3-player listening behavior, primarily informed by protection motivation theory. We invited 1687 adolescents (12- to 19-year-old) from Dutch secondary schools to complete questionnaires about their MP3-player listening, sociodemographic characteristics, and presumed psychosocial determinants of MP3-player listening. Of all participants, 90% reported listening to music through earphones on MP3 players; 28.6% were categorized as listeners at risk for hearing loss due to estimated exposure of 89 dBA for ≥1 hour per day. Compared with listeners not at risk for hearing loss, listeners at risk were more likely not to live with both parents, to experience rewards of listening at high volume levels, and to report a high habit strength related to risky MP3 listening, and were less likely to be motivated to protect their hearing. Habit strength was the strongest correlate of risky listening behavior, suggesting that voluntary behavior change among adolescents might be difficult to achieve and that a multiple-strategy approach may be needed to prevent MP3-induced hearing loss.
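The "89 dBA for ≥1 hour per day" risk cutoff reflects the equal-energy principle used for occupational noise: each 3 dB increase in level halves the permissible exposure time for the same daily dose. A small sketch, assuming an 80 dBA / 8 h criterion (the criterion level is an assumption here; occupational limits vary by jurisdiction):

```python
def daily_dose(level_dba: float, hours: float,
               criterion_dba: float = 80.0, criterion_hours: float = 8.0) -> float:
    """Fraction of the daily allowed noise dose (1.0 = 100%),
    using a 3 dB exchange rate (equal-energy rule)."""
    allowed_hours = criterion_hours / 2.0 ** ((level_dba - criterion_dba) / 3.0)
    return hours / allowed_hours

# Under these assumptions, 89 dBA for 1 hour gives the same dose as
# 80 dBA sustained over a full 8-hour day, matching the risk cutoff above.
```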

  6. Perception of contrastive bi-syllabic lexical stress in unaccented and accented words by younger and older listeners.

    PubMed

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Pickett, Erin J; Fitzgibbons, Peter J

    2016-03-01

    This study examined the ability of older and younger listeners to perceive contrastive syllable stress in unaccented and Spanish-accented cognate bi-syllabic English words. Younger listeners with normal hearing, older listeners with normal hearing, and older listeners with hearing impairment judged recordings of words that contrasted in stress that conveyed a noun or verb form (e.g., CONduct/conDUCT), using two paradigms differing in the amount of semantic support. The stimuli were spoken by four speakers: one native English speaker and three Spanish-accented speakers (one moderately and two mildly accented). The results indicate that all listeners showed the lowest accuracy scores in responding to the most heavily accented speaker and the highest accuracy in judging the productions of the native English speaker. The two older groups showed lower accuracy in judging contrastive lexical stress than the younger group, especially for verbs produced by the most accented speaker. This general pattern of performance was observed in the two experimental paradigms, although performance was generally lower in the paradigm without semantic support. The findings suggest that age-related difficulty in adjusting to deviations in contrastive bi-syllabic lexical stress produced with a Spanish accent may be an important factor limiting perception of accented English by older people.

  7. Perception of contrastive bi-syllabic lexical stress in unaccented and accented words by younger and older listeners

    PubMed Central

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Pickett, Erin J.; Fitzgibbons, Peter J.

    2016-01-01

    This study examined the ability of older and younger listeners to perceive contrastive syllable stress in unaccented and Spanish-accented cognate bi-syllabic English words. Younger listeners with normal hearing, older listeners with normal hearing, and older listeners with hearing impairment judged recordings of words that contrasted in stress that conveyed a noun or verb form (e.g., CONduct/conDUCT), using two paradigms differing in the amount of semantic support. The stimuli were spoken by four speakers: one native English speaker and three Spanish-accented speakers (one moderately and two mildly accented). The results indicate that all listeners showed the lowest accuracy scores in responding to the most heavily accented speaker and the highest accuracy in judging the productions of the native English speaker. The two older groups showed lower accuracy in judging contrastive lexical stress than the younger group, especially for verbs produced by the most accented speaker. This general pattern of performance was observed in the two experimental paradigms, although performance was generally lower in the paradigm without semantic support. The findings suggest that age-related difficulty in adjusting to deviations in contrastive bi-syllabic lexical stress produced with a Spanish accent may be an important factor limiting perception of accented English by older people. PMID:27036250

  8. The effects of a hearing education program on recreational noise exposure, attitudes and beliefs toward noise, hearing loss, and hearing protector devices in young adults.

    PubMed

    Keppler, Hannah; Dhooge, Ingeborg; Degeest, Sofie; Vinck, Bart

    2015-01-01

    Excessive recreational noise exposure in young adults might result in noise-induced hearing loss (NIHL) and tinnitus. Inducing behavioral change in young adults is one of the aims of a hearing conservation program (HCP). The goal of the current study was to evaluate the effect of a hearing education program in young adults after 6 months, in relation to knowledge of their individual hearing status. The results of a questionnaire regarding weekly equivalent recreational noise exposure, attitudes and beliefs toward noise, hearing loss, and hearing protector devices (HPDs) were compared between the two sessions. Seventy-eight young adults completed the questionnaire concerning recreational noise exposure, the youth attitude to noise scale (YANS), and beliefs about hearing protection and hearing loss (BAHPHL). Their hearing status was evaluated based on admittance measures, audiometry, transient-evoked otoacoustic emissions (TEOAEs), and distortion-product otoacoustic emissions (DPOAEs). The main analysis consisted of a mixed-model analysis of variance with dependent variables of either the noise exposure or the scores on (sub)scales of YANS and BAHPHL. The independent variables were hearing status and session one versus session two. There was a significant decrease in recreational noise exposure and in several (sub)scales of YANS and BAHPHL between the two sessions. This behavioral change resulted in more frequent use of HPDs in 12% of the participants. However, the behavioral change was not completely related to young adults' knowledge of their individual hearing status. To prevent hearing damage in young people, investing in HCPs is necessary, in addition to regulating sound levels and enforcing compliance at various leisure-time activities. Also, the long-term effect of HCPs and their most cost-efficient repetition rates should be further investigated.

  9. A Trainable Hearing Aid Algorithm Reflecting Individual Preferences for Degree of Noise-Suppression, Input Sound Level, and Listening Situation.

    PubMed

    Yoon, Sung Hoon; Nam, Kyoung Won; Yook, Sunhyun; Cho, Baek Hwan; Jang, Dong Pyo; Hong, Sung Hwa; Kim, In Young

    2017-03-01

    In an effort to improve hearing aid users' satisfaction, recent studies on trainable hearing aids have attempted to incorporate one or two environmental factors into training. However, it would be more beneficial to train the device based on the owner's personal preferences across a wider range of environmental acoustic conditions. Our study aimed at developing a trainable hearing aid algorithm that can reflect the user's individual preferences across more extensive environmental acoustic conditions (ambient sound level, listening situation, and degree of noise suppression), and evaluated the perceptual benefit of the proposed algorithm. Ten normal-hearing subjects participated in this study. Each subject trained the algorithm to their personal preference, and the trained data were used to record test sounds in three different settings, which were utilized to evaluate the perceptual benefit of the proposed algorithm by performing the Comparison Mean Opinion Score test. Statistical analysis revealed that of the 10 subjects, four showed significant differences in amplification constant settings between the noise-only and speech-in-noise situations (P < 0.05), and one subject also showed a significant difference between the speech-only and speech-in-noise situations (P < 0.05). Additionally, every subject preferred different β settings for beamforming at all input sound levels. These positive findings suggest that the proposed algorithm has the potential to improve hearing aid users' personal satisfaction under various ambient situations.

  10. Preliminary findings on associations between moral emotions and social behavior in young children with normal hearing and with cochlear implants.

    PubMed

    Ketelaar, Lizet; Wiefferink, Carin H; Frijns, Johan H M; Broekhof, Evelien; Rieffe, Carolien

    2015-11-01

    Moral emotions such as shame, guilt, and pride are the result of an evaluation of one's own behavior as (morally) right or wrong. The capacity to experience moral emotions is thought to be an important driving force behind socially appropriate behavior. The relationship between moral emotions and social behavior in young children has not been studied extensively in normally hearing (NH) children, let alone in those with a hearing impairment. This study compared young children with hearing impairments who have a cochlear implant (CI) to NH peers regarding the extent to which they display moral emotions, and how this relates to their social functioning and language skills. Responses of 184 NH children and 60 children with CI (14-61 months old) to shame-/guilt- and pride-inducing events were observed. Parents reported on their children's social competence and externalizing behavior, and experimenters observed children's cooperative behavior. To examine the role of communication in the development of moral emotions and social behavior, children's language skills were assessed. Results show that children with CI displayed moral emotions to a lesser degree than NH children. An association between moral emotions and social functioning was found in the NH group, but not in the CI group. General language skills were unrelated to moral emotions in the CI group, yet emotion vocabulary was related to social functioning in both groups of children. We conclude that facilitating emotion language skills has the potential to promote children's social functioning, and could contribute to a decrease in behavioral problems in children with CI specifically. Future studies should examine in greater detail which factors are associated with the development of moral emotions, particularly in children with CI. Some possible directions for future research are discussed.

  11. Auditory temporal-order processing of vowel sequences by young and elderly listeners.

    PubMed

    Fogerty, Daniel; Humes, Larry E; Kewley-Port, Diane

    2010-04-01

    This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18-31 years) and older (N=151; 60-88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners showed greater variability and performed more poorly than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners' SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences of temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age.

  12. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners.

    PubMed

    Won, Jong Ho; Jones, Gary L; Drennan, Ward R; Jameyson, Elyse M; Rubinstein, Jay T

    2011-10-01

    Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an "ideal observer," showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels. © 2011 Acoustical Society of America

  13. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    PubMed

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  14. Reaction Times of Normal Listeners to Laryngeal, Alaryngeal, and Synthetic Speech

    ERIC Educational Resources Information Center

    Evitts, Paul M.; Searl, Jeff

    2006-01-01

    The purpose of this study was to compare listener processing demands when decoding alaryngeal compared to laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS).…

  15. Do you hear the noise? The German matrix sentence test with a fixed noise level in subjects with normal hearing and hearing impairment.

    PubMed

    Wardenga, Nina; Batsoulis, Cornelia; Wagener, Kirsten C; Brand, Thomas; Lenarz, Thomas; Maier, Hannes

    2015-01-01

    The aim of this study was to determine the relationship between hearing loss and speech reception threshold (SRT) in a fixed noise condition using the German Oldenburg sentence test (OLSA). After training with two easily audible lists of the OLSA, SRTs were determined monaurally with headphones at a fixed noise level of 65 dB SPL using a standard adaptive procedure, converging to 50% speech intelligibility. Data were obtained from 315 ears of 177 subjects with hearing losses ranging from -5 to 90 dB HL pure-tone average (PTA; 0.5, 1, 2, 3 kHz). Two domains were identified with a linear dependence of SRT on PTA. The SRT increased with a slope of 0.094 ± 0.006 dB SNR/dB HL (standard deviation (SD) of residuals = 1.17 dB) for PTAs < 47 dB HL and with a slope of 0.811 ± 0.049 dB SNR/dB HL (SD of residuals = 5.54 dB) for higher PTAs. The OLSA can be applied to subjects with a wide range of hearing losses. With a fixed noise presentation level of 65 dB SPL, the SRT is determined by listening in noise for PTAs < ∼47 dB HL, and above that it is determined by listening in quiet.
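    The two fitted slopes above can be combined into a rough piecewise-linear predictor of SRT from PTA. In the sketch below, continuity at the 47 dB HL breakpoint and the intercept of -7.1 dB SNR at 0 dB HL are illustrative assumptions; the abstract reports only the slopes:

    ```python
    def predicted_srt(pta: float) -> float:
        """Piecewise-linear SRT (dB SNR) vs. PTA (dB HL) using the slopes
        reported in the study: 0.094 below 47 dB HL, 0.811 above.
        Intercept and continuity at the breakpoint are assumed, not reported."""
        srt0, breakpoint = -7.1, 47.0   # assumed intercept, reported breakpoint
        s_low, s_high = 0.094, 0.811    # dB SNR per dB HL
        if pta < breakpoint:
            return srt0 + s_low * pta
        return srt0 + s_low * breakpoint + s_high * (pta - breakpoint)

    mild = predicted_srt(30.0)    # noise-dominated region (listening in noise)
    severe = predicted_srt(70.0)  # quiet-dominated region (listening in quiet)
    ```

    The roughly ninefold steeper slope above the breakpoint reflects the shift from listening in noise to listening in quiet described in the abstract.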

  16. Music and Hearing Aids

    PubMed Central

    Moore, Brian C. J.

    2014-01-01

    The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems. PMID:25361601

  17. Music and hearing aids.

    PubMed

    Madsen, Sara M K; Moore, Brian C J

    2014-10-31

    The signal processing and fitting methods used for hearing aids have mainly been designed to optimize the intelligibility of speech. Little attention has been paid to the effectiveness of hearing aids for listening to music. Perhaps as a consequence, many hearing-aid users complain that they are not satisfied with their hearing aids when listening to music. This issue inspired the Internet-based survey presented here. The survey was designed to identify the nature and prevalence of problems associated with listening to live and reproduced music with hearing aids. Responses from 523 hearing-aid users to 21 multiple-choice questions are presented and analyzed, and the relationships between responses to questions regarding music and questions concerned with information about the respondents, their hearing aids, and their hearing loss are described. Large proportions of the respondents reported that they found their hearing aids to be helpful for listening to both live and reproduced music, although less so for the former. The survey also identified problems such as distortion, acoustic feedback, insufficient or excessive gain, unbalanced frequency response, and reduced tone quality. The results indicate that the enjoyment of listening to music with hearing aids could be improved by an increase of the input and output dynamic range, extension of the low-frequency response, and improvement of feedback cancellation and automatic gain control systems. © The Author(s) 2014.

  18. Effects of Age and Hearing Loss on the Relationship between Discrimination of Stochastic Frequency Modulation and Speech Perception

    PubMed Central

    Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin

    2012-01-01

    Objective The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design Data were collected from 18 normal-hearing young adults, and 18 participants who were at least 60 years old, nine normal-hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF) both in quiet and with a speech-babble masker present, stimulus duration, and signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results Results showed a significant effect of age, but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech

  19. Effects of Age and Hearing Loss on Gap Detection and the Precedence Effect: Broadband Stimuli

    ERIC Educational Resources Information Center

    Roberts, Richard A.; Lister, Jennifer J.

    2004-01-01

    Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection…

  20. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners

    PubMed Central

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S.; Cho, Chang Hyun

    2018-01-01

    Background and Objectives It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Subjects and Methods Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions show the identical identification scores. Results CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. Conclusions These findings provide vital information, for Korean, on how the frequency information received through a CI processor for speech and environmental sounds differs from that available in normal hearing. PMID:29325391
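    The crossover frequency in the entry above is the cutoff at which low-pass and high-pass identification scores coincide. A minimal sketch of estimating it by linear interpolation between tested cutoffs follows; the score arrays are invented for illustration and are not the study's data:

    ```python
    def crossover(cutoffs, lpf_scores, hpf_scores):
        """Interpolate the cutoff frequency (Hz) where low-pass and high-pass
        identification scores are equal; returns None if they never cross."""
        for i in range(len(cutoffs) - 1):
            d0 = lpf_scores[i] - hpf_scores[i]
            d1 = lpf_scores[i + 1] - hpf_scores[i + 1]
            if d0 == 0:
                return cutoffs[i]
            if d0 * d1 < 0:  # sign change: the curves cross in this interval
                t = d0 / (d0 - d1)
                return cutoffs[i] + t * (cutoffs[i + 1] - cutoffs[i])
        return None

    # Hypothetical percent-correct scores at seven cutoff frequencies (Hz)
    cutoffs = [250, 500, 1000, 1500, 2000, 3000, 4000]
    lpf = [10, 30, 55, 70, 82, 90, 95]  # low-pass scores rise with cutoff
    hpf = [95, 88, 72, 60, 45, 25, 12]  # high-pass scores fall with cutoff
    fc = crossover(cutoffs, lpf, hpf)
    ```

    With these invented scores the curves cross between 1000 and 1500 Hz; a higher crossover would indicate greater reliance on low-frequency cues.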

  1. Frequency-Limiting Effects on Speech and Environmental Sound Identification for Cochlear Implant and Normal Hearing Listeners.

    PubMed

    Chang, Son-A; Won, Jong Ho; Kim, HyangHee; Oh, Seung-Ha; Tyler, Richard S; Cho, Chang Hyun

    2017-12-01

    It is important to understand the frequency region of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CI and normal-hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass filtering or high-pass filtering cutoff frequencies. Frequency-limiting effects were implemented in the acoustic waveforms by passing the signals through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and female speaker and environmental sounds was measured. Crossover frequencies were determined for each identification test, where the LPF and HPF conditions show the identical identification scores. CI and NH subjects showed changes in identification performance in a similar manner as a function of cutoff frequency for the LPF and HPF conditions, suggesting that the degraded spectral information in the acoustic signals may similarly constrain the identification performance for both subject groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification due to the inefficient coding of acoustic cues through the CI sound processors. These findings provide vital information, for Korean, on how the frequency information received through a CI processor for speech and environmental sounds differs from that available in normal hearing.

  2. Rapid Release From Listening Effort Resulting From Semantic Context, and Effects of Spectral Degradation and Cochlear Implants

    PubMed Central

    2016-01-01

    People with hearing impairment are thought to rely heavily on context to compensate for reduced audibility. Here, we explore the resulting cost of this compensatory behavior, in terms of effort and the efficiency of ongoing predictive language processing. The listening task featured predictable or unpredictable sentences, and participants included people with cochlear implants as well as people with normal hearing who heard full-spectrum/unprocessed or vocoded speech. The crucial metric was the growth of the pupillary response and the reduction of this response for predictable versus unpredictable sentences, which would suggest reduced cognitive load resulting from predictive processing. Semantic context led to rapid reduction of listening effort for people with normal hearing; the reductions were observed well before the offset of the stimuli. Effort reduction was slightly delayed for people with cochlear implants and considerably more delayed for normal-hearing listeners exposed to spectrally degraded noise-vocoded signals; this pattern of results was maintained even when intelligibility was perfect. Results suggest that speed of sentence processing can still be disrupted, and exertion of effort can be elevated, even when intelligibility remains high. We discuss implications for experimental and clinical assessment of speech recognition, in which good performance can arise because of cognitive processes that occur after a stimulus, during a period of silence. Because silent gaps are not common in continuous flowing speech, the cognitive/linguistic restorative processes observed after sentences in such studies might not be available to listeners in everyday conversations, meaning that speech recognition in conventional tests might overestimate sentence-processing capability. PMID:27698260

  3. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition

    PubMed Central

    Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.

    2015-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in

  4. The Very Young Hearing-Impaired Child.

    ERIC Educational Resources Information Center

    World Federation of the Deaf, Rome (Italy).

    Five conference papers are presented on deaf preschool children and infants. "The Very Young Hearing-Impaired Child" by G.M. Harris of Canada; "The Organisation and Methods of Educational Work for Deaf Children at the Preschool Age" by K. Lundstrom of Sweden; "Speech Formation in the Young Deaf Child" by B.…

  5. Binaural hearing with electrical stimulation

    PubMed Central

    Kan, Alan; Litovsky, Ruth Y.

    2014-01-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. PMID:25193553

  6. Are hearing losses among young Maori different to those found in the young NZ European population?

    PubMed

    Digby, Janet E; Purdy, Suzanne C; Kelly, Andrea S; Welch, David; Thorne, Peter R

    2014-07-18

    This study was undertaken to determine if young Maori have more permanent bilateral hearing loss, or less severe and profound hearing loss, than New Zealand (NZ) Europeans. Data comprised records of hearing-impaired children from birth to 19 years of age from the New Zealand Deafness Notification Database (DND), covering the periods 1982-2005 and 2009-2013. These were retrospectively analysed, as was information on children and young people with cochlear implants. Young Maori are more likely to be diagnosed with permanent hearing loss greater than 26 dB HL, averaged across speech frequencies, with 39-43% of hearing loss notifications listed as Maori. Maori have a lower prevalence of severe/profound losses (n = 1571, chi-squared = 22.08, p = 0.01) but significantly more bilateral losses than their NZ European peers (n = 595, chi-squared = 9.05, p = 0.01). The difference in severity profile is supported by cochlear implant data showing that Maori are less likely to receive a cochlear implant. There are significant differences in the proportion of bilateral (compared to unilateral) losses, and in the rates and severity profile of hearing loss, among young Maori when compared with their NZ European peers. This has implications for screening and other hearing services in NZ.
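    The group comparisons above rely on Pearson's chi-squared test for contingency tables. A self-contained sketch for the 2x2 case follows; the example counts are hypothetical, since the abstract reports only the resulting statistics, not the underlying tables:

    ```python
    def chi2_2x2(table):
        """Pearson chi-squared statistic for a 2x2 contingency table
        (no continuity correction): sum of (O - E)^2 / E over cells."""
        (a, b), (c, d) = table
        n = a + b + c + d
        expected = [((a + b) * (a + c) / n, (a + b) * (b + d) / n),
                    ((c + d) * (a + c) / n, (c + d) * (b + d) / n)]
        observed = [(a, b), (c, d)]
        return sum((o - e) ** 2 / e
                   for obs_row, exp_row in zip(observed, expected)
                   for o, e in zip(obs_row, exp_row))

    # Hypothetical counts: bilateral vs. unilateral loss in two ethnic groups
    stat = chi2_2x2([(120, 60), (250, 165)])
    ```

    The statistic is compared against the chi-squared distribution with 1 degree of freedom to obtain the p-values reported in the abstract.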

  7. Production of Sentence-Final Intonation Contours by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Allen, George D.; Arndorfer, Patricia M.

    2000-01-01

    This study compared the relationship between acoustic parameters and listeners' perceptions of intonation contours produced by 12 children (ages 7-14) with either severe-to-profound hearing impairment (HI) or normal hearing (NH). The HI children's productions were generally similar to those of the NH children in that they used fundamental frequency,…

  8. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners

    PubMed Central

    Ho Won, Jong; Jones, Gary L.; Drennan, Ward R.; Jameyson, Elyse M.; Rubinstein, Jay T.

    2011-01-01

    Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an “ideal observer,” showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels. PMID:21973363

  9. Visual Cues Contribute Differentially to Audiovisual Perception of Consonants and Vowels in Improving Recognition and Reducing Cognitive Demands in Listeners With Hearing Impairment Using Hearing Aids.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Danielsson, Henrik; Ng, Elaine Hoi Ning; Rönnberg, Jerker

    2017-09-18

    We sought to examine the contribution of visual cues to the audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Consonants and vowels differed in terms of the benefits afforded by their associative visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.

  10. How Hard Can It Be to Listen? Fatigue in School-Age Children with Hearing Loss

    ERIC Educational Resources Information Center

    Bess, Fred H.; Gustafson, Samantha J.; Hornsby, Benjamin W. Y.

    2014-01-01

    Teachers and parents have long believed that children with hearing loss (CHL) are at increased risk for fatigue. CHL may be physically and mentally "worn out" as a result of focusing so intently on a teacher's speech and on conversations with other students. Moreover, increased listening effort, stress, and subsequent fatigue could…

  11. Binaural speech discrimination under noise in hearing-impaired listeners.

    PubMed

    Kumar, K V; Rao, A B

    1988-10-01

    This study was undertaken to assess speech discrimination under binaural listening with background noise in hearing-impaired subjects. Subjects (58 sensori-neural, 23 conductive, and 19 mixed) were administered an indigenous version of W-22 PB words under three conditions: Condition I, quiet (chamber noise below 28 dB) with speech at 60 dB; and a constant signal-to-noise (S/N) ratio of +10 dB with background white noise at 70 dB (Condition II) and 80 dB (Condition III). The mean scores were (a) 81 +/- 16%, (b) 77 +/- 9%, and (c) 79 +/- 13%, respectively. Mean scores decreased significantly (p < 0.001) with noise in all groups, while scores were higher (p < 0.001) at the higher noise level only in the sensori-neural group. The decrease in scores with advancing hearing impairment was smaller in noise than in quiet, probably owing to binaural listening and the satisfactory S/N ratio. The scores did not fall below 70% unless the handicap was marked. The need for suitable standards of binaural speech discrimination under noise in aircrew assessment is emphasized.

  12. Recognition and Comprehension of "Narrow Focus" by Young Adults With Prelingual Hearing Loss Using Hearing Aids or Cochlear Implants.

    PubMed

    Segal, Osnat; Kishon-Rabin, Liat

    2017-12-20

    The stressed word in a sentence (narrow focus [NF]) conveys information about the intent of the speaker and is therefore important for processing spoken language and in social interactions. The ability of participants with severe-to-profound prelingual hearing loss to comprehend NF has rarely been investigated. The purpose of this study was to assess the recognition and comprehension of NF by young adults with prelingual hearing loss compared with those of participants with normal hearing (NH). The participants included young adults with hearing aids (HA; n = 10), cochlear implants (CI; n = 12), and NH (n = 18). The test material included the Hebrew Narrow Focus Test (Segal, Kaplan, Patael, & Kishon-Rabin, in press), with 3 subtests, which was used to assess the recognition and comprehension of NF in different contexts. The following results were obtained: (a) CI and HA users successfully recognized the stressed word, with the poorest performance among CI users; (b) HA and CI users comprehended NF less well than NH listeners; and (c) the comprehension of NF was associated with verbal working memory and expressive vocabulary in CI users. Most CI and HA users were able to recognize the stressed word in a sentence but had considerable difficulty understanding it. Different factors may contribute to this difficulty, including the memory load during the task itself and linguistic and pragmatic abilities. https://doi.org/10.23641/asha.5572792.

  13. Discotheques and the Risk of Hearing Loss among Youth: Risky Listening Behavior and Its Psychosocial Correlates

    ERIC Educational Resources Information Center

    Vogel, Ineke; Brug, Johannes; Van Der Ploeg, Catharina P. B.; Raat, Hein

    2010-01-01

    There is an increasing population at risk of hearing loss and tinnitus due to increasing high-volume music listening. To inform prevention strategies and interventions, this study aimed to identify important protection motivation theory-based constructs as well as the constructs "consideration of future consequences" and "habit…

  14. Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments.

    PubMed

    Ricketts, Todd A; Picou, Erin M

    2013-09-01

    This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments. This study also aimed to evaluate speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, the bilateral directional microphone configuration was best, and when the signal of interest was behind the listener, the bilateral omnidirectional configuration was best. Performance with asymmetric directional microphones fell between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. Children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech was from the front. In contrast, children with normal hearing still outperformed children with hearing loss if the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation. Bilateral

  15. School Nurses' Role in Identifying and Referring Children at Risk of Noise-Induced Hearing Loss

    ERIC Educational Resources Information Center

    Hendershot, Candace; Pakulski, Lori A.; Thompson, Amy; Dowling, Jamie; Price, James H.

    2011-01-01

    Young people are likely to experience noise-induced hearing loss (NIHL), as the use of personal listening devices and other damaging factors (e.g., video games) increases. Little research has examined the role of school health personnel in the prevention and early identification of hearing impairment. A 32-item, valid and reliable survey was…

  16. Subjective and Objective Effects of Fast and Slow Compression on the Perception of Reverberant Speech in Listeners with Hearing Loss

    ERIC Educational Resources Information Center

    Shi, Lu-Feng; Doherty, Karen A.

    2008-01-01

    Purpose: The purpose of the current study was to assess the effect of fast and slow attack/release times (ATs/RTs) on aided perception of reverberant speech in quiet. Method: Thirty listeners with mild-to-moderate sensorineural hearing loss were tested monaurally with a commercial hearing aid programmed in 3 AT/RT settings: linear, fast (AT = 9…

  17. Auditory measures of selective and divided attention in young and older adults using single-talker competition.

    PubMed

    Humes, Larry E; Lee, Jae Hee; Coughlin, Maureen P

    2006-11-01

    In this study, two experiments were conducted on auditory selective and divided attention in which the listening task involved the identification of words in sentences spoken by one talker while a second talker produced a very similar competing sentence. Ten young normal-hearing (YNH) and 13 elderly hearing-impaired (EHI) listeners participated in each experiment. The type of attention cue used was the main difference between experiments. Across both experiments, several consistent trends were observed. First, in eight of the nine divided-attention tasks across both experiments, the EHI subjects performed significantly worse than the YNH subjects. By comparison, significant differences in performance between age groups were only observed on three of the nine selective-attention tasks. Finally, there were consistent individual differences in performance across both experiments. Correlational analyses performed on the data from the 13 older adults suggested that the individual differences in performance were associated with individual differences in memory (digit span). Among the elderly, differences in age or differences in hearing loss did not contribute to the individual differences observed in either experiment.

  18. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach

    PubMed Central

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences. PMID:27920733

  19. Chinese Writing of Deaf or Hard-of-Hearing Students and Normal-Hearing Peers from Complex Network Approach.

    PubMed

    Jin, Huiyuan; Liu, Haitao

    2016-01-01

    Deaf or hard-of-hearing individuals usually face a greater challenge to learn to write than their normal-hearing counterparts. Due to the limitations of traditional research methods focusing on microscopic linguistic features, a holistic characterization of the writing linguistic features of these language users is lacking. This study attempts to fill this gap by adopting the methodology of linguistic complex networks. Two syntactic dependency networks are built in order to compare the macroscopic linguistic features of deaf or hard-of-hearing students and those of their normal-hearing peers. One is transformed from a treebank of writing produced by Chinese deaf or hard-of-hearing students, and the other from a treebank of writing produced by their Chinese normal-hearing counterparts. Two major findings are obtained through comparison of the statistical features of the two networks. On the one hand, both linguistic networks display small-world and scale-free network structures, but the network of the normal-hearing students exhibits a more power-law-like degree distribution. Relevant network measures show significant differences between the two linguistic networks. On the other hand, deaf or hard-of-hearing students tend to have a lower language proficiency level in both syntactic and lexical aspects. The rigid use of function words and a lower vocabulary richness of the deaf or hard-of-hearing students may partially account for the observed differences.

  20. Variability and Intelligibility of Clarified Speech to Different Listener Groups

    NASA Astrophysics Data System (ADS)

    Silber, Ronnie F.

    Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research focusing on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used; the linguistic samples were therefore not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech), and (4) in spontaneous clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story

  1. Ranking Hearing Aid Input-Output Functions for Understanding Low-, Conversational-, and High-Level Speech in Multitalker Babble

    ERIC Educational Resources Information Center

    Chung, King; Killion, Mead C.; Christensen, Laurel A.

    2007-01-01

    Purpose: To determine the rankings of 6 input-output functions for understanding low-level, conversational, and high-level speech in multitalker babble without manipulating volume control for listeners with normal hearing, flat sensorineural hearing loss, and mildly sloping sensorineural hearing loss. Method: Peak clipping, compression limiting,…

  2. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners with Simulated Age-Related Hearing Loss

    ERIC Educational Resources Information Center

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-01-01

    Purpose: The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist…

  3. Spectral Ripple Discrimination in Normal Hearing Infants

    PubMed Central

    Horn, David L.; Won, Jong Ho; Rubinstein, Jay T.; Werner, Lynne A.

    2016-01-01

    Objectives Spectral resolution is a correlate of open-set speech understanding in post-lingually deaf adults as well as pre-lingually deaf children who use cochlear implants (CIs). In order to apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing (NH) children. In this study, spectral ripple discrimination (SRD) was used to measure listeners’ sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak-to-peak location (frequency resolution) and peak-to-trough intensity (across-channel intensity resolution) are required for SRD. Design SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90-degree shift in phase of the sinusoidally modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough “depth” (10, 13, and 20 dB) on SRD in normal-hearing listeners (Experiment 1). In Experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In Experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in a repeated measures design). Results In Experiment 1, there was a significant interaction between age and ripple depth. Infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults, suggesting that frequency resolution was better in infants than adults. However, in Experiment 2 infant performance was
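    The SRD stimuli described in the Design section, i.e. broadband noises whose amplitude spectra are sinusoidally modulated on a log-frequency axis and which differ only by a 90-degree shift in ripple phase, can be sketched as follows. All parameter values (sampling rate, duration, 100 Hz reference frequency) are illustrative assumptions, not the study's exact synthesis procedure:

```python
# Sketch of a spectral-ripple stimulus pair: two broadband noises whose
# log-amplitude spectra are sinusoidal in log-frequency, differing only by a
# 90-degree shift in ripple starting phase. Parameters are illustrative.
import numpy as np

def rippled_noise(density, phase, depth_db, fs=22050, dur=0.5, seed=0):
    """density: ripples per octave; phase: ripple starting phase (radians);
    depth_db: peak-to-trough depth of the spectral envelope in dB."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    octaves = np.log2(np.maximum(freqs, 1.0) / 100.0)  # octaves re 100 Hz
    # Sinusoidal envelope on a dB scale; total peak-to-trough = depth_db.
    env_db = (depth_db / 2) * np.sin(2 * np.pi * density * octaves + phase)
    spec *= 10 ** (env_db / 20)
    return np.fft.irfft(spec, n)

standard = rippled_noise(density=2.0, phase=0.0, depth_db=20)
shifted = rippled_noise(density=2.0, phase=np.pi / 2, depth_db=20)  # 90 deg
```

    Discriminating `standard` from `shifted` requires tracking the spectral envelope across frequency; randomizing the starting phase across trials, as in Experiments 2 and 3, removes any fixed within-channel intensity cue.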

  4. Relations Between Self-Reported Daily-Life Fatigue, Hearing Status, and Pupil Dilation During a Speech Perception in Noise Task.

    PubMed

    Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas

    People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate the subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during the speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech in noise test. Less fatigue and better hearing acuity were associated with a larger pupil
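    Peak pupil dilation, the outcome measure named above, is conventionally the maximum of a baseline-corrected pupil trace during the listening window. A minimal sketch of that computation; the trace values and baseline length are hypothetical, not this study's recordings:

```python
# Illustrative peak pupil dilation (PPD): the maximum of the pupil trace
# relative to a pre-stimulus baseline. All sample values are hypothetical.

def peak_pupil_dilation(trace_mm, baseline_samples):
    """trace_mm: pupil diameter samples in mm; the baseline is the mean of
    the first `baseline_samples` (pre-stimulus) values, and PPD is the
    largest dilation above that baseline in the remaining samples."""
    baseline = sum(trace_mm[:baseline_samples]) / baseline_samples
    return max(x - baseline for x in trace_mm[baseline_samples:])

trace = [3.10, 3.12, 3.11, 3.15, 3.30, 3.42, 3.38, 3.25]  # mm, hypothetical
print(round(peak_pupil_dilation(trace, baseline_samples=3), 2))  # → 0.31
```

    A per-trial PPD computed this way can then be averaged across trials and entered into correlation or regression analyses against fatigue scores and hearing acuity, as the abstract describes.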

  5. The MOC Reflex during Active Listening to Speech

    ERIC Educational Resources Information Center

    Garinis, Angela C.; Glattke, Theodore; Cone, Barbara K.

    2011-01-01

    Purpose: The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Method: Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in…

  6. Effect of occlusion, directionality and age on horizontal localization

    NASA Astrophysics Data System (ADS)

    Alworth, Lynzee Nicole

    Localization acuity of a given listener is dependent upon the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low-frequency information, whereas interaural level differences are encoded by high-frequency information. Much research has examined the effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 hearing impaired). Each normal-hearing participant completed one localization experiment (unaided/unamplified) in which they listened to the stimulus "Baseball" and selected the point of origin. Hearing-impaired listeners were fit with the same two receiver-in-the-ear hearing aids and same dome types, thus controlling for microphone technologies, type of dome, and fitting between trials. Hearing-impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Also, results revealed a significant difference in performance between dome types; however, upon further examination was not

  7. Binaural hearing with electrical stimulation.

    PubMed

    Kan, Alan; Litovsky, Ruth Y

    2015-04-01

    Bilateral cochlear implantation is becoming a standard of care in many clinics. While much benefit has been shown through bilateral implantation, patients who have bilateral cochlear implants (CIs) still do not perform as well as normal hearing listeners in sound localization and understanding speech in noisy environments. This difference in performance can arise from a number of different factors, including the areas of hardware and engineering, surgical precision and pathology of the auditory system in deaf persons. While surgical precision and individual pathology are factors that are beyond careful control, improvements can be made in the areas of clinical practice and the engineering of binaural speech processors. These improvements should be grounded in a good understanding of the sensitivities of bilateral CI patients to the acoustic binaural cues that are important to normal hearing listeners for sound localization and speech in noise understanding. To this end, we review the current state-of-the-art in the understanding of the sensitivities of bilateral CI patients to binaural cues in electric hearing, and highlight the important issues and challenges as they relate to clinical practice and the development of new binaural processing strategies. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Knowledge, behaviors, and attitudes about hearing loss and hearing protection among racial/ethnically diverse young adults.

    PubMed

    Crandell, Carl; Mills, Terry L; Gauthier, Ricardo

    2004-02-01

    Over 11 million individuals exhibit some degree of permanent noise induced hearing loss (NIHL). Despite such data, there remains a paucity of empirical evidence on the knowledge of noise exposure and hearing protection devices (HPDs) for young adults, particularly those of diverse racial/ethnic backgrounds. This lack of research is unfortunate, as prior research suggests that the incidence of NIHL can be reduced through educational programs, such as hearing conservation programs (HCPs). Moreover, research also indicates that such educational programs are more beneficial when developed for specific age and/or ethnic/racial groups. The primary aim of this investigation was to determine the knowledge base of 200 college-aged young adults aged 18-29, concerning the auditory mechanism, NIHL, and the use of HPDs. The second aim of this study was to identify race and ethnicity differences or similarities in knowledge of these areas among African-American and Caucasian young adults. Overall, in many instances, a majority of the young adults in our study demonstrated a high degree of knowledge concerning factors associated with exposure to excessive noise and the risk of hearing loss. Yet, the results also revealed significant racial/ethnic differences in knowledge, behaviors, and attitudes about the use of HPDs. Recent estimates suggest that more than 11 million individuals in the United States exhibit some degree of NIHL. Moreover, 40 million individuals work in environments that contain potentially harmful noise levels, and over 50 million Americans routinely use firearms, a common cause of noise-induced hearing impairment. A specific hallmark manifestation of NIHL is a permanent decrease in hearing sensitivity from 3,000-6,000 Hz, with a characteristic notch at 4,000 Hz. Additional effects of exposure to high noise levels include physiological changes in heart rate and blood pressure, decrease in work productivity, and an interference with communication that results

  9. The impact of aging and hearing status on verbal short-term memory.

    PubMed

    Verhaegen, Clémence; Collette, Fabienne; Majerus, Steve

    2014-01-01

    The aim of this study is to assess the impact of hearing status on age-related decrease in verbal short-term memory (STM) performance. This was done by administering a battery of verbal STM tasks to elderly and young adult participants matched for hearing thresholds, as well as to young normal-hearing control participants. The matching procedure allowed us to assess the importance of hearing loss as an explanatory factor of age-related STM decline. We observed that elderly participants and hearing-matched young participants showed equal levels of performance in all verbal STM tasks, and performed overall lower than the normal-hearing young control participants. This study provides evidence for recent theoretical accounts considering reduced hearing level as an important explanatory factor of poor auditory-verbal STM performance in older adults.

  10. Auditory temporal-order processing of vowel sequences by young and elderly listeners

    PubMed Central

    Fogerty, Daniel; Humes, Larry E.; Kewley-Port, Diane

    2010-01-01

    This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18–31 years) and older (N=151; 60–88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners had more variability and performed poorer than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners’ SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences of temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age. PMID:20370033
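    The forced-choice, constant-stimuli method described above yields percent-correct scores at a fixed set of SOAs, from which a threshold SOA is read off at a criterion performance level. A minimal sketch using linear interpolation; the SOAs, scores, and 75% criterion are hypothetical illustrations, not the study's data or exact fitting procedure:

```python
# Sketch: estimating an SOA threshold from constant-stimuli data by linearly
# interpolating to a criterion percent correct. Data points are hypothetical.

def soa_threshold(soas_ms, pct_correct, criterion=75.0):
    """Interpolate the SOA (ms) at which performance crosses the criterion,
    assuming percent correct increases monotonically with SOA."""
    points = list(zip(soas_ms, pct_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the measured points")

soas = [20, 40, 80, 160, 320]             # stimulus onset asynchronies, ms
correct = [35.0, 50.0, 70.0, 90.0, 98.0]  # hypothetical percent correct
print(soa_threshold(soas, correct))  # → 100.0
```

    A lower threshold indicates finer temporal-order resolution; comparing such thresholds across young and older listener groups is the comparison the abstract reports.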

  11. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    PubMed

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.

  12. High-Frequency Amplification and Sound Quality in Listeners with Normal through Moderate Hearing Loss

    ERIC Educational Resources Information Center

    Ricketts, Todd A.; Dittberner, Andrew B.; Johnson, Earl E.

    2008-01-01

    Purpose: One factor that has been shown to greatly affect sound quality is audible bandwidth. Provision of gain for frequencies above 4-6 kHz has not generally been supported for groups of hearing aid wearers. The purpose of this study was to determine if preference for bandwidth extension in hearing aid processed sounds was related to the…

  13. Working memory, age, and hearing loss: susceptibility to hearing aid distortion.

    PubMed

    Arehart, Kathryn H; Souza, Pamela; Baca, Rosalinda; Kates, James M

    2013-01-01

    Hearing aids use complex processing intended to improve speech recognition. Although many listeners benefit from such processing, it can also introduce distortion that offsets or cancels intended benefits for some individuals. The purpose of the present study was to determine the effects of cognitive ability (working memory) on individual listeners' responses to distortion caused by frequency compression applied to noisy speech. The present study analyzed a large data set of intelligibility scores for frequency-compressed speech presented in quiet and at a range of signal-to-babble ratios. The intelligibility data set was based on scores from 26 adults with hearing loss with ages ranging from 62 to 92 years. The listeners were grouped based on working memory ability. The amount of signal modification (distortion) caused by frequency compression and noise was measured using a sound quality metric. Analysis of variance and hierarchical linear modeling were used to identify meaningful differences between subject groups as a function of signal distortion caused by frequency compression and noise. Working memory was a significant factor in listeners' intelligibility of sentences presented in babble noise and processed with frequency compression based on sinusoidal modeling. At maximum signal modification (caused by both frequency compression and babble noise), the factor of working memory (when controlling for age and hearing loss) accounted for 29.3% of the variance in intelligibility scores. Combining working memory, age, and hearing loss accounted for a total of 47.5% of the variability in intelligibility scores. Furthermore, as the total amount of signal distortion increased, listeners with higher working memory performed better on the intelligibility task than listeners with lower working memory did. Working memory is a significant factor in listeners' responses to total signal distortion caused by the cumulative effects of babble noise and frequency compression.

  14. Tinnitus in normally hearing patients: clinical aspects and repercussions.

    PubMed

    Sanchez, Tanit Ganz; Medeiros, Italo Roberto Torres de; Levy, Cristiane Passos Dias; Ramalho, Jeanne da Rosa Oiticica; Bento, Ricardo Ferreira

    2005-01-01

    Patients with tinnitus and normal hearing constitute an important group, given that findings in these patients are not influenced by hearing loss. However, this group is rarely studied, so we do not know whether its clinical characteristics and the tinnitus interference in daily life are the same as those of patients with tinnitus and hearing loss. To compare tinnitus characteristics and interference in daily life between patients with and without hearing loss. Historic cohort. Among 744 tinnitus patients seen at a Tinnitus Clinic, 55 with normal audiometry were retrospectively evaluated. The control group consisted of 198 patients with tinnitus and hearing loss, following the same protocol. We analyzed the patients' data as well as the tinnitus characteristics and interference in daily life. The mean age of the studied group (43.1 +/- 13.4 years) was significantly lower than that of the control group (49.9 +/- 14.5 years). In both groups, tinnitus was predominantly found in women and was bilateral, single tone, and constant, with no differences between the groups. Interference with concentration and emotional status (25.5% and 36.4%) was significantly lower in the studied group than in the control group (46% and 61.6%), but this was not the case for interference with sleep and social life. Patients with tinnitus and normal hearing showed characteristics similar to those of patients with hearing loss. However, patient age and the interference with concentration and emotional status were significantly lower in this group.

  15. Making sense of listening: the IMAP test battery.

    PubMed

    Barry, Johanna G; Ferguson, Melanie A; Moore, David R

    2010-10-11

    The ability to hear is only the first step towards making sense of the range of information contained in an auditory signal. Of equal importance are the abilities to extract and use the information encoded in the auditory signal. We refer to these as listening skills (or auditory processing, AP). Deficits in these skills are associated with delayed language and literacy development, though the nature of the relevant deficits and their causal connection with these delays is hotly debated. When a child with normal hearing is referred to a health professional with unexplained difficulties in listening, or associated delays in language or literacy development, they should ideally be assessed with a combination of psychoacoustic (AP) tests, suitable for children and for use in a clinic, together with cognitive tests to measure attention, working memory, IQ, and language skills. Such a detailed examination needs to be relatively short and within the technical capability of any suitably qualified professional. Current tests for the presence of AP deficits tend to be poorly constructed and inadequately validated within the normal population. They have little or no reference to the presenting symptoms of the child, and typically include a linguistic component. Poor performance may thus reflect problems with language rather than with AP. To assist in the assessment of children with listening difficulties, pediatric audiologists need a single, standardized, child-appropriate test battery based on the use of language-free stimuli. We present the IMAP test battery, which was developed at the MRC Institute of Hearing Research to supplement tests currently used to investigate cases of suspected AP deficits. IMAP assesses a range of relevant auditory and cognitive skills and takes about one hour to complete. It has been standardized in 1500 normally-hearing children from across the UK, aged 6-11 years. Since its development, it has been successfully used in a number of large-scale studies.

  16. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  17. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29° was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R2=0.68, p<0
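The localization-accuracy measure described above, root-mean-square (RMS) error between target and reported loudspeaker angles, is straightforward to compute. The sketch below uses invented trial data purely to illustrate the metric; it is not data from the study.

```python
import math

# Illustrative sketch: RMS localization error across trials, as used to
# quantify accuracy in a loudspeaker-identification task. Angles are in
# degrees; the trial data below are made up for demonstration.

def rms_error(target_deg, response_deg):
    """RMS of the target-response angular differences, in degrees."""
    errs = [(t - r) ** 2 for t, r in zip(target_deg, response_deg)]
    return math.sqrt(sum(errs) / len(errs))

targets   = [-60, -30, 0, 30, 60]   # actual loudspeaker azimuths
responses = [-45, -30, 15, 30, 45]  # listener's reported azimuths
print(round(rms_error(targets, responses), 1))
```

Larger RMS errors indicate poorer localization; the 19°–56° range reported for the BICI group corresponds to systematically larger trial-by-trial deviations than the 9°–29° range for the normal-hearing group.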

  18. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  19. The effect of hearing impairment on localization dominance for single-word stimuli

    PubMed Central

    Akeroyd, Michael A; Guy, Fiona H.

    2012-01-01

    Localization dominance (one of the phenomena of the “precedence effect”) was measured in a large number of normal-hearing and hearing-impaired individuals and related to self-reported difficulties in everyday listening. The stimuli (single words) were made up of a “lead” followed 4 ms later by an equal-level “lag” from a different direction. The stimuli were presented from a circular ring of loudspeakers, either in quiet or in a background of spatially-diffuse babble. Listeners were required to identify the loudspeaker from which they heard the sound. Localization dominance was quantified by the weighting factor c [B.G. Shinn-Cunningham et al., J. Acoust. Soc. Am. 93, 2923-2932 (1993)]. The results demonstrated large individual differences: some listeners showed near-perfect localization dominance (c near 1) but many showed a much reduced effect. Two thirds (64/93) of listeners gave a value of c of at least 0.75. There was a significant correlation with hearing loss, such that better-hearing listeners showed better localization dominance. One of the items of the self-report questionnaire (“Do you have the impression of sounds being exactly where you would expect them to be?”) showed a significant correlation with the experimental results. This suggests that reductions in localization dominance may affect everyday auditory perception. PMID:21786901
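The weighting factor c cited above comes from a lead-lag weighting model: the reported location is treated as c times the lead direction plus (1 - c) times the lag direction, so c = 1 means the lead fully dominates. A hedged sketch of recovering c from trial data by least squares follows; the azimuths and responses are invented for illustration.

```python
# Hypothetical sketch of the lead-lag weighting model behind the factor c:
# response ≈ c*lead + (1 - c)*lag. Rearranging, (response - lag) is
# proportional to (lead - lag), and c is the least-squares slope.
# All trial values below are invented.

def estimate_c(lead, lag, response):
    """Least-squares estimate of the lead weighting factor c."""
    num = sum((r - g) * (d - g) for d, g, r in zip(lead, lag, response))
    den = sum((d - g) ** 2 for d, g in zip(lead, lag))
    return num / den

lead = [-40, -20, 20, 40]        # lead azimuths (degrees)
lag  = [40, 20, -20, -40]        # lag azimuths, opposite side
resp = [-32, -16, 16, 32]        # responses pulled mostly toward the lead
print(estimate_c(lead, lag, resp))
```

A listener whose responses fall exactly at the lead positions would yield c = 1 (perfect localization dominance); responses midway between lead and lag would yield c = 0.5.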

  20. Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing

    PubMed Central

    Bidelman, Gavin M.; Heinz, Michael G.

    2011-01-01

    Human listeners prefer consonant over dissonant musical intervals and the perceived contrast between these classes is reduced with cochlear hearing loss. Population-level activity of normal and impaired model auditory-nerve (AN) fibers was examined to determine (1) if peripheral auditory neurons exhibit correlates of consonance and dissonance and (2) if the reduced perceptual difference between these qualities observed for hearing-impaired listeners can be explained by impaired AN responses. In addition, acoustical correlates of consonance-dissonance were also explored including periodicity and roughness. Among the chromatic pitch combinations of music, consonant intervals/chords yielded more robust neural pitch-salience magnitudes (determined by harmonicity/periodicity) than dissonant intervals/chords. In addition, AN pitch-salience magnitudes correctly predicted the ordering of hierarchical pitch and chordal sonorities described by Western music theory. Cochlear hearing impairment compressed pitch salience estimates between consonant and dissonant pitch relationships. The reduction in contrast of neural responses following cochlear hearing loss may explain the inability of hearing-impaired listeners to distinguish musical qualia as clearly as normal-hearing individuals. Of the neural and acoustic correlates explored, AN pitch salience was the best predictor of behavioral data. Results ultimately show that basic pitch relationships governing music are already present in initial stages of neural processing at the AN level. PMID:21895089

  1. Laryngeal Aerodynamics in Children with Hearing Impairment versus Age and Height Matched Normal Hearing Peers.

    PubMed

    Das, Barshapriya; Chatterjee, Indranil; Kumar, Suman

    2013-01-01

    Lack of proper auditory feedback in hearing-impaired subjects results in functional voice disorder. It is directly related to discoordination of the intrinsic and extrinsic laryngeal muscles and disturbed contraction and relaxation of antagonistic muscles. A total of twenty children in the age range of 5-10 years were considered for the study. They were divided into two groups: normal-hearing children and hearing aid user children. Results showed a significant difference between the groups in vital capacity, maximum sustained phonation (MSP), and fast adduction-abduction rate (with equal variance across groups), but no significant difference in the peak flow value. A reduced vital capacity in hearing aid user children suggests a limited use of the lung volume for speech production. It may be inferred from the study that hearing aid user children have poor vocal proficiency, reflected in their voice, and that their atypical use of the voicing component stems from improper auditory feedback.

  2. Neural tracking of attended versus ignored speech is differentially affected by hearing loss.

    PubMed

    Petersen, Eline Borch; Wöstmann, Malte; Obleser, Jonas; Lunner, Thomas

    2017-01-01

    Hearing loss manifests as a reduced ability to understand speech, particularly in multitalker situations. In these situations, younger normal-hearing listeners' brains are known to track attended speech through phase-locking of neural activity to the slow-varying envelope of the speech. This study investigates how hearing loss, compensated by hearing aids, affects the neural tracking of the speech-onset envelope in elderly participants with varying degrees of hearing loss (n = 27, 62-86 yr; hearing thresholds 11-73 dB hearing level). In an active listening task, a to-be-attended audiobook (signal) was presented either in quiet or against a competing to-be-ignored audiobook (noise) presented at three individualized signal-to-noise ratios (SNRs). The neural tracking of the to-be-attended and to-be-ignored speech was quantified through the cross-correlation of the electroencephalogram (EEG) and the temporal envelope of speech. We primarily investigated the effects of hearing loss and SNR on the neural envelope tracking. First, we found that elderly hearing-impaired listeners' neural responses reliably track the envelope of to-be-attended speech more than to-be-ignored speech. Second, hearing loss relates to the neural tracking of to-be-ignored speech, resulting in a weaker differential neural tracking of to-be-attended vs. to-be-ignored speech in listeners with worse hearing. Third, neural tracking of to-be-attended speech increased with decreasing background noise. Critically, the beneficial effect of reduced noise on neural speech tracking decreased with stronger hearing loss. In sum, our results show that a common sensorineural processing deficit, i.e., hearing loss, interacts with central attention mechanisms and reduces the differential tracking of attended and ignored speech. The present study investigates the effect of hearing loss in older listeners on the neural tracking of competing speech. Interestingly, we observed that whereas internal degradation (hearing
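The envelope-tracking measure described in this record, cross-correlating the EEG with the temporal envelope of speech across a range of lags, can be sketched in a few lines. This is a simplified, single-channel illustration on synthetic signals, not the authors' analysis pipeline.

```python
import numpy as np

# Minimal sketch (not the authors' pipeline): neural envelope tracking as
# the normalized cross-correlation between one EEG channel and the speech
# envelope over a range of lags. Both signals here are synthetic.

def envelope_tracking(eeg, envelope, max_lag):
    """Return {lag: correlation} for lags in [-max_lag, +max_lag] samples."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    n = len(eeg)
    # circular shift of the envelope approximates the lagged correlation
    return {lag: float(np.dot(np.roll(env, lag), eeg)) / n
            for lag in range(-max_lag, max_lag + 1)}

rng = np.random.default_rng(0)
env = rng.standard_normal(1000)
eeg = np.roll(env, 8) + 0.5 * rng.standard_normal(1000)  # EEG lags env by 8 samples
xcorr = envelope_tracking(eeg, env, max_lag=20)
print(max(xcorr, key=xcorr.get))  # lag of the correlation peak
```

In a real analysis the peak cross-correlation (typically at a positive neural lag of one to a few hundred milliseconds) for attended vs. ignored speech would be compared, which is the differential tracking measure the study reports.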

  3. Speech Rate Normalization and Phonemic Boundary Perception in Cochlear-Implant Users

    ERIC Educational Resources Information Center

    Jaekel, Brittany N.; Newman, Rochelle S.; Goupell, Matthew J.

    2017-01-01

    Purpose: Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate…

  4. A physiologically-inspired model reproducing the speech intelligibility benefit in cochlear implant listeners with residual acoustic hearing.

    PubMed

    Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim

    2017-02-01

    This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit when receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating deprivation of the auditory system, and (3) the upper-bound frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper-bound limit for preserved acoustic hearing is less influential on the speech intelligibility of EA-listeners in stationary noise. The model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing.

  5. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    ERIC Educational Resources Information Center

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  6. The effect of voice quality and competing speakers in a passage comprehension task: performance in relation to cognitive functioning in children with normal hearing.

    PubMed

    von Lochow, Heike; Lyberg-Åhlander, Viveka; Sahlén, Birgitta; Kastberg, Tobias; Brännström, K Jonas

    2018-04-01

    This study explores the effect of voice quality and competing speakers on children's performance in a passage comprehension task. Furthermore, it explores the interaction between passage comprehension and cognitive functioning. Forty-nine children (27 girls and 22 boys) with normal hearing (aged 7-12 years) participated. Passage comprehension was tested in six different listening conditions: a typical (non-dysphonic) voice in quiet, a typical voice with one competing speaker, a typical voice with four competing speakers, a dysphonic voice in quiet, a dysphonic voice with one competing speaker, and a dysphonic voice with four competing speakers. The children's working memory capacity and executive functioning were also assessed. The findings indicate no direct effect of voice quality on the children's performance, but a significant effect of background listening condition. Interaction effects were seen between voice quality, background listening condition, and executive functioning. The children's susceptibility to the effects of the dysphonic voice and the background listening conditions is related to the individual child's executive functions. The findings have several implications for the design of interventions in language learning environments such as classrooms.

  7. Hear Me, Oh Hear Me! Are We Listening to Our Employees?

    ERIC Educational Resources Information Center

    Loy, Darcy

    2011-01-01

    Listening is one of the most crucial skills that leaders need to possess but is often the most difficult to master. It takes hard work, concentration, and specific skill sets to become an effective listener. Facilities leaders need to perfect the art of listening to their employees. Employees possess pertinent knowledge about day-to-day operations…

  8. Classroom listening assessment: strategies for speech-language pathologists.

    PubMed

    Johnson, Cheryl DeConde

    2012-11-01

    Emphasis on classroom listening has gained importance for all children and especially for those with hearing loss and special listening needs. The rationale can be supported from trends in educational placements, the Response to Intervention initiative, student performance and accountability, the role of audition in reading, and improvement in hearing technologies. Speech-language pathologists have an instrumental role advocating for the accommodations that are necessary for effective listening for these children in school. To identify individual listening needs and make relevant recommendations for accommodations, a classroom listening assessment is suggested. Components of the classroom listening assessment include observation, behavioral assessment, self-assessment, and classroom acoustics measurements. Together, with a strong rationale, the results can be used to implement a plan that results in effective classroom listening for these children.

  9. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    ERIC Educational Resources Information Center

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  10. Objective and perceptual comparisons of two bluetooth hearing aid assistive devices.

    PubMed

    Clark, Jackie L; Pustejovsky, Carmen; Vanneste, Sven

    2017-08-01

    With the advent of Bluetooth technology, many assistive listening devices for hearing have become manufacturer specific, with little objective information available about their performance. Thirty native English-speaking adults (mean age 29.8) with normal hearing were tested pseudo-randomly with two major hearing aid manufacturers' proprietary Bluetooth connectivity devices, each paired to the accompanying manufacturer's hearing aids. Sentence recognition performance was objectively measured for each system with signals transmitted via a land-line to the same iPhone in two conditions. There was a significant effect of listening condition on participants' performance. There was no significant effect of device manufacturer according to listening condition, but there was a significant effect on participants' perception of "quality of sound". Despite differences in signal transmission for each device, both systems performed equally when worn by participants. In fact, participants expressed personal preferences for specific technology that were largely due to their perceived quality of sound while listening to recorded signals. While further research is necessary to investigate other measures of benefit for Bluetooth connectivity devices, preliminary data suggest that, in order to ensure comfort and compatibility, not only should objective measures of patient benefit be completed, but the patient's perception of benefit should also be assessed. Implications for Rehabilitation: All professionals who work with individuals with hearing loss should become aware of the differences among the multiple choices of assistive technology readily available for hearing loss. With the ever-growing dispensing of Bluetooth connectivity devices coupled to hearing aids, there is an increased burden to determine whether performance differences exist between manufacturers. There is a growing need to investigate other measures of benefit for Bluetooth

  11. The MOC reflex during active listening to speech.

    PubMed

    Garinis, Angela C; Glattke, Theodore; Cone, Barbara K

    2011-10-01

    The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in quiet; (b) with 60-dB SPL contralateral broadband noise; (c) with words embedded (at -3-dB signal-to-noise ratio [SNR]) in 60-dB SPL contralateral noise during which listeners directed attention to the words; and (d) for the same SNR as in the 3rd condition, with words played backwards. There was greater suppression during active listening compared with passive listening that was apparent in the latency range of 6- to 18-ms poststimulus onset. Ear differences in CEOAE amplitude were observed in all conditions, with right-ear amplitudes larger than those for the left. The absolute difference between CEOAE amplitude in quiet and with contralateral noise, a metric of suppression, was equivalent for right and left ears. When the amplitude differences were normalized, suppression was greater for noise presented to the right and the effect measured for a probe in the left ear. The findings support the theory that cortical mechanisms involved in listening to speech affect cochlear function through the MOC efferent system.
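The suppression metric described in this record, the difference between CEOAE amplitude in quiet and with contralateral noise, can be expressed either in absolute terms or normalized to the quiet amplitude, and the abstract's finding hinges on that distinction. A small sketch with invented amplitude values illustrates why the two versions can dissociate.

```python
# Illustrative sketch of the suppression metric described above: the
# difference between CEOAE amplitude in quiet and with contralateral noise,
# both absolute and normalized to the quiet amplitude. The dB values are
# invented examples, not data from the study.

def suppression(quiet_db, noise_db):
    """Return (absolute, normalized) contralateral suppression."""
    absolute = quiet_db - noise_db
    normalized = absolute / quiet_db
    return absolute, normalized

abs_r, norm_r = suppression(quiet_db=12.0, noise_db=10.5)  # right ear (larger CEOAE)
abs_l, norm_l = suppression(quiet_db=9.0, noise_db=7.5)    # left ear (smaller CEOAE)
print(abs_r == abs_l, norm_r < norm_l)
```

As in the study, equal absolute suppression across ears can still yield larger normalized suppression for the ear with the smaller baseline amplitude, which is why the authors report both metrics.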

  12. Reliability and Magnitude of Laterality Effects in Dichotic Listening with Exogenous Cueing

    ERIC Educational Resources Information Center

    Voyer, Daniel

    2004-01-01

    The purpose of the present study was to replicate and extend to word recognition previous findings of reduced magnitude and reliability of laterality effects when exogenous cueing was used in a dichotic listening task with syllable pairs. Twenty right-handed undergraduate students with normal hearing (10 females, 10 males) completed a dichotic…

  13. Hearing Screening

    ERIC Educational Resources Information Center

    Johnson-Curiskis, Nanette

    2012-01-01

    Hearing levels are threatened by modern life--headsets for music, rock concerts, traffic noises, etc. It is crucial we know our hearing levels so that we can draw attention to potential problems. This exercise requires that students receive a hearing screening for their benefit as well as for making the connection of hearing to listening.

  14. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  15. Individual Sensitivity to Spectral and Temporal Cues in Listeners with Hearing Impairment

    ERIC Educational Resources Information Center

    Souza, Pamela E.; Wright, Richard A.; Blackburn, Michael C.; Tatman, Rachael; Gallun, Frederick J.

    2015-01-01

    Purpose: The present study was designed to evaluate use of spectral and temporal cues under conditions in which both types of cues were available. Method: Participants included adults with normal hearing and hearing loss. We focused on 3 categories of speech cues: static spectral (spectral shape), dynamic spectral (formant change), and temporal…

  16. Embedding Music into Language and Literacy Instruction for Young Children Who Are Deaf or Hard of Hearing

    ERIC Educational Resources Information Center

    Nelson, Lauri H.; Wright, Whitney; Parker, Elizabeth W.

    2016-01-01

    Children who are Deaf and Hard of Hearing (DHH) using Listening and spoken language (LSL) as their primary mode of communication have emerged as a growing population in general education and special education classroom settings, and have educational performance expectations similar to their same aged hearing peers. Academic instruction that…

  17. A Randomized Control Trial: Supplementing Hearing Aid Use with Listening and Communication Enhancement (LACE) Auditory Training.

    PubMed

    Saunders, Gabrielle H; Smith, Sherri L; Chisolm, Theresa H; Frederick, Melissa T; McArdle, Rachel A; Wilson, Richard H

    2016-01-01

    To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling-the control group. The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention. Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone

  18. Use of Adaptive Digital Signal Processing to Improve Speech Communication for Normally Hearing and Hearing-Impaired Subjects.

    ERIC Educational Resources Information Center

    Harris, Richard W.; And Others

    1988-01-01

    A two-microphone adaptive digital noise cancellation technique improved word-recognition ability for 20 normal and 12 hearing-impaired adults by reducing multitalker speech babble and speech spectrum noise 18-22 dB. Word recognition improvements averaged 37-50 percent for normal and 27-40 percent for hearing-impaired subjects. Improvement was best…

  19. The benefits of hearing aids and closed captioning for television viewing by older adults with hearing loss

    PubMed Central

    Gordon-Salant, Sandra; Callahan, Julia S.

    2010-01-01

    Objectives Although watching television is a common leisure activity of older adults, the ability to understand televised speech may be compromised by age-related hearing loss. Two potential assistive devices for improving television viewing are hearing aids and closed captioning, but their use and benefit by older adults with hearing loss are unknown. The primary purpose of this initial investigation was to determine if older hearing-impaired adults show improvements in understanding televised speech with the use of these two assistive devices (hearing aids and closed captioning) compared to conditions without these devices. A secondary purpose was to examine the frequency of hearing aid use and closed captioning use among a sample of older hearing aid wearers. Design The investigation entailed a randomized, repeated-measures design of 15 older adults (59–82 years) with bilateral sensorineural hearing losses who wore hearing aids. Participants viewed three types of televised programs (news, drama, game show) that were each edited into lists of speech segments, and provided an identification response. Each participant was tested in four conditions: baseline (no hearing aids or closed captioning), hearing aids only, closed captioning only, and hearing aids + closed captioning. Pilot testing with young normal-hearing listeners was conducted also to establish list equivalence and stimulus intelligibility with a control group. All testing was conducted in a quiet room to simulate a living room, using a 19-in flat screen television. Questionnaires were also administered to participants to determine frequency of hearing aid use and closed captioning use while watching television. Results A significant effect of viewing condition was observed for all programs. Participants exhibited significantly better speech recognition scores in conditions with closed captioning than those without closed captioning (p<.01). Use of personal hearing aids did not significantly improve

  20. Evaluation of a localization training program for hearing impaired listeners.

    PubMed

    Kuk, Francis; Keenan, Denise M; Lau, Chi; Crose, Bryan; Schumacher, Jennifer

    2014-01-01

    To evaluate the effectiveness of a home-based and a laboratory-based localization training program. This study examined the effectiveness of a localization training program on improving the localization ability of 15 participants with a mild-to-moderately severe hearing loss. These participants had worn the study hearing aids in a previous study. The training consisted of laboratory-based training and home-based training. The participants were divided into three groups: a control group, a group that performed the laboratory training first followed by the home training, and a group that completed the home training first followed by the laboratory training. The participants were evaluated before any training (baseline), at 2 weeks, 1 month, 2 months, and 3 months after baseline testing. All training was completed by the second month. The participants only wore the study hearing aids between the second month and the third month. Localization testing and laboratory training were conducted in a sound-treated room with a 360-degree, 12-loudspeaker array. Three stimuli were each randomly presented three times from each loudspeaker (nine test items from each loudspeaker) for a total of 108 items on each test or training trial. The stimuli, including a continuous noise, a telephone ring, and a speech passage "Search for the sound from this speaker", were high-pass filtered above 2000 Hz. The test stimuli had a duration of 300 ms, whereas the training stimuli had five durations (3 s, 2 s, 1 s, 500 ms, and 300 ms) and four back-attenuation values (-8, -4, -2, and 0 dB re: front presentation). All stimuli were presented at 30 dB SL or the most comfortable listening level of the participants. Each participant completed 6 to 8 two-hour laboratory-based training sessions within a month. The home training required a two-loudspeaker computer system using 30 different sounds of various duration (5) by attenuation (4) combinations. The participants were required to use the home training
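
    Scoring responses on a 360-degree loudspeaker array requires wrapping angular errors, so that a response at 350° to a 0° target counts as a 10° error rather than 350°. A minimal sketch of one common scoring metric (the function name and RMS convention are assumptions, not the authors' method):

```python
import numpy as np

def rms_localization_error(target_deg, response_deg):
    """RMS angular error in degrees, wrapping differences into [-180, 180)."""
    diff = (np.asarray(response_deg, dtype=float)
            - np.asarray(target_deg, dtype=float) + 180.0) % 360.0 - 180.0
    return float(np.sqrt(np.mean(diff ** 2)))
```

    For example, targets at 0° and 90° with responses at 350° and 100° both yield 10° wrapped errors, giving an RMS error of 10°.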

  1. Translation and validation of the Listen Inventory for Education Revised into Dutch.

    PubMed

    Krijger, Stefanie; De Raeve, Leo; Anderson, Karen L; Dhooge, Ingeborg

    2018-04-01

    In Belgium the majority of children with CIs are being educated in mainstream schools. In mainstream schools difficult listening situations occur (e.g. due to background noise) which may result in educational risks for children with CIs. A tool that identifies potential listening difficulties, the English Listen Inventory for Education Revised (LIFE-R), was translated into Dutch and validated for elementary and secondary schools (LIFE-NL and LIFE2-NL, respectively). Two forward-backward translations were performed followed by a linguistic evaluation and validation by a multidisciplinary committee. The LIFE-NL was further validated on content by pre-testing the questionnaire in 5 students with hearing loss (8-13 years). After minor cross-cultural adaptations, normative data were assembled from 187 normal-hearing (NH) students enrolled in mainstream secondary education (1st to 4th grade). The normative data were further analysed based on grade and school type. Additionally, the internal consistency was evaluated by calculating Cronbach's alpha for 3 different scales of the LIFE2-NL: the LIFE total (situations 1-15), LIFE class (situations 1-10: listening situations in the classroom) and LIFE social (situations 11-15: social listening situations in school). NH students scored on average 72.0% (SD = 19.9%) on the LIFE2-NL, indicating that they experience some difficulties in secondary mainstream schools. The most difficult listening situations were those where fellow students are noisy or when students have to listen in large classrooms. NH students scored significantly higher on the LIFE class compared to the LIFE social (84.1 ± 14.7% vs. 68.1 ± 19.0%, p < .001). Moreover, the LIFE social scores tended to decrease from the 3rd grade on. The different subscales of the LIFE2-NL showed high internal consistency (Cronbach's alpha of 0.86, 0.89 and 0.75 for LIFE total, LIFE class and LIFE social respectively). The LIFE-NL and LIFE2-NL are valid Dutch translations of the original
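
    Cronbach's alpha, used above to gauge the internal consistency of the LIFE2-NL subscales, can be computed directly from an item-score matrix. A minimal sketch, assuming a respondents-by-items layout (the function name and data arrangement are ours, not part of the study):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```

    Perfectly inter-correlated items give alpha = 1; the 0.75-0.89 values reported for the LIFE2-NL subscales indicate high, but not redundant, internal consistency.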

  2. Intelligibility of Telephone Speech for the Hearing Impaired When Various Microphones Are Used for Acoustic Coupling.

    ERIC Educational Resources Information Center

    Janota, Claus P.; Janota, Jeanette Olach

    1991-01-01

    Various candidate microphones were evaluated for acoustic coupling of hearing aids to a telephone receiver. Results from testing by 9 hearing-impaired adults found comparable listening performance with a pressure gradient microphone at a 10 decibel higher level of interfering noise than with a normal pressure-sensitive microphone. (Author/PB)

  3. The medial olivocochlear reflex in children during active listening.

    PubMed

    Smith, Spencer B; Cone, Barbara

    2015-08-01

    To determine if active listening modulates the strength of the medial olivocochlear (MOC) reflex in children. Click-evoked otoacoustic emissions (CEOAEs) were recorded from the right ear in quiet and in four test conditions: one with contralateral broadband noise (BBN) only, and three with active listening tasks wherein attention was directed to speech embedded in contralateral BBN. Fifteen typically-developing children (ranging in age from 8 to 14 years) with normal hearing. CEOAE levels were reduced in every condition with contralateral acoustic stimulus (CAS) when compared to preceding quiet conditions. There was an additional systematic decrease in CEOAE level with increased listening task difficulty, although this effect was very small. These CEOAE level differences were most apparent in the 8-18 ms region after click onset. Active listening may change the strength of the MOC reflex in children, although the effects reported here are very subtle. Further studies are needed to verify that task difficulty modulates the activity of the MOC reflex in children.

  4. Investigations in mechanisms and strategies to enhance hearing with cochlear implants

    NASA Astrophysics Data System (ADS)

    Churchill, Tyler H.

    Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for many CI recipients, even bilaterally implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. Results showed that listeners were able to

  5. Hearing Aids

    MedlinePlus

    ... primarily useful in improving the hearing and speech comprehension of people who have hearing loss that results ... and you can change the program for different listening environments—from a small, quiet room to a ...

  6. Rotatory and collic vestibular evoked myogenic potential testing in normal-hearing and hearing-impaired children.

    PubMed

    Maes, Leen; De Kegel, Alexandra; Van Waelvelde, Hilde; Dhooge, Ingeborg

    2014-01-01

    Vertigo and imbalance are often underestimated in the pediatric population, due to limited communication abilities, atypical symptoms, and relatively quick adaptation and compensation in children. Moreover, examination and interpretation of vestibular tests are very challenging because of difficulties with cooperation and maintenance of alertness, and because testing can sometimes provoke nausea. Therefore, it is of great importance for each vestibular laboratory to implement a child-friendly test protocol with age-appropriate normative data. Because of the often masked appearance of vestibular problems in young children, the vestibular organ should be routinely examined in high-risk pediatric groups, such as children with a hearing impairment. Purposes of the present study were (1) to determine age-appropriate normative data for two child-friendly vestibular laboratory techniques (rotatory and collic vestibular evoked myogenic potential [cVEMP] test) in a group of children without auditory or vestibular complaints, and (2) to examine vestibular function in a group of children presenting with bilateral hearing impairment. Forty-eight typically developing children (mean age 8 years 0 months; range: 4 years 1 month to 12 years 11 months) without any auditory or vestibular complaints as well as 39 children (mean age 7 years 8 months; range: 3 years 8 months to 12 years 10 months) with a bilateral sensorineural hearing loss were included in this study. All children underwent three sinusoidal rotations (0.01, 0.05, and 0.1 Hz at 50 degrees/s) and bilateral cVEMP testing. No significant age differences were found for the rotatory test, whereas a significant increase of N1 latency and a significant threshold decrease was noticeable for the cVEMP, resulting in age-appropriate normative data. 
Hearing-impaired children demonstrated significantly lower gain values at the 0.01 Hz rotation and a larger percentage of absent cVEMP responses compared with normal-hearing children

  7. A critical review of hearing-aid single-microphone noise-reduction studies in adults and children.

    PubMed

    Chong, Foong Yen; Jenstad, Lorienne M

    2017-10-26

    Single-microphone noise reduction (SMNR) is implemented in hearing aids to suppress background noise. The purpose of this article was to provide a critical review of peer-reviewed studies in adults and children with sensorineural hearing loss who were fitted with hearing aids incorporating SMNR. Articles published between 2000 and 2016 were searched in PUBMED and EBSCO databases. Thirty-two articles were included in the final review. Most studies with adult participants showed that SMNR has no effect on speech intelligibility. Positive results were reported for acceptance of background noise, preference, and listening effort. Studies of school-aged children were consistent with the findings of adult studies. No study with infants or young children under 5 years of age was found. Recent studies on noise-reduction systems not yet available in wearable hearing aids have documented benefits of noise reduction on memory for speech processing for older adults. This evidence supports the use of SMNR for adults and school-aged children when the aim is to improve listening comfort or reduce listening effort. Future research should test SMNR with infants and children who are younger than 5 years of age. Further development, testing, and clinical trials should be carried out on algorithms not yet available in wearable hearing aids. Testing higher-level cognitive processing of speech and the learning of novel sounds or words could show benefits of advanced signal processing features. These approaches should be expanded to other populations such as children and younger adults. Implications for rehabilitation The review provides a quick reference for students and clinicians regarding the efficacy and effectiveness of SMNR in wearable hearing aids. This information is useful during counseling sessions to build realistic expectations among hearing aid users. Most studies in the adult population suggest that SMNR may provide some benefits to adult listeners in terms of listening

  8. The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise

    PubMed Central

    Aronoff, Justin M.; Freed, Daniel J.; Fisher, Laurel M.; Pal, Ivan; Soli, Sigfrid D.

    2011-01-01

    Objectives Cochlear implant microphones differ in placement, frequency response, and other characteristics such as whether they are directional. Although normal hearing individuals are often used as controls in studies examining cochlear implant users’ binaural benefits, the considerable differences across cochlear implant microphones make such comparisons potentially misleading. The goal of this study was to examine binaural benefits for speech perception in noise for normal hearing individuals using stimuli processed by head-related transfer functions (HRTFs) based on the different cochlear implant microphones. Design HRTFs were created for different cochlear implant microphones and used to test participants on the Hearing in Noise Test. Experiment 1 tested cochlear implant users and normal hearing individuals with HRTF-processed stimuli and with sound field testing to determine whether the HRTFs adequately simulated sound field testing. Experiment 2 determined the measurement error and performance-intensity function for the Hearing in Noise Test with normal hearing individuals listening to stimuli processed with the various HRTFs. Experiment 3 compared normal hearing listeners’ performance across HRTFs to determine how the HRTFs affected performance. Experiment 4 evaluated binaural benefits for normal hearing listeners using the various HRTFs, including ones that were modified to investigate the contributions of interaural time and level cues. Results The results indicated that the HRTFs adequately simulated sound field testing for the Hearing in Noise Test. They also demonstrated that the test-retest reliability and performance-intensity function were consistent across HRTFs, and that the measurement error for the test was 1.3 dB, with a change in signal-to-noise ratio of 1 dB reflecting a 10% change in intelligibility. There were significant differences in performance when using the various HRTFs, with particularly good thresholds for the HRTF based on the
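
    The reported HINT measurement error (1.3 dB) and psychometric slope (1 dB SNR ≈ 10% intelligibility) imply a simple rule of thumb for interpreting threshold differences. A hedged sketch; the helper name and default values are taken from the figures quoted in the abstract, not from the authors' code:

```python
def hint_threshold_change(delta_snr_db: float,
                          measurement_error_db: float = 1.3,
                          slope_pct_per_db: float = 10.0):
    """Interpret a change in HINT threshold SNR.

    Returns a tuple: (True if the change exceeds test-retest measurement
    error, approximate % intelligibility change implied by the slope
    near threshold).
    """
    exceeds_error = abs(delta_snr_db) > measurement_error_db
    return exceeds_error, delta_snr_db * slope_pct_per_db
```

    For example, a 2 dB threshold shift exceeds the 1.3 dB measurement error and corresponds to roughly a 20% change in intelligibility, whereas a 1 dB shift falls within measurement error.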

  9. Individual differences in selective attention predict speech identification at a cocktail party.

    PubMed

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  10. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times

  11. Can You Hear What I Think? Theory of Mind in Young Children With Moderate Hearing Loss.

    PubMed

    Netten, Anouk P; Rieffe, Carolien; Soede, Wim; Dirks, Evelien; Korver, Anna M H; Konings, Saskia; Briaire, Jeroen J; Oudesluys-Murphy, Anne Marie; Dekker, Friedo W; Frijns, Johan H M

    The first aim of this study was to examine various aspects of Theory of Mind (ToM) development in young children with moderate hearing loss (MHL) compared with hearing peers. The second aim was to examine the relation between language abilities and ToM in both groups. The third aim was to compare the sequence of ToM development between children with MHL and hearing peers. Forty-four children between 3 and 5 years old with MHL (35 to 70 dB HL) who preferred to use spoken language were identified from a nationwide study on hearing loss in young children. These children were compared with 101 hearing peers. Children were observed during several tasks to measure intention understanding, the acknowledgement of the other's desires, and belief understanding. Parents completed two scales of the child development inventory to assess expressive language and language comprehension in all participants. Objective language test scores were available from the medical files of children with MHL. Children with MHL showed comparable levels of intention understanding but lower levels of both desire and belief understanding than hearing peers. Parents reported lower language abilities in children with MHL compared with hearing peers. Yet, the language levels of children with MHL were within the average range compared with test normative samples. A stronger relation between language and ToM was found in the hearing children than in children with MHL. The expected developmental sequence of ToM skills was divergent in approximately one-fourth of children with MHL, when compared with hearing children. Children with MHL have more difficulty in their ToM reasoning than hearing peers, despite the fact that their language abilities lie within the average range compared with test normative samples.

  12. The Effects of Dual-Language Support on the Language Skills of Bilingual Children with Hearing Loss Who Use Listening Devices Relative to Their Monolingual Peers

    ERIC Educational Resources Information Center

    Bunta, Ferenc; Douglas, Michael

    2013-01-01

    Purpose: The present study investigated the effects of supporting both English and Spanish on language outcomes in bilingual children with hearing loss (HL) who used listening devices (cochlear implants and hearing aids). The English language skills of bilingual children with HL were compared to those of their monolingual English-speaking peers'…

  13. "Can You Repeat That?" Teaching Active Listening in Management Education

    ERIC Educational Resources Information Center

    Spataro, Sandra E.; Bloch, Janel

    2018-01-01

    Listening is a critical communication skill and therefore an essential element of management education. "Active" listening surpasses passive listening or simple hearing to establish a deeper connection between speaker and listener, as the listener gives the speaker full attention via inquiry, reflection, respect, and empathy. This…

  14. The Effects of Musical and Linguistic Components in Recognition of Real-World Musical Excerpts by Cochlear Implant Recipients and Normal-Hearing Adults

    PubMed Central

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce

    2011-01-01

    Background Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited to the transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music, may provide acoustical cues that support better music perception. Objective The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Results CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Methods Participants were tested on melody recognition of complex melodies (pop, country, classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. Conclusions These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258

  16. Text as a Supplement to Speech in Young and Older Adults

    PubMed Central

    Krull, Vidya; Humes, Larry E.

    2015-01-01

    Objective: The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions for older adults with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information would result in better recognition accuracy than auditory or visual text information alone; 2) the benefit from supplementing speech with visual text (auditory and visual enhancement) would be greater in young adults than in older adults; and 3) individual differences in performance on the perceptual measures would be associated with cognitive abilities. Design: Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance in the auditory-only and text-only conditions. Finally, the relationships between the perceptual measures and the other independent measures were examined using principal-component factor analyses, followed by regression analyses.

  17. Story retelling skills in Persian speaking hearing-impaired children.

    PubMed

    Jarollahi, Farnoush; Mohamadi, Reyhane; Modarresi, Yahya; Agharasouli, Zahra; Rahimzadeh, Shadi; Ahmadi, Tayebeh; Keyhani, Mohammad-Reza

    2017-05-01

    Since the pragmatic skills of hearing-impaired Persian-speaking children have not yet been investigated, particularly through story retelling, this study aimed to evaluate some pragmatic abilities of normal-hearing and hearing-impaired children using a story retelling test. 15 normal-hearing and 15 profoundly hearing-impaired 7-year-old children were evaluated using a story retelling test with content validity of 89%, construct validity of 85%, and reliability of 83%. Three macrostructure criteria (topic maintenance, event sequencing, and explicitness) and four microstructure criteria (referencing, conjunctive cohesion, syntax complexity, and utterance length) were assessed. The test was administered with live voice in a quiet room, after which the children were asked to retell the story. The children's retellings were audio-recorded, transcribed, scored, and analyzed. On the macrostructure criteria, the utterances of the hearing-impaired students were less consistent, gave listeners too little information for a full understanding of the subject, and expressed the story events in a rational order less frequently than those of the normal-hearing group (P < 0.0001). On the microstructure criteria, unlike the normal-hearing students, who obtained high scores, the hearing-impaired students failed to gain any scores on the items of this section. These results suggest that the hearing-impaired children were not able to use language as effectively as their hearing peers and that they utilized quite different pragmatic functions.

  18. Biologically inspired binaural hearing aid algorithms: Design principles and effectiveness

    NASA Astrophysics Data System (ADS)

    Feng, Albert

    2002-05-01

    Despite rapid advances in the sophistication of hearing aid technology and microelectronics, listening in noise remains problematic for people with hearing impairment. To address this problem, two algorithms were designed for use in binaural hearing aid systems. The signal processing strategies are based on principles of auditory physiology and psychophysics: (a) the location/extraction (L/E) binaural computational scheme determines the directions of source locations and cancels noise by applying a simple subtraction method in every frequency band; and (b) the frequency-domain minimum-variance (FMV) scheme extracts a target sound from a known direction amidst multiple interfering sound sources. Both algorithms were evaluated using standard metrics such as signal-to-noise-ratio gain and articulation index, and results were compared with those from conventional adaptive beamforming algorithms. In free-field tests with multiple interfering sound sources, our algorithms performed better than conventional algorithms. Preliminary intelligibility and speech-reception results in multitalker environments showed gains for every listener with normal or impaired hearing when the signals were processed in real time with the FMV binaural hearing aid algorithm. [Work supported by NIH-NIDCD Grant No. R21DC04840 and the Beckman Institute.]
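
    The minimum-variance idea behind the FMV scheme, pass the known target direction with unit gain while minimizing total output power in each frequency band, is commonly written as w = R⁻¹d / (dᴴR⁻¹d), where R is the microphone covariance matrix and d the target steering vector. The sketch below is a generic two-microphone illustration of that principle, not the authors' implementation; the function name and example numbers are illustrative.

```python
import numpy as np

def minimum_variance_weights(R, d, diag_load=1e-6):
    """Minimum-variance (MVDR-style) weights for one frequency bin.

    R: spatial covariance matrix of the microphone signals (Hermitian).
    d: steering vector toward the target ("look") direction.
    Returns w such that the beamformer output w^H x has unit gain
    toward d while the total output power w^H R w is minimized.
    """
    R = R + diag_load * np.eye(R.shape[0])  # regularize a near-singular R
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)
```

    Applied per band with a strong off-axis interferer dominating R, the weights keep unit gain on the target steering vector and place a deep null on the interferer, which is the noise-cancellation behavior the abstract describes.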

  19. A Look into the Crystal Ball for Children Who Are Deaf or Hard of Hearing: Needs, Opportunities, and Challenges.

    PubMed

    Yoshinaga-Itano, Christine; Wiggin, Mallene

    2016-11-01

    Hearing is essential for the development of speech, spoken language, and listening skills. Children previously went undiagnosed with hearing loss until they were 2.5 or 3 years of age, and the auditory deprivation during this critical period of development significantly impacted long-term listening and spoken language outcomes. With the advent of universal newborn hearing screening, the average age of diagnosis has dropped to the first few months of life, which sets the stage for outcomes in which children test in the normal range for speech, spoken language, and auditory skills. However, our work is not finished. The future holds even greater possibilities for children with hearing loss.

  20. The Words-in-Noise Test (WIN), list 3: a practice list.

    PubMed

    Wilson, Richard H; Watts, Kelly L

    2012-02-01

    The Words-in-Noise Test (WIN) was developed as an instrument to quantify the ability of listeners to understand monosyllabic words in background multitalker babble (Wilson, 2003). The 50% correct point, calculated with the Spearman-Kärber equation (Finney, 1952), is the evaluative metric for the WIN materials. Initially, the WIN was designed as a 70-word instrument that presented ten unique words at each of seven signal-to-noise ratios, from 24 to 0 dB in 4 dB decrements. Subsequently, the 70-word list was parsed into two 35-word lists that yielded equivalent recognition performance (Wilson and Burks, 2005). This report describes the development of a third list (WIN List 3) intended to serve as a practice list that familiarizes the participant with listening to words presented in background babble. The purpose was to determine the psychometric properties of the WIN List 3 materials in young listeners with normal hearing and in older listeners with sensorineural hearing loss. A quasi-experimental, repeated-measures design was used. Twenty-four young adult listeners (M = 21.6 yr) with normal pure-tone thresholds (≤20 dB HL at 250 to 8000 Hz) and 24 older listeners (M = 65.9 yr) with sensorineural hearing loss participated. The level of the babble was fixed at 80 dB SPL, with the level of the words varied from 104 to 80 dB SPL in 4 dB decrements. For listeners with normal hearing, the 50% points for Lists 1 and 2 were similar (4.3 and 5.1 dB S/N, respectively), both of which were lower than the 50% point for List 3 (7.4 dB S/N). A similar relation was observed for the listeners with hearing loss: 50% points of 12.2 and 12.4 dB S/N for Lists 1 and 2, respectively, compared to 15.8 dB S/N for List 3. The differences between Lists 1 and 2 and List 3 were significant. The relations among the psychometric functions and among the individual data both reflected these differences. The significant ∼3 dB difference between performances
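
    The Spearman-Kärber estimate of the 50% point used with the WIN reduces to a simple closed form when the same number of words is presented at each SNR in equal-sized steps. A minimal sketch under that assumption (the function name and the example scores below are illustrative, not taken from the WIN norms):

```python
def spearman_karber_50(snrs_db, n_correct, words_per_snr):
    """Estimate the 50% point (dB S/N) from words correct at each SNR.

    snrs_db: presentation SNRs in descending order, equally spaced
             (e.g., 24 to 0 dB in 4 dB decrements).
    n_correct: number of words repeated correctly at each SNR.
    """
    step = snrs_db[0] - snrs_db[1]          # decrement between levels
    total_p = sum(c / words_per_snr for c in n_correct)
    # Closed-form Spearman-Karber: highest SNR plus half a step,
    # minus one step per unit of summed proportion correct.
    return snrs_db[0] + step / 2 - step * total_p
```

    For example, a listener scoring 10, 10, 9, 7, 4, 1, and 0 words correct (out of 10) at 24 down to 0 dB S/N would have an estimated 50% point of 9.6 dB S/N.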

  1. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.

    PubMed

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I

    2015-02-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.

  3. Listening Differently: A Pedagogy for Expanded Listening

    ERIC Educational Resources Information Center

    Gallagher, Michael; Prior, Jonathan; Needham, Martin; Holmes, Rachel

    2017-01-01

    Mainstream education promotes a narrow conception of listening, centred on the reception and comprehension of human meanings. As such, it is ill-equipped to hear how sound propagates affects, generates atmospheres, shapes environments and enacts power. Yet these aspects of sound are vital to how education functions. We therefore argue that there…

  4. Auditory, speech and language development in young children with cochlear implants compared with children with normal hearing.

    PubMed

    Schramm, Bianka; Bohnert, Andrea; Keilmann, Annerose

    2010-07-01

    This study had two aims: (1) to document the auditory and lexical development of children who are deaf and received the first cochlear implant (CI) by the age of 16 months and the second CI by the age of 31 months and (2) to compare these children's results with those of children with normal hearing (NH). This longitudinal study included five children with NH and five with sensorineural deafness. All children in the second group were observed for 36 months after the first fitting of the cochlear implant. The auditory development of the CI group was documented every 3 months up to the age of two years in terms of both hearing age and chronological age, and for the NH group in terms of chronological age. The language development of each NH child was assessed at 12, 18, 24, and 36 months of chronological age; children with CIs were examined at the same intervals at chronological and hearing age. In both groups, children showed individual patterns of auditory and language development. The children with CIs developed differently in the amount of receptive and expressive vocabulary compared with the NH control group. Three children in the CI group needed almost 6 months to make gains in speech development consistent with what would be expected for their chronological age. Overall, the receptive and expressive development of all children in the implanted group increased with their hearing age. These results indicate that early identification and early implantation are advisable to give children with sensorineural hearing loss a realistic chance to develop satisfactory expressive and receptive vocabulary and stable phonological, morphological, and syntactic skills for school life. On the basis of these longitudinal data, we will be able to develop new diagnostic tools that enable clinicians to assess a child's progress in hearing and speech development.

  5. Subcortical amplitude modulation encoding deficits suggest evidence of cochlear synaptopathy in normal-hearing 18-19 year olds with higher lifetime noise exposure.

    PubMed

    Paul, Brandon T; Waheed, Sajal; Bruce, Ian C; Roberts, Larry E

    2017-11-01

    Noise exposure and aging can damage cochlear synapses required for suprathreshold listening, even when cochlear structures needed for hearing at threshold remain unaffected. To control for effects of aging, behavioral amplitude modulation (AM) detection and subcortical envelope following responses (EFRs) to AM tones were studied in 25 age-restricted (18-19 years) participants with normal thresholds but different self-reported noise exposure histories. Participants with more noise exposure had smaller EFRs and tended to have poorer AM detection than less-exposed individuals. Simulations of the EFR using a well-established cochlear model were consistent with more synaptopathy in participants reporting greater noise exposure.

  6. Hearing Problems in Children

    MedlinePlus

    Most children hear and listen from the moment they are born. They learn to talk by imitating the sounds around them ... United States are born deaf or hard-of-hearing. More lose their hearing later during childhood. Babies ...

  7. Measures of Working Memory, Sequence Learning, and Speech Recognition in the Elderly.

    ERIC Educational Resources Information Center

    Humes, Larry E.; Floyd, Shari S.

    2005-01-01

    This study describes the measurement of 2 cognitive functions, working-memory capacity and sequence learning, in 2 groups of listeners: young adults with normal hearing and elderly adults with impaired hearing. The measurement of these 2 cognitive abilities with a unique, nonverbal technique capable of auditory, visual, and auditory-visual…

  8. Listening to Young Children's Voices: The Evaluation of a Coding System

    ERIC Educational Resources Information Center

    Tertoolen, Anja; Geldens, Jeannette; van Oers, Bert; Popeijus, Herman

    2015-01-01

    Listening to young children's voices is an issue with increasing relevance for many researchers in the field of early childhood research. At the same time, teachers and researchers are faced with challenges to provide children with possibilities to express their notions, and to find ways of comprehending children's voices. In our research we aim…

  9. Cultural Identity of Young Deaf Adults with Cochlear Implants in Comparison to Deaf without Cochlear Implants and Hard-of-Hearing Young Adults.

    PubMed

    Goldblat, Ester; Most, Tova

    2018-07-01

    This study examined the relationships between cultural identity, severity of hearing loss (HL), and the use of a cochlear implant (CI). One hundred and forty-one adolescents and young adults divided into three groups (deaf with CI, deaf without CI, and hard-of-hearing (HH)) and 134 parents participated. Adolescents and young adults completed questionnaires on cultural identity (hearing, Deaf, marginal, bicultural-hearing, and bicultural-deaf) and communication proficiencies (hearing, spoken language, and sign language). Parents completed a speech quality questionnaire. Deaf participants without CI and those with CI differed in all identities except marginal identity. CI users and HH participants had similar identities except for a stronger bicultural-deaf identity among CI users. Three clusters of participants evolved: participants with a dominant bicultural-deaf identity, participants with a dominant bicultural-hearing identity and participants without a formed cultural identity. Adolescents and young adults who were proficient in one of the modes of communication developed well-established bicultural identities. Adolescents and young adults who were not proficient in one of the modes of communication did not develop a distinguished cultural identity. These results suggest that communication proficiencies are crucial for developing defined identities.

  10. Behavioral manifestations of audiometrically-defined "slight" or "hidden" hearing loss revealed by measures of binaural detection.

    PubMed

    Bernstein, Leslie R; Trahiotis, Constantine

    2016-11-01

    This study assessed whether audiometrically-defined "slight" or "hidden" hearing losses might be associated with degradations in binaural processing as measured in binaural detection experiments employing interaurally delayed signals and maskers. Thirty-one listeners participated, all having no greater than slight hearing losses (i.e., no thresholds greater than 25 dB HL). Across the 31 listeners and consistent with the findings of Bernstein and Trahiotis [(2015). J. Acoust. Soc. Am. 138, EL474-EL479] binaural detection thresholds at 500 Hz and 4 kHz increased with increasing magnitude of interaural delay, suggesting a loss of precision of coding with magnitude of interaural delay. Binaural detection thresholds were consistently found to be elevated for listeners whose absolute thresholds at 4 kHz exceeded 7.5 dB HL. No such elevations were observed in conditions having no binaural cues available to aid detection (i.e., "monaural" conditions). Partitioning and analyses of the data revealed that those elevated thresholds (1) were more attributable to hearing level than to age and (2) result from increased levels of internal noise. The data suggest that listeners whose high-frequency monaural hearing status would be classified audiometrically as being normal or "slight loss" may exhibit substantial and perceptually meaningful losses of binaural processing.

  11. Pre- and Postoperative Binaural Unmasking for Bimodal Cochlear Implant Listeners.

    PubMed

    Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W

    Cochlear implants (CIs) are increasingly recommended to individuals with residual bilateral acoustic hearing. Although new hearing-preserving electrode designs and surgical approaches show great promise, CI recipients are still at risk to lose acoustic hearing in the implanted ear, which could prevent the ability to take advantage of binaural unmasking to aid speech recognition in noise. This study examined the tradeoff between the benefits of a CI for speech understanding in noise and the potential loss of binaural unmasking for CI recipients with some bilateral preoperative acoustic hearing. Binaural unmasking is difficult to evaluate in CI candidates because speech perception in noise is generally too poor to measure reliably in the range of signal to noise ratios (SNRs) where binaural intelligibility level differences (BILDs) are typically observed (<5 dB). Thus, a test of audiovisual speech perception in noise was employed to increase performance to measureable levels. BILDs were measured preoperatively for 11 CI candidates and at least 5 months post-activation for 10 of these individuals (1 individual elected not to receive a CI). Audiovisual sentences were presented in speech-shaped masking noise between -10 and +15 dB SNR. The noise was always correlated between the ears, while the speech signal was either correlated (N0S0) or inversely correlated (N0Sπ). Stimuli were delivered via headphones to the unaided ear(s) and, where applicable, via auxiliary input to the CI speech processor. A z test evaluated performance differences between the N0S0 and N0Sπ conditions for each listener pre- and postoperatively. For listeners showing a significant difference, the magnitude of the BILD was characterized as the difference in SNRs required to achieve 50% correct performance. One listener who underwent hearing-preservation surgery received additional postoperative tests, which presented sound directly to both ears and to the CI speech processor. 
Five of 11 listeners
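
    The BILD described above is the horizontal shift between the N0S0 and N0Sπ psychometric functions at the 50%-correct point. One simple way to estimate it (a sketch with made-up scores, not the study's analysis method) is to linearly interpolate each function for the SNR giving 50% correct and take the difference:

```python
def snr_at_target(snrs_db, prop_correct, target=0.5):
    """Linearly interpolate the SNR giving `target` proportion correct.

    Assumes prop_correct is non-decreasing with SNR and brackets `target`.
    """
    pts = list(zip(snrs_db, prop_correct))
    for (s0, p0), (s1, p1) in zip(pts, pts[1:]):
        if p0 <= target <= p1:
            return s0 + (target - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("target performance not bracketed by the data")

def bild(snrs_db, pc_n0s0, pc_n0spi):
    # Positive BILD: the antiphasic (N0Spi) condition reaches 50% at a lower SNR.
    return snr_at_target(snrs_db, pc_n0s0) - snr_at_target(snrs_db, pc_n0spi)
```

    With hypothetical functions where N0Sπ performance sits 5 dB to the left of N0S0, this returns a 5 dB BILD.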

  12. Loudness of dynamic stimuli in acoustic and electric hearing.

    PubMed

    Zhang, C; Zeng, F G

    1997-11-01

    Traditional loudness models have been based on the average energy and the critical band analysis of steady-state sounds. However, most environmental sounds, including speech, are dynamic stimuli, in which the average level [e.g., the root-mean-square (rms) level] does not account for the large temporal fluctuations. The question addressed here was whether two stimuli of the same rms level but different peak levels would produce an equal loudness sensation. A modern adaptive procedure was used to replicate two classic experiments demonstrating that the sensation of "beats" in a two- or three-tone complex resulted in a louder sensation [E. Zwicker and H. Fastl, Psychoacoustics: Facts and Models (Springer-Verlag, Berlin, 1990)]. Two additional experiments were conducted to study exclusively the effects of the temporal envelope on the loudness sensation of dynamic stimuli. Loudness balance was performed by normal-hearing listeners between a white noise and a sinusoidally amplitude-modulated noise in one experiment, and by cochlear implant listeners between two harmonic stimuli of the same magnitude spectra, but different phase spectra, in the other experiment. The results from both experiments showed that, for two stimuli of the same rms level, the stimulus with greater temporal fluctuations sometimes produced a significantly louder sensation, depending on the temporal frequency and overall stimulus level. In normal-hearing listeners, the louder sensation was produced for the amplitude-modulated stimuli with modulation frequencies lower than 400 Hz, and gradually disappeared above 400 Hz, resulting in a low-pass filtering characteristic which bore some similarity to the temporal modulation transfer function. The extent to which loudness was greater was a nonmonotonic function of level in acoustic hearing and a monotonically increasing function in electric hearing. These results suggest that the loudness sensation of a dynamic stimulus is not limited to a 100-ms
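
    The key stimulus manipulation above, identical rms levels but different temporal envelopes, is easy to reproduce: impose sinusoidal amplitude modulation and then rescale to the original rms, which raises the peak level while holding the rms level fixed. A small sketch with a tonal carrier (the parameters are illustrative, not the study's stimuli):

```python
import math

def sam_signal(fc, fm, m, fs, dur):
    """Sinusoidally amplitude-modulated tone:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), sampled at fs for dur seconds."""
    n = int(fs * dur)
    return [(1.0 + m * math.sin(2 * math.pi * fm * k / fs))
            * math.sin(2 * math.pi * fc * k / fs) for k in range(n)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def match_rms(x, target_rms):
    """Scale x so its rms equals target_rms (the envelope shape is unchanged)."""
    g = target_rms / rms(x)
    return [g * v for v in x]
```

    With m = 1 and the modulated signal rescaled to the unmodulated carrier's rms, the two stimuli are equal in rms level yet the modulated one has a markedly higher peak level, which is exactly the contrast the loudness matches probed.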

  13. Individual differences in selective attention predict speech identification at a cocktail party

    PubMed Central

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272

  14. Effects of interaural time differences in fine structure and envelope on lateral discrimination in electric hearing.

    PubMed

    Majdak, Piotr; Laback, Bernhard; Baumgartner, Wolf-Dieter

    2006-10-01

    Bilateral cochlear implant (CI) listeners currently use stimulation strategies which encode interaural time differences (ITD) in the temporal envelope but which do not transmit ITD in the fine structure, due to the constant phase in the electric pulse train. To determine the utility of encoding ITD in the fine structure, ITD-based lateralization was investigated with four CI listeners and four normal hearing (NH) subjects listening to a simulation of electric stimulation. Lateralization discrimination was tested at different pulse rates for various combinations of independently controlled fine structure ITD and envelope ITD. Results for electric hearing show that the fine structure ITD had the strongest impact on lateralization at lower pulse rates, with significant effects for pulse rates up to 800 pulses per second. At higher pulse rates, lateralization discrimination depended solely on the envelope ITD. The data suggest that bilateral CI listeners benefit from transmitting fine structure ITD at lower pulse rates. However, there were strong interindividual differences: the better performing CI listeners performed comparably to the NH listeners.

  15. Masked speech perception across the adult lifespan: Impact of age and hearing impairment.

    PubMed

    Goossens, Tine; Vercammen, Charlotte; Wouters, Jan; van Wieringen, Astrid

    2017-02-01

    As people grow older, speech perception difficulties become highly prevalent, especially in noisy listening situations. Moreover, it is assumed that speech intelligibility is more affected in the event of background noises that induce a higher cognitive load, i.e., noises that result in informational versus energetic masking. There is ample evidence showing that speech perception problems in aging persons are partly due to hearing impairment and partly due to age-related declines in cognition and suprathreshold auditory processing. In order to develop effective rehabilitation strategies, it is indispensable to know how these different degrading factors act upon speech perception. This implies disentangling effects of hearing impairment versus age and examining the interplay between both factors in different background noises of everyday settings. To that end, we investigated open-set sentence identification in six participant groups: a young (20-30 years), middle-aged (50-60 years), and older cohort (70-80 years), each including persons who had normal audiometric thresholds up to at least 4 kHz, on the one hand, and persons who were diagnosed with elevated audiometric thresholds, on the other hand. All participants were screened for (mild) cognitive impairment. We applied stationary and amplitude modulated speech-weighted noise, which are two types of energetic maskers, and unintelligible speech, which causes informational masking in addition to energetic masking. By means of these different background noises, we could look into speech perception performance in listening situations with a low and high cognitive load, respectively. Our results indicate that, even when audiometric thresholds are within normal limits up to 4 kHz, irrespective of threshold elevations at higher frequencies, and there is no indication of even mild cognitive impairment, masked speech perception declines by middle age and decreases further on to older age. The impact of hearing

  16. Everyday listeners' impressions of speech produced by individuals with adductor spasmodic dysphonia.

    PubMed

    Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M

    2015-01-01

    Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD.

  17. Divided listening in noise in a mock-up of a military command post.

    PubMed

    Abel, Sharon M; Nakashima, Ann; Smith, Ingrid

    2012-04-01

    This study investigated divided listening in noise in a mock-up of a vehicular command post. The effects of background noise from the vehicle, of coworkers' unattended speech on speech understanding, and of a visual cue that directed attention to the message source were examined. Sixteen normal-hearing males participated in sixteen listening conditions, defined by combinations of the absence/presence of vehicle and speech babble noises, availability of a visual cue, and number of channels (2 or 3, diotic or dichotic, and loudspeakers) over which concurrent series of call sign, color, and number phrases were presented. All participants wore a communications headset with integrated hearing protection. A computer keyboard was used to encode phrases beginning with an assigned call sign. Subjects achieved close to 100% correct phrase identification when phrases were presented over the headset (with or without vehicle noise) or over the loudspeakers without vehicle noise. In contrast, the percentage of correct phrase identification was significantly lower, by 30 to 35%, when phrases were presented over loudspeakers with vehicle noise. Vehicle noise combined with babble noise decreased accuracy by an additional 12% for dichotic listening. Visual cues increased phrase identification accuracy by 7% for diotic listening. Outcomes could be explained by the at-ear energy spectra of the speech and noise.

  18. Coordination of Gaze and Speech in Communication between Children with Hearing Impairment and Normal-Hearing Peers

    ERIC Educational Resources Information Center

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-01-01

    Purpose: To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Method: Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions,…

  19. The relationship between speech recognition, behavioural listening effort, and subjective ratings.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2018-06-01

    The purpose of this study was to evaluate the reliability and validity of four subjective questions related to listening effort. A secondary purpose of this study was to evaluate the effects of hearing aid beamforming microphone arrays on word recognition and listening effort. Participants answered subjective questions immediately following testing in a dual-task paradigm with three microphone settings in a moderately reverberant laboratory environment in two noise configurations. Participants rated their: (1) mental work, (2) desire to improve the situation, (3) tiredness, and (4) desire to give up. Data were analysed using repeated measures and reliability analyses. Eighteen adults with symmetrical sensorineural hearing loss participated. Beamforming differentially affected word recognition and listening effort. Analysis revealed the same pattern of results for behavioural listening effort and subjective ratings of desire to improve the situation. Conversely, ratings of work revealed the same pattern of results as word recognition performance. Ratings of tiredness and desire to give up were unaffected by hearing aid microphone or noise configuration. Participant ratings of their desire to control the listening situation appear to be reliable subjective indicators of listening effort that align with results from a behavioural measure of listening effort.

  20. Objective analysis of ambisonics for hearing aid applications: Effect of listener's head, room reverberation, and directional microphones.

    PubMed

    Oreinos, Chris; Buchholz, Jörg M

    2015-06-01

    Recently, an increased interest has been demonstrated in evaluating hearing aids (HAs) inside controlled, but at the same time, realistic sound environments. A promising candidate that employs loudspeakers for realizing such sound environments is the listener-centered method of higher-order ambisonics (HOA). Although the accuracy of HOA has been widely studied, it remains unclear to what extent the results can be generalized when (1) a listener wearing HAs that may feature multi-microphone directional algorithms is considered inside the reconstructed sound field and (2) reverberant scenes are recorded and reconstructed. For the purpose of objectively validating HOA for listening tests involving HAs, a framework was developed to simulate the entire path of sounds presented in a modeled room, recorded by a HOA microphone array, decoded to a loudspeaker array, and finally received at the ears and HA microphones of a dummy listener fitted with HAs. Reproduction errors at the ear signals and at the output of a cardioid HA microphone were analyzed for different anechoic and reverberant scenes. It was found that the diffuse reverberation reduces the considered time-averaged HOA reconstruction errors which, depending on the considered application, suggests that reverberation can increase the usable frequency range of a HOA system.

  1. Rapid word-learning in normal-hearing and hearing-impaired children: effects of age, receptive vocabulary, and high-frequency amplification.

    PubMed

    Pittman, A L; Lewis, D E; Hoover, B M; Stelmachowicz, P G

    2005-12-01

    This study examined rapid word-learning in 5- to 14-year-old children with normal and impaired hearing. The effects of age and receptive vocabulary were examined as well as those of high-frequency amplification. Novel words were low-pass filtered at 4 kHz (typical of current amplification devices) and at 9 kHz. It was hypothesized that (1) the children with normal hearing would learn more words than the children with hearing loss, (2) word-learning would increase with age and receptive vocabulary for both groups, and (3) both groups would benefit from a broader frequency bandwidth. Sixty children with normal hearing and 37 children with moderate sensorineural hearing losses participated in this study. Each child viewed a 4-minute animated slideshow containing 8 nonsense words created using the 24 English consonant phonemes (3 consonants per word). Each word was repeated 3 times. Half of the 8 words were low-pass filtered at 4 kHz and half were filtered at 9 kHz. After viewing the story twice, each child was asked to identify the words from among pictures in the slide show. Before testing, a measure of current receptive vocabulary was obtained using the Peabody Picture Vocabulary Test (PPVT-III). The PPVT-III scores of the hearing-impaired children were consistently poorer than those of the normal-hearing children across the age range tested. A similar pattern of results was observed for word-learning in that the performance of the hearing-impaired children was significantly poorer than that of the normal-hearing children. Further analysis of the PPVT and word-learning scores suggested that although word-learning was reduced in the hearing-impaired children, their performance was consistent with their receptive vocabularies. Additionally, no correlation was found between overall performance and the age of identification, age of amplification, or years of amplification in the children with hearing loss. 
Results also revealed a small increase in performance for both

  2. The medial olivocochlear reflex in children during active listening

    PubMed Central

    Smith, Spencer B.; Cone, Barbara

    2015-01-01

    Objective To determine if active listening modulates the strength of the medial olivocochlear (MOC) reflex in children. Design Click-evoked otoacoustic emissions (CEOAEs) were recorded from the right ear in quiet and in four test conditions: one with contralateral broadband noise (BBN) only, and three with active listening tasks wherein attention was directed to speech embedded in contralateral BBN. Study sample Fifteen typically-developing children (ranging in age from 8 to 14 years) with normal hearing. Results CEOAE levels were reduced in every condition with contralateral acoustic stimulus (CAS) when compared to preceding quiet conditions. There was an additional systematic decrease in CEOAE level with increased listening task difficulty, although this effect was very small. These CEOAE level differences were most apparent in the 8–18 ms region after click onset. Conclusions Active listening may change the strength of the MOC reflex in children, although the effects reported here are very subtle. Further studies are needed to verify that task difficulty modulates the activity of the MOC reflex in children. PMID:25735203

  3. Hearing in young adults. Part II: The effects of recreational noise exposure

    PubMed Central

    Keppler, Hannah; Dhooge, Ingeborg; Vinck, Bart

    2015-01-01

    Great concern arises from recreational noise exposure, which might lead to noise-induced hearing loss in young adults. The objective of the current study was to evaluate the effects of recreational noise exposure on hearing function in young adults. A questionnaire concerning recreational noise exposures and an audiological test battery were completed by 163 subjects (aged 18-30 years). Based on the duration of exposure and self-estimated loudness of various leisure-time activities, the weekly and lifetime equivalent noise exposures were calculated. Subjects were categorized into groups with low, intermediate, and high recreational noise exposure based on these values. Hearing was evaluated using audiometry, transient-evoked otoacoustic emissions (TEOAEs), and distortion-product otoacoustic emissions (DPOAEs). Mean differences in hearing between groups with low, intermediate, and high recreational noise exposure were evaluated using one-way analysis of variance (ANOVA). There were no significant differences in hearing thresholds, TEOAE amplitudes, and DPOAE amplitudes between groups with low, intermediate, or high recreational noise exposure. Nevertheless, one-third of our subjects exceeded the weekly equivalent noise exposure for all activities of 75 dBA. Further, the highest equivalent sound pressure levels (SPLs) were calculated for the activities visiting nightclubs or pubs, attending concerts or festivals, and playing in a band or orchestra. Moreover, temporary tinnitus after recreational noise exposure was found in 86% of our subjects. There were no significant differences in hearing between groups with low, intermediate, and high recreational noise exposure. Nevertheless, a long-term assessment of young adults’ hearing in relation to recreational noise exposure is needed. PMID:26356366
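
    The weekly equivalent noise exposure used to group subjects in this record is an energy average of each activity's level over a reference week. A minimal sketch of that calculation, assuming a 40-hour reference week (as in occupational noise standards such as ISO 1999) and illustrative self-estimated activity levels, not the study's actual data:

```python
import math

def weekly_leq(activities, reference_hours=40.0):
    """Weekly equivalent continuous noise exposure in dBA.

    activities: list of (level_dBA, hours_per_week) tuples.
    Each activity's sound energy (10**(L/10)) is weighted by its
    duration and averaged over a 40-hour reference week.
    """
    total_energy = sum(hours * 10 ** (level / 10.0) for level, hours in activities)
    return 10.0 * math.log10(total_energy / reference_hours)

# Illustrative example: 4 h/week in a nightclub at 100 dBA plus
# 10 h/week of personal music listening at 80 dBA.
exposure = weekly_leq([(100, 4), (80, 10)])
print(round(exposure, 1))  # 90.1 -- well above the 75 dBA criterion
```

The logarithmic averaging is why a few hours at a loud venue dominates the total: the nightclub contributes 100 times the energy per hour of the 80 dBA listening.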

  4. Multiple Solutions to the Same Problem: Utilization of Plausibility and Syntax in Sentence Comprehension by Older Adults with Impaired Hearing.

    PubMed

    Amichetti, Nicole M; White, Alison G; Wingfield, Arthur

    2016-01-01

    A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults' comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for

  5. Multiple Solutions to the Same Problem: Utilization of Plausibility and Syntax in Sentence Comprehension by Older Adults with Impaired Hearing

    PubMed Central

    Amichetti, Nicole M.; White, Alison G.; Wingfield, Arthur

    2016-01-01

    A fundamental question in psycholinguistic theory is whether equivalent success in sentence comprehension may come about by different underlying operations. Of special interest is whether adult aging, especially when accompanied by reduced hearing acuity, may shift the balance of reliance on formal syntax vs. plausibility in determining sentence meaning. In two experiments participants were asked to identify the thematic roles in grammatical sentences that contained either plausible or implausible semantic relations. Comprehension of sentence meanings was indexed by the ability to correctly name the agent or the recipient of an action represented in the sentence. In Experiment 1 young and older adults’ comprehension was tested for plausible and implausible sentences with the meaning expressed with either an active-declarative or a passive syntactic form. In Experiment 2 comprehension performance was examined for young adults with age-normal hearing, older adults with good hearing acuity, and age-matched older adults with mild-to-moderate hearing loss for plausible or implausible sentences with meaning expressed with either a subject-relative (SR) or an object-relative (OR) syntactic structure. Experiment 1 showed that the likelihood of interpreting a sentence according to its literal meaning was reduced when that meaning expressed an implausible relationship. Experiment 2 showed that this likelihood was further decreased for OR as compared to SR sentences, and especially so for older adults whose hearing impairment added to the perceptual challenge. Experiment 2 also showed that working memory capacity as measured with a letter-number sequencing task contributed to the likelihood that listeners would base their comprehension responses on the literal syntax even when this processing scheme yielded an implausible meaning. Taken together, the results of both experiments support the postulate that listeners may use more than a single uniform processing strategy for

  6. Hearing, listening, action: Enhancing nursing practice through aural awareness education.

    PubMed

    Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa

    2014-01-01

    Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as nurses' ability to assess patients effectively. Awareness and responsibility for noise production begin during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate entry nursing students in an Australian university. The program utilized an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), and improve their auscultation skills and reduce the negative effects of noise on patients (action). Using a mixed methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to action changes within the clinical environment to improve the patient's experience of noise.

  8. The effect of vision and hearing loss on listeners' perception of referential meaning in music.

    PubMed

    Darrow, Alice-Ann; Novak, Julie

    2007-01-01

    The purpose of the present study was to examine the effect of vision and hearing loss on listeners' perception of referential meaning in music. Participants were students at a state school for the deaf and blind, and students with typical hearing and vision who attended neighboring public schools (N = 96). The music stimuli consisted of six 37-second randomly ordered excerpts from Saint-Saëns's Carnival of the Animals. The excerpts were chosen because of their use in similar studies and the composer's clearly intended meaning conveyed in the titles of the excerpts. After allowing for appropriate procedural accommodations for participants with hearing or vision loss, all participants were asked to select the image portrayed by the music. A univariate ANOVA was computed to address the research question, "Do students with vision or hearing loss assign the same visual images to music as students without such sensory losses?" Data were analyzed to examine the effects of sensory condition as well as age and gender. A significant main effect was found for sensory condition, with follow-up tests indicating that participants with typical hearing and vision agreed with the composer's intended meaning significantly more often than did participants with vision or hearing loss. No significant main effects were found for gender or age, and no significant interactions were found. Summary data indicated that some images were consistently easier or harder to identify across conditions. The data also revealed an order of difficulty and patterns of confusion that were similar across sensory conditions and ages, indicating that participant responses were not random and that some referential meaning in music is conventional.

  9. Right-Ear Advantage for Speech-in-Noise Recognition in Patients with Nonlateralized Tinnitus and Normal Hearing Sensitivity.

    PubMed

    Tai, Yihsin; Husain, Fatima T

    2018-04-01

    Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, the Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were found only at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though a bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies that take into account both the hearing and cognitive abilities of patients are needed to better parse out the effect of tinnitus in the absence of hearing impairment.

  10. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences

    ERIC Educational Resources Information Center

    Koeritzer, Margaret A.; Rogers, Chad S.; Van Engen, Kristin J.; Peelle, Jonathan E.

    2018-01-01

    Purpose: The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. Method: We tested 30 young adults and 30 older adults. Participants heard lists of sentences in…

  11. False Belief Development in Children Who Are Hard of Hearing Compared with Peers with Normal Hearing

    ERIC Educational Resources Information Center

    Walker, Elizabeth A.; Ambrose, Sophie E.; Oleson, Jacob; Moeller, Mary Pat

    2017-01-01

    Purpose: This study investigates false belief (FB) understanding in children who are hard of hearing (CHH) compared with children with normal hearing (CNH) at ages 5 and 6 years and at 2nd grade. Research with this population has theoretical significance, given that the early auditory-linguistic experiences of CHH are less restricted compared with…

  12. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences.

    PubMed

    Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E

    2018-03-15

    The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
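
    Recognition memory in this record is indexed by d', the standardized difference between hit and false-alarm rates. A minimal sketch of that computation, using hypothetical counts and one common correction for extreme rates (the study's exact convention may differ):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity index d'.

    d' = z(hit rate) - z(false-alarm rate), where z is the inverse
    of the standard normal CDF. A log-linear correction (+0.5 to
    counts, +1 to totals) keeps rates away from exactly 0 or 1,
    where z would be undefined.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener: recognizes 20 of 24 old sentences and
# false-alarms to 4 of 24 new sentences.
print(round(d_prime(20, 4, 4, 20), 2))  # 1.83
```

Higher d' means better discrimination of heard from unheard sentences; a listener responding at chance (equal hit and false-alarm rates) scores d' = 0, which is why noise and semantic ambiguity pushing d' downward indicates poorer encoding.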

  13. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

    It is generally well known that speech perception is often improved with integrated audiovisual input, whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. To compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. Findings confirm improved speech perception

  14. Evidence of hearing loss in a “normally-hearing” college-student population

    PubMed Central

    Le Prell, C. G.; Hensley, B.N.; Campbell, K. C. M.; Hall, J. W.; Guire, K.

    2011-01-01

    We report pure-tone hearing threshold findings in 56 college students. All subjects reported normal hearing during telephone interviews, yet not all subjects had normal sensitivity as defined by well-accepted criteria. At one or more test frequencies (0.25–8 kHz), 7% of ears had thresholds ≥25 dB HL and 12% had thresholds ≥20 dB HL. The proportion of ears with abnormal findings decreased when three-frequency pure-tone-averages were used. Low-frequency PTA hearing loss was detected in 2.7% of ears and high-frequency PTA hearing loss was detected in 7.1% of ears; however, there was little evidence for “notched” audiograms. There was a statistically reliable relationship in which personal music player use was correlated with decreased hearing status in male subjects. Routine screening and education regarding hearing loss risk factors are critical as college students do not always self-identify early changes in hearing. Large-scale systematic investigations of college students’ hearing status appear to be warranted; the current sample size was not adequate to precisely measure potential contributions of different sound sources to the elevated thresholds measured in some subjects. PMID:21288064
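
    The pure-tone averages (PTAs) in this record are simply mean thresholds over a set of audiometric test frequencies. A sketch using a hypothetical audiogram and the conventional speech-frequency PTA (0.5, 1, 2 kHz); the study's exact low- and high-frequency frequency sets may differ:

```python
def pure_tone_average(thresholds_db_hl, frequencies=(500, 1000, 2000)):
    """Mean hearing threshold (dB HL) over the given test frequencies.

    thresholds_db_hl maps test frequency in Hz to threshold in dB HL.
    Default is the conventional speech-frequency PTA; pass e.g.
    (2000, 4000, 8000) for a high-frequency PTA.
    """
    return sum(thresholds_db_hl[f] for f in frequencies) / len(frequencies)

# Hypothetical audiogram for one ear (frequency Hz -> threshold dB HL):
audiogram = {250: 10, 500: 10, 1000: 15, 2000: 20, 4000: 30, 8000: 35}

speech_pta = pure_tone_average(audiogram)                       # 15.0
high_freq_pta = pure_tone_average(audiogram, (2000, 4000, 8000))

# Single-frequency screening criterion, as used in the study
# (any threshold of 25 dB HL or worse flags the ear):
flagged = any(t >= 25 for t in audiogram.values())
print(speech_pta, round(high_freq_pta, 1), flagged)
```

This ear illustrates why the proportion of abnormal findings drops when PTAs replace single-frequency criteria: the 30 and 35 dB HL thresholds flag it individually, while both averages remain under 30 dB HL.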

  15. Effects of age on F0 discrimination and intonation perception in simulated electric and electroacoustic hearing.

    PubMed

    Souza, Pamela; Arehart, Kathryn; Miller, Christi Wise; Muralimanohar, Ramesh Kumar

    2011-02-01

    Recent research suggests that older listeners may have difficulty processing information related to the fundamental frequency (F0) of voiced speech. In this study, the focus was on the mechanisms that may underlie this reduced ability. We examined whether increased age resulted in decreased ability to perceive F0 using fine-structure cues provided by the harmonic structure of voiced speech sounds or cues provided by high-rate envelope fluctuations (periodicity). Younger listeners with normal hearing and older listeners with normal to near-normal hearing completed two tasks of F0 perception. In the first task (steady state F0), the fundamental frequency difference limen (F0DL) was measured adaptively for synthetic vowel stimuli. In the second task (time-varying F0), listeners relied on variations in F0 to judge intonation of synthetic diphthongs. For both tasks, three processing conditions were created: eight-channel vocoding that preserved periodicity cues to F0; a simulated electroacoustic stimulation condition, which consisted of high-frequency vocoder processing combined with a low-pass-filtered portion, and offered both periodicity and fine-structure cues to F0; and an unprocessed condition. F0 difference limens for steady state vowel sounds and the ability to discern rising and falling intonations were significantly worse in the older subjects compared with the younger subjects. For both older and younger listeners, scores were lowest for the vocoded condition, and there was no difference in scores between the unprocessed and electroacoustic simulation conditions. Older listeners had difficulty using periodicity cues to obtain information related to talker fundamental frequency. However, performance was improved by combining periodicity cues with (low frequency) acoustic information, and that strategy should be considered in individuals who are appropriate candidates for such processing. 
For cochlear implant candidates, this effect might be achieved by partial

  16. Objective measures of listening effort: effects of background noise and noise reduction.

    PubMed

    Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin

    2009-10-01

    This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. To address this, the hypothesis tested here is that the positive effects of NR might be to reduce cognitive effort directed toward speech reception, making it available for other tasks. Normal-hearing individuals participated in 2 dual-task experiments, in which 1 task was to report sentences or words in noise set to various signal-to-noise ratios. Secondary tasks involved either holding words in short-term memory or responding in a complex visual reaction-time task. At low values of signal-to-noise ratio, although NR had no positive effect on speech reception thresholds, it led to better performance on the word-memory task and quicker responses in visual reaction times. Results from both dual tasks support the hypothesis that NR reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.

  17. Level-dependent changes in detection of temporal gaps in noise markers by adults with normal and impaired hearing

    PubMed Central

    Horwitz, Amy R.; Ahlstrom, Jayne B.; Dubno, Judy R.

    2011-01-01

    Compression in the basilar-membrane input–output response flattens the temporal envelope of a fluctuating signal when more gain is applied to lower level than higher level temporal components. As a result, level-dependent changes in gap detection for signals with different depths of envelope fluctuation and for subjects with normal and impaired hearing may reveal effects of compression. To test these assumptions, gap detection with and without a broadband noise was measured with 1 000-Hz-wide (flatter) and 50-Hz-wide (fluctuating) noise markers as a function of marker level. As marker level increased, background level also increased, maintaining a fixed acoustic signal-to-noise ratio (SNR) to minimize sensation-level effects on gap detection. Significant level-dependent changes in gap detection were observed, consistent with effects of cochlear compression. For the flatter marker, gap detection that declines with increases in level up to mid levels and improves with further increases in level may be explained by an effective flattening of the temporal envelope at mid levels, where compression effects are expected to be strongest. A flatter effective temporal envelope corresponds to a reduced effective SNR. The effects of a reduction in compression (resulting in larger effective SNRs) may contribute to better-than-normal gap detection observed for some hearing-impaired listeners. PMID:22087921
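    The fixed-SNR design described above is simple arithmetic on the dB scale: as the marker level rises, the background level must rise in lockstep. A minimal sketch (the level values are illustrative, not taken from the study):

```python
# Sketch of the fixed acoustic signal-to-noise-ratio manipulation:
# the background noise level tracks the marker level so that the
# marker stays a constant number of dB above the background.

def background_level(marker_level_db: float, snr_db: float) -> float:
    """Background level (dB SPL) keeping the marker snr_db above it."""
    return marker_level_db - snr_db

# As the marker rises from 40 to 80 dB SPL at a fixed +10 dB SNR,
# the background rises with it:
levels = [background_level(m, 10.0) for m in (40, 60, 80)]
print(levels)  # -> [30.0, 50.0, 70.0]
```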

  18. The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  19. Four cases of acoustic neuromas with normal hearing.

    PubMed

    Valente, M; Peterein, J; Goebel, J; Neely, J G

    1995-05-01

    In 95 percent of the cases, patients with acoustic neuromas will have some magnitude of hearing loss in the affected ear. This paper reports on four patients who had acoustic neuromas and normal hearing. Results from the case history, audiometric evaluation, auditory brainstem response (ABR), electroneurography (ENOG), and vestibular evaluation are reported for each patient. For all patients, the presence of unilateral tinnitus was the most common complaint. Audiologically, elevated or absent acoustic reflex thresholds and abnormal ABR findings were the most powerful diagnostic tools.

  20. Can you hear me now? Teaching listening skills.

    PubMed

    Nemec, Patricia B; Spagnolo, Amy Cottone; Soydan, Anne Sullivan

    2017-12-01

    This column provides an overview of methods for training to improve service provider active listening and reflective responding skills. Basic skills in active listening and reflective responding allow service providers to gather information about and explore the needs, desires, concerns, and preferences of people using their services, activities that are of critical importance if services are to be truly person-centered and person-driven. Sources include the personal experience of the authors as well as published literature on the value of basic counseling skills and best practices in training on listening and other related soft skills. Training in listening is often needed but rarely sought by behavioral health service providers. Effective curricula exist, providing content and practice opportunities that can be incorporated into training, supervision, and team meetings. When providers do not listen well to the people who use their services, the entire premise of recovery-oriented person-driven services is undermined. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures.

    PubMed

    Strand, Julia F; Brown, Violet A; Merchant, Madeleine B; Brown, Hunter E; Smith, Julia

    2018-06-19

    Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.

  2. The effects of reverberant self- and overlap-masking on speech recognition in cochlear implant listeners.

    PubMed

    Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S

    2014-06-01

    Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may influence reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition for both normal hearing subjects utilizing an acoustic model and for CI listeners using their own devices.

  3. Effort and Displeasure in People Who Are Hard of Hearing.

    PubMed

    Matthen, Mohan

    2016-01-01

    Listening effort helps explain why people who are hard of hearing are prone to fatigue and social withdrawal. However, a one-factor model that cites only effort due to hardness of hearing is insufficient as there are many who lead happy lives despite their disability. This article explores other contributory factors, in particular motivational arousal and pleasure. The theory of rational motivational arousal predicts that some people forego listening comprehension because they believe it to be impossible and hence worth no effort at all. This is problematic. Why should the listening task be rated this way, given the availability of aids that reduce its difficulty? Two additional factors narrow the explanatory gap. First, we separate the listening task from the benefit derived as a consequence. The latter is temporally more distant, and is discounted as a result. The second factor is displeasure attributed to the listening task, which increases listening cost. Many who are hard of hearing enjoy social interaction. In such cases, the actual activity of listening is a benefit, not a cost. These people also reap the benefits of listening, but do not have to balance these against the displeasure of the task. It is suggested that if motivational harmony can be induced by training in somebody who is hard of hearing, then the obstacle to motivational arousal would be removed. This suggests a modified goal for health care professionals. Do not just teach those who are hard of hearing how to use hearing assistance devices. Teach them how to do so with pleasure and enjoyment.

  4. Increasing motivation changes subjective reports of listening effort and choice of coping strategy.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-06-01

    The purpose of this project was to examine the effect of changing motivation on subjective ratings of listening effort and on the likelihood that a listener chooses either a controlling or an avoidance coping strategy. Two experiments were conducted, one with auditory-only (AO) and one with auditory-visual (AV) stimuli, both using the same speech recognition in noise materials. Four signal-to-noise ratios (SNRs) were used, two in each experiment. The two SNRs targeted 80% and 50% correct performance. Motivation was manipulated by either having participants listen carefully to the speech (low motivation), or listen carefully to the speech and then answer quiz questions about the speech (high motivation). Sixteen participants with normal hearing participated in each experiment. Eight randomly selected participants participated in both. Using AO and AV stimuli, motivation generally increased subjective ratings of listening effort and tiredness. In addition, using auditory-visual stimuli, motivation generally increased listeners' willingness to do something to improve the situation, and decreased their willingness to avoid the situation. These results suggest a listener's mental state may influence listening effort and choice of coping strategy.

  5. Masking release with changing fundamental frequency: Electric acoustic stimulation resembles normal hearing subjects.

    PubMed

    Auinger, Alice Barbara; Riss, Dominik; Liepins, Rudolfs; Rader, Tobias; Keck, Tilman; Keintzel, Thomas; Kaider, Alexandra; Baumgartner, Wolf-Dieter; Gstoettner, Wolfgang; Arnoldner, Christoph

    2017-07-01

    It has been shown that patients with electric acoustic stimulation (EAS) perform better in noisy environments than patients with a cochlear implant (CI). One reason for this could be the preserved access to acoustic low-frequency cues including the fundamental frequency (F0). Therefore, our primary aim was to investigate whether users of EAS experience a release from masking with increasing F0 difference between target talker and masking talker. The study comprised 29 patients and consisted of three groups of subjects: EAS users, CI users and normal-hearing listeners (NH). All CI and EAS users were implanted with a MED-EL cochlear implant and had at least 12 months of experience with the implant. Speech perception was assessed with the Oldenburg sentence test (OlSa) using one sentence from the test corpus as speech masker. The F0 in this masking sentence was shifted upwards by 4, 8, or 12 semitones. For each of these masker conditions the speech reception threshold (SRT) was assessed by adaptively varying the masker level while presenting the target sentences at a fixed level. A statistically significant improvement in speech perception was found for increasing difference in F0 between target sentence and masker sentence in EAS users (p = 0.038) and in NH listeners (p = 0.003). In CI users (classic CI or EAS users with electrical stimulation only) speech perception was independent from differences in F0 between target and masker. A release from masking with increasing difference in F0 between target and masking speech was only observed in listeners and configurations in which the low-frequency region was presented acoustically. Thus, the speech information contained in the low frequencies seems to be crucial for allowing listeners to separate multiple sources. By combining acoustic and electric information, EAS users even manage tasks as complicated as segregating the audio streams from multiple talkers. Preserving the natural code, like fine-structure cues in
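    The 4-, 8-, and 12-semitone F0 shifts applied to the masker correspond to frequency ratios on the equal-tempered scale, where each semitone multiplies frequency by 2^(1/12). A quick sketch, using 120 Hz as an assumed, purely illustrative masker F0 (the paper does not state the value):

```python
# Convert an equal-tempered semitone shift to a frequency ratio:
# n semitones multiplies frequency by 2 ** (n / 12).

def shift_semitones(f0_hz: float, semitones: float) -> float:
    """Frequency after shifting f0 by the given number of semitones."""
    return f0_hz * 2.0 ** (semitones / 12.0)

# Illustrative F0 of 120 Hz; 12 semitones is exactly one octave (doubling).
for n in (4, 8, 12):
    print(n, round(shift_semitones(120.0, n), 1))
```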

  6. Lexical and age effects on word recognition in noise in normal-hearing children.

    PubMed

    Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing

    2015-12-01

    The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children (ages ranged from 3 to 6 years). Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. Word-recognition performance in noise was significantly poorer than that in quiet, and the individual variation in performance in noise was much greater than that in quiet. Word-recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and lexical characteristics of words had significant influences on the performance of Mandarin-Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. The word

  7. A Spondee Recognition Test for Young Hearing-Impaired Children

    ERIC Educational Resources Information Center

    Cramer, Kathryn D.; Erber, Norman P.

    1974-01-01

    An auditory test of 10 spondaic words recorded on Language Master cards was presented monaurally, through insert receivers to 58 hearing-impaired young children to evaluate their ability to recognize familiar speech material. (MYS)

  8. The eye as a window to the listening brain: neural correlates of pupil size as a measure of cognitive listening load.

    PubMed

    Zekveld, Adriana A; Heslenfeld, Dirk J; Johnsrude, Ingrid S; Versfeld, Niek J; Kramer, Sophia E

    2014-11-01

    An important aspect of hearing is the degree to which listeners have to deploy effort to understand speech. One promising measure of listening effort is task-evoked pupil dilation. Here, we use functional magnetic resonance imaging (fMRI) to identify the neural correlates of pupil dilation during comprehension of degraded spoken sentences in 17 normal-hearing listeners. Subjects listened to sentences degraded in three different ways: the target female speech was masked by fluctuating noise, by speech from a single male speaker, or the target speech was noise-vocoded. The degree of degradation was individually adapted such that 50% or 84% of the sentences were intelligible. Control conditions included clear speech in quiet, and silent trials. The peak pupil dilation was larger for the 50% compared to the 84% intelligibility condition, and largest for speech masked by the single-talker masker, followed by speech masked by fluctuating noise, and smallest for noise-vocoded speech. Activation in the bilateral superior temporal gyrus (STG) showed the same pattern, with most extensive activation for speech masked by the single-talker masker. Larger peak pupil dilation was associated with more activation in the bilateral STG, bilateral ventral and dorsal anterior cingulate cortex and several frontal brain areas. A subset of the temporal region sensitive to pupil dilation was also sensitive to speech intelligibility and degradation type. These results show that pupil dilation during speech perception in challenging conditions reflects both auditory and cognitive processes that are recruited to cope with degraded speech and the need to segregate target speech from interfering sounds. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Listen and learn: engaging young people, their families and schools in early intervention research

    PubMed Central

    Connor, Charlotte

    2017-01-01

    Recent policy guidelines highlight the importance of increasing the identification of young people at risk of developing mental health problems in order to prevent their transition to long-term problems, avoid crisis and remove the need for care through specialist mental health services or hospitalisation. Early awareness of the often insidious behavioural and cognitive changes associated with deteriorating mental well-being, however, is difficult, but it is vital if young people, their families and those who work with them are to be fully equipped with the skills to aid early help-seeking. Our early intervention research continues to highlight the necessity of engaging with and listening to the voices of young people, families and those who work with children and young people, in developing greater understanding of why some young people may be more at risk in terms of their mental health, and to provide children and young people with the best mental health support we can. Collaborative working with young people, their families and those who work with them has been an essential dimension of our youth mental health research in Birmingham, UK, enabling us to listen to the personal narratives of those with lived experience and to work alongside them. This paper highlights some of our key studies and how we have endeavoured to make intra-agency working successful at each stage of the research process through increasing use of digital and youth-informed resources to engage young people: a methodology which continues to inform, guide and develop our early intervention research and implementation. PMID:28559370

  10. Measuring listening-related effort and fatigue in school-aged children using pupillometry.

    PubMed

    McGarrigle, Ronan; Dawes, Piers; Stewart, Andrew J; Kuchinsky, Stefanie E; Munro, Kevin J

    2017-09-01

    Stress and fatigue from effortful listening may compromise well-being, learning, and academic achievement in school-aged children. The aim of this study was to investigate the effect of a signal-to-noise ratio (SNR) typical of those in school classrooms on listening effort (behavioral and pupillometric) and listening-related fatigue (self-report and pupillometric) in a group of school-aged children. A sample of 41 normal-hearing children aged 8-11 years performed a narrative speech-picture verification task in a condition with recommended levels of background noise ("ideal": +15 dB SNR) and a condition with typical classroom background noise levels ("typical": -2 dB SNR). Participants showed increased task-evoked pupil dilation in the typical listening condition compared with the ideal listening condition, consistent with an increase in listening effort. No differences were found between listening conditions in terms of performance accuracy and response time on the behavioral task. Similarly, no differences were found between listening conditions in self-report and pupillometric markers of listening-related fatigue. This is the first study to (a) examine listening-related fatigue in children using pupillometry and (b) demonstrate physiological evidence consistent with increased listening effort while listening to spoken narratives despite ceiling-level task performance accuracy. Understanding the physiological mechanisms that underpin listening-related effort and fatigue could inform intervention strategies and ultimately mitigate listening difficulties in children. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Modeling Speech Level as a Function of Background Noise Level and Talker-to-Listener Distance for Talkers Wearing Hearing Protection Devices

    ERIC Educational Resources Information Center

    Bouserhal, Rachel E.; Bockstael, Annelies; MacDonald, Ewen; Falk, Tiago H.; Voix, Jérémie

    2017-01-01

    Purpose: Studying the variations in speech levels with changing background noise level and talker-to-listener distance for talkers wearing hearing protection devices (HPDs) can aid in understanding communication in background noise. Method: Speech was recorded using an intra-aural HPD from 12 different talkers at 5 different distances in 3…

  12. Effects of age on F0-discrimination and intonation perception in simulated electric and electro-acoustic hearing

    PubMed Central

    Souza, Pamela; Arehart, Kathryn; Miller, Christi Wise; Muralimanohar, Ramesh Kumar

    2010-01-01

    Objectives Recent research suggests that older listeners may have difficulty processing information related to the fundamental frequency (F0) of voiced speech. In this study, the focus was on the mechanisms that may underlie this reduced ability. We examined whether increased age resulted in decreased ability to perceive F0 using fine structure cues provided by the harmonic structure of voiced speech sounds and/or cues provided by high-rate envelope fluctuations (periodicity). Design Younger listeners with normal hearing and older listeners with normal to near-normal hearing completed two tasks of F0 perception. In the first task (steady-state F0), the fundamental frequency difference limen (F0DL) was measured adaptively for synthetic vowel stimuli. In the second task (time-varying F0), listeners relied on variations in F0 to judge intonation of synthetic diphthongs. For both tasks, three processing conditions were created: 8-channel vocoding which preserved periodicity cues to F0; a simulated electroacoustic stimulation condition, which consisted of high-frequency vocoder processing combined with a low-pass filtered portion, and offered both periodicity and fine-structure cues to F0; and an unprocessed condition. Results F0 difference limens for steady-state vowel sounds and the ability to discern rising and falling intonations were significantly worse in the older subjects compared to the younger subjects. For both older and younger listeners scores were lowest for the vocoded condition, and there was no difference in scores between the unprocessed and electroacoustic simulation conditions. Conclusions Older listeners had difficulty using periodicity cues to obtain information related to talker fundamental frequency. However, performance was improved by combining periodicity cues with (low-frequency) acoustic information, and that strategy should be considered in individuals who are appropriate candidates for such processing. 
For cochlear implant candidates, that

  13. The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and Evaluation.

    PubMed

    Singh, Gurjit; Liskovoi, Lisa; Launer, Stefan; Russo, Frank

    2018-06-11

    The objectives of this research were to develop and evaluate a self-report questionnaire (the Emotional Communication in Hearing Questionnaire, or EMO-CHeQ) designed to assess experiences of hearing and handicap when listening to signals that contain vocal emotion information. Study 1 involved internet-based administration of a 42-item version of the EMO-CHeQ to 586 adult participants (243 with self-reported normal hearing [NH], 193 with self-reported hearing impairment but no reported use of hearing aids [HI], and 150 with self-reported hearing impairment and use of hearing aids [HA]). To better understand the factor structure of the EMO-CHeQ and eliminate redundant items, an exploratory factor analysis was conducted. Study 2 involved laboratory-based administration of a 16-item version of the EMO-CHeQ to 32 adult participants (12 normal-hearing/near-normal-hearing [NH/nNH], 10 HI, and 10 HA). In addition, participants completed an emotion-identification task under audio and audiovisual conditions. In study 1, the exploratory factor analysis yielded an interpretable solution, with four factors emerging that explained a total of 66.3% of the variance in performance on the EMO-CHeQ. Item deletion resulted in construction of the 16-item EMO-CHeQ. In study 1, both the HI and HA groups reported greater vocal emotion communication handicap on the EMO-CHeQ than the NH group, but differences in handicap were not observed between the HI and HA groups. In study 2, the same pattern of reported handicap was observed in individuals with audiometrically verified hearing as was found in study 1. On the emotion-identification task, no group differences in performance were observed in the audiovisual condition, but group differences were observed in the audio-alone condition. Although the HI and HA groups exhibited similar emotion-identification performance, both groups performed worse than the NH/nNH group, thus suggesting the presence of behavioral deficits that parallel self

  14. Social Connectedness and Perceived Listening Effort in Adult Cochlear Implant Users: A Grounded Theory to Establish Content Validity for a New Patient-Reported Outcome Measure.

    PubMed

    Hughes, Sarah E; Hutchings, Hayley A; Rapport, Frances L; McMahon, Catherine M; Boisvert, Isabelle

    2018-02-08

    Individuals with hearing loss often report a need for increased effort when listening, particularly in challenging acoustic environments. Despite audiologists' recognition of the impact of listening effort on individuals' quality of life, there are currently no standardized clinical measures of listening effort, including patient-reported outcome measures (PROMs). To generate items and content for a new PROM, this qualitative study explored the perceptions, understanding, and experiences of listening effort in adults with severe-profound sensorineural hearing loss before and after cochlear implantation. Three focus groups (1 to 3) were conducted. Purposive sampling was used to recruit 17 participants from a cochlear implant (CI) center in the United Kingdom. The participants included adults (n = 15, mean age = 64.1 years, range 42 to 84 years) with acquired severe-profound sensorineural hearing loss who satisfied the UK's national candidacy criteria for cochlear implantation and their normal-hearing significant others (n = 2). Participants were CI candidates who used hearing aids (HAs) and were awaiting CI surgery or CI recipients who used a unilateral CI or a CI and contralateral HA (CI + HA). Data from a pilot focus group conducted with 2 CI recipients were included in the analysis. The data, verbatim transcripts of the focus group proceedings, were analyzed qualitatively using constructivist grounded theory (GT) methodology. A GT of listening effort in cochlear implantation was developed from participants' accounts. The participants provided rich, nuanced descriptions of the complex and multidimensional nature of their listening effort. Interpreting and integrating these descriptions through GT methodology, listening effort was described as the mental energy required to attend to and process the auditory signal, as well as the effort required to adapt to, and compensate for, a hearing loss. 
Analyses also suggested that listening effort for most participants was

  15. Factors associated with Hearing Loss in a Normal-Hearing Guinea Pig Model of Hybrid Cochlear Implants

    PubMed Central

    Tanaka, Chiemi; Nguyen-Huynh, Anh; Loera, Katherine; Stark, Gemaine; Reiss, Lina

    2014-01-01

    The Hybrid cochlear implant (CI), also known as Electro-Acoustic Stimulation (EAS), is a new type of CI that preserves residual acoustic hearing and enables combined cochlear implant and hearing aid use in the same ear. However, 30-55% of patients experience acoustic hearing loss within days to months after activation, suggesting that both surgical trauma and electrical stimulation may cause hearing loss. The goals of this study were to: 1) determine the contributions of both implantation surgery and EAS to hearing loss in a normal-hearing guinea pig model; 2) determine which cochlear structural changes are associated with hearing loss after surgery and EAS. Two groups of animals were implanted (n=6 per group), with one group receiving chronic acoustic and electric stimulation for 10 weeks, and the other group receiving no direct acoustic or electric stimulation during this time frame. A third group (n=6) was not implanted, but received chronic acoustic stimulation. Auditory brainstem response thresholds were followed over time at 1, 2, 6, and 16 kHz. At the end of the study, the following cochlear measures were quantified: hair cells, spiral ganglion neuron density, fibrous tissue density, and stria vascularis blood vessel density; the presence or absence of ossification around the electrode entry was also noted. After surgery, implanted animals experienced a range of 0-55 dB of threshold shifts in the vicinity of the electrode at 6 and 16 kHz. The degree of hearing loss was significantly correlated with reduced stria vascularis vessel density and with the presence of ossification, but not with hair cell counts, spiral ganglion neuron density, or fibrosis area. After 10 weeks of stimulation, 67% of implanted, stimulated animals had more than 10 dB of additional threshold shift at 1 kHz, compared to 17% of implanted, non-stimulated animals and 0% of non-implanted animals. This 1-kHz hearing loss was not associated with changes in any of the cochlear measures

  16. Trends in the prevalence of hearing loss among young adults entering an industrial workforce 1985 to 2004.

    PubMed

    Rabinowitz, Peter M; Slade, Martin D; Galusha, Deron; Dixon-Ernst, Christine; Cullen, Mark R

    2006-08-01

    Studies have suggested that hearing loss due to recreational noise exposure may be on the rise among adolescents and young adults. This study examines whether the hearing status of young US adults entering an industrial workforce has worsened over the past 20 yr. The baseline audiograms of 2526 individuals ages 17 to 25 beginning employment at a multisite US corporation between 1985 and 2004 were analyzed to determine the yearly prevalence of hearing loss. Approximately 16% of the young adults in the sample had high-frequency hearing loss (defined as hearing thresholds greater than 15 dB in either ear at 3, 4, or 6 kHz). In a linear regression model, this prevalence decreased over the 20-yr period (odds ratio (OR) = 0.96, 95% confidence interval (CI): 0.94, 0.99). Almost 20% of subjects had audiometric "notches" consistent with noise exposure; this rate remained constant over the 20 yr, as did the prevalence (5%) of low-frequency hearing loss. These results indicate that despite concern about widespread recreational noise exposures, the prevalence of hearing loss among a group of young US adults has not significantly increased over the past two decades.
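    A per-year odds ratio like the OR = 0.96 reported above compounds multiplicatively across the study window. As a rough illustration only (the compounding-odds reading and the baseline value are my sketch, not the authors' model):

```python
# Illustration: compounding a per-year odds ratio over 20 years.
# OR = 0.96/yr and baseline prevalence ~16% are taken from the abstract;
# applying them this way is an interpretive sketch, not the study's analysis.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prevalence_after(p0: float, or_per_year: float, years: int) -> float:
    """Prevalence implied by compounding a per-year odds ratio."""
    o = odds(p0) * or_per_year ** years
    return o / (1.0 + o)

print(round(0.96 ** 20, 3))                     # cumulative OR over 20 yr -> 0.442
print(round(prevalence_after(0.16, 0.96, 20), 3))
```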

  17. Hearing: Noise-Induced Hearing Loss

    MedlinePlus

    ... stereo headsets (at about 110 dB), attending a rock concert (at about 120 dB), or hearing a ... by listening to parents, teachers, television, and radio. Music, the sounds of nature, and the voices of ...

  18. The Perception of Stress Pattern in Young Cochlear Implanted Children: An EEG Study.

    PubMed

    Vavatzanidis, Niki K; Mürbe, Dirk; Friederici, Angela D; Hahne, Anja

    2016-01-01

    Children with sensorineural hearing loss may (re)gain hearing with a cochlear implant-a device that transforms sounds into electric pulses and bypasses the dysfunctional inner ear by stimulating the auditory nerve directly with an electrode array. Many implanted children master the acquisition of spoken language successfully, yet we still have little knowledge of the actual input they receive with the implant and specifically which language-sensitive cues they hear. This would be important, however, both for understanding the flexibility of the auditory system when presented with stimuli after a (life-)long phase of deprivation and for planning therapeutic intervention. In rhythmic languages the general stress pattern conveys important information about word boundaries. Infant language acquisition relies on such cues and can be severely hampered when this information is missing, as seen for dyslexic children and children with specific language impairment. Here we ask whether children with a cochlear implant perceive differences in stress patterns during their language acquisition phase and, if they do, whether this ability is present directly following implant stimulation or if and how much time is needed for the auditory system to adapt to the new sensory modality. We performed a longitudinal ERP study, testing in bimonthly intervals the stress pattern perception of 17 young hearing-impaired children (age range: 9-50 months; mean: 22 months) during their first 6 months of implant use. An additional session before the implantation served as control baseline. During a session they passively listened to an oddball paradigm featuring the disyllable "baba," which was stressed either on the first or second syllable (trochaic vs. iambic stress pattern). A group of age-matched normal-hearing children participated as controls. Our results show that, within the first 6 months of implant use, the implanted children develop a negative mismatch response for iambic but not for trochaic

  19. Listening comprehension across the adult lifespan

    PubMed Central

    Sommers, Mitchell S.; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

    The current study provides the first systematic assessment of listening comprehension across the adult lifespan. A total of 433 participants ranging in age from 20 to 90 listened to spoken passages and answered comprehension questions following each passage. In addition, measures of auditory sensitivity were obtained from all participants to determine if hearing loss and listening comprehension changed similarly across the adult lifespan. As expected, auditory sensitivity declined from age 20 to age 90. In contrast, listening comprehension remained relatively unchanged until approximately age 65-70, with declines evident only for the oldest participants. PMID:21716112

  20. EEG activity as an objective measure of cognitive load during effortful listening: A study on pediatric subjects with bilateral, asymmetric sensorineural hearing loss.

    PubMed

    Marsella, Pasquale; Scorpecci, Alessandro; Cartocci, Giulia; Giannantonio, Sara; Maglione, Anton Giulio; Venuti, Isotta; Brizi, Ambra; Babiloni, Fabio

    2017-08-01

    Deaf subjects with hearing aids or cochlear implants generally find it challenging to understand speech in noisy environments where a great deal of listening effort and cognitive load are invested. In prelingually deaf children, such difficulties may have detrimental consequences on the learning process and, later in life, on academic performance. Despite the importance of such a topic, currently, there is no validated test for the assessment of cognitive load during audiological tasks. Recently, alpha and theta EEG rhythm variations in the parietal and frontal areas, respectively, have been used as indicators of cognitive load in adult subjects. The aim of the present study was to investigate, by means of EEG, the cognitive load of pediatric subjects affected by asymmetric sensorineural hearing loss as they were engaged in a speech-in-noise identification task. Seven children (4F and 3M, age range = 8-16 years) affected by asymmetric sensorineural hearing loss (i.e. profound degree on one side, mild-to-severe degree on the other side) and using a hearing aid only in their better ear, were included in the study. All of them underwent EEG recording during a speech-in-noise identification task: the experimental conditions were quiet, binaural noise, noise to the better hearing ear and noise to the poorer hearing ear. The subjects' Speech Recognition Thresholds (SRT) were also measured in each test condition. The primary outcome measures were: frontal EEG Power Spectral Density (PSD) in the theta band and parietal EEG PSD in the alpha band, as assessed before stimulus (word) onset. No statistically significant differences were noted among frontal theta power levels in the four test conditions. However, parietal alpha power levels were significantly higher in the "binaural noise" and in the "noise to worse hearing ear" conditions than in the "quiet" and "noise to better hearing ear" conditions (p < 0.001). SRT scores were consistent with task difficulty, but did

  1. Word Recognition and Learning: Effects of Hearing Loss and Amplification Feature

    PubMed Central

    Stewart, Elizabeth C.; Willman, Amanda P.; Odgear, Ian S.

    2017-01-01

    Two amplification features were examined using auditory tasks that varied in stimulus familiarity. It was expected that the benefits of certain amplification features would increase as the familiarity with the stimuli decreased. A total of 20 children and 15 adults with normal hearing as well as 21 children and 17 adults with mild to severe hearing loss participated. Three models of ear-level devices were selected based on the quality of the high-frequency amplification or the digital noise reduction (DNR) they provided. The devices were fitted to each participant and used during testing only. Participants completed three tasks: (a) word recognition, (b) repetition and lexical decision of real and nonsense words, and (c) novel word learning. Performance improved significantly with amplification for both the children and the adults with hearing loss. Wideband amplification yielded further improvement, more so for the children than for the adults. In steady-state noise and multitalker babble, performance decreased for both groups with little to no benefit from amplification or from the use of DNR. When compared with the listeners with normal hearing, significantly poorer performance was observed for both the children and adults with hearing loss on all tasks with few exceptions. Finally, analysis of across-task performance confirmed the hypothesis that benefit increased as the familiarity of the stimuli decreased for wideband amplification but not for DNR. However, users who prefer DNR for listening comfort are not likely to jeopardize their ability to detect and learn new information when using this feature. PMID:29169314

  2. Salivary Cortisol Profiles of Children with Hearing Loss

    ERIC Educational Resources Information Center

    Bess, Fred H.; Gustafson, Samantha J.; Corbett, Blythe A.; Lambert, E. Warren; Camarata, Stephen M.; Hornsby, Benjamin W. Y.

    2016-01-01

    Objectives: It has long been speculated that effortful listening places children with hearing loss at risk for fatigue. School-age children with hearing loss experiencing cumulative stress and listening fatigue on a daily basis might undergo dysregulation of hypothalamic-pituitary-adrenal (HPA) axis activity resulting in elevated or flattened…

  3. Evaluation of cochlear function in normal-hearing young adults exposed to MP3 player noise by analyzing transient evoked otoacoustic emissions and distortion products.

    PubMed

    Santaolalla Montoya, Francisco; Ibargüen, Agustín Martinez; Vences, Ana Rodriguez; del Rey, Ana Sanchez; Fernandez, Jose Maria Sanchez

    2008-10-01

    Exposure to recreational noise may cause injuries to the inner ear, and transient evoked (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) may identify these cochlear alterations. The goal of this study was to evaluate TEOAEs and DPOAEs as a method to diagnose early cochlear alterations in young adults exposed to MP3 player noise. We performed a prospective study of the cochlear function in normal-hearing MP3 player users by analyzing TEOAE and DPOAE incidence, amplitude, and spectral content. We gathered a sample of 40 ears from patients between 19 and 29 years old (mean age 24.09 years, SD 3.9 years). We compared the results with those of a control group of 232 ears not exposed to MP3 noise from patients aged 18 to 32 years (mean age 23.35 years, SD 2.7 years). Half of the ears were from female subjects and half from male subjects. Subjects who had used MP3 players for the most years and for the most hours each week exhibited a reduction in TEOAE and DPOAE incidence and amplitudes and an increase in DPOAE thresholds. TEOAEs showed statistically significantly lower incidence and amplitudes for normal-hearing subjects using MP3 players at frequencies of 2000, 3000, and 4000 Hz. DPOAE incidence was lower at 700, 1000, 1500, and 2000 Hz; the amplitudes were lower at frequencies between 1500 and 6000 Hz; and the thresholds were higher for all frequency bands, statistically significantly so at frequencies from 1500 to 6000 Hz (p < .05). Cochlear impairment caused by MP3 player noise exposure may be detectable by analyzing TEOAEs and DPOAEs before the impairment becomes clinically apparent.

  4. Evaluation of gap filling skills and reading mistakes of cochlear implanted and normally hearing students.

    PubMed

    Çizmeci, Hülya; Çiprut, Ayça

    2018-06-01

    This study aimed to (1) evaluate the gap filling skills and reading mistakes of students with cochlear implants, and (2) compare their results with those of their normal-hearing peers. The effects of implantation age and total time of cochlear implant use were analyzed in relation to the subjects' reading skills development. The study included 19 students who underwent cochlear implantation and 20 students with normal hearing, enrolled in the 6th to 8th grades. The subjects' ages ranged between 12 and 14 years. Their reading skills were evaluated using the Informal Reading Inventory. A significant difference was found between implanted and normal-hearing students in both the percentage of reading errors and the gap filling scores. The average rate of reading errors of students using cochlear implants was higher than that of normal-hearing students. As for gap filling, the performance of implanted students on the passages was lower than that of their normal-hearing peers. No significant effect of age at implantation or duration of implant use on the reading performance of implanted students was found. Even when implanted early, implanted students in the older grades showed significantly poorer reading performance than their normal-hearing peers. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    PubMed Central

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  6. Longitudinal Development of Distortion Product Otoacoustic Emissions in Infants With Normal Hearing.

    PubMed

    Hunter, Lisa L; Blankenship, Chelsea M; Keefe, Douglas H; Feeney, M Patrick; Brown, David K; McCune, Annie; Fitzpatrick, Denis F; Lin, Li

    2018-01-23

    The purpose of this study was to describe normal characteristics of distortion product otoacoustic emission (DPOAE) signal and noise level in a group of newborns and infants with normal hearing followed longitudinally from birth to 15 months of age. This is a prospective, longitudinal study of 231 infants who passed newborn hearing screening and were verified to have normal hearing. Infants were enrolled from a well-baby nursery and two neonatal intensive care units (NICUs) in Cincinnati, OH. Normal hearing was confirmed with threshold auditory brainstem response and visual reinforcement audiometry. DPOAEs were measured in up to four study visits over the first year after birth. Stimulus frequencies f1 and f2 were used with f2/f1 = 1.22, and the DPOAE was recorded at frequency 2f1-f2. A longitudinal repeated-measure linear mixed model design was used to study changes in DPOAE level and noise level as related to age, middle ear transfer, race, and NICU history. Significant changes in the DPOAE and noise levels occurred from birth to 12 months of age. DPOAE levels were the highest at 1 month of age. The largest decrease in DPOAE level occurred between 1 and 5 months of age in the mid to high frequencies (2 to 8 kHz), with minimal changes occurring between 6, 9, and 12 months of age. The decrease in DPOAE level was significantly related to a decrease in wideband absorbance at the same f2 frequencies. DPOAE noise level increased only slightly with age over the first year, with the highest noise levels in the 12-month-old age range. Minor, nonsystematic effects for NICU history, race, and gestational age at birth were found; thus, these results are generalizable to commonly seen clinical populations. DPOAE levels were related to wideband middle ear absorbance changes in this large sample of infants confirmed to have normal hearing at auditory brainstem response and visual reinforcement audiometry testing. This normative database can be used to evaluate clinical results
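    The stimulus relationship described above is fully determined by f2 and the fixed ratio: with f2/f1 = 1.22, the cubic distortion product 2f1-f2 always falls below both primaries. A minimal sketch of the arithmetic (the 4000 Hz example value is illustrative, not from the study):

```python
def dpoae_frequencies(f2: float, ratio: float = 1.22) -> tuple[float, float]:
    """Given f2 and the f2/f1 ratio, return (f1, 2*f1 - f2):
    the lower primary and the cubic distortion-product frequency."""
    f1 = f2 / ratio
    return f1, 2.0 * f1 - f2

f1, fdp = dpoae_frequencies(4000.0)  # f1 ≈ 3278.7 Hz, fdp ≈ 2557.4 Hz
```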

  7. Associations between speech understanding and auditory and visual tests of verbal working memory: effects of linguistic complexity, task, age, and hearing loss

    PubMed Central

    Smith, Sherri L.; Pichora-Fuller, M. Kathleen

    2015-01-01

    Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769

  8. Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners

    PubMed Central

    Pham, Carol Q.; Bremen, Peter; Shen, Weidong; Yang, Shi-Ming; Middlebrooks, John C.; Zeng, Fan-Gang; Mc Laughlin, Myles

    2015-01-01

    Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but that processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. By contrast, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners. PMID:26176553

  9. Safety of the HyperSound® Audio System in Subjects with Normal Hearing.

    PubMed

    Mehta, Ritvik P; Mattson, Sara L; Kappus, Brian A; Seitzman, Robin L

    2015-06-11

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal-hearing subjects under normal use conditions, using a pre-exposure/post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal-hearing subjects under normal use conditions.

  10. Safety of the HyperSound® Audio System in Subjects with Normal Hearing

    PubMed Central

    Mattson, Sara L.; Kappus, Brian A.; Seitzman, Robin L.

    2015-01-01

    The objective of the study was to assess the safety of the HyperSound® Audio System (HSS), a novel audio system using ultrasound technology, in normal-hearing subjects under normal use conditions, using a pre-exposure/post-exposure test design. We investigated primary and secondary outcome measures: i) temporary threshold shift (TTS), defined as >10 dB shift in pure tone air conduction thresholds and/or a decrement in distortion product otoacoustic emissions (DPOAEs) >10 dB at two or more frequencies; ii) presence of new-onset otologic symptoms after exposure. Twenty adult subjects with normal hearing underwent a pre-exposure assessment (pure tone air conduction audiometry, tympanometry, DPOAEs and otologic symptoms questionnaire), followed by exposure to a 2-h movie with sound delivered through the HSS emitter, followed by a post-exposure assessment. No TTS or new-onset otologic symptoms were identified. HSS demonstrates excellent safety in normal-hearing subjects under normal use conditions. PMID:26779330

  11. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear

    PubMed Central

    Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.

    2014-01-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested: bilateral hearing aids, bimodal (implant + contralateral hearing aid), and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients, ranging from within the normal range to values larger than any ITD encountered in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing-preservation-related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490

  12. Localization and interaural time difference (ITD) thresholds for cochlear implant recipients with preserved acoustic hearing in the implanted ear.

    PubMed

    Gifford, René H; Grantham, D Wesley; Sheffield, Sterling W; Davis, Timothy J; Dwyer, Robert; Dorman, Michael F

    2014-06-01

    The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from -90 to +90°. Three listening conditions were tested: bilateral hearing aids, bimodal (implant + contralateral hearing aid), and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100-900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients, ranging from within the normal range to values larger than any ITD encountered in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing-preservation-related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. Copyright © 2014. Published by Elsevier B.V.
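    For context on the ITD range reported above: the classic Woodworth spherical-head approximation, ITD = (a/c)(θ + sin θ), puts the largest naturally occurring ITD for an adult human head at roughly 650-700 μs, which is why thresholds above 1600 μs exceed anything encountered in real-world listening. A sketch (the head radius and speed of sound are typical textbook values, not parameters from the study):

```python
import math

def woodworth_itd_us(azimuth_deg: float,
                     head_radius_m: float = 0.0875,
                     speed_of_sound_m_s: float = 343.0) -> float:
    """Woodworth ITD approximation in microseconds:
    ITD = (a/c) * (theta + sin(theta)), theta = azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return 1e6 * (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

woodworth_itd_us(90.0)  # ≈ 656 μs for a source directly to the side
```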

  13. The influence of informational masking on speech perception and pupil response in adults with hearing impairment.

    PubMed

    Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E

    2014-03-01

    A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise.

  14. Variation in Music Player Listening Level as a Function of Campus Location.

    PubMed

    Park, Yunea; Guercio, Diana; Ledon, Victoria; Le Prell, Colleen G

    2017-04-01

    There has been significant discussion in the literature regarding music player use by adolescents and young adults, including whether device use is driving an increase in hearing loss in these populations. While many studies report relatively safe preferred listening levels, some studies with college student participants have reported listening habits that may put individuals at risk for noise-induced hearing loss (NIHL) if those listening habits continue over the long term. The goal of the current investigation was to extend listening level data collection sites from urban city settings studied by others to a more rural campus setting. This was a prospective study. Participants were 138 students on the University of Florida campus (94 males, 44 females), 18 years or older (mean = 21 years; range: 18-33 years). In this investigation, the current output level (listening level) was measured from personal listening devices used by students as they passed by a recruiting table located in one of three areas of the University of Florida campus. One location was in an open-air campus square; the other two locations were outside the campus recreation building ("gym") and outside the undergraduate library, with participants recruited as they exited the gym or library buildings. After providing written informed consent, participants completed a survey that included questions about demographics and typical listening habits (hours per day, days per week). The output level on their device was then measured using a "Jolene" mannequin. Average listening levels for participants at the three locations were as follows: gym: 85.9 ± 1.4 dBA; campus square: 83.3 ± 2.0 dBA; library: 76.9 ± 1.3 dBA. After adjusting to free-field equivalent level, average listening levels were gym: 79.7 ± 1.4 dBA; campus square: 76.9 ± 2.1 dBA; library: 70.4 ± 1.4 dBA. There were no statistically significant differences between male and female listeners, and there were no reliable differences as a
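    A general caution when summarizing sound levels like those reported above: decibel values cannot simply be averaged arithmetically if an energy-equivalent level is wanted; they must first be converted to linear intensity. A sketch of the energy-based mean (this is a standard acoustics computation, not a claim about how the study computed its means; example levels are illustrative):

```python
import math

def energy_mean_db(levels_dba: list[float]) -> float:
    """Energy-equivalent mean of sound levels in dBA:
    convert to linear intensity, average, convert back to dB."""
    linear = [10.0 ** (level / 10.0) for level in levels_dba]
    return 10.0 * math.log10(sum(linear) / len(linear))

energy_mean_db([80.0, 80.0])  # 80.0: equal levels are unchanged
energy_mean_db([70.0, 90.0])  # ≈ 87.0: the louder level dominates
```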

  15. Top-down Processes in Simulated Electric-Acoustic Hearing: The Effect of Linguistic Context on Bimodal Benefit for Temporally Interrupted Speech

    PubMed Central

    Oh, Soo Hee; Donaldson, Gail S.; Kong, Ying-Yee

    2016-01-01

    Objectives: Previous studies have documented the benefits of bimodal hearing as compared with a CI alone, but most have focused on the importance of bottom-up, low-frequency cues. The purpose of the present study was to evaluate the role of top-down processing in bimodal hearing by measuring the effect of sentence context on bimodal benefit for temporally interrupted sentences. It was hypothesized that low-frequency acoustic cues would facilitate the use of contextual information in the interrupted sentences, resulting in greater bimodal benefit for the higher context (CUNY) sentences than for the lower context (IEEE) sentences. Design: Young normal-hearing listeners were tested in simulated bimodal listening conditions in which noise band vocoded sentences were presented to one ear with or without low-pass (LP) filtered speech or LP harmonic complexes (LPHCs) presented to the contralateral ear. Speech recognition scores were measured in three listening conditions: vocoder-alone, vocoder combined with LP speech, and vocoder combined with LPHCs. Temporally interrupted versions of the CUNY and IEEE sentences were used to assess listeners’ ability to fill in missing segments of speech by using top-down linguistic processing. Sentences were square-wave gated at a rate of 5 Hz with a 50 percent duty cycle. Three vocoder channel conditions were tested for each type of sentence (8, 12, and 16 channels for CUNY; 12, 16, and 32 channels for IEEE) and bimodal benefit was compared for similar amounts of spectral degradation (matched-channel comparisons) and similar ranges of baseline performance. Two gain measures, percentage-point gain and normalized gain, were examined. Results: Significant effects of context on bimodal benefit were observed when LP speech was presented to the residual-hearing ear. For the matched-channel comparisons, CUNY sentences showed significantly higher normalized gains than IEEE sentences for both the 12-channel (20 points higher) and 16-channel (18
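    The two gain measures named in the abstract can be distinguished with a simple formula. Percentage-point gain is the raw difference between the bimodal and vocoder-alone scores; normalized gain, as conventionally defined in this literature, expresses that difference as a fraction of the headroom available above baseline, so improvements near ceiling are not undervalued. A sketch (the definition is the conventional one; the example scores are illustrative, not the study's data):

```python
def percentage_point_gain(baseline: float, bimodal: float) -> float:
    """Raw improvement in percent-correct."""
    return bimodal - baseline

def normalized_gain(baseline: float, bimodal: float) -> float:
    """Improvement as a percentage of the headroom above baseline
    (100 - baseline)."""
    return 100.0 * (bimodal - baseline) / (100.0 - baseline)

percentage_point_gain(60.0, 80.0)  # 20.0 points
normalized_gain(60.0, 80.0)        # 50.0: half the available headroom recovered
```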

  16. Listening Effort Through Depth of Processing in School-Age Children.

    PubMed

    Hsu, Benson Cheng-Lin; Vanpoucke, Filiep; van Wieringen, Astrid

A reliable and practical measure of listening effort is crucial in the aural rehabilitation of children with communication disorders. In this article, we propose a novel behavioral paradigm designed to measure listening effort in school-age children based on different depths and levels of verbal processing. The paradigm consists of a classic word recognition task performed in quiet and in noise, coupled to one of three additional tasks asking the children to judge either the color of simple pictures or a certain semantic category of the presented words. The response time (RT) from the categorization tasks is considered the primary indicator of listening effort. The listening effort paradigm was evaluated in a group of 31 normal-hearing, normally developing children 7 to 12 years of age. A total of 146 Dutch nouns were selected for the experiment after surveying 14 local Dutch-speaking children. Windows-based custom software was developed to administer the behavioral paradigm from a conventional laptop computer. A separate touch screen was used as a response interface to gather the RT data from the participants. Verbal repetition of each presented word was scored by the tester and a percentage-correct word recognition score (WRS) was calculated for each condition. Randomized lists of target words were presented at one of three signal-to-noise ratios (SNRs) to examine the effect of background noise on the two outcome measures of WRS and RT. Three novel categorization tasks, each corresponding to a different depth or elaboration level of semantic processing, were developed to examine the effect of processing level on either WRS or RT. It was hypothesized that, while listening effort as measured by RT would be affected by both noise and processing level, WRS performance would be affected by changes in noise level only. The RT measure was also hypothesized to increase more with an increase in noise level in categorization conditions demanding a deeper or more elaborate form of

  17. Relationship between Consonant Recognition in Noise and Hearing Threshold

    ERIC Educational Resources Information Center

    Yoon, Yang-soo; Allen, Jont B.; Gooler, David M.

    2012-01-01

    Purpose: Although poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to audiometric hearing threshold, "HT" (f), grouping HI listeners with "HT" (f) is widely practiced. In this article, the relationship between consonant recognition and "HT" (f) is…

  18. Listen and learn: engaging young people, their families and schools in early intervention research.

    PubMed

    Connor, Charlotte

    2017-06-01

Recent policy guidelines highlight the importance of increasing the identification of young people at risk of developing mental health problems in order to prevent their transition to long-term problems, avoid crisis and remove the need for care through specialist mental health services or hospitalisation. Early awareness of the often insidious behavioural and cognitive changes associated with deteriorating mental well-being, however, is difficult, but it is vital if young people, their families and those who work with them are to be fully equipped with the skills to aid early help-seeking. Our early intervention research continues to highlight the necessity of engaging with and listening to the voices of young people, families and those who work with children and young people, in developing greater understanding of why some young people may be more at risk in terms of their mental health, and to provide children and young people with the best mental health support we can. Collaborative working with young people, their families and those who work with them has been an essential dimension of our youth mental health research in Birmingham, UK, enabling us to listen to the personal narratives of those with lived experience and to work alongside them. This paper highlights some of our key studies and how we have endeavoured to make intra-agency working successful at each stage of the research process through increasing use of digital and youth-informed resources to engage young people: a methodology which continues to inform, guide and develop our early intervention research and implementation.

  19. Spatial Release from Masking in Children: Effects of Simulated Unilateral Hearing Loss

    PubMed Central

    Corbin, Nicole E.; Buss, Emily; Leibold, Lori J.

    2016-01-01

Objectives The purpose of this study was twofold: 1) to determine the effect of an acute simulated unilateral hearing loss on children’s spatial release from masking in two-talker speech and speech-shaped noise, and 2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. Design Participants were 12 children (8.7 to 10.9 yrs) and 11 adults (18.5 to 30.4 yrs) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or −90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or −90°. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. Results All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech
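
    Spatial release from masking as quantified in this study is a simple threshold difference. A minimal sketch of that arithmetic (function and variable names are illustrative, not taken from the study):

```python
def spatial_release_from_masking(colocated_srt_db: float, separated_srt_db: float) -> float:
    """SRM in dB: co-located threshold minus spatially separated threshold.
    Positive values mean the listener benefited from spatial separation,
    i.e., tolerated a poorer signal-to-noise ratio once the masker was
    moved away from the target."""
    return colocated_srt_db - separated_srt_db

# A listener whose 50%-correct threshold improves from -2 dB SNR
# (masker co-located) to -8 dB SNR (masker moved to the side):
print(spatial_release_from_masking(-2.0, -8.0))  # 6.0 dB of release
```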

  20. Beginning to Talk Like an Adult: Increases in Speech-like Utterances in Young Cochlear Implant Recipients and Toddlers with Normal Hearing

    PubMed Central

    Ertmer, David J.; Jung, Jongmin; Kloiber, Diana True

    2013-01-01

Background Speech-like utterances containing rapidly combined consonants and vowels eventually dominate the prelinguistic and early word productions of toddlers who are developing typically (TD). It seems reasonable to expect a similar phenomenon in young cochlear implant (CI) recipients. This study sought to determine the number of months of robust hearing experience needed to achieve a majority of speech-like utterances in both of these groups. Methods Speech samples were recorded at 3-month intervals during the first 2 years of CI experience, and between 6 and 24 months of age in TD children. Speech-like utterances were operationally defined as those belonging to the Basic Canonical Syllables (BCS) or Advanced Forms (AF) levels of the Consolidated Stark Assessment of Early Vocal Development-Revised. Results On average, the CI group achieved a majority of speech-like utterances after 12 months, and the TD group after 18 months of robust hearing experience. The CI group produced greater percentages of speech-like utterances at each interval until 24 months, when both groups approximated 80%. Conclusion Auditory deprivation did not limit progress in vocal development, as young CI recipients showed more-rapid-than-typical speech development during the first 2 years of device use. Implications for the Infraphonological model of speech development are considered. PMID:23813203

  1. Speech perception in noise in unilateral hearing loss.

    PubMed

    Mondelli, Maria Fernanda Capoani Garcia; Dos Santos, Marina de Marchi; José, Maria Renata

    2016-01-01

Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulty understanding speech than normal-hearing listeners. The aim was to evaluate the speech perception of individuals with unilateral hearing loss, with and without competing noise, before and after the hearing aid fitting process. The study included 30 adults of both genders diagnosed with moderate or severe sensorineural unilateral hearing loss, evaluated using the Hearing In Noise Test-Brazil in the following scenarios: silence, frontal noise, noise to the right, and noise to the left, before and after the hearing aid fitting process. The study participants had a mean age of 41.9 years, and most of them presented right unilateral hearing loss. In all scenarios evaluated with the Hearing In Noise Test, better performance in speech perception was observed with the use of hearing aids. Individuals with unilateral hearing loss demonstrated better speech perception when using hearing aids, both in silence and in situations with competing noise.

  2. A Taxonomy of Fatigue Concepts and Their Relation to Hearing Loss

    PubMed Central

    Hornsby, Benjamin W.Y.; Naylor, Graham; Bess, Fred H.

    2016-01-01

Fatigue is common in individuals with a variety of chronic health conditions and can have significant negative effects on quality of life. Although limited in scope, recent work suggests persons with hearing loss may be at increased risk for fatigue, in part due to effortful listening that is exacerbated by their hearing impairment. However, the mechanisms responsible for hearing loss-related fatigue, and the efficacy of audiologic interventions for reducing fatigue, remain unclear. To improve our understanding of hearing loss-related fatigue, the field needs a common conceptual understanding of this construct. In this paper the broader fatigue literature is reviewed to identify and describe core constructs, consequences, and methods for assessing fatigue and related constructs. Finally, our current knowledge linking hearing loss and fatigue is described; it may be summarised as follows: hearing impairment increases the risk of subjective fatigue and vigor deficits; adults with hearing loss require more time to recover from fatigue after work, and have more work absences; sustained, effortful listening can be fatiguing; optimal methods for eliciting and measuring fatigue in persons with hearing loss remain unclear and may vary with listening condition; and amplification may minimize decrements in cognitive processing speed during sustained effortful listening. Future research is needed to develop reliable measurement methods to quantify hearing loss-related fatigue; explore factors responsible for modulating fatigue in people with hearing loss; and identify and evaluate potential interventions for reducing hearing loss-related fatigue. PMID:27355763

  3. Hearing for Success in the Classroom.

    ERIC Educational Resources Information Center

    Ireland, JoAnn C.; And Others

    1988-01-01

    Hearing-impaired children in mainstreamed classes require assistive listening devices beyond hearing aids, lipreading, and preferential seating. Frequency modulation auditory training devices can improve speech intelligibility and provide an adequate signal-to-noise ratio, and should be incorporated into regular classes containing hearing-impaired…

  4. Listening comprehension across the adult lifespan.

    PubMed

    Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

    Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). 
Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but

  5. Left-right and front-back spatial hearing with multiple directional microphone configurations in modern hearing aids.

    PubMed

    Carette, Evelyne; Van den Bogaert, Tim; Laureyns, Mark; Wouters, Jan

    2014-10-01

Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication in recent, commercial hearing aids may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. In this study, two hearing aids with different processing schemes, which were both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2-3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting. This setting automatically activates a soft forward-oriented directional scheme that mimics the pinna effect. Also, wireless communication between the hearing aids was present in this configuration (5). A broadband stimulus was used as a target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. The participants were positioned in a 13-speaker array (left-right, -90°/+90°) or 7-speaker array (FB, 0-180°) and were asked to report the number of the loudspeaker located the closest to where the sound was perceived. The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as a FB performance measure. Results were analyzed with

  6. Listening and understanding

    PubMed Central

    Parrott, Linda J.

    1984-01-01

    The activities involved in mediating reinforcement for a speaker's behavior constitute only one phase of a listener's reaction to verbal stimulation. Other phases include listening and understanding what a speaker has said. It is argued that the relative subtlety of these activities is reason for their careful scrutiny, not their complete neglect. Listening is conceptualized as a functional relation obtaining between the responding of an organism and the stimulating of an object. A current instance of listening is regarded as a point in the evolution of similar instances, whereby one's history of perceptual activity may be regarded as existing in one's current interbehavior. Understanding reactions are similarly analyzed; however, they are considerably more complex than listening reactions due to the preponderance of implicit responding involved in reactions of this type. Implicit responding occurs by way of substitute stimulation, and an analysis of the serviceability of verbal stimuli in this regard is made. Understanding is conceptualized as seeing, hearing, or otherwise reacting to actual things in the presence of their “names” alone. The value of an inferential analysis of listening and understanding is also discussed, with the conclusion that unless some attempt is made to elaborate on the nature and operation of these activities, the more apparent reinforcement mediational activities of a listener are merely asserted without an explanation for their occurrence. PMID:22478594

  7. The Speech, Spatial and Qualities of Hearing Scale (SSQ)

    PubMed Central

    Gatehouse, Stuart; Noble, William

    2017-01-01

    The Speech, Spatial and Qualities of Hearing Scale (SSQ) is designed to measure a range of hearing disabilities across several domains. Particular attention is given to hearing speech in a variety of competing contexts, and to the directional, distance and movement components of spatial hearing. In addition, the abilities both to segregate sounds and to attend to simultaneous speech streams are assessed, reflecting the reality of hearing in the everyday world. Qualities of hearing experience include ease of listening, and the naturalness, clarity and identifiability of different speakers, different musical pieces and instruments, and different everyday sounds. Application of the SSQ to 153 new clinic clients prior to hearing aid fitting showed that the greatest difficulty was experienced with simultaneous speech streams, ease of listening, listening in groups and in noise, and judging distance and movement. SSQ ratings were compared with an independent measure of handicap. After differences in hearing level were controlled for, it was found that identification, attention and effort problems, as well as spatial hearing problems, feature prominently in the disability–handicap relationship, along with certain features of speech hearing. The results implicate aspects of temporal and spatial dynamics of hearing disability in the experience of handicap. The SSQ shows promise as an instrument for evaluating interventions of various kinds, particularly (but not exclusively) those that implicate binaural function. PMID:15035561

  8. The benefits of hearing aids and closed captioning for television viewing by older adults with hearing loss.

    PubMed

    Gordon-Salant, Sandra; Callahan, Julia S

    2009-08-01

Although watching television is a common leisure activity of older adults, the ability to understand televised speech may be compromised by age-related hearing loss. Two potential assistive devices for improving television viewing are hearing aids (HAs) and closed captioning (CC), but their use and benefit by older adults with hearing loss are unknown. The primary purpose of this initial investigation was to determine if older hearing-impaired adults show improvements in understanding televised speech with the use of these two assistive devices (HAs and CC) compared with conditions without these devices. A secondary purpose was to examine the frequency of HA and CC use among a sample of older HA wearers. The investigation entailed a randomized, repeated-measures design of 15 older adults (59 to 82 yr) with bilateral sensorineural hearing losses who wore HAs. Participants viewed three types of televised programs (news, drama, and game show) that were each edited into lists of speech segments and provided an identification response. Each participant was tested in four conditions: baseline (no HA or CC), HA only, CC only, and HA + CC. Also, pilot testing with young normal-hearing listeners was conducted to establish list equivalence and stimulus intelligibility with a control group. All testing was conducted in a quiet room to simulate a living room, using a 20-inch flat-screen television. Questionnaires were also administered to participants to determine the frequency of HA and CC use while watching television. A significant effect of viewing condition was observed for all programs. Participants exhibited significantly better speech recognition scores in conditions with CC than those without CC (p < 0.01). Use of personal HAs did not significantly improve recognition of televised speech compared with the unaided condition. The condition effect was similar across the three different programs. Most of the participants (73%) regularly wore their HAs while watching

  9. Listen Up! Noises Can Damage Your Hearing

    MedlinePlus


  10. The neural consequences of age-related hearing loss

    PubMed Central

    Peelle, Jonathan E.; Wingfield, Arthur

    2016-01-01

    During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension. PMID:27262177

  11. [Improvement in Phoneme Discrimination in Noise in Normal Hearing Adults].

    PubMed

    Schumann, A; Garea Garcia, L; Hoppe, U

    2017-02-01

Objective: The study's aim was to examine whether phoneme discrimination in noise can be trained in normal-hearing adults, and whether such training improves speech recognition in noise. A specific computerised training program, consisting of nonsense syllables presented in background noise, was used to train participants' discrimination ability. Material and Methods: 46 normal-hearing subjects took part in this study, 28 in the training group and 18 in the control group. Only the training group subjects trained over a period of 3 weeks, twice a week for an hour, with the computer-based training program. Speech recognition in noise was measured pre- and post-training for the training group subjects with the Freiburger Einsilber test. The control group subjects completed test and retest measures separated by a 2-3 week break. For the training group, follow-up speech recognition was measured 2-3 months after the end of the training. Results: The majority of training group subjects improved their phoneme discrimination significantly. In addition, their speech recognition in noise improved significantly during the training compared with the control group, and the improvement remained stable over time. Conclusions: Phoneme discrimination in noise can be trained in normal-hearing adults. The improvements have a positive effect on speech recognition in noise that persists over a longer period of time.

  12. Effect of Three Classroom Listening Conditions on Speech Intelligibility

    ERIC Educational Resources Information Center

    Ross, Mark; Giolas, Thomas G.

    1971-01-01

    Speech discrimination scores for 13 deaf children were obtained in a classroom under: usual listening condition (hearing aid or not), binaural listening situation using auditory trainer/FM receiver with wireless microphone transmitter turned off, and binaural condition with inputs from auditory trainer/FM receiver and wireless microphone/FM…

  13. Binaural pitch fusion: Pitch averaging and dominance in hearing-impaired listeners with broad fusion.

    PubMed

    Oh, Yonghee; Reiss, Lina A J

    2017-08-01

    Both bimodal cochlear implant and bilateral hearing aid users can exhibit broad binaural pitch fusion, the fusion of dichotically presented tones over a broad range of pitch differences between ears [Reiss, Ito, Eggleston, and Wozny. (2014). J. Assoc. Res. Otolaryngol. 15(2), 235-248; Reiss, Eggleston, Walker, and Oh. (2016). J. Assoc. Res. Otolaryngol. 17(4), 341-356; Reiss, Shayman, Walker, Bennett, Fowler, Hartling, Glickman, Lasarev, and Oh. (2017). J. Acoust. Soc. Am. 143(3), 1909-1920]. Further, the fused binaural pitch is often a weighted average of the different pitches perceived in the two ears. The current study was designed to systematically measure these pitch averaging phenomena in bilateral hearing aid users with broad fusion. The fused binaural pitch of the reference-pair tone combination was initially measured by pitch-matching to monaural comparison tones presented to the pair tone ear. The averaged results for all subjects showed two distinct trends: (1) The fused binaural pitch was dominated by the lower-pitch component when the pair tone was either 0.14 octaves below or 0.78 octaves above the reference tone; (2) pitch averaging occurred when the pair tone was between the two boundaries above, with the most equal weighting at 0.38 octaves above the reference tone. Findings from two subjects suggest that randomization or alternation of the comparison ear can eliminate this asymmetry in the pitch averaging range. Overall, these pitch averaging phenomena suggest that spectral distortions and thus binaural interference may arise during binaural stimulation in hearing-impaired listeners with broad fusion.

  14. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (−30°, 0°, and +30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. Although not apparent in the averaged data, some

  15. Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity

    PubMed Central

    Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.

    2014-01-01

    Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimate values across all tasks and stimuli did not show any greater variability for the older listeners as compared to the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors responsible for temporal processing ability, thus supporting the increasingly accepted hypothesis that temporal processing can be impaired for older compared to younger listeners with similar hearing and/or amounts of hearing loss. PMID:25009458

  16. Behavioral assessment of adaptive feedback equalization in a digital hearing aid.

    PubMed

    French-St George, M; Wood, D J; Engebretson, A M

    1993-01-01

    An evaluation was made of the efficacy of a digital feedback equalization algorithm employed by the Central Institute for the Deaf Wearable Adaptive Digital Hearing Aid. Three questions were addressed: 1) Does acoustic feedback limit gain adjustments made by hearing aid users? 2) Does feedback equalization permit users with hearing impairment to select more gain without feedback? and 3) If more gain is used when feedback equalization is active, does word identification performance improve? Nine subjects with hearing impairment participated in the study. Results suggest that listeners with hearing impairment are indeed limited by acoustic feedback when listening to soft speech (55 dB A) in quiet. The average listener used an additional 4 dB of gain when feedback equalization was active. This additional gain resulted in an average 10 rationalized arcsine unit (RAU) improvement in word identification score.
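    The rationalized arcsine unit in which that improvement is reported is Studebaker's (1985) transform, which linearizes percent-correct scores so that a fixed change (here about 10 RAU) is comparable near the floor, midrange, and ceiling of the scale. A minimal sketch of the transform (the function name is mine):

    ```python
    import math

    def rau(correct, total):
        """Rationalized arcsine transform (Studebaker, 1985).

        Maps a score of `correct` out of `total` items onto a scale that is
        approximately linear in percent correct at midrange (~50 RAU at 50%
        correct) but stretched near 0% and 100%, stabilizing the variance of
        proportion scores.
        """
        theta = (math.asin(math.sqrt(correct / (total + 1)))
                 + math.asin(math.sqrt((correct + 1) / (total + 1))))
        return (146.0 / math.pi) * theta - 23.0
    ```

    Near the middle of the range, RAU values track percent correct closely (25 of 50 correct comes out at roughly 50 RAU), which is why mid-range differences expressed in RAU read approximately as percentage points.
    
    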

  17. Reducing the risk of music-induced hearing loss from overuse of portable listening devices: understanding the problems and establishing strategies for improving awareness in adolescents.

    PubMed

    Portnuff, Cory DF

    2016-01-01

    Hearing loss from the overuse of portable listening devices (PLDs), such as MP3 players or iPods, is of great concern in the popular media. This review aims to discuss the current state of scientific knowledge about music-induced hearing loss from PLD use. This report evaluates the literature on the risk to hearing from PLD use, the individual and psychological factors that influence PLD usage, and strategies for reducing exposure to music through PLDs. Specific interventions are reviewed, and several recommendations for designing interventions and for individual intervention in clinical practice are presented. Clinical recommendations suggested include the "80-90 rule" and the use of isolator-style earphones to reduce background noise.

  18. Reducing the risk of music-induced hearing loss from overuse of portable listening devices: understanding the problems and establishing strategies for improving awareness in adolescents

    PubMed Central

    Portnuff, Cory DF

    2016-01-01

    Hearing loss from the overuse of portable listening devices (PLDs), such as MP3 players or iPods, is of great concern in the popular media. This review aims to discuss the current state of scientific knowledge about music-induced hearing loss from PLD use. This report evaluates the literature on the risk to hearing from PLD use, the individual and psychological factors that influence PLD usage, and strategies for reducing exposure to music through PLDs. Specific interventions are reviewed, and several recommendations for designing interventions and for individual intervention in clinical practice are presented. Clinical recommendations suggested include the “80–90 rule” and the use of isolator-style earphones to reduce background noise. PMID:26929674

  19. Performance-intensity functions of Mandarin word recognition tests in noise: test dialect and listener language effects.

    PubMed

    Liu, Danzheng; Shi, Lu-Feng

    2013-06-01

    This study established the performance-intensity function for the Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who had used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both the Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the two, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first-language effect and a dialectal effect, respectively.
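    The two psychometric variables analyzed here, the 50%-correct threshold and the slope, are conventionally read off a sigmoid performance-intensity function fitted to percent correct versus SNR. A minimal sketch assuming a logistic form (the parameterization and values are illustrative, not the study's fits):

    ```python
    import math

    def performance_intensity(snr_db, threshold_db, slope):
        """Logistic performance-intensity function.

        threshold_db: SNR (dB) at which performance reaches 50% correct.
        slope: steepness at threshold, in proportion correct per dB.
        """
        # For a logistic 1/(1+exp(-k*x)), the derivative at x=0 is k/4,
        # so k = 4*slope makes the curve rise at `slope` per dB at threshold.
        k = 4.0 * slope
        return 1.0 / (1.0 + math.exp(-k * (snr_db - threshold_db)))
    ```

    In these terms, the Beijing test's significantly lower threshold shifts the whole function toward poorer SNRs, while the steeper slope found for the Wu-speaking listeners means their scores change more abruptly around threshold.
    
    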

  20. Binaural Interference and the Effects of Age and Hearing Loss.

    PubMed

    Mussoi, Bruna S S; Bentler, Ruth A

    2017-01-01

    The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. The aim was to examine binaural interference through speech perception tests in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) in the group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). On the CST, there was a significant binaural advantage across all groups in the group data analysis, as well as for 12% of the participants at each of the two