ERIC Educational Resources Information Center
Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo
2007-01-01
Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…
The effect of guessing on the speech reception thresholds of children.
Moodley, A
1990-01-01
Speech audiometry is an essential part of the assessment of hearing-impaired children and is now widely used throughout the United Kingdom. Although instructions are universally acknowledged as an important aspect of administering any form of audiometric testing, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study evaluates the effect of guessing on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used. It is argued that the type of instruction normally used for Speech Reception Threshold testing may not sufficiently control for guessing; the implications of this are examined using data obtained in the study.
Simulated Critical Differences for Speech Reception Thresholds
ERIC Educational Resources Information Center
Pedersen, Ellen Raben; Juhl, Peter Møller
2017-01-01
Purpose: Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be…
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments, and improved lateralization thresholds in the anechoic environment without affecting lateralization thresholds in the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina
2015-01-01
There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Reliable SRTs were obtained in the complex environment, but they led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.
Hochmuth, Sabine; Kollmeier, Birger; Brand, Thomas; Jürgens, Tim
2015-01-01
To compare speech reception thresholds (SRTs) in noise using matrix sentence tests in four languages: German, Spanish, Russian, and Polish. The four tests were composed of equivalent five-word sentences and were all designed and optimized using the same principles. Six stationary speech-shaped noises and three non-stationary noises were used as maskers. Forty native listeners with normal hearing participated: 10 for each language. SRTs were about 3 dB higher for the German and Spanish tests than for the Russian and Polish tests when stationary noise was used that matched the long-term frequency spectrum of the respective speech test materials. This general SRT difference was also observed for the other stationary noises. The within-test variability across noise conditions differed between languages. About 56% of the observed variance was predicted by the speech intelligibility index. The observed SRT benefit in fluctuating noise was similar for all tests, with a slightly smaller benefit for the Spanish test. Of the stationary noises employed, noise with the same spectrum as the speech yielded the most effective masking. SRT differences across languages and noises could be attributed in part to spectral differences. These findings demonstrate the feasibility, and the limits, of comparing audiological results across languages.
ERIC Educational Resources Information Center
Besser, Jana; Zekveld, Adriana A.; Kramer, Sophia E.; Ronnberg, Jerker; Festen, Joost M.
2012-01-01
Purpose: In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. Method: The…
Haumann, Sabine; Hohmann, Volker; Meis, Markus; Herzke, Tobias; Lenarz, Thomas; Büchner, Andreas
2012-01-01
Owing to technological progress and a growing body of clinical experience, indication criteria for cochlear implants (CI) are being extended to less severe hearing impairments. It is, therefore, worth reconsidering these indication criteria by introducing novel testing procedures. The diagnostic evidence collected will be evaluated. The investigation includes postlingually deafened adults seeking a CI. Prior to surgery, speech perception tests [Freiburg Speech Test and Oldenburg sentence (OLSA) test] were performed unaided and aided using the Oldenburg Master Hearing Aid (MHA) system. Linguistic skills were assessed with the visual Text Reception Threshold (TRT) test, and general state of health, socio-economic status (SES) and subjective hearing were evaluated through questionnaires. After surgery, the speech tests were repeated aided with a CI. To date, 97 complete data sets are available for evaluation. Statistical analyses showed significant correlations between postsurgical speech reception threshold (SRT) measured with the adaptive OLSA test and pre-surgical data such as the TRT test (r=−0.29), SES (r=−0.22) and (if available) aided SRT (r=0.53). The results suggest that new measures and setups such as the TRT test, SES and speech perception with the MHA provide valuable extra information regarding indication for CI. PMID:26557327
Neher, Tobias
2017-02-01
To scrutinize the binaural contribution to speech-in-noise reception, four groups of elderly participants with or without audiometric asymmetry below 2 kHz and with or without a near-normal binaural intelligibility level difference (BILD) completed tests of monaural and binaural phase sensitivity as well as cognitive function. Groups did not differ in age, overall degree of hearing loss, or cognitive function. Analyses revealed an influence of BILD status, but not audiometric asymmetry, on monaural phase sensitivity; strong correlations between monaural and binaural detection thresholds; and monaural and binaural, but not cognitive, contributions to the BILD. Furthermore, the N0Sπ threshold at 500 Hz effectively predicted BILD performance.
A Spanish matrix sentence test for assessing speech reception thresholds in noise.
Hochmuth, Sabine; Brand, Thomas; Zokoll, Melanie A; Castro, Franz Zenker; Wardenga, Nina; Kollmeier, Birger
2012-07-01
To develop, optimize, and evaluate a new Spanish sentence test in noise. The test comprises a basic matrix of ten names, verbs, numerals, nouns, and adjectives. From this matrix, test lists of ten sentences with an equal syntactical structure can be formed at random, with each list containing the whole speech material. The speech material represents the phoneme distribution of the Spanish language. The test was optimized for measuring speech reception thresholds (SRTs) in noise by adjusting the presentation levels of the individual words. Subsequently, the test was evaluated by independent measurements investigating the training effects, the comparability of test lists, open-set vs. closed-set test format, and performance of listeners of different Spanish varieties. In total, 68 normal-hearing native Spanish-speaking listeners participated. SRTs measured using an adaptive procedure were −6.2 ± 0.8 dB SNR for the open-set and −7.2 ± 0.7 dB SNR for the closed-set test format. The residual training effect was less than 1 dB after using two double-lists before data collection. No significant differences were found for listeners of different Spanish varieties, indicating that the test is applicable to Spanish as well as Latin American listeners. Test lists can be used interchangeably.
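The adaptive SRT procedure referred to above can be sketched generically: lower the SNR after a mostly correct trial and raise it otherwise, so the track converges near the 50%-correct point. The Python sketch below is a simplified 1-up/1-down rule, not the exact word-scoring adaptive procedure of the published matrix tests; `present_sentence` and the toy psychometric function are hypothetical stand-ins.

```python
import random

def measure_srt(present_sentence, n_sentences=20, start_snr=0.0, step_db=2.0):
    """Estimate a speech reception threshold (SRT) with a simple 1-up/1-down
    adaptive track over SNR. present_sentence(snr) is assumed to play one
    five-word matrix sentence at the given SNR and return the number of
    words repeated correctly (0-5)."""
    snr = start_snr
    track = []
    for _ in range(n_sentences):
        correct = present_sentence(snr)
        track.append(snr)
        if correct >= 3:      # majority of the five words correct
            snr -= step_db    # make the task harder
        else:
            snr += step_db    # make the task easier
    # Average the SNRs visited after the initial approach as the SRT estimate.
    return sum(track[4:]) / len(track[4:])

# Toy listener for demonstration: a hypothetical psychometric function.
def toy_listener(snr):
    p_word = 1.0 / (1.0 + 10 ** (-(snr + 7.0) / 4.0))
    return sum(random.random() < p_word for _ in range(5))

print("Estimated SRT: %.1f dB SNR" % measure_srt(toy_listener))
```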
Processing load induced by informational masking is related to linguistic abilities.
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Rönnberg, Jerker; Kramer, Sophia E
2012-01-01
It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but in less effortful listening in the aided than in the unaided condition. Before such a hearing aid benefit can be assessed, it must be known how processing load while listening to masked speech relates to inter-individual differences in the cognitive abilities relevant for language processing, which the present study examined. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech than during fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from their positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain the higher processing load in complex listening conditions.
Carroll, Rebecca; Meis, Markus; Schulte, Michael; Vormann, Matthias; Kießling, Jürgen; Meister, Hartmut
2015-02-01
To report the development of a standardized German version of a reading span test (RST) with a dual-task design. Special attention was paid to psycholinguistic control of the test items and time-sensitive scoring. We aim to establish our RST version for determining an individual's working memory capacity in hearing research in German-language contexts. RST stimuli were controlled and pretested for psycholinguistic factors. The RST task was to read sentences, quickly determine their plausibility, and later recall certain words to determine a listener's individual reading span. RST results were correlated with outcomes of additional sentence-in-noise tests measured in an aided and an unaided listening condition, each at two speech reception thresholds. Item plausibility was predetermined by 28 native German-speaking participants. An additional 62 listeners (45-86 years, M = 69.8) with mild-to-moderate hearing loss were tested for speech intelligibility and reading span in a multicenter study. The reading span test correlated significantly with speech intelligibility at both speech reception thresholds in the aided listening condition. Our German RST is standardized with respect to psycholinguistic construction principles of the stimuli and is a cognitive correlate of intelligibility in a German matrix speech-in-noise test.
Decreased Speech-In-Noise Understanding in Young Adults with Tinnitus
Gilles, Annick; Schlee, Winny; Rabau, Sarah; Wouters, Kristien; Fransen, Erik; Van de Heyning, Paul
2016-01-01
Objectives: Young people are often exposed to high music levels, which put them at greater risk of developing noise-induced symptoms such as hearing loss, hyperacusis, and tinnitus, of which the latter is the symptom most often perceived by young adults. Although subclinical neural damage has been demonstrated in animal experiments, the human correlate remains under debate. Controversy exists on the underlying condition of young adults with normal hearing thresholds and noise-induced tinnitus (NIT) due to leisure noise. The present study aimed to assess differences in audiological characteristics between noise-exposed adolescents with and without NIT. Methods: A group of 87 young adults with a history of recreational noise exposure was investigated by use of the following tests: otoscopy, impedance measurements, pure-tone audiometry including high frequencies, transient and distortion product otoacoustic emissions, speech-in-noise testing with continuous and modulated noise (amplitude-modulated at 15 Hz), auditory brainstem responses (ABR), and questionnaires. Nineteen students reported NIT due to recreational noise exposure, and their measures were compared to those of the non-tinnitus subjects. Results: No significant differences between tinnitus and non-tinnitus subjects could be found for hearing thresholds, otoacoustic emissions, and ABR results. Tinnitus subjects had significantly worse speech reception in noise compared to non-tinnitus subjects for sentences embedded in steady-state noise (mean speech reception threshold [SRT] scores, respectively, −5.77 and −6.90 dB SNR; p = 0.025) as well as for sentences embedded in 15 Hz AM noise (mean SRT scores, respectively, −13.04 and −15.17 dB SNR; p = 0.013). In both groups speech reception was significantly improved during 15 Hz AM noise compared to the steady-state noise condition (p < 0.001). However, the modulation masking release was not affected by the presence of NIT. Conclusions: Young adults with and without NIT did not differ regarding audiometry, OAE, and ABR results. However, tinnitus patients showed decreased speech-in-noise reception. The results are discussed in the light of previous findings suggesting that NIT may occur in the absence of measurable peripheral damage, as reflected in the speech-in-noise deficits of the tinnitus subjects. PMID:27445661
Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias
2017-12-01
To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT improved by between 0.37 and 1.70 dB. Three of the 11 subjects showed deteriorations of between 1.22 and 3.24 dB in SRT. Median localization error decreased significantly, by 12.9 degrees, compared with the unaided condition. CI in single-sided deafness is an effective treatment for improving auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent, in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems mandatory to advance speech perception in demanding listening situations in SSD patients.
Cognitive abilities relate to self-reported hearing disability.
Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E
2013-10-01
In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2014-03-01
A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they listened to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information were related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech compared to fluctuating noise.
Sinex, Donal G.
2013-01-01
Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. Information reported here about the dependence of speech intelligibility on frequency and level parameters has relevance for the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids. PMID:23556604
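When the clean speech and noise are available separately, the ideal binary time-frequency mask described above is simple to implement. A minimal sketch, assuming a uniform STFT grid (via scipy) rather than the study's 4-, 8-, or 16-bands-per-octave analysis; the local threshold criterion plays the same role as in the experiments.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, criterion_db=0.0, nperseg=512):
    """Keep time-frequency cells whose local speech-to-noise ratio exceeds
    criterion_db; zero the rest. Lowering the criterion retains more cells,
    which in the study lowered speech reception thresholds."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    local_snr_db = (20 * np.log10(np.abs(S) + 1e-12)
                    - 20 * np.log10(np.abs(N) + 1e-12))
    mask = local_snr_db > criterion_db
    _, _, M = stft(speech + noise, fs=fs, nperseg=nperseg)
    _, separated = istft(M * mask, fs=fs, nperseg=nperseg)
    return separated

# Demonstration with synthetic signals (a tone standing in for speech).
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * np.random.randn(fs)
cleaned = ideal_binary_mask(speech, noise, fs)
```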
Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users.
Zekveld, Adriana A; Pronk, Marieke; Danielsson, Henrik; Rönnberg, Jerker
2018-03-15
The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
Zhou, Ning
2017-03-01
The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributable to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. Results suggested that a smaller number of relatively independent channels provide a better outcome than using all channels that might interact.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.
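Of the feature sets compared, the logarithmically scaled Mel-spectrogram is the most compact to write down. A minimal sketch with illustrative parameters (512-point frames, 10 ms hop, 23 bands), not the exact configuration used in the framework.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, fs, n_fft=512, hop=160, n_mels=23):
    """Log-scaled Mel-spectrogram: short-time power spectra pooled by a
    triangular Mel filter bank, then compressed logarithmically."""
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_mels + 2))
    bins = np.fft.rfftfreq(n_fft, 1.0 / fs)
    fbank = np.zeros((n_mels, bins.size))
    for i in range(n_mels):
        lo, ctr, hi = edges[i], edges[i + 1], edges[i + 2]
        fbank[i] = np.clip(np.minimum((bins - lo) / (ctr - lo),
                                      (hi - bins) / (hi - ctr)), 0.0, None)
    return np.log(power @ fbank.T + 1e-10)

feats = log_mel_spectrogram(np.random.randn(16000), fs=16000)  # (frames, 23)
```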
How linguistic closure and verbal working memory relate to speech recognition in noise--a review.
Besser, Jana; Koelewijn, Thomas; Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2013-06-01
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Iwasaki, Satoshi; Usami, Shin-Ichi; Takahashi, Haruo; Kanda, Yukihiko; Tono, Tetsuya; Doi, Katsumi; Kumakawa, Kozo; Gyo, Kiyofumi; Naito, Yasushi; Kanzaki, Sho; Yamanaka, Noboru; Kaga, Kimitaka
2017-07-01
To report on the safety and efficacy of an investigational active middle ear implant (AMEI) in Japan, and to compare results to preoperative results with a hearing aid. Prospective study conducted in Japan in which 23 Japanese-speaking adults suffering from conductive or mixed hearing loss received a VIBRANT SOUNDBRIDGE with implantation at the round window. Postoperative thresholds, speech perception results (word recognition scores, speech reception thresholds, signal-to-noise ratio [SNR]), and quality of life questionnaires at 20 weeks were compared with preoperative results with all patients receiving the same, best available hearing aid (HA). Statistically significant improvements in postoperative AMEI-aided thresholds (1, 2, 4, and 8 kHz) and on the speech reception threshold and word recognition score tests, compared with preoperative HA-aided results, were observed. On the SNR, the subjects' mean values showed statistically significant improvement, with -5.7 dB SNR for the AMEI-aided mean and -2.1 dB SNR for the preoperative HA-aided mean. The Abbreviated Profile of Hearing Aid Benefit (APHAB) quality of life questionnaire also showed statistically significant improvement with the AMEI. Results with the AMEI applied to the round window exceeded those of the best available hearing aid in speech perception as well as quality of life questionnaires. There were minimal adverse events or changes to patients' residual hearing.
Rader, T
2015-02-01
Cochlear implantation with the aim of hearing preservation for combined electric-acoustic stimulation (EAS) is the therapy of choice for patients with residual low-frequency hearing. Preserved residual acoustic hearing has a positive effect on speech intelligibility in difficult noise conditions. The goal of this study was to assess speech reception thresholds in various complex noise conditions for patients with EAS in comparison with patients using bilateral cochlear implants (CI). Speech perception in noise was measured for bilateral CI and EAS patient groups. A total of 22 listeners with normal hearing served as a control group. Speech reception thresholds (SRTs) were measured using a closed-set sentence matrix test. Speech was presented from a single source in frontal position; noise was presented in frontal position or in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources. Modulated speech-simulating noise and pseudocontinuous noise served as interference signals with different temporal characteristics. The average SRTs in the EAS group were significantly better in all test conditions than those of the group with bilateral CI. Both user groups showed significant improvement in the MSNF condition compared with the frontal noise condition as a result of bilateral interaction. The normal-hearing control group was able to use short temporal gaps in modulated noise to improve speech perception in noise (gap listening). This effect was absent in both implanted user groups. Patients with combined EAS in one ear and a hearing aid in the contralateral ear show significantly improved speech perception in complex noise conditions compared with bilateral CI recipients.
Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A
2013-01-01
The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. The present study used a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners were also assessed on measures of interaural time difference thresholds for a 250-Hz signal. Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. There was no reliable relationship between audiometric thresholds in the implanted ear, or threshold elevation after surgery, and improvement in speech understanding in reverberation. There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.
NASA Astrophysics Data System (ADS)
Oxenham, Andrew J.; Rosengard, Peninah S.; Braida, Louis D.
2004-05-01
Cochlear damage can lead to a reduction in the overall amount of peripheral auditory compression, presumably due to outer hair cell (OHC) loss or dysfunction. The perceptual consequences of functional OHC loss include loudness recruitment and reduced dynamic range, poorer frequency selectivity, and poorer effective temporal resolution. These in turn may lead to a reduced ability to make use of spectral and temporal fluctuations in background noise when listening to a target sound, such as speech. We tested the effect of OHC function on speech reception in hearing-impaired listeners by comparing psychoacoustic measures of cochlear compression and sentence recognition in a variety of noise backgrounds. In line with earlier studies, we found weak (nonsignificant) correlations between the psychoacoustic tasks and speech reception thresholds in quiet or in steady-state noise. However, when spectral and temporal fluctuations were introduced in the masker, speech reception improved to an extent that was well predicted by the psychoacoustic measures. Thus, our initial results suggest a strong relationship between measures of cochlear compression and the ability of listeners to take advantage of spectral and temporal masker fluctuations in recognizing speech. [Work supported by NIH Grants Nos. R01DC03909, T32DC00038, and R01DC00117.]
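The compressive cochlear input-output function underlying these measures can be sketched schematically: growth is roughly linear at low levels and shallow (around 0.2 dB/dB) above a knee point, and outer hair cell loss is often modeled as the slope returning toward 1. The knee point and exponents below are illustrative values, not fitted data.

```python
import numpy as np

def basilar_membrane_io(level_db, compression=0.2, knee_db=30.0):
    """Schematic cochlear input-output function in dB: slope 1 below the
    knee, compressive slope above it. compression ~0.2 approximates healthy
    outer-hair-cell function; compression ~1.0 models OHC loss (loudness
    recruitment, reduced dynamic range)."""
    level = np.asarray(level_db, dtype=float)
    return np.where(level <= knee_db,
                    level,
                    knee_db + compression * (level - knee_db))

levels = np.arange(0, 101, 10)
print("normal:  ", basilar_membrane_io(levels))
print("impaired:", basilar_membrane_io(levels, compression=1.0))
```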
Ng, Elaine H N; Classon, Elisabet; Larsby, Birgitta; Arlinger, Stig; Lunner, Thomas; Rudner, Mary; Rönnberg, Jerker
2014-11-23
The present study aimed to investigate the changing relationship between aided speech recognition and cognitive function during the first 6 months of hearing aid use. Twenty-seven first-time hearing aid users with symmetrical mild to moderate sensorineural hearing loss were recruited. Aided speech recognition thresholds in noise were obtained in the hearing aid fitting session as well as at 3 and 6 months postfitting. Cognitive abilities were assessed using a reading span test, which is a measure of working memory capacity, and a cognitive test battery. Results showed a significant correlation between reading span and speech reception threshold during the hearing aid fitting session. This relation was significantly weakened over the first 6 months of hearing aid use. Multiple regression analysis showed that reading span was the main predictor of speech recognition thresholds in noise when hearing aids were first fitted, but that the pure-tone average hearing threshold was the main predictor 6 months later. One way of explaining the results is that working memory capacity plays a more important role in speech recognition in noise initially than after 6 months of use. We propose that new hearing aid users engage working memory capacity to recognize unfamiliar processed speech signals because the phonological form of these signals cannot be automatically matched to phonological representations in long-term memory. As familiarization proceeds, the mismatch effect is alleviated, and the engagement of working memory capacity is reduced.
Aided and Unaided Speech Perception by Older Hearing Impaired Listeners
Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William
2015-01-01
The most common complaint of older hearing-impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise ratio (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423
Pedersen, K; Rosenhall, U
1991-01-01
The relationship between self-assessed hearing handicap and audiometric measures using pure-tone and speech audiometry was studied in a group of elderly persons representative of an urban Swedish population. The study population consisted of two cohorts, one of which was followed longitudinally. Significant correlations between measured and self-assessed hearing were found. Speech discrimination scores showed lower correlations with self-assessed hearing than did pure-tone averages and speech reception thresholds. Questions concerning conversation with one person and concerning difficulty in hearing the doorbell showed lower correlations with measured hearing than the other questions. The discrimination score test is an inadequate tool for measuring hearing handicap.
Mostafapour, S P; Lahargoue, K; Gates, G A
1998-12-01
No consensus exists regarding the magnitude of the risk of noise-induced hearing loss (NIHL) associated with leisure noise, in particular personal listening devices, in young adults. To examine the magnitude of hearing loss associated with personal listening devices and other sources of leisure noise in young adults. Prospective auditory testing of college student volunteers with a retrospective history of exposure to home stereos, personal listening devices, firearms, and other sources of recreational noise. Subjects underwent audiologic examination consisting of estimation of pure-tone thresholds, speech reception thresholds, and word recognition at 45 dB HL. Fifty subjects aged 18 to 30 years were tested. All hearing thresholds of all subjects (save one: a unilateral 30 dB HL threshold at 6 kHz) were normal (i.e., 25 dB HL or better). A 10 dB threshold elevation (notch) in either ear at 3 to 6 kHz as compared with neighboring frequencies was noted in 11 (22%) subjects, and an unequivocal notch (15 dB or greater) in either ear was noted in 14 (28%) subjects. The presence or absence of any notch (small or large) did not correlate with any single or cumulative source of noise exposure. No difference in pure-tone threshold, speech reception threshold, or speech discrimination was found among subjects when segregated by noise exposure level. The majority of young users of personal listening devices are at low risk for substantive NIHL. Interpretation of the significance of these findings in relation to noise exposure must be made with caution. NIHL is an additive process, and even subtle deficits may contribute to unequivocal hearing loss with continued exposure. The low prevalence of measurable deficits in this study group may not exclude more substantive deficits in other populations with greater exposures. Continued education of young people about the risk to hearing from recreational noise exposure is warranted.
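The notch criterion described above (a 10 dB elevation at 3-6 kHz relative to neighboring frequencies, 15 dB for an unequivocal notch) can be expressed as a small check. A sketch under stated assumptions: the abstract does not spell out the exact rule, so the frequency set and the requirement that the notch exceed both neighbors are choices made here for illustration.

```python
def has_noise_notch(audiogram, min_depth_db=10):
    """Return True if any threshold at 3, 4, or 6 kHz is elevated by at
    least min_depth_db relative to both neighboring test frequencies.
    audiogram maps frequency in Hz to threshold in dB HL."""
    freqs = [1000, 2000, 3000, 4000, 6000, 8000]
    for i in range(1, len(freqs) - 1):
        lo_f, f, hi_f = freqs[i - 1], freqs[i], freqs[i + 1]
        if f in (3000, 4000, 6000):
            depth = min(audiogram[f] - audiogram[lo_f],
                        audiogram[f] - audiogram[hi_f])
            if depth >= min_depth_db:
                return True
    return False

# Example: a 20 dB notch at 6 kHz -> True
print(has_noise_notch({1000: 5, 2000: 5, 3000: 10, 4000: 10, 6000: 30, 8000: 10}))
```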
Smits, Cas; Merkus, Paul; Festen, Joost M.; Goverts, S. Theo
2017-01-01
Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability compared with other cognitive factors on speech recognition of 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and to a lesser extent poor lexical-access ability limit speech-recognition ability in listeners with a CI. PMID:29205095
Corthals, Paul
2008-01-01
The aim of the present study is to construct a simple method for visualizing and quantifying the audibility of speech on the audiogram and to predict speech intelligibility. The proposed method involves a series of indices on the audiogram form reflecting the sound pressure level distribution of running speech. The indices that coincide with a patient's pure tone thresholds reflect speech audibility and give evidence of residual functional hearing capacity. Two validation studies were conducted among sensorineurally hearing-impaired participants (n = 56 and n = 37, respectively) to investigate the relation with speech recognition ability and hearing disability. The potential of the new audibility indices as predictors for speech reception thresholds is comparable to the predictive potential of the ANSI 1968 articulation index and the ANSI 1997 speech intelligibility index. The sum of indices or a weighted combination can explain considerable proportions of variance in speech reception results for sentences in quiet free field conditions. The proportions of variance that can be explained in questionnaire results on hearing disability are less, presumably because the threshold indices almost exclusively reflect message audibility and much less the psychosocial consequences of hearing deficits. The outcomes underpin the validity of the new audibility indexing system, even though the proposed method may be better suited for predicting relative performance across a set of conditions than for predicting absolute speech recognition performance.
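As a rough illustration of the indexing idea, the sketch below scores, per audiometric frequency, how much of an assumed running-speech level distribution lies above the pure-tone threshold. The band levels and the 30 dB speech dynamic range are assumptions for illustration, not the published index values.

```python
def audibility_score(thresholds_db_hl, speech_peaks_db_hl, dyn_range_db=30.0):
    """Crude audibility score: per frequency, the fraction of the speech
    level distribution (peak down to peak - dyn_range_db) that lies above
    the pure-tone threshold, averaged over frequencies. Both dicts map
    frequency in Hz to dB HL. Returns 0 (inaudible) to 1 (fully audible)."""
    total = 0.0
    for f, peak in speech_peaks_db_hl.items():
        floor = peak - dyn_range_db
        frac = (peak - max(thresholds_db_hl[f], floor)) / dyn_range_db
        total += max(0.0, min(1.0, frac))
    return total / len(speech_peaks_db_hl)

# Example: a sloping loss leaves high-frequency speech cues partly inaudible.
thresholds = {500: 20, 1000: 30, 2000: 50, 4000: 70}
speech_peaks = {500: 55, 1000: 50, 2000: 45, 4000: 40}
print(round(audibility_score(thresholds, speech_peaks), 2))  # ~0.42
```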
Do Older Listeners With Hearing Loss Benefit From Dynamic Pitch for Speech Recognition in Noise?
Shen, Jing; Souza, Pamela E
2017-10-12
Dynamic pitch, the variation in the fundamental frequency of speech, aids older listeners' speech perception in noise. It is unclear, however, whether some older listeners with hearing loss benefit from strengthened dynamic pitch cues for recognizing speech in certain noise scenarios, and how this relative benefit may be associated with individual factors. We first examined older individuals' relative benefit from natural versus strong dynamic pitch for speech recognition in noise. Further, we report the individual factors of the 2 groups of listeners who benefited differently from natural and strong dynamic pitch. Speech reception thresholds of 13 older listeners with mild-moderate hearing loss were measured using target speech with 3 levels of dynamic pitch strength. An individual's ability to benefit from dynamic pitch was defined as the speech reception threshold difference between speech with and without dynamic pitch cues. The relative benefit of natural versus strong dynamic pitch varied across individuals. However, this relative benefit remained consistent for the same individuals across background noises with temporal modulation. Those listeners who benefited more from strong dynamic pitch reported better subjective speech perception abilities. Strong dynamic pitch may thus be more beneficial than natural dynamic pitch for some older listeners recognizing speech in noise, particularly when the noise has temporal modulation.
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Govender, Cyril
2009-01-01
The aim of this investigation was to determine which of 58 preselected Zulu words developed by Panday et al. (2007) could be used for Speech Reception Threshold (SRT) testing. To realize this aim, the homogeneity of audibility of the 58 bisyllabic Zulu low-tone verbs was measured, followed by an analysis of the prosodic features of the selected words. The words were digitally recorded by a Zulu first-language male speaker and presented at 6 intensity levels to 30 Zulu first-language speakers (18-25 years, mean age 21.5 years) with normal hearing. Homogeneity of audibility was determined by employing logistic regression analysis. Twenty-eight words met the criterion of homogeneity of audibility, evidenced by a mean psychometric-function slope at the 50% point of 5.98%/dB. The prosodic features of the twenty-eight words were further analyzed using a computerized speech laboratory system. The findings confirmed that the pitch contours of the words followed the prosodic pattern apparent within Zulu linguistic structure. Eighty-nine percent of the Zulu verbs were found to have a difference in pitch pattern between the two syllables, i.e., the first syllable was low in pitch while the second was high. It emerged that the twenty-eight words could be used for establishing SRT within a normal-hearing Zulu-speaking population. Further research within clinical populations is recommended.
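The homogeneity analysis amounts to fitting a logistic psychometric function to per-word percent-correct data and reading off the slope at the 50% point. A sketch with hypothetical data; the 5.98%/dB figure above corresponds to the fitted slope parameter expressed in percent per dB.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, srt_db, slope):
    """Proportion correct vs. presentation level; srt_db is the 50% point
    and slope is the derivative there (proportion per dB)."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (level_db - srt_db)))

# Hypothetical word data: presentation levels and proportion correct.
levels = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
p_correct = np.array([0.05, 0.20, 0.45, 0.75, 0.92, 0.98])

(srt_db, slope), _ = curve_fit(logistic, levels, p_correct, p0=[0.0, 0.05])
print("50%% point: %.2f dB; slope at 50%%: %.2f%%/dB" % (srt_db, slope * 100))
```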
Binaural sluggishness in the perception of tone sequences and speech in noise.
Culling, J F; Colburn, H S
2000-01-01
The binaural system is well-known for its sluggish response to changes in the interaural parameters to which it is sensitive. Theories of binaural unmasking have suggested that detection of signals in noise is mediated by detection of differences in interaural correlation. If these theories are correct, improvements in the intelligibility of speech in favorable binaural conditions are most likely mediated by spectro-temporal variations in interaural correlation of the stimulus which mirror the spectro-temporal amplitude modulations of the speech. However, binaural sluggishness should limit the temporal resolution of the representation of speech recovered by this means. The present study tested this prediction in two ways. First, listeners' masked discrimination thresholds for ascending vs descending pure-tone arpeggios were measured as a function of rate of frequency change in the NoSo and NoSpi binaural configurations. Three-tone arpeggios were presented repeatedly and continuously for 1.6 s, masked by a 1.6-s burst of noise. In a two-interval task, listeners determined the interval in which the arpeggios were ascending. The results showed a binaural advantage of 12-14 dB for NoSpi at 3.3 arpeggios per s (arp/s), which reduced to 3-5 dB at 10.4 arp/s. This outcome confirmed that the discrimination of spectro-temporal patterns in noise is susceptible to the effects of binaural sluggishness. Second, listeners' masked speech-reception thresholds were measured in speech-shaped noise using speech which was 1, 1.5, and 2 times the original articulation rate. The articulation rate was increased using a phase-vocoder technique which increased all the modulation frequencies in the speech without altering its pitch. Speech-reception thresholds were, on average, 5.2 dB lower for the NoSpi than for the NoSo configuration, at the original articulation rate. This binaural masking release was reduced to 2.8 dB when the articulation rate was doubled, but the most notable effect was a 6-8 dB increase in thresholds with articulation rate for both configurations. These results suggest that higher modulation frequencies in masked signals cannot be temporally resolved by the binaural system, but that the useful modulation frequencies in speech are sufficiently low (<5 Hz) that they are invulnerable to the effects of binaural sluggishness, even at elevated articulation rates.
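The articulation-rate manipulation described above, raising all modulation frequencies without altering pitch, is exactly what off-the-shelf phase-vocoder time stretching provides. A minimal sketch follows, assuming a mono recording named sentence.wav; librosa's phase vocoder is a stand-in for the authors' implementation, not their code.

# Speed up speech by 1x, 1.5x, and 2x without changing its pitch.
import librosa
import soundfile as sf

y, sr = librosa.load("sentence.wav", sr=None)   # keep the original sample rate
for rate in (1.0, 1.5, 2.0):
    y_fast = librosa.effects.time_stretch(y, rate=rate)   # phase-vocoder stretch
    sf.write(f"sentence_x{rate}.wav", y_fast, sr)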
The effect of noise-induced hearing loss on the intelligibility of speech in noise
NASA Astrophysics Data System (ADS)
Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.
1981-06-01
Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10%, respectively). Hearing loss for speech in quiet, measured with respect to the median speech reception threshold for normal ears, ranged from 1.8 dB to 13.4 dB. For speech in noise the corresponding values are 1.2 dB to 7.0 dB, which means that the subjects with noise-induced hearing loss need a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap, conceived as a reduced ability to understand speech, and the tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Wilson, Wayne
2018-03-29
The purpose of this study was to consider the value of adding first-language speaker ratings to the process of validating word recordings for use in a new speech reception threshold (SRT) test in audiology. Previous studies had identified 28 word recordings as being suitable for use in a new SRT test. These word recordings had been shown to satisfy the linguistic criteria of familiarity, phonetic dissimilarity and tone, and the psychometric criterion of homogeneity of audibility. Objectives: The aim of the study was to consider the value of adding first-language speakers' ratings when validating word recordings for a new SRT test. Method: A single observation, cross-sectional design was used to collect and analyse quantitative data in this study. Eleven first-language isiZulu speakers, purposively selected, were asked to rate each of the word recordings for pitch, clarity, naturalness, speech rate and quality on a 5-point Likert scale. The percent agreement and Friedman test were used for analysis. Results: More than 20% of these 11 participants rated three of the word recordings below 'strongly agree' in the category of pitch or tone, and one word recording below 'strongly agree' in the categories of pitch or tone, clarity or articulation, and naturalness or dialect. Conclusion: The first-language speaker ratings proved to be a valuable addition to the process of selecting word recordings for use in a new SRT test. In particular, these ratings identified potentially problematic word recordings in the new SRT test that had been missed by the previously and more commonly used linguistic and psychometric selection criteria.
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources.
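As a rough illustration of phase-based spatial filtering of this kind, the sketch below (not the published algorithm) masks time-frequency bins whose inter-microphone phase difference is inconsistent with a target straight ahead, assuming the small spacing makes the frontal target arrive nearly in phase at both microphones. The function name and the tol and floor parameters are illustrative assumptions.

# Hedged sketch of two-microphone phase-difference filtering.
import numpy as np
from scipy.signal import stft, istft

def phase_filter(mic1, mic2, fs, nperseg=256, tol=0.5, floor=0.1):
    f, t, X1 = stft(mic1, fs, nperseg=nperseg)
    _, _, X2 = stft(mic2, fs, nperseg=nperseg)
    dphi = np.angle(X1 * np.conj(X2))     # inter-mic phase difference per bin
    # Keep bins consistent with a frontal source (near-zero phase difference);
    # attenuate everything else down to `floor`.
    mask = np.where(np.abs(dphi) < tol, 1.0, floor)
    _, y = istft(0.5 * (X1 + X2) * mask, fs, nperseg=nperseg)
    return y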
Dincer D'Alessandro, Hilal; Ballantyne, Deborah; Boyle, Patrick J; De Seta, Elio; DeVincentiis, Marco; Mancini, Patrizia
2017-11-30
The aim of the study was to investigate the link between temporal fine structure (TFS) processing, pitch, and speech perception performance in adult cochlear implant (CI) recipients, including bimodal listeners who may benefit from better low-frequency (LF) temporal coding in the contralateral ear. The study participants were 43 adult CI recipients (23 unilateral, 6 bilateral, and 14 bimodal listeners). Two new LF pitch perception tests, harmonic intonation (HI) and disharmonic intonation (DI), were used to evaluate TFS sensitivity. HI and DI were designed to estimate a difference limen for discrimination of tone changes based on harmonic or inharmonic pitch glides. Speech perception was assessed using the newly developed Italian Sentence Test with Adaptive Randomized Roving level (STARR) test, where sentences relevant to everyday contexts were presented at low, medium, and high levels in a fluctuating background noise to estimate a speech reception threshold (SRT). Although TFS and STARR performances in the majority of CI recipients were much poorer than those of hearing people reported in the literature, considerable intersubject variability was observed. For CI listeners, median just noticeable differences were 27.0 and 147.0 Hz for HI and DI, respectively. HI outcomes were significantly better than those for DI. Median STARR score was 14.8 dB. Better performers with speech reception thresholds less than 20 dB had a median score of 8.6 dB. A significant effect of age was observed for both HI/DI tests, suggesting that TFS sensitivity tended to worsen with increasing age. CI pure-tone thresholds and duration of profound deafness were significantly correlated with STARR performance. Bimodal users showed significantly better TFS and STARR performance for bimodal listening than for their CI-only condition. Median bimodal gains were 33.0 Hz for the HI test and 95.0 Hz for the DI test. DI outcomes in bimodal users revealed a significant correlation with unaided hearing thresholds for octave frequencies lower than 1000 Hz. Median STARR scores were 17.3 versus 8.1 dB for CI-only and bimodal listening, respectively. STARR performance was significantly correlated with HI findings for CI listeners and with those of DI for bimodal listeners. LF pitch perception was found to be abnormal in the majority of adult CI recipients, confirming poor TFS processing with CIs. Similarly, the STARR findings reflected a common performance deterioration with the HI/DI tests, suggesting that the cause is probably a lack of access to TFS information. Contralateral hearing aid users obtained a remarkable bimodal benefit for all tests. These results highlight the importance of TFS cues for challenging speech perception and the relevance to everyday listening conditions. HI/DI and STARR tests show promise for gaining insights into how TFS and speech perception are being limited and may guide the customization of CI program parameters and support the fine tuning of bimodal listening.
Development of the Russian matrix sentence test.
Warzybok, Anna; Zokoll, Melanie; Wardenga, Nina; Ozimek, Edward; Boboshko, Maria; Kollmeier, Birger
2015-01-01
To develop the Russian matrix sentence test for speech intelligibility measurements in noise. Test development included recordings, optimization of speech material, and evaluation to investigate the equivalency of the test lists and training. For each of the 500 test items, the speech intelligibility function, speech reception threshold (SRT: signal-to-noise ratio, SNR, that provides 50% speech intelligibility), and slope were obtained. The speech material was homogenized by applying level corrections. In evaluation measurements, speech intelligibility was measured at two fixed SNRs to compare list-specific intelligibility functions. To investigate the training effect and establish reference data, speech intelligibility was measured adaptively. Participants were 77 normal-hearing native Russian listeners. The optimization procedure decreased the spread in SRTs across words from 2.8 to 0.6 dB. Evaluation measurements confirmed that the 16 test lists were equivalent, with a mean SRT of -9.5 ± 0.2 dB and a slope of 13.8 ± 1.6%/dB. The reference SRT, -8.8 ± 0.8 dB for the open-set and -9.4 ± 0.8 dB for the closed-set format, increased slightly for noise levels above 75 dB SPL. The Russian matrix sentence test is suitable for accurate and reliable speech intelligibility measurements in noise.
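Read alongside the reported SRT and slope, one standard parameterization of the matrix-test intelligibility function (a common form assumed here, not quoted from the paper) is

\[ \Psi(\mathrm{SNR}) = \frac{1}{1 + \exp\bigl(-4\, s_{50}\, (\mathrm{SNR} - \mathrm{SRT})\bigr)} \]

where SRT is the SNR at 50% intelligibility (-9.5 dB for the test lists above) and s_50 is the slope at that point (13.8%/dB, i.e., 0.138 per dB).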
[Improving speech comprehension using a new cochlear implant speech processor].
Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A
2009-06-01
The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with the speech signal at 0 degrees and noise lateral to the CI at 90 degrees. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.
Zokoll, Melanie A; Wagener, Kirsten C; Brand, Thomas; Buschermöhle, Michael; Kollmeier, Birger
2012-09-01
A review is given of internationally comparable speech-in-noise tests for hearing screening purposes that were part of the European HearCom project. This report describes the development, optimization, and evaluation of such tests for headphone and telephone presentation, using the example of the German digit triplet test. In order to achieve the highest possible comparability, language- and speaker-dependent factors in speech intelligibility should be compensated for. The tests comprise spoken numbers in background noise and estimate the speech reception threshold (SRT), i.e. the signal-to-noise ratio (SNR) yielding 50% speech intelligibility. The respective reference speech intelligibility functions for headphone and telephone presentation of the German version, for 15 and 10 normal-hearing listeners, are described by an SRT of -9.3 ± 0.2 and -6.5 ± 0.4 dB SNR, and slopes of 19.6 and 17.9%/dB, respectively. Reference speech intelligibility functions of all digit triplet tests optimized within the HearCom project allow for investigation of comparability across languages. The optimization criteria established here should be used for similar screening tests in other languages.
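Digit-triplet SRTs of this kind are typically tracked adaptively: the SNR is lowered after a correct triplet and raised after an error, so the track converges on the 50% point. The sketch below simulates such a 1-up/1-down track; the scoring function, step size, and averaging rule are illustrative assumptions, not the HearCom procedure.

# Hedged sketch of a 1-up/1-down adaptive SRT track.
import numpy as np

def run_track(triplet_correct, n_trials=25, start_snr=0.0, step=2.0):
    # triplet_correct(snr) -> bool: present one triplet at `snr`, score it.
    snr, history = start_snr, []
    for _ in range(n_trials):
        history.append(snr)
        snr += -step if triplet_correct(snr) else step   # harder after a hit
    return np.mean(history[-10:])    # late-trial mean approximates the SRT

# Simulated listener: true SRT of -8 dB SNR, slope 15%/dB (hypothetical).
rng = np.random.default_rng(0)
listener = lambda snr: rng.random() < 1 / (1 + np.exp(-4 * 0.15 * (snr + 8)))
print(f"estimated SRT: {run_track(listener):.1f} dB SNR")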
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism (SM) is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is reduced in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear along with normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Davidson, Lisa S; Geers, Ann E; Nicholas, Johanna G
2014-07-01
A novel word learning (NWL) paradigm was used to explore underlying phonological and cognitive mechanisms responsible for delayed vocabulary level in children with cochlear implants (CIs). One hundred and one children using CIs, 6-12 years old, were tested along with 47 children with normal hearing (NH). Tests of NWL, receptive vocabulary, and speech perception at 2 loudness levels were administered to children with CIs. Those with NH completed the NWL task and a receptive vocabulary test. CI participants with good audibility (GA) versus poor audibility (PA) were compared on all measures. Analysis of variance was used to compare performance across the children with NH and the two groups of children with CIs. Multiple regression analysis was employed to identify independent predictors of vocabulary outcomes. Children with CIs in the GA group scored higher in receptive vocabulary and NWL than children in the PA group, although they did not reach NH levels. CI-aided pure tone threshold and performance on the NWL task predicted independent variance in vocabulary after accounting for other known predictors. Acquiring spoken vocabulary is facilitated by GA with a CI and phonological learning and memory skills. Children with CIs did not learn novel words at the same rate or achieve the same receptive vocabulary levels as their NH peers. Maximizing audibility for the perception of speech and direct instruction of new vocabulary may be necessary for children with CIs to reach levels seen in peers with NH.
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Govender, Cyril
2007-01-01
The measurement of speech reception threshold (SRT) is best evaluated in an individual's first language. The present study focused on the development of a Zulu SRT word list, according to adapted criteria for SRT in Zulu. The aim of this paper is to present the process involved in the development of the Zulu word list. In acquiring the data to realize this aim, 131 common bisyllabic Zulu words were identified by two Zulu speaking language interpreters and two tertiary level educators. Eighty-two percent of these words were described as bisyllabic verbs. Thereafter, using a three-point Likert scale, 58 bisyllabic verbs were rated by 5 linguistic experts as being familiar, phonetically dissimilar and being low tone verbs. According to the Kendall's co-efficient of concordance at the 95% level of confidence, the agreement among the raters was good for each criterion. The results highlighted the importance of adapting the criteria for SRT to suit the structure of the language. An important research implication emerging from the study is the set of theoretical guidelines proposed for the development of SRT material in other African Languages. Furthermore, the importance of using speech material appropriate to the language has also been highlighted. The developed SRT word list in Zulu is applicable to the adult Zulu First Language Speaker in KwaZulu-Natal (KZN).
Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.
2017-01-01
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now, no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important to significantly predict vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test, and task switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
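A hedged sketch of the analysis style described above, using scikit-learn's PLS regression on invented data (the listener counts, predictor set, and effect sizes are assumptions for illustration, not the study's data):

# Hedged sketch: PLS regression of vocoded-speech scores on cognitive measures.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))     # 40 listeners x 6 cognitive measures
y = X @ np.array([0.5, 0.3, 0.4, 0.2, 0.1, 0.3]) + rng.normal(scale=0.5, size=40)

pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 for vocoded-speech score:", round(pls.score(X, y), 2))
print("predictor loadings:")
print(pls.x_loadings_)           # which measures carry each PLS component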
de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke
2018-05-24
Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.
Schoof, Tim; Rosen, Stuart
2014-01-01
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed.
Dietz, Mathias; Hohmann, Volker; Jürgens, Tim
2015-01-01
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.
Pronk, Marieke; Deeg, Dorly J H; Kramer, Sophia E
2018-04-17
The purpose of this study is to determine which demographic, health-related, mood, personality, or social factors predict discrepancies between older adults' functional speech-in-noise test result and their self-reported hearing problems. Data of 1,061 respondents from the Longitudinal Aging Study Amsterdam were used (ages ranged from 57 to 95 years). Functional hearing problems were measured using a digit triplet speech-in-noise test. Five questions were used to assess self-reported hearing problems. Scores of both hearing measures were dichotomized. Two discrepancy outcomes were created: (a) being unaware: those with functional but without self-reported problems (reference is aware: those with functional and self-reported problems); (b) reporting false complaints: those without functional but with self-reported problems (reference is well: those without functional and self-reported hearing problems). Two multivariable prediction models (logistic regression) were built with 19 candidate predictors. The speech reception threshold in noise was kept (forced) as a predictor in both models. Persons with higher self-efficacy (to initiate behavior) and higher self-esteem had higher odds of being unaware than persons with lower scores (odds ratio [OR] = 1.13 and 1.11, respectively). Women had higher odds than men (OR = 1.47). Persons with more chronic diseases and persons with worse (i.e., higher) speech reception thresholds in noise had lower odds of being unaware (OR = 0.85 and 0.91, respectively) than persons with fewer diseases and better thresholds, respectively. Higher odds of reporting false complaints were predicted by more depressive symptoms (OR = 1.06), more chronic diseases (OR = 1.21), and a larger social network (OR = 1.02). Persons with higher self-efficacy (to complete behavior) had lower odds (OR = 0.86), whereas persons with higher self-esteem had higher odds of reporting false complaints (OR = 1.21). The explained variance of both prediction models was small (Nagelkerke R2 = .11 for the unaware model, and .10 for the false complaints model). The findings suggest that a small proportion of the discrepancies between older individuals' results on a speech-in-noise screening test and their self-reports of hearing problems can be explained by the unique context of these individuals. The likelihood of discrepancies partly depends on a person's health (chronic diseases), demographics (gender), personality (self-efficacy to initiate behavior and to persist in adversity, self-esteem), mood (depressive symptoms), and social situation (social network size). Implications are discussed.
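The odds ratios above come from multivariable logistic regression: exponentiating a fitted coefficient gives the OR per unit change in that predictor. A minimal sketch with statsmodels on simulated data follows; the predictor names and effect sizes are hypothetical, not taken from the study.

# Hedged sketch: logistic regression on a dichotomous discrepancy outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "srt_noise": rng.normal(-6, 2, 300),     # speech-in-noise SRT (dB SNR)
    "self_efficacy": rng.normal(30, 5, 300),
    "female": rng.integers(0, 2, 300),
})
logit = -0.09 * df.srt_noise + 0.12 * df.self_efficacy + 0.4 * df.female - 3.0
df["unaware"] = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(df["unaware"],
                 sm.add_constant(df[["srt_noise", "self_efficacy", "female"]])).fit(disp=0)
print(np.exp(model.params))      # odds ratios per unit predictor change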
Lopez-Poveda, Enrique A.; Eustaquio-Martín, Almudena; Stohl, Joshua S.; Wolford, Robert D.; Schatzer, Reinhold; Wilson, Blake S.
2016-01-01
Objectives: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Design: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. Results: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. Conclusions: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
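A greatly simplified sketch of the MOC idea follows: per-channel envelopes are compressed, and the compression in a channel is relaxed toward linear as smoothed output energy rises in the same channel of the contralateral processor, lowering that channel's output. All constants are illustrative assumptions; the actual strategy's filterbank, maps, and time constants are not reproduced here.

# Hedged sketch of contralaterally controlled back-end compression.
import numpy as np

def moc_compress(env_l, env_r, c=0.25, coupling=0.5, smooth=0.1):
    # env_*: channel envelopes in [0, 1], shape (n_channels, n_frames)
    out_l, out_r = np.empty_like(env_l), np.empty_like(env_r)
    e_l = np.zeros(env_l.shape[0])   # smoothed output energy, left
    e_r = np.zeros(env_r.shape[0])   # smoothed output energy, right
    for t in range(env_l.shape[1]):
        # Contralateral output raises the exponent toward 1 (more linear),
        # which lowers this ear's output in that channel.
        out_l[:, t] = env_l[:, t] ** (c + coupling * e_r)
        out_r[:, t] = env_r[:, t] ** (c + coupling * e_l)
        e_l = (1 - smooth) * e_l + smooth * out_l[:, t]
        e_r = (1 - smooth) * e_r + smooth * out_r[:, t]
    return out_l, out_r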
Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.
2014-01-01
This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution.
Development of a test battery for evaluating speech perception in complex listening environments.
Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R
2014-08-01
In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
Comparison of Fluoroplastic Causse Loop Piston and Titanium Soft-Clip in Stapedotomy
Faramarzi, Mohammad; Gilanifar, Nafiseh; Roosta, Sareh
2017-01-01
Introduction: Different types of prosthesis are available for stapes replacement. Because there has been no published report on the efficacy of the titanium soft-clip vs the fluoroplastic Causse loop Teflon piston, we compared short-term hearing results of both types of prosthesis in patients who underwent stapedotomy due to otosclerosis. Materials and Methods: A total of 57 ears were included in the soft-clip group and 63 ears were included in the Teflon-piston group. Pre-operative and post-operative air conduction, bone conduction, air-bone gaps, speech discrimination score, and speech reception thresholds were analyzed. Results: Post-operative speech reception threshold gains did not differ significantly between the two groups (P=0.919). However, better post-operative air-bone gap improvement at low frequencies was observed in the Teflon-piston group over the short-term follow-up (at frequencies of 0.25 and 0.50 kHz; P=0.007 and P=0.001, respectively). Conclusion: Similar post-operative hearing results were observed in the two groups in the short-term.
Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training
Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William
2015-01-01
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material.
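In its simplest yes/no form, the d' measure used above to track thresholds is the difference of z-transformed hit and false-alarm rates; the study's multi-alternative consonant task would need a more elaborate variant. A minimal sketch:

# Hedged sketch: yes/no d' from trial counts, with rate smoothing.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)     # smoothed to avoid 0/1
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(80, 20, 10, 90):.2f}")   # e.g. 80% hits, 10% false alarms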
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations, this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A
2017-10-01
The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3, the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing-impaired military personnel. The results display good test-retest reliability (95% confidence interval [CI] < 2.1 dB) and concurrent validity when compared to the triple-digit test (r ≤ 0.65), and the CRM is sensitive to hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
Objective measures of listening effort: effects of background noise and noise reduction.
Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin
2009-10-01
This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. To address this, the hypothesis tested here is that the positive effects of NR might be to reduce cognitive effort directed toward speech reception, making it available for other tasks. Normal-hearing individuals participated in 2 dual-task experiments, in which 1 task was to report sentences or words in noise set to various signal-to-noise ratios. Secondary tasks involved either holding words in short-term memory or responding in a complex visual reaction-time task. At low values of signal-to-noise ratio, although NR had no positive effect on speech reception thresholds, it led to better performance on the word-memory task and quicker responses in visual reaction times. Results from both dual tasks support the hypothesis that NR reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.
Cortical activation patterns correlate with speech understanding after cochlear implantation
Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S.; Oghalai, John S.
2015-01-01
Objectives: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, we used functional near-infrared spectroscopy (fNIRS) to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design: We studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. We used fNIRS to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). We also used environmental sounds as a control stimulus. Behavioral measures consisted of the Speech Reception Threshold, CNC words, and AzBio Sentence tests measured in quiet. Results: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the CNC Words and AzBio Sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced cortical activations in all implanted participants. Conclusions: Together, these data indicate that the responses we measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.
Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.
Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe
2015-01-01
The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Previously proposed SDT models were lastly applied to predict NH subject performance with EAS simulations. NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. The ASR system showed similar behavior to NH subjects despite a positive signal-to-noise ratio shift for both noise conditions, while demonstrating the synergistic effect for cutoff frequencies ≥300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise, while falling short of predicting performance observed in modulated noise. The presented simulation was able to demonstrate the combined-stimulation advantage for NH subjects as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, while CI and EAS user performance was consistently degraded in modulated noise compared with performance in continuous noise. The application of ASR systems seems feasible to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations. In continuous noise, SDT models were largely able to predict the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.
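As a rough illustration of such an EAS simulation, the sketch below pairs a generic 12-band noise vocoder (standing in for the part-tone-time-pattern processing, which it does not reproduce) with low-pass-filtered speech for the residual-hearing side. Band edges, filter orders, and the summed single-channel presentation are illustrative assumptions; the input is assumed to be a float signal sampled at 16 kHz or higher.

# Hedged sketch: noise vocoder plus low-pass speech as an EAS simulation.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_bands=12, lo=500.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))   # band envelope of the speech
        out += sosfilt(sos, noise) * env         # envelope-modulated noise band
    return out

def eas_simulation(x, fs, cutoff=500.0):
    sos = butter(6, cutoff, btype="low", fs=fs, output="sos")
    # CI part above the cutoff plus acoustic low-frequency part, summed
    # into one signal (a simplification of the presentation mode).
    return noise_vocode(x, fs, lo=cutoff) + sosfilt(sos, x)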
Development and preliminary evaluation of a pediatric Spanish-English speech perception task.
Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J
2014-06-01
The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise
Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther
2016-01-01
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access. PMID:27458400
Hearing status in patients with rheumatoid arthritis.
Ahmadzadeh, A; Daraei, M; Jalessi, M; Peyvandi, A A; Amini, E; Ranjbar, L A; Daneshi, A
2017-10-01
Rheumatoid arthritis is thought to induce conductive hearing loss and/or sensorineural hearing loss. This study evaluated the function of the middle ear and cochlea, and the related factors. Pure tone audiometry, speech reception thresholds, speech discrimination scores, tympanometry, acoustic reflexes, and distortion product otoacoustic emissions were assessed in rheumatoid arthritis patients and healthy volunteers. Pure tone audiometry results revealed a higher bone conduction threshold in the rheumatoid arthritis group, but there was no significant difference when evaluated according to the sensorineural hearing loss definition. Distortion product otoacoustic emissions, the prevalence of conductive or mixed hearing loss, tympanometry values, acoustic reflexes, and speech discrimination scores were not significantly different between the two groups. Sensorineural hearing loss was significantly more prevalent in patients who used azathioprine, cyclosporine, and etanercept. Higher bone conduction thresholds at some frequencies were detected in rheumatoid arthritis patients, but these differences were not clinically significant. Sensorineural hearing loss is significantly more prevalent in refractory rheumatoid arthritis patients.
The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise.
Shen, Jing; Souza, Pamela E
2017-09-18
This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss.
Ragab, A; Shreef, E; Behiry, E; Zalat, S; Noaman, M
2009-01-01
To investigate the safety and efficacy of ozone therapy in adult patients with sudden sensorineural hearing loss. Prospective, randomised, double-blinded, placebo-controlled, parallel group, clinical trial. Forty-five adult patients presented with sudden sensorineural hearing loss, and were randomly allocated to receive either placebo (15 patients) or ozone therapy (auto-haemotherapy; 30 patients). For the latter treatment, 100 ml of the patient's blood was treated immediately with a 1:1 volume, gaseous mixture of oxygen and ozone (from an ozone generator) and re-injected into the patient by intravenous infusion. Treatments were administered twice weekly for 10 sessions. The following data were recorded: pre- and post-treatment mean hearing gains; air and bone pure tone averages; speech reception thresholds; speech discrimination scores; and subjective recovery rates. Significant recovery was observed in 23 patients (77 per cent) receiving ozone treatment, compared with six (40 per cent) patients receiving placebo (p < 0.05). Mean hearing gains, pure tone averages, speech reception thresholds and subjective recovery rates were significantly better in ozone-treated patients compared with placebo-treated patients (p < 0.05). Ozone therapy is a significant modality for treatment of sudden sensorineural hearing loss; no complications were observed.
Assessment of a directional microphone array for hearing-impaired listeners.
Soede, W; Bilsen, F A; Berkhout, A J
1993-08-01
Hearing-impaired listeners often have great difficulty understanding speech in surroundings with background noise or reverberation. Based on array techniques, two microphone prototypes (broadside and endfire) have been developed with strongly directional characteristics [Soede et al., "Development of a new directional hearing instrument based on array technology," J. Acoust. Soc. Am. 94, 785-798 (1993)]. Physical measurements show that the arrays attenuate reverberant sound by 6 dB (free-field) and can improve the signal-to-noise ratio by 7 dB in a diffuse noise field (measured with a KEMAR manikin). For the clinical assessment of these microphones an experimental setup was made in a sound-insulated listening room with one loudspeaker in front of the listener simulating the partner in a discussion and eight loudspeakers placed on the edges of a cube producing a diffuse background noise. The hearing-impaired subject wearing his own (familiar) hearing aid is placed in the center of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omnidirectional microphone and with one of the microphone arrays. The results of monaural listening tests with hearing impaired subjects show that in comparison with an omnidirectional hearing-aid microphone the broadside and endfire microphone array gives a mean improvement of the speech reception threshold in noise of 7.0 dB (26 subjects) and 6.8 dB (27 subjects), respectively. Binaural listening with two endfire microphone arrays gives a binaural improvement which is comparable to the binaural improvement obtained by listening with two normal ears or two conventional hearing aids.
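The directional benefit of such arrays comes from summing delayed microphone signals so that on-axis sound adds coherently while diffuse noise does not. Below is a minimal frequency-domain delay-and-sum sketch for an endfire geometry; the 14-mm spacing and microphone count are hypothetical, and none of the prototypes' actual filter design is reproduced.

    import numpy as np

    def endfire_delay_and_sum(mics, fs, spacing=0.014, c=343.0):
        """Delay-and-sum beamformer steered along the endfire array axis.

        mics: (n_mics, n_samples) array; microphone 0 is nearest the target.
        Advancing each channel by its on-axis propagation delay aligns the
        target across channels, while diffuse noise adds incoherently
        (roughly a 10*log10(n_mics) dB directional advantage). Shifts are
        circular here, which is acceptable for a toy example.
        """
        n_mics, n_samples = mics.shape
        freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
        out = np.zeros(n_samples)
        for m in range(n_mics):
            delay = m * spacing / c                          # seconds
            spectrum = np.fft.rfft(mics[m])
            spectrum *= np.exp(2j * np.pi * freqs * delay)   # advance channel m
            out += np.fft.irfft(spectrum, n=n_samples)
        return out / n_mics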
Razza, Sergio; Zaccone, Monica; Meli, Annalisa; Cristofari, Eliana
2017-12-01
Children affected by hearing loss can experience difficulties in challenging and noisy environments even when deafness is corrected by cochlear implant (CI) devices. These patients have a selective attention deficit in multiple listening conditions. At present, the most effective ways to improve speech recognition performance in noise consist of providing CI processors with noise reduction algorithms and of providing patients with bilateral CIs. The aim of this study was to compare speech performance in noise, across increasing noise levels, in CI recipients using two kinds of wireless remote-microphone radio systems that use digital radio frequency transmission: the Roger Inspiro accessory and the Cochlear Wireless Mini Microphone accessory. Eleven young users of Nucleus Cochlear CP910 CIs were studied. The signal-to-noise ratio at a speech reception threshold (SRT) value of 50% was measured in different conditions for each patient: with the CI only, with the Roger accessory, or with the Mini Microphone accessory. The effect of applying the SNR-noise reduction (NR) algorithm in each of these conditions was also assessed. The tests were performed with the subject positioned in front of the main speaker at a distance of 2.5 m; another two speakers were positioned at 3.5 m. The main speaker presented disyllabic words at 65 dB, and a babble noise signal of variable intensity was delivered through the other speakers. The use of both wireless remote microphones improved the SRT results. The gain was higher with the Mini Microphone system (SRT = -4.76) than with the Roger system (SRT = -3.01). The addition of the NR algorithm did not statistically improve the results further. There is significant improvement in speech recognition results with both wireless digital remote microphone accessories, in particular with the Mini Microphone system when used with the CP910 processor. The benefit of a remote microphone accessory surpasses that of applying the NR algorithm. Copyright © 2017. Published by Elsevier B.V.
Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas
2015-07-01
The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) were recorded from a first-language English female speaker. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. Results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South Africa. The mean SRT and speech recognition functions correspond to previously developed telephone-based digits-in-noise tests.
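The reported steepness (20%/dB) is the slope of the fitted speech recognition function at the SRT. A minimal logistic fit of that kind is sketched below; the triplet scores are made-up stand-in data, not the study's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(snr, srt, slope):
        """Proportion correct vs SNR; slope is the gradient (per dB) at the SRT."""
        return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

    # hypothetical digit-triplet data: SNR (dB) vs proportion of triplets correct
    snrs = np.array([-14.0, -12.0, -10.0, -8.0, -6.0])
    p_correct = np.array([0.10, 0.25, 0.55, 0.85, 0.96])

    (srt, slope), _ = curve_fit(logistic, snrs, p_correct, p0=(-10.0, 0.15))
    print(f"SRT = {srt:.1f} dB SNR, slope at SRT = {100 * slope:.0f}%/dB")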
Evaluating a smartphone digits-in-noise test as part of the audiometric test battery.
Potgieter, Jenni-Mari; Swanepoel, De Wet; Smits, Cas
2018-05-21
Speech-in-noise tests have become a valuable part of the audiometric test battery, providing an indication of a listener's ability to function in background noise. A simple digits-in-noise (DIN) test could be valuable to support diagnostic hearing assessments, hearing aid fittings, and counselling for both paediatric and adult populations. Objective: The objective of this study was to evaluate the South African English smartphone DIN test's performance as part of the audiometric test battery. Design: This descriptive study evaluated 109 adult subjects (43 male and 66 female subjects) with and without sensorineural hearing loss by comparing pure-tone air conduction thresholds, speech recognition monaural performance scores (SRS dB) and the DIN speech reception threshold (SRT). An additional nine adult hearing aid users (four male and five female subjects) were included in a subset to determine aided and unaided DIN SRTs. Results: The DIN SRT is strongly associated with the best ear four-frequency pure-tone average (4FPTA) (rs = 0.81) and maximum SRS dB (r = 0.72). The DIN test had high sensitivity and specificity to identify abnormal pure-tone (0.88 and 0.88, respectively) and SRS dB (0.76 and 0.88, respectively) results. There was a mean signal-to-noise ratio (SNR) improvement in the aided condition that demonstrated an overall benefit of 0.84 SNR dB. Conclusion: The DIN SRT was significantly correlated with the best ear 4FPTA and maximum SRS dB. The DIN SRT provides a useful measure of speech recognition in noise that can be used to evaluate hearing aid fittings and to support counselling and the management of hearing expectations.
Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.
2012-01-01
Objective: The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. Design: The current study used a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of 6 English-speaking listeners was also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal. Results: Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. There was no reliable relationship between improvement in speech understanding in reverberation and either audiometric thresholds in the implanted ear or threshold elevation following surgery. There was a significant correlation between ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions: Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients; (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing; and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225
Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2011-01-01
The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman correlation coefficients indicated that the cognitive load was larger in listeners with better TRT performances as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when increasing the intelligibility of speech in noise. In difficult listening conditions, these factors may induce cognitive overload relatively early or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects
Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes
2017-01-01
Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use of direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a mono-centric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade with a newer generation of sound processor. Main Outcome Measures: Speech recognition test in quiet and in noise, pure tone thresholds, subject-reported outcome measures. Results: The speech recognition in quiet and in noise is superior after the sound processor upgrade and stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits for DACI users for speech recognition in both quiet and noise. Especially the noise program with the use of directional microphones (Zoom) allows DACI patients to have much less difficulty when having conversations in noisy environments. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to the DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy. PMID:28406848
The multilingual matrix test: Principles, applications, and comparison across languages: A review.
Kollmeier, Birger; Warzybok, Anna; Hochmuth, Sabine; Zokoll, Melanie A; Uslar, Verena; Brand, Thomas; Wagener, Kirsten C
2015-01-01
A review of the development, evaluation, and application of the so-called 'matrix sentence test' for speech intelligibility testing in a multilingual society is provided. The format allows for repeated use with the same patient in her or his native language even if the experimenter does not understand the language. Using a closed-set format, the syntactically fixed, semantically unpredictable sentences (e.g. 'Peter bought eight white ships') provide a vocabulary of 50 words (10 alternatives for each position in the sentence). The principles (i.e. construction, optimization, evaluation, and validation) for 14 different languages are reviewed. Studies of the influence of talker, language, noise, the training effect, open vs. closed conduct of the test, and the subjects' language proficiency are reported and application examples are discussed. The optimization principles result in a steep intelligibility function and a high homogeneity of the speech materials presented and test lists employed, yielding a high efficiency and excellent comparability across languages. The characteristics of speakers generally dominate the differences across languages. The matrix test format with the principles outlined here is recommended for producing efficient, reliable, and comparable speech reception thresholds across different languages.
Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela
2015-01-01
Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM), and non-verbal cognition on the retelling ability of children with CP and speech impairment. Fifteen speaking children with speech impairment (seven girls, eight boys; mean age = 11 years, SD = 1;4 years) and different types of CP and different levels of gross motor and cognitive function participated in the present study. Story retelling skills were tested and analysed with the Bus Story Test (BST) and the Narrative Assessment Profile (NAP). Receptive language ability was tested with the Test for Reception of Grammar-2 (TROG-2) and the Peabody Picture Vocabulary Test-IV (PPVT-IV). Non-verbal cognitive level was tested with Raven's Coloured Progressive Matrices (RCPM), memory functions were assessed with the Corsi block-tapping task (CB) and the Digit Span from the Wechsler Intelligence Scale for Children-III, and ToM was assessed with the false-belief items of the two story tests "Kiki and the Cat" and "Birthday Puppy". The children had severe problems with retelling ability, corresponding to an age equivalent of 5;2-6;9 years. Receptive and expressive language, visuo-spatial and auditory memory, non-verbal cognitive level, and ToM varied widely within and among the children. Both expressive and receptive language correlated significantly with narrative ability in terms of NAP total scores, as did auditory memory. The results suggest that retelling ability in the children with CP in the present study is dependent on language comprehension and production, and on memory functions. Consequently, it is important to examine retelling ability together with language and cognitive abilities in these children in order to provide appropriate support. © 2015 Royal College of Speech and Language Therapists.
Role of working memory and lexical knowledge in perceptual restoration of interrupted speech.
Nagaraj, Naveen K; Magimairaj, Beula M
2017-12-01
The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for speech noise filled condition than other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. It was only receptive vocabulary that uniquely predicted PR for fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.
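The interruption paradigm itself is a periodic gating operation on the waveform. The sketch below produces the silent-gated condition and, optionally, a filled condition at the 1.5-Hz rate used above; the 50% duty cycle and the choice of filler signal are assumptions for illustration.

    import numpy as np

    def interrupt_speech(speech, fs, rate_hz=1.5, duty=0.5, filler=None):
        """Periodically gate speech; optionally fill the gaps with another signal.

        rate_hz: interruption rate; duty: fraction of each cycle keeping speech;
        filler: same-length array (e.g., low-frequency speech noise) or None
        for silent gating.
        """
        t = np.arange(len(speech)) / fs
        keep = (t * rate_hz) % 1.0 < duty        # True during speech-on segments
        out = np.where(keep, speech, 0.0)        # silent-gated version
        if filler is not None:
            out = np.where(keep, out, filler)    # noise- or speech-filled version
        return out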
Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language
ERIC Educational Resources Information Center
Reel, Leigh Ann; Hicks, Candace Bourland
2012-01-01
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
Danielsson, Henrik; Hällgren, Mathias; Stenfelt, Stefan; Rönnberg, Jerker; Lunner, Thomas
2016-01-01
The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function—spectrotemporal modulation (STM) sensitivity—and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2–6 kHz) pure-tone average (HFA; R² = .31) and STM sensitivity (R² = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR² = .13; total R² = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise. PMID:27815546
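The incremental value of STM sensitivity over the HFA (ΔR² = .13) is a standard hierarchical regression comparison. The sketch below shows the computation on synthetic stand-in data; the coefficients and noise levels are arbitrary assumptions, not the study's data.

    import numpy as np

    def r_squared(X, y):
        """R² of an ordinary least-squares fit (intercept column added)."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(0)
    n = 154
    hfa = rng.normal(50, 10, n)                 # synthetic HFA (dB HL)
    stm = 0.4 * hfa + rng.normal(0, 10, n)      # STM threshold, partly collinear
    srt = 0.15 * hfa + 0.10 * stm + rng.normal(0, 2, n)

    r2_hfa = r_squared(hfa[:, None], srt)
    r2_both = r_squared(np.column_stack([hfa, stm]), srt)
    print(f"R²(HFA) = {r2_hfa:.2f}, ΔR²(+STM) = {r2_both - r2_hfa:.2f}")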
Geravanchizadeh, Masoud; Fallah, Ali
2015-12-01
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
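At the heart of the binaural unit is equalization-cancellation: equalize the interaural delay and level of the masker-dominated input, then subtract, so that a target with different interaural parameters survives the subtraction better than the masker. The sketch below is a toy EC stage under that reading; the search range and least-squares gain rule are assumptions, not the blind model's actual implementation.

    import numpy as np

    def ec_cancel(left, right, fs, max_delay_s=0.0007):
        """Toy equalization-cancellation: find the interaural lag and gain that
        best cancel the dominant (masker) component, then subtract."""
        max_lag = max(1, int(max_delay_s * fs))
        lags = np.arange(-max_lag, max_lag + 1)
        core = slice(max_lag, -max_lag)          # avoid circular-shift edges
        xcorr = [np.dot(left[core], np.roll(right, lag)[core]) for lag in lags]
        best = lags[int(np.argmax(xcorr))]       # masker's interaural delay
        r = np.roll(right, best)
        gain = np.dot(left[core], r[core]) / np.dot(r[core], r[core])
        return left - gain * r                   # masker largely cancelled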
From innervation density to tactile acuity: 1. Spatial representation.
Brown, Paul B; Koerber, H Richard; Millecchia, Ronald
2004-06-11
We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
2018-03-01
To compare preference for and performance of manually selected programmes to an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated-measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in quiet, in noise, in loud noise, and in a car). Following a 4-week trial, preferences were reassessed and the user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss. Participant preferences of manual programmes for the scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise, and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
Plomp, R; Duquesnoy, A J
1980-12-01
This article deals with the combined effects of noise and reverberation on the speech-reception threshold for sentences. It is based on a series of current investigations on: (1) the modulation-transfer function as a measure of speech intelligibility in rooms, (2) the applicability of this concept to hearing-impaired persons, and (3) hearing loss for speech in quiet and in noise as a function of age. It is shown that, generally, in auditoria, classrooms, etc., the reverberation time T acceptable for normal-hearing listeners has to be reduced to (0.75)^D · T in order to be acceptable for elderly subjects with a hearing loss of D dB for speech in noise; for listening conditions as in lounges, restaurants, etc., the corresponding value is (0.82)^D · T.
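As a worked example of this rule (with the loss D entering as an exponent): a listener with D = 3 dB hearing loss for speech in noise needs the acceptable auditorium reverberation time reduced to (0.75)^3 · T ≈ 0.42 T and the lounge or restaurant value reduced to (0.82)^3 · T ≈ 0.55 T; at D = 6 dB these shrink to roughly 0.18 T and 0.30 T, respectively.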
Results using the OPAL strategy in Mandarin speaking cochlear implant recipients.
Vandali, Andrew E; Dawson, Pam W; Arora, Komal
2017-01-01
To evaluate the effectiveness of an experimental pitch-coding strategy for improving recognition of Mandarin lexical tone in cochlear implant (CI) recipients. Adult CI recipients were tested on recognition of Mandarin tones in quiet and in speech-shaped noise at a signal-to-noise ratio of +10 dB; Mandarin sentence speech-reception threshold (SRT) in speech-shaped noise; and pitch discrimination of synthetic complex-harmonic tones in quiet. Two versions of the experimental strategy were examined: OPAL, with linear (1:1) mapping of fundamental frequency (F0) to the coded modulation rate, and OPAL+, with transposed mapping of high F0s to a lower coded rate. Outcomes were compared to results using the clinical ACE™ strategy. Five Mandarin-speaking users of Nucleus® cochlear implants participated. A small but significant benefit in recognition of lexical tones was observed using OPAL compared to ACE in noise, but not in quiet, and not for OPAL+ compared to ACE or OPAL in quiet or noise. Sentence SRTs were significantly better using OPAL+ and comparable using OPAL relative to those using ACE. No differences in pitch discrimination thresholds were observed across strategies. OPAL can provide benefits to Mandarin lexical tone recognition in moderately noisy conditions and preserve perception of Mandarin sentences in challenging noise conditions.
A study of the high-frequency hearing thresholds of dentistry professionals
Lopes, Andréa Cintra; de Melo, Ana Dolores Passarelli; Santos, Cibele Carmelo
2012-01-01
Introduction: In dentistry practice, dentists are exposed to harmful effects caused by several factors, such as the noise produced by their work instruments. In 1959, the American Dental Association recommended periodical hearing assessments and the use of ear protectors. Acquiring more information regarding dentists', dental nurses', and prosthodontists' hearing abilities is necessary to propose prevention measures and early treatment strategies. Objective: To investigate the auditory thresholds of dentists, dental nurses, and prosthodontists. Method: In this clinical and experimental study, 44 dentists (Group I; GI), 36 dental nurses (Group II; GII), and 28 prosthodontists (Group III; GIII) were included, for a total of 108 professionals. The procedures performed included a specific interview, ear canal inspection, conventional and high-frequency threshold audiometry, a speech reception threshold test, and an acoustic impedance test. Results: In the 3 groups tested, the comparison between the mean hearing thresholds provided evidence of worsened hearing ability with increasing frequency. For the tritonal mean at 500 to 2,000 Hz and 3,000 to 6,000 Hz, GIII presented the worst thresholds. For the mean of the high frequencies (9,000 and 16,000 Hz), GII presented the worst thresholds. Conclusion: The conventional hearing threshold evaluation did not demonstrate alterations in the 3 groups tested; however, complementary tests such as high-frequency audiometry provided greater efficacy in the early detection of hearing problems, since hearing loss in this population affected frequencies that are not assessed by conventional tests. Therefore, we emphasize the need to include high-frequency threshold audiometry in the routine hearing assessment, in combination with other audiological tests. PMID:25991940
Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Gorospe, José M; Ruiz, Santiago Santa Cruz; Benito, Fernando; Wilson, Blake S
2017-05-01
We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds potential for promising outcomes for CI users. Copyright © 2017. Published by Elsevier B.V.
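The contralateral MOC idea can be caricatured as each ear's channel gain being turned down as the corresponding contralateral channel's output grows. The sketch below is only a toy of that cross-ear inhibition; the gain law and strength constant are arbitrary assumptions, and the published strategy's actual back-end compression control is not reproduced here.

    import numpy as np

    def moc_link(env_left, env_right, strength=0.5):
        """Toy contralateral MOC link between two CI channel-envelope arrays.

        env_left/env_right: (channels, frames) non-negative channel envelopes.
        Each ear's channel gain falls as the contralateral channel envelope
        grows, mimicking contralateral medial olivocochlear inhibition.
        """
        gain_l = 1.0 / (1.0 + strength * env_right)
        gain_r = 1.0 / (1.0 + strength * env_left)
        return env_left * gain_l, env_right * gain_r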
Characterizing Speech Intelligibility in Noise After Wide Dynamic Range Compression.
Rhebergen, Koenraad S; Maalderink, Thijs H; Dreschler, Wouter A
The effects of nonlinear signal processing on speech intelligibility in noise are difficult to evaluate. Often, the effects are examined by comparing speech intelligibility scores with and without processing measured at fixed signal-to-noise ratios (SNRs), or by comparing adaptively measured speech reception thresholds corresponding to 50% intelligibility (SRT50) with and without processing. These outcome measures might not be optimal. Measuring at fixed SNRs can be affected by ceiling or floor effects, because the range of relevant SNRs is not known in advance. The SRT50 is less time-consuming and has a fixed performance level (i.e., 50% correct), but it could give a limited view, because we hypothesize that the effect of most nonlinear signal processing algorithms at the SRT50 cannot be generalized to other points of the psychometric function. In this article, we tested the value of estimating the entire psychometric function. We studied the effect of wide dynamic range compression (WDRC) on speech intelligibility in stationary and interrupted speech-shaped noise in normal-hearing subjects, using a fast method based on local linear fitting and two adaptive procedures. The measured performance differences for conditions with and without WDRC for the psychometric functions in stationary noise and interrupted speech-shaped noise show that the effects of WDRC on speech intelligibility are SNR dependent. We conclude that favorable and unfavorable effects of WDRC on speech intelligibility can be missed if the results are presented in terms of SRT50 values only.
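Estimating the entire psychometric function from trial-level data can be done nonparametrically with a kernel-weighted local linear fit; the sketch below shows the general idea with a Gaussian kernel and an assumed bandwidth, not the authors' specific fast method.

    import numpy as np

    def local_linear_psychometric(snrs, correct, grid, bandwidth=2.0):
        """Local linear estimate of proportion correct at each SNR in grid.

        snrs: per-trial SNRs (dB); correct: per-trial 0/1 scores.
        """
        snrs = np.asarray(snrs, float)
        correct = np.asarray(correct, float)
        est = []
        for x0 in grid:
            w = np.exp(-0.5 * ((snrs - x0) / bandwidth) ** 2)  # Gaussian kernel
            X = np.column_stack([np.ones_like(snrs), snrs - x0])
            sw = np.sqrt(w)                     # weighted least squares via sqrt(w)
            beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * correct, rcond=None)
            est.append(beta[0])                 # local intercept = value at x0
        return np.clip(np.array(est), 0.0, 1.0)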
ERIC Educational Resources Information Center
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2013-01-01
Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…
Lee, Youngmee; Jeong, Sung-Wook; Kim, Lee-Suk
2013-12-01
The aim of this study was to examine the efficacy of a new habilitation approach, augmentative and alternative communication (AAC) intervention using a voice output communication aid (VOCA), in improving speech perception, speech production, receptive vocabulary skills, and communicative behaviors in children with cochlear implants (CIs) who had multiple disabilities. Five children with mental retardation and/or cerebral palsy who had used CIs for over two years were included in this study. Five children in the control group were matched to the children who received the AAC intervention on the basis of the type/severity of their additional disabilities and chronological age. They had limited oral communication skills after cochlear implantation because of their limited cognition and oromotor function. The children attended the AAC intervention with parents once a week for 6 months. We evaluated their performance using formal tests, including the monosyllabic word tests, the articulation test, and the receptive vocabulary test. We also assessed parent-child interactions. We analyzed the data using a one-group pretest and posttest design. The mean scores of the formal tests performed in these children improved from 26% to 48% in the phoneme scores of the monosyllabic word tests, from 17% to 35% in the articulation test, and from 11 to 18.4 in the receptive vocabulary test after AAC intervention (all p < .05). Some children in the control group showed improvement in the speech perception, speech production, and receptive vocabulary tests over the 6 months, but the differences did not achieve statistical significance (all p > .05). The frequency of spontaneous communicative behaviors (i.e., vocalization, gestures, and words) and imitative words significantly increased after AAC intervention (p < .05). AAC intervention using a VOCA was very useful and effective in improving communicative skills in children with multiple disabilities who had very limited oral communication skills after cochlear implantation. Copyright © 2013. Published by Elsevier Ireland Ltd.
Musicians and non-musicians are equally adept at perceiving masked speech
Boebinger, Dana; Evans, Samuel; Scott, Sophie K.; Rosen, Stuart; Lima, César F.; Manly, Tom
2015-01-01
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that nonverbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise. PMID:25618067
Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.
Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S
2017-03-01
The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.
Kabadi, S J; Ruhl, D S; Mukherjee, S; Kesser, B W
2018-02-01
Middle ear space is one of the most important components of the Jahrsdoerfer grading system (J-score), which is used to determine surgical candidacy for congenital aural atresia. The purpose of this study was to introduce a semiautomated method for measuring middle ear volume and determine whether middle ear volume, either alone or in combination with the J-score, can be used to predict early postoperative audiometric outcomes. A retrospective analysis was conducted of 18 patients who underwent an operation for unilateral congenital aural atresia at our institution. Using the Livewire Segmentation tool in the Carestream Vue PACS, we segmented middle ear volumes using a semiautomated method for all atretic and contralateral normal ears on preoperative high-resolution CT imaging. Postsurgical audiometric outcome data were then analyzed in the context of these middle ear volumes. Atretic middle ear volumes were significantly smaller than those in contralateral normal ears (P < .001). Patients with atretic middle ear volumes of >305 mm³ had significantly better postoperative pure tone average and speech reception thresholds than those with atretic ears below this threshold volume (P = .01 and P = .006, respectively). Atretic middle ear volume incorporated into the J-score offered the best association with normal postoperative hearing (speech reception threshold ≤30 dB; OR = 37.8, P = .01). Middle ear volume, calculated in a semiautomated fashion, is predictive of postsurgical audiometric outcomes, both independently and in combination with the conventional J-score. © 2018 by American Journal of Neuroradiology.
Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend; Dørup, Jens; Lous, Jørgen
2016-01-01
We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore if the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care center. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents and daycare teachers completed questionnaires on the children's ability to hear and understand speech. As most of the variables were not assessed using interval scales, non-parametric statistics (Goodman-Kruskal's gamma) were used for analyzing associations with the Galker test score. For comparisons, analysis of variance (ANOVA) was used. Interrelations were adjusted for using a non-parametric graphic model. In unadjusted analyses, the Galker test was associated with gender, age group, language development (Reynell revised scale), audiometry, and tympanometry. The Galker score was also associated with the parents' and day care teachers' reports on the children's vocabulary, sentence construction, and pronunciation. Type B tympanograms were associated with a mean hearing 5-6dB below that of than type A, C1, or C2. In the graphic analysis, Galker scores were closely and significantly related to Reynell test scores (Gamma (G)=0.35), the children's age group (G=0.33), and the day care teachers' assessment of the children's vocabulary (G=0.26). The Galker test of speech reception in noise appears promising as an easy and quick tool for evaluating preschool children's understanding of spoken words in noise, and it correlated well with the day care teachers' reports and less with the parents' reports. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
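Goodman-Kruskal's gamma, used above for the ordinal associations, is the excess of concordant over discordant pairs relative to their total, with ties ignored. A direct O(n²) sketch:

    def goodman_kruskal_gamma(x, y):
        """Gamma = (C - D) / (C + D) over concordant (C) and discordant (D)
        pairs of observations; tied pairs are ignored."""
        concordant = discordant = 0
        for i in range(len(x)):
            for j in range(i + 1, len(x)):
                s = (x[i] - x[j]) * (y[i] - y[j])
                if s > 0:
                    concordant += 1
                elif s < 0:
                    discordant += 1
        return (concordant - discordant) / (concordant + discordant)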
Stiles, Derek J; Bentler, Ruth A; McGregor, Karla K
2012-06-01
To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). School-age CHA and age-matched children with normal hearing (CNH) repeated words and nonwords, learned novel words, and completed a standardized receptive vocabulary test. Analyses of covariance allowed comparison of the 2 groups. For CHA, regression analyses determined whether SII held predictive value over and beyond PTA. CHA demonstrated poorer performance than CNH on tests of word and nonword repetition and receptive vocabulary. Groups did not differ on word learning. Aided SII was a stronger predictor of word and nonword repetition and receptive vocabulary than PTA. After accounting for PTA, aided SII remained a significant predictor of nonword repetition and receptive vocabulary. Despite wearing hearing aids, CHA performed more poorly on 3 of 4 lexical measures. Individual differences among CHA were predicted by aided SII. Unlike PTA, aided SII incorporates hearing aid amplification characteristics and speech-frequency weightings and may provide a more valid estimate of the child's access to and ability to learn from auditory input in real-world environments.
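At its core, the SII is a band-importance-weighted sum of band audibilities. The sketch below illustrates only that weighted sum, with uniform importance weights, a 30-dB speech dynamic range, and speech peaks assumed 15 dB above band RMS; it is not the full ANSI S3.5 procedure behind the clinical aided SII.

    import numpy as np

    def simplified_sii(speech_db, floor_db, importance=None):
        """Band-importance-weighted audibility in the spirit of the SII.

        speech_db: per-band speech RMS levels (dB); floor_db: per-band
        effective masked or quiet thresholds (dB). Audibility per band is
        the portion of a 30 dB speech range above the floor, clipped to [0, 1].
        """
        speech = np.asarray(speech_db, float)
        floor = np.asarray(floor_db, float)
        if importance is None:
            importance = np.full(speech.shape, 1.0 / speech.size)
        audibility = np.clip((speech + 15.0 - floor) / 30.0, 0.0, 1.0)
        return float(np.sum(importance * audibility))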
Selective spatial attention modulates bottom-up informational masking of speech
Carlile, Simon; Corkhill, Caitlin
2015-01-01
To hear out a conversation against other talkers, listeners must overcome energetic and informational masking. Although largely attributed to top-down processes, informational masking has also been demonstrated using unintelligible speech and amplitude-modulated maskers, suggesting bottom-up processes. We examined the role of speech-like amplitude modulations in informational masking using a spatial masking release paradigm. Separating a target talker from two masker talkers produced a 20 dB improvement in speech reception threshold, 40% of which was attributed to a release from informational masking. When the across-frequency temporal modulations in the masker talkers are decorrelated, the speech becomes unintelligible, although the within-frequency modulation characteristics remain identical. Used as a masker as above, this signal produced informational masking that accounted for 37% of the spatial unmasking. This unintelligible and highly differentiable masker is unlikely to involve top-down processes. These data provide strong evidence of bottom-up masking involving speech-like, within-frequency modulations and show that this presumably low-level process can be modulated by selective spatial attention. PMID:25727100
ERIC Educational Resources Information Center
Peter, Beate
2012-01-01
This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation) in 11 children with moderate to severe SSD…
Hochmuth, Sabine; Jürgens, Tim; Brand, Thomas; Kollmeier, Birger
2015-01-01
To investigate talker- and language-specific aspects of speech intelligibility in noise and reverberation, matrix sentence tests that are highly comparable across languages were used. Matrix sentences spoken by German/Russian and German/Spanish bilingual talkers were recorded. These sentences were used to measure speech reception thresholds (SRTs) with native listeners in the respective languages in different listening conditions (stationary and fluctuating noise, multi-talker babble, and a reverberant speech-in-noise condition). Participants were four German/Russian and four German/Spanish bilingual talkers, and 20 native German-speaking, 10 native Russian-speaking, and 10 native Spanish-speaking listeners. Across-talker SRT differences of up to 6 dB were found for both groups of bilinguals. SRTs of German/Russian bilingual talkers were the same in both languages. SRTs of German/Spanish bilingual talkers were higher when they talked in Spanish than when they talked in German. The benefit from listening in the gaps was similar across all languages. The detrimental effect of reverberation was larger for Spanish than for German and Russian. Within the limitations set by the number and slight accentedness of the talkers and other possible confounding factors, talker- and test-condition-dependent differences were isolated from the language effect: Russian and German exhibited similar intelligibility in noise and reverberation, whereas Spanish was more impaired in these situations.
Polat, Zahra; Bulut, Erdoğan; Ataş, Ahmet
2016-09-01
Spoken word recognition and speech perception tests in quiet are used routinely to assess the benefit that child and adult cochlear implant users receive from their devices. Cochlear implant users generally perform at a high level on these materials, as they are able to achieve good speech perception in quiet situations. Although these materials provide valuable information about Cochlear Implant (CI) users' performance under optimal listening conditions, they do not give realistic information about performance under the adverse listening conditions of everyday environments. The aim of this study was to assess the speech intelligibility performance of postlingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Cross-sectional study. Thirty postlingual adult implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. The results show a correlation between subjects' Pure Tone Average (PTA) values and Matrix Test Speech Reception Threshold (SRT) values in quiet; hence, it is possible to estimate the PTA values of CI users with the Matrix Test as well. However, no correlation was found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries administered in quiet conditions. The Matrix Test can be used to assess the benefit CI users obtain from their systems in everyday life, since it uses material resembling what they encounter every day and can quantify the difficulty in speech discrimination they must cope with in noisy conditions.
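Matrix-style tests estimate the SRT adaptively: the SNR steps down after a correct response and up after an error, so the track converges on 50% intelligibility. A minimal simulation sketch follows; the logistic slope, step size, and averaging rule are assumptions for illustration, not the published Turkish Matrix procedure.

```python
import random

def simulate_adaptive_srt(true_srt=-7.0, trials=30, start_snr=0.0, step=2.0):
    """1-up/1-down staircase on a simulated listener; targets 50% correct."""
    snr, track = start_snr, []
    for _ in range(trials):
        # logistic psychometric function centered on the true SRT
        p_correct = 1.0 / (1.0 + 10 ** (-0.15 * 10 * (snr - true_srt)))
        correct = random.random() < p_correct
        track.append(snr)
        snr += -step if correct else step
    return sum(track[-10:]) / 10   # crude estimate: mean of the last 10 trials

print(f"estimated SRT ~ {simulate_adaptive_srt():.1f} dB SNR")
```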
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD), and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study; it is a validated test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p = 0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p < 0.01). The ISD group needed a higher threshold to discriminate between speech sounds. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lexical and phonological variability in preschool children with speech sound disorder.
Macrae, Toby; Tyler, Ann A; Lewis, Kerry E
2014-02-01
The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.
Normal Aspects of Speech, Hearing, and Language.
ERIC Educational Resources Information Center
Minifie, Fred. D., Ed.; And Others
This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…
Naylor, Graham
2016-07-01
Adaptive Speech Reception Threshold in noise (SRTn) measurements are often used to make comparisons between alternative hearing aid (HA) systems. Such measurements usually do not constrain the signal-to-noise ratio (SNR) at which testing takes place. Meanwhile, HA systems increasingly include nonlinear features that operate differently in different SNRs, and listeners differ in their inherent SNR requirements. To show that SRTn measurements, as commonly used in comparisons of alternative HA systems, suffer from threats to their validity, to illustrate these threats with examples of potentially invalid conclusions in the research literature, and to propose ways to tackle these threats. An examination of the nature of SRTn measurements in the context of test theory, modern nonlinear HAs, and listener diversity. Examples from the audiological research literature were used to estimate typical interparticipant variation in SRTn and to illustrate cases where validity may have been compromised. There can be no doubt that SRTn measurements, when used to compare nonlinear HA systems, suffer in principle from threats to their internal and external/ecological validity. Interactions between HA nonlinearities and SNR, and interparticipant differences in inherent SNR requirements, can act to generate misleading results. In addition, SRTn may lie at an SNR outside the range in which the HA system is designed or expected to operate. Although the extent of invalid conclusions in the literature is difficult to evaluate, studies were nevertheless identified where the risk of each form of invalidity is significant. Reliable data on ecological SNRs are becoming available, so that ecological validity can be assessed. Methodological developments that can reduce the risk of invalid conclusions include variations on the SRTn measurement procedure itself, manipulations of stimulus or scoring conditions to place SRTn in an ecologically relevant range, and design and analysis approaches that take account of interparticipant differences. American Academy of Audiology.
Stam, Mariska; Smits, Cas; Twisk, Jos W R; Lemke, Ulrike; Festen, Joost M; Kramer, Sophia E
2015-01-01
The first aim of the present study was to determine the change in speech recognition in noise over a period of 5 years in participants aged 18 to 70 years at baseline. The second aim was to investigate whether age, gender, educational level, the initial level of speech recognition in noise, and reported chronic conditions were associated with a change in speech recognition in noise. The baseline and 5-year follow-up data of 427 participants with and without hearing impairment participating in the National Longitudinal Study on Hearing (NL-SH) were analyzed. The ability to recognize speech in noise was measured twice with the online National Hearing Test, a digit-triplet speech-in-noise test. Speech-reception-threshold in noise (SRTn) scores were calculated, corresponding to 50% speech intelligibility. Unaided SRTn scores obtained with the same transducer (headphones or loudspeakers) at both test moments were included. Changes in SRTn were calculated as a raw shift (T1 - T0) and as a shift adjusted for regression towards the mean. Paired t tests and multivariable linear regression analyses were applied. The mean increase (i.e., deterioration) in SRTn was 0.38 dB signal-to-noise ratio (SNR) over 5 years (p < 0.001). Results of the multivariable regression analyses showed that the age group of 50 to 59 years had a significantly larger deterioration in SRTn than the age group of 18 to 39 years (raw shift, adjusted for initial speech recognition level: beta = 0.64 dB SNR, 95% confidence interval 0.07-1.22, p = 0.028; adjusted shift: beta = 0.82 dB SNR, 95% confidence interval 0.27-1.34, p = 0.004). Gender, educational level, and the number of chronic conditions were not associated with a change in SRTn over time. No significant differences in the increase of SRTn were found between the initial levels of speech recognition (i.e., good, insufficient, or poor) once the phenomenon of regression towards the mean was taken into account. The study results indicate that deterioration of speech recognition in noise over 5 years can be detected even in adults aged 18 to 70 years. This rather small numeric change may nevertheless have a relevant impact on an individual's ability to understand speech in everyday life.
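One standard way to adjust a change score for regression towards the mean is to predict the follow-up score from baseline via the test-retest correlation and take the residual; the exact NL-SH computation may differ, so treat this as an assumed variant demonstrated on simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)
t0 = rng.normal(-5.5, 2.0, 427)               # baseline SRTn, dB SNR
t1 = t0 + 0.38 + rng.normal(0, 1.0, 427)      # follow-up, ~0.38 dB worse

raw_shift = t1 - t0
r = np.corrcoef(t0, t1)[0, 1]
expected_t1 = t1.mean() + r * (t1.std() / t0.std()) * (t0 - t0.mean())
adjusted_shift = t1 - expected_t1             # residual shift, free of RTM

# Raw shifts correlate with baseline (the RTM artifact); adjusted shifts do not.
print(f"corr(raw shift, baseline)      = {np.corrcoef(raw_shift, t0)[0, 1]:.2f}")
print(f"corr(adjusted shift, baseline) = {np.corrcoef(adjusted_shift, t0)[0, 1]:.2f}")
```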
Selective auditory attention in adults: effects of rhythmic structure of the competing language.
Reel, Leigh Ann; Hicks, Candace Bourland
2012-02-01
The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.
Baumgärtel, Regina M; Hu, Hongmei; Krawczyk-Becker, Martin; Marquardt, Daniel; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Bomke, Katrin; Plotz, Karsten; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users. © The Author(s) 2015.
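The core of an MVDR beamformer is the closed-form weight vector w = R⁻¹d / (dᴴR⁻¹d), which passes the target direction undistorted while minimizing output noise power. A toy two-microphone, single-frequency-bin sketch follows; the geometry and noise covariance are assumptions, not the algorithms evaluated in the study.

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """w = R^-1 d / (d^H R^-1 d); satisfies the distortionless constraint w^H d = 1."""
    rinv_d = np.linalg.solve(noise_cov, steering)
    return rinv_d / (steering.conj() @ rinv_d)

f, c, spacing = 1000.0, 343.0, 0.15           # Hz, m/s, mic spacing in m
delay = spacing / c                           # endfire target: maximum inter-mic delay
d = np.array([1.0, np.exp(-2j * np.pi * f * delay)])   # steering vector
R = np.array([[1.0, 0.3], [0.3, 1.0]], dtype=complex)  # toy noise covariance

w = mvdr_weights(R, d)
print("distortionless check (should be 1):", np.round(w.conj() @ d, 3))
```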
Frequency-specific hearing outcomes in pediatric type I tympanoplasty.
Kent, David T; Kitsko, Dennis J; Wine, Todd; Chi, David H
2014-02-01
Middle ear disease is the primary cause of hearing loss in children and has a significant impact on language development and academic performance. Multiple prognostic factors have previously been examined, but there are few published data regarding frequency-specific hearing outcomes. To examine the relationship between type I tympanoplasty in a pediatric population and frequency-specific hearing changes, as well as the relationship between several prognostic factors and graft retention. Retrospective medical chart review (February 2006 to October 2011) of 492 consecutive pediatric otolaryngology patients undergoing type I tympanoplasty for tympanic membrane (TM) perforation of any etiology at a tertiary-care pediatric otolaryngology practice. Type I tympanoplasty. Preoperative and postoperative audiometric data were collected for patients undergoing successful TM repair. It was hypothesized before data collection that conductive hearing would improve at all frequencies with no significant change in sensorineural hearing. Data collected included air conduction at 250 to 8000 Hz, speech reception thresholds, bone conduction at 500 to 4000 Hz, and air-bone gap at 500 to 4000 Hz. Demographic data obtained included sex, age, size, mechanism, and location of perforation, and operative repair technique. Of 492 patients, 320 were excluded; results were thus examined for 172 patients. Surgery was successful for 73.8% of patients. Perforation size was significantly associated with repair success (mean [SD] perforation size, 38.6% [15.3%] in the surgical success group vs 31.4% [15.0%] in the failure group; P < .01); however, mean (SD) age (9.02 [3.89] years [surgical success] vs 8.52 [3.43] years [surgical failure]; P > .05) and repair technique (medial [73.08%] vs lateral [76.47%] graft success; P > .99) were not. Air conduction significantly improved from 250 to 2000 Hz (P < .001), did not significantly improve at 4000 Hz (P = .08), and showed a nonsignificant decline at 8000 Hz (P = .12). The speech reception threshold significantly improved (20 vs 15 dB; P < .001). This large review found an association of TM perforation size with surgical success; improvements in speech reception threshold, air conduction at 250 to 2000 Hz, and air-bone gap at 500 to 2000 Hz; and worsening bone conduction at 4000 Hz. Patients with high-frequency hearing loss due to TM perforation should not anticipate significant recovery from type I tympanoplasty. Hearing loss at higher frequencies may require postoperative hearing rehabilitation.
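The frequency-specific analysis reduces to paired pre- versus post-operative comparisons at each audiometric frequency. A minimal sketch on simulated thresholds (not the chart-review data), assuming scipy:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
freqs = [250, 500, 1000, 2000, 4000, 8000]      # Hz
pre = rng.normal(35, 8, (172, len(freqs)))      # pre-op air conduction, dB HL
true_gain = np.array([8, 8, 7, 6, 2, -1])       # assumed mean change per band
post = pre - true_gain + rng.normal(0, 6, pre.shape)

for i, f in enumerate(freqs):
    stat, p = ttest_rel(pre[:, i], post[:, i])  # paired t-test per frequency
    change = np.mean(pre[:, i] - post[:, i])
    print(f"{f:>5} Hz: mean improvement = {change:5.1f} dB, p = {p:.3g}")
```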
Audiologic and subjective evaluation of Baha® Attract device.
Pérez-Carbonell, Tomàs; Pla-Gil, Ignacio; Redondo-Martínez, Jaume; Morant-Ventura, Antonio; García-Callejo, Francisco Javier; Marco-Algarra, Jaime
We included 9 patients implanted with the Baha® Attract. All patients were evaluated by free-field tonal audiometry, free-field verbal audiometry, and free-field verbal audiometry with background noise; all tests were performed with and without the device. To evaluate the subjective component of implantation, we used the Glasgow Benefit Inventory (GBI) and the Abbreviated Profile of Hearing Aid Benefit (APHAB). Aided assessment showed average auditory thresholds of 35.8 dB, an improvement of 25.8 dB over the unaided situation. Speech reception thresholds were 37 dB with the Baha® Attract, an improvement of 23 dB. Maximum discrimination thresholds showed an average gain of 60 dB with the device. The Baha® Attract achieves auditory improvements in patients for whom it is correctly indicated, with a consequent positive subjective evaluation. This study shows the attenuation effect of transcutaneous transmission, which prevents the device from achieving greater improvements. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
Rader, T; Fastl, H; Baumann, U
2017-03-01
After implantation of cochlear implants with hearing preservation for combined electric acoustic stimulation (EAS), the residual acoustic hearing conveys fundamental speech frequency information in the low-frequency range. With the help of an acoustic simulation of EAS hearing perception, the impact of the frequency and level fine structure of speech signals can be systematically examined. The aim of this study was to measure the speech reception threshold (SRT) under various noise conditions with an acoustic EAS simulation, varying the frequency and level information of the fundamental frequency f0 of speech, and to determine to what extent the SRT is impaired by modification of the f0 fine structure. Using partial tone time pattern analysis, an acoustic EAS simulation of the speech material from the Oldenburg sentence test (OLSA) was generated, and the f0 contour of the speech material was determined. Subsequently, either the frequency or the level of f0 was fixed in order to remove one of the two fine-contour cues of the speech signal. The processed OLSA sentences were used to determine the SRT in background noise under various test conditions: the conditions "f0 fixed frequency" and "f0 fixed level" were each tested in amplitude-modulated background noise and in continuous background noise. A total of 24 subjects with normal hearing participated in the study. The SRT for the condition "f0 fixed frequency" was more favorable, at 2.7 dB in continuous noise and 0.8 dB in modulated noise, than for the condition "f0 fixed level", at 3.7 dB and 2.9 dB, respectively. In the simulation of speech perception with cochlear implants and an acoustic component, the level information of the fundamental frequency had a stronger impact on speech intelligibility than the frequency information. This simulation method allows investigation of how various transmission parameters of cochlear implants influence speech intelligibility in subjects with normal hearing.
NASA Astrophysics Data System (ADS)
Samardzic, Nikolina
The effectiveness of in-vehicle speech communication can be a good indicator of the perception of overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including the wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown, and the metrics have not been compared in the literature against the subjective perception of speech intelligibility using, for example, the same speech material. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry, utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified within the framework of the newly developed speech intelligibility evaluation method. Lastly, illustrating the significance of evaluating speech intelligibility in an applicable listening environment, it was found that jury test participants required, on average, an approximately 3 dB increase in the sound pressure level of the speech material while driving and listening, compared to just listening, for equivalent speech intelligibility performance on the same listening task.
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture had not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, an auditory-only cueing phrase spoken by one of four talkers was first selected and presented from one of four locations. Shortly afterward, a target sentence was presented, either audio-visual or, in another test configuration, audio-only, spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better performing ear alone was 9 dB for the audio-visual mode and 3 dB for audition alone. Comparison of bilateral performance for the audio-visual and audition-alone configurations showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations, and it indicates greater everyday speech benefits, and a more favorable cost-benefit ratio, than estimated to date.
Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas
2018-04-24
Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome with deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
Result on speech perception after conversion from Spectra® to Freedom®.
Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2012-04-01
New technology in the Freedom® speech processor for cochlear implants was developed to improve the processing of incoming acoustic sound; this applies not only to new users but also to previous generations of cochlear implants. Our aim was to identify the contribution of this technology for Nucleus 22® users on speech perception tests in silence and in noise, and on audiometric thresholds. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® technology applied to the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds, improving the speech perception performance and audiometric thresholds of patients with the Nucleus 22®.
Tchoungui Oyono, Lilly; Pascoe, Michelle; Singh, Shajila
2018-05-17
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon. A total of 460 participants aged 3-5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist. Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%. Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.
Ching, Teresa Yc; Zhang, Vicky W; Flynn, Christopher; Burns, Lauren; Button, Laura; Hou, Sanna; McGhie, Karen; Van Buynder, Patricia
2017-07-07
We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions: speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.
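Spatial release from masking is simply the SRT difference between the collocated and spatially separated conditions. A minimal sketch with illustrative values (not LOCHI data):

```python
def spatial_release_from_masking(srt_collocated_db, srt_separated_db):
    """SRM in dB; positive values mean spatial separation helped."""
    return srt_collocated_db - srt_separated_db

# Hypothetical example: a child with hearing loss needs a better SNR
# overall, but shows SRM of similar magnitude to a normal-hearing peer.
print(spatial_release_from_masking(-2.0, -6.5))    # 4.5 dB
print(spatial_release_from_masking(-8.0, -12.5))   # 4.5 dB
```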
Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W
2014-07-01
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
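An STM stimulus of the kind described can be built from many random-phase tones within an octave band, each modulated by a ripple that drifts across time and log-frequency. A rough sketch with assumed parameters:

```python
import numpy as np

def stm_noise(fs=16000, dur=1.0, fc=1000.0, rate=4.0, density=2.0, depth=0.5):
    """Octave-band noise carrying a spectrotemporal ripple:
    rate in Hz (temporal), density in cycles/octave (spectral)."""
    t = np.arange(int(fs * dur)) / fs
    freqs = fc * 2 ** np.linspace(-0.5, 0.5, 100)   # 100 tones across one octave
    rng = np.random.default_rng(5)
    x = np.zeros_like(t)
    for f in freqs:
        octaves = np.log2(f / fc)
        # sinusoidal envelope drifting across time and log-frequency
        env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * octaves))
        x += env * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return x / np.max(np.abs(x))

ripple = stm_noise(rate=4.0, density=2.0)   # e.g., the 4-Hz, 2-c/o condition
```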
Harrison, Linda J; McLeod, Sharynne
2010-04-01
To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as being predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern and (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.
Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms.
Gover, Bradford N; Bradley, John S
2004-12-01
Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a better indicator of security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. In listening tests, 19 subjects rated the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise and simple A-weighted level differences are both shown to be well correlated with the latter two thresholds (cadence and audibility), but not with intelligibility.
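The measure reduces to a weighted sum of band level differences clipped from below. A minimal sketch; the band weights, clipping floor, and example levels are placeholders rather than the published calibration.

```python
import numpy as np

def clipped_band_snr(speech_db, noise_db, weights, clip_floor=-32.0):
    """Weighted sum of one-third-octave-band speech-minus-noise levels,
    each clipped at clip_floor dB before weighting."""
    snr = np.maximum(np.asarray(speech_db) - np.asarray(noise_db), clip_floor)
    w = np.asarray(weights) / np.sum(weights)
    return float(np.sum(w * snr))

# Hypothetical transmitted-speech and ambient-noise band levels (dB)
speech = [30, 28, 25, 20, 14, 8]
noise = [33, 34, 33, 32, 30, 28]
weights = [1, 2, 3, 3, 2, 1]    # mid-frequency bands weighted more heavily
print(f"security index = {clipped_band_snr(speech, noise, weights):.1f} dB")
```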
Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim
2017-02-01
This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit of receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of the auditory system, and (3) the upper frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper frequency limit of preserved acoustic hearing is less influential on the speech intelligibility of EA-listeners in stationary noise. The model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk
2010-12-01
Presently, there are only a few studies examining the benefits of fine structure information in coding strategies. Against this background, this study aimed to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested using the Adaptive Auditory Speech Test, which measures the speech reception threshold (SRT) in quiet and in noise, at each test interval. The children were also asked to rate their satisfaction and coding strategy preference on a Visual Analogue Scale when listening to speech and a pop song. However, since not all tests could be performed in a single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant SRT difference of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP also proved better than CIS+, with SRTs 5.2 dB lower. Speech in noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Satisfaction and coding strategy preference ratings likewise showed that the FSP and HDCIS strategies were preferred over CIS+ when listening to speech and music, and FSP over HDCIS when listening to speech. This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance, and even improve their performance with FSP after a short period of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Influence of contralateral acoustic hearing on adult bimodal outcomes after cochlear implantation.
Plant, Kerrie; van Hoesel, Richard; McDermott, Hugh; Dawson, Pamela; Cowan, Robert
2016-08-01
To examine post-implantation benefit and the time taken to acclimate to the cochlear implant for adult candidates with more hearing in the contralateral, non-implanted ear than has previously been considered within local candidacy guidelines. Prospective, within-subject experimental design. Forty postlingually hearing-impaired adult subjects with a contralateral-ear word score in quiet ranging from 27% to 100% (median 67%). Post-implantation improvements of 2.4 dB and 4.0 dB were observed on a sentence-in-coincident-babble test at presentation levels of 65 and 55 dB SPL, respectively, and a 2.1 dB spatial release from masking (SRM) advantage was observed when the noise location favoured the implanted side. Significant post-operative group mean changes of between 2.1 and 3.0 were observed on the sub-scales of the Speech, Spatial and Qualities of Hearing (SSQ) questionnaire. The degree of post-implantation speech reception threshold (SRT) benefit on the coincident babble test, and on perception of soft speech and sounds in the environment, was greater for subjects with less contralateral hearing. The degree of contralateral acoustic hearing did not affect the time taken to acclimate to the device. The findings from this study support cochlear implantation for candidates with substantial acoustic hearing in the contralateral ear, and provide guidance regarding post-implantation expectations.
Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan
2009-06-01
The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, the children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (ages 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant Nucleus Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test, and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and the aided thresholds. Group mean CNC scores at 50 dB SPL with the 40-dB IIDR were significantly higher (p < 0.001) than with the 30-dB IIDR. Group mean CNC scores at 60 dB SPL, loudness ratings, and the 50% correct signal-to-noise ratios (SNR-50) for the Bamford-Kowal-Bench Speech in Noise Test were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz as well as higher Speech Intelligibility Indices afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising comfort at higher levels of speech sounds or sentence recognition in noise.
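At its core, an SII-style calculation weights each band's audibility (speech level relative to threshold, mapped onto a 30-dB dynamic range) by a band-importance function. The sketch below is a simplification of that idea; the real ANSI S3.5 procedure involves more terms, and every number here is a placeholder.

```python
import numpy as np

def simple_sii(ltass_db, aided_thresholds_db, importance):
    """Band audibility (0-1 over a 30-dB speech range) times band importance."""
    sensation = np.asarray(ltass_db) - np.asarray(aided_thresholds_db)
    audibility = np.clip(sensation / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance) * audibility))

freqs = [250, 500, 1000, 2000, 4000, 6000]         # Hz
ltass = [52, 54, 50, 44, 38, 34]                   # speech spectrum, dB SPL
thresholds = [30, 28, 32, 36, 45, 50]              # aided sound-field, dB SPL
importance = [0.10, 0.14, 0.22, 0.26, 0.18, 0.10]  # band importance, sums to 1
print(f"SII ~ {simple_sii(ltass, thresholds, importance):.2f}")
```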
Speech and language development in cognitively delayed children with cochlear implants.
Holt, Rachael Frush; Kirk, Karen Iler
2005-04-01
The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed across blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports, or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. However, this benefit may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits can be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level.
The development and validation of the Closed-set Mandarin Sentence (CMS) test.
Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng
2017-09-01
Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of response. The mean SRT was -7.9 (SE = 0.41) and -8.1 (SE = 0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p = 0.74) despite random sentences being generated for each run and each subject. The results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
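The "each word presented only twice per 20-sentence run" constraint can be met by dealing each category's ten words out twice and re-drawing if any full sentence repeats. A sketch with placeholder word lists standing in for the Mandarin materials:

```python
import random

def generate_run(categories, sentences=20, per_word=2, seed=None):
    """Random run in which every word appears exactly per_word times."""
    rng = random.Random(seed)
    while True:
        # Shuffle a pool containing each word per_word times, per category.
        pools = [rng.sample(words * per_word, len(words) * per_word)
                 for words in categories]
        run = [tuple(pool[i] for pool in pools) for i in range(sentences)]
        if len(set(run)) == sentences:   # reject runs with duplicate sentences
            return run

categories = [[f"{cat}{i}" for i in range(10)]
              for cat in ("Name", "Verb", "Number", "Color", "Fruit")]
for sentence in generate_run(categories, seed=1)[:3]:
    print(" ".join(sentence))
```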
Hearing gain with a BAHA test-band in patients with single-sided deafness.
Kim, Do-Youn; Kim, Tae Su; Shim, Byoung Soo; Jin, In Suk; Ahn, Joong Ho; Chung, Jong Woo; Yoon, Tae Hyun; Park, Hong Ju
2014-01-01
It is assumed that preoperative use of a bone-anchored hearing aid (BAHA) test-band will give a patient lower gain compared to the real post-operative gain because of the reduction of energy through the scalp when using a test-band. Hearing gains using a BAHA test-band were analyzed in patients with unilateral hearing loss. Nineteen patients with unilateral sensorineural hearing loss were enrolled. A test-band, connected to a BAHA Intenso at full-on gain, was placed on the mastoid. Conventional air-conduction (AC) pure-tone averages (PTAs), sound-field PTAs, and speech reception thresholds (SRTs) were obtained in conditions A (better ear unoccluded), B (better ear plugged), and C (better ear plugged, test-band on the poorer mastoid). Air-conduction PTAs of the poorer and better ears were 91 ± 19 and 18 ± 8 dB HL. Sound-field PTAs in condition B were higher than those in condition A (54 vs. 26 dB HL), meaning that earplugs blocked sound through the better ears by up to roughly 54 dB HL. The aided PTAs (24 ± 6 dB HL) in condition C were similar to those of the better ears in condition A (26 ± 9 dB HL), though condition C showed higher thresholds at 500 Hz and lower thresholds at 1 and 2 kHz compared to condition A. The hearing thresholds using a test-band were similar to published results for BAHA users with the volume set to the most comfortable level (MCL). Our findings showed that a BAHA test-band on the poorer ear could transmit sound to the cochlea about as well as the better ear can hear. The increased functional gain at 1 and 2 kHz reflects the technical characteristics of the BAHA processor. The reduction of energy through the scalp when using a test-band seems to be offset by the higher output obtained by setting the volume to full-on gain and using a high-powered speech processor. Preoperative hearing gains using a test-band with full-on gain seem to be similar to the post-operative gains of BAHA users with the volume set to MCL. © 2013.
Baker, Shaun; Centric, Aaron; Chennupati, Sri Kiran
2015-10-01
Bone-anchored hearing devices are an accepted treatment option for hearing restoration in various types of hearing loss. Traditional devices have a percutaneous abutment for attachment of the sound processor that contributes to a high complication rate. Previously, our institution reported on the Sophono (Boulder, CO, USA) abutment-free system, which produced audiologic results similar to devices with abutments. Recently, Cochlear Americas (Centennial, CO, USA) released an abutment-free bone-anchored hearing device, the BAHA Attract. In contrast to the Sophono implant, the BAHA Attract utilizes an osseointegrated implant. This study aims to demonstrate patient benefit from abutment-free devices, compare the results of the two abutment-free devices, and examine complication rates. A retrospective chart review was conducted for the first eleven Sophono-implanted patients and the first six patients implanted with the BAHA Attract at our institution. We then analyzed patient demographics, audiometric data, clinical course, and outcomes. Average improvement for the BAHA Attract in pure-tone average (PTA) and speech reception threshold (SRT) was 41 dB hearing level (dB HL) and 56 dB HL, respectively. Considering all frequencies, the BAHA Attract mean improvement was 39 dB HL (range 32-45 dB HL). The Sophono average improvement in PTA and SRT was 38 dB HL and 39 dB HL, respectively. The mean improvement with Sophono across all frequencies was 34 dB HL (range 24-43 dB HL). Significant improvements in both pure-tone averages and speech reception thresholds were achieved with both devices. In a direct comparison of the two devices using the chi-square test, the PTA and SRT data did not show a statistically significant difference (p-values 0.68 and 0.56, respectively). The complication rate for these abutment-free devices is lower than that of devices featuring the percutaneous abutment, although more studies are needed to further assess this potential advantage. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Buss, Emily; Leibold, Lori J.; Porter, Heather L.; Grose, John H.
2017-01-01
Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5–16 years) and adults in a one- or two-talker masker. Speech maskers were 60 dB sound pressure level (SPL), and they were either presented alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that the development of speech-in-speech recognition is likely impacted by the development of both the ability to perceptually isolate the target from masker speech and the ability to recognize speech based on sparse cues. PMID:28464682
Mehraei, Golbarg; Gallun, Frederick J.; Leek, Marjorie R.; Bernstein, Joshua G. W.
2014-01-01
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4–32 Hz), spectral ripple density [0.5–4 cycles/octave (c/o)] and carrier center frequency (500–4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4–12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements. PMID:24993215
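As an illustration of the class of stimuli involved, the sketch below generates an octave-band noise with a joint spectrotemporal (moving-ripple) modulation imposed in time and log-frequency. It is a simplified synthesis under assumed parameters (component count, depth), not the study's exact stimulus generation.

```python
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs

fc = 1000.0      # centre of the octave-wide carrier band (Hz)
rate = 4.0       # temporal modulation rate (Hz)
density = 2.0    # spectral ripple density (cycles/octave)
depth = 0.5      # modulation depth (0..1); all values assumed

# Octave-band noise built from log-spaced random-phase tones
rng = np.random.default_rng(5)
n_comp = 200
freqs = fc * 2.0 ** rng.uniform(-0.5, 0.5, n_comp)
phases = rng.uniform(0.0, 2.0 * np.pi, n_comp)

sig = np.zeros_like(t)
for f, ph in zip(freqs, phases):
    # Moving ripple: joint sinusoidal modulation in time and log-frequency
    env = 1.0 + depth * np.sin(2.0 * np.pi * (rate * t + density * np.log2(f / fc)))
    sig += env * np.cos(2.0 * np.pi * f * t + ph)
sig /= np.max(np.abs(sig))   # normalize to full scale
```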
ERIC Educational Resources Information Center
Wald, Mike
2006-01-01
The potential use of Automatic Speech Recognition to assist receptive communication is explored. The opportunities and challenges that this technology presents students and staff to provide captioning of speech online or in classrooms for deaf or hard of hearing students and assist blind, visually impaired or dyslexic learners to read and search…
Di Berardino, F; Tognola, G; Paglialonga, A; Alpini, D; Grandori, F; Cesarani, A
2010-08-01
To assess whether different compact disk recording protocols, used to prepare speech test material, affect the reliability and comparability of speech audiometry testing. We conducted acoustic analysis of compact disks used in clinical practice, to determine whether speech material had been recorded using similar procedures. To assess the impact of different recording procedures on speech test outcomes, normal hearing subjects were tested using differently prepared compact disks, and their psychometric curves compared. Acoustic analysis revealed that speech material had been recorded using different protocols. The major difference was the gain between the levels at which the speech material and the calibration signal had been recorded. Although correct calibration of the audiometer was performed for each compact disk before testing, speech recognition thresholds and maximum intelligibility thresholds differed significantly between compact disks (p < 0.05), and were influenced by the gain between the recording level of the speech material and the calibration signal. To ensure the reliability and comparability of speech test outcomes obtained using different compact disks, it is recommended to check for possible differences in the recording gains used to prepare the compact disks, and then to compensate for any differences before testing.
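The compensation the authors recommend amounts to measuring the level difference between the speech material and the calibration signal on each disk and offsetting presentation levels by that difference. A minimal sketch, with synthetic signals standing in for audio read from the disks:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
# Synthetic stand-ins for audio read from a disk (e.g., via soundfile.read):
cal_tone = 0.1 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)  # 1-kHz cal signal
speech = rng.normal(0.0, 0.05, fs)                              # speech material

def rms_dB(x):
    """RMS level in dB re full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

gain = rms_dB(speech) - rms_dB(cal_tone)
print(f"speech-to-calibration gain: {gain:+.1f} dB")
# Computing this gain for each disk and attenuating the presentation level
# by the difference aligns the effective speech levels before testing.
```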
Using Speech Recall in Hearing Aid Fitting and Outcome Evaluation Under Ecological Test Conditions.
Lunner, Thomas; Rudner, Mary; Rosenbom, Tove; Ågren, Jessica; Ng, Elaine Hoi Ning
2016-01-01
In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal-to-noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL), was introduced and evaluated in two experiments. The objective of experiment 1 was to determine whether the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
Echolalia and comprehension in autistic children.
Roberts, J M
1989-06-01
The research reported in this paper investigates the phenomenon of echolalia in the speech of autistic children by examining the relationship between the frequency of echolalia and receptive language ability. The receptive language skills of 10 autistic children were assessed, and spontaneous speech samples were recorded. Analysis of these data showed that those children with poor receptive language skills produced significantly more echolalic utterances than those children whose receptive skills were more age-appropriate. Children who produced fewer echolalic utterances, and had more advanced receptive language ability, evidenced a higher proportion of mitigated echolalia. The most common type of mitigation was echo plus affirmation or denial.
Fan, Yue; Zhang, Ying; Wang, Pu; Wang, Zhen; Zhu, Xiaoli; Yang, Hua; Chen, Xiaowei
2014-04-01
The bone-anchored hearing device (BAHD) was not introduced in China until 2010. To our knowledge, this is the first study to assess the efficacy of BAHDs in Chinese Mandarin-speaking patients with bilateral aural atresia. To evaluate the speech recognition of Chinese Mandarin-speaking patients with BAHDs as well as patients' satisfaction using 2 questionnaires. A retrospective case review of 16 patients with bilateral aural atresia was conducted at a tertiary referral center. A BAHD was implanted during auricle reconstruction surgery or after the auricle was rebuilt. A surgical method combining BAHD implantation with the second stage of ear reconstruction was introduced. Speech audiometry test and mean pure-tone threshold results were compared between unaided hearing and hearing with BAHDs. Scores from the BAHD user questionnaire and the Glasgow Children's Benefit Inventory (GCBI) were used to measure patients' satisfaction and subjective health benefit. The mean (SD) speech discrimination scores measured in a sound field with a presentation level of 45 dB HL (hearing level) were 6.7% (7.4%) unaided and 86.5% (4.4%) with a BAHD. Scores with a presentation level of 65 dB HL were 56.5% (7.4%) unaided and 90.1% (3.4%) with a BAHD. The speech reception threshold was 60.6 (7.5) dB HL unaided and 24.7 (5.0) dB HL with a BAHD. The mean (SD) pure-tone threshold of the patients was 61.6 (7.8) dB HL unaided and 23.8 (5.9) dB HL with a BAHD. The BAHD application questionnaire demonstrated excellent patient satisfaction. The mean (SD) benefit score of the GCBI was 45.6 (14.4). For aural atresia, the BAHD has been one of the most reliable methods of auditory rehabilitation. It can improve the patient's word recognition performance and quality of life. The technique of BAHD implantation combined with auricular reconstruction in a 2-stages-in-1 surgery and the modified incision in patients with a reconstructed auricle proved to be safe and effective.
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
2009-04-01
The aim of the current study was to examine whether partly incorrect subtitles, automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al., 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
Determining the energetic and informational components of speech-on-speech masking
Kidd, Gerald; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Clayton, Kameron K.; Best, Virginia
2016-01-01
Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated “glimpses” were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions, supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed were generally correlated across the three masking release conditions. PMID:27475139
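The ITFS step described here keeps only the time-frequency units in which the target dominates and resynthesizes the result. The sketch below shows the core of that operation with an STFT-domain ideal binary mask; the signals and the 0-dB local criterion are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(4)
target = rng.normal(0.0, 0.1, 2 * fs)   # synthetic stand-ins for target
masker = rng.normal(0.0, 0.1, 2 * fs)   # and masker signals

f, t, T = stft(target, fs=fs, nperseg=512)
_, _, M = stft(masker, fs=fs, nperseg=512)

# Ideal binary mask: keep only time-frequency units where the target
# dominates the masker (0-dB local criterion; studies vary this threshold).
mask = np.abs(T) ** 2 > np.abs(M) ** 2
_, glimpsed = istft((T + M) * mask, fs=fs, nperseg=512)
```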
Spriet, Ann; Van Deun, Lieselot; Eftaxiadis, Kyriaky; Laneau, Johan; Moonen, Marc; van Dijk, Bas; van Wieringen, Astrid; Wouters, Jan
2007-02-01
This paper evaluates the benefit of the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant (CI) system for speech understanding in background noise by CI users. A double-blind evaluation of the two-microphone adaptive beamformer BEAM and a hardware directional microphone was carried out with five adult Nucleus CI users. The test procedure consisted of a pre- and post-test in the lab and a 2-wk trial period at home. In the pre- and post-test, the speech reception threshold (SRT) with sentences and the percentage correct phoneme scores for CVC words were measured in quiet and background noise at different signal-to-noise ratios. Performance was assessed for two different noise configurations (with a single noise source and with three noise sources) and two different noise materials (stationary speech-weighted noise and multitalker babble). During the 2-wk trial period at home, the CI users evaluated the noise reduction performance in different listening conditions by means of the SSQ questionnaire. In addition to the perceptual evaluation, the noise reduction performance of the beamformer was measured physically as a function of the direction of the noise source. Significant improvements of both the SRT in noise (average improvement of 5-16 dB) and the percentage correct phoneme scores (average improvement of 10-41%) were observed with BEAM compared to the standard hardware directional microphone. In addition, the SSQ questionnaire and subjective evaluation in controlled and real-life scenarios suggested a possible preference for the beamformer in noisy environments. The evaluation demonstrates that the adaptive noise reduction algorithm BEAM in the Nucleus Freedom CI system may significantly improve speech perception by cochlear implant users in noisy listening conditions. This is the first monolateral (adaptive) noise reduction strategy actually implemented in a mainstream commercial CI.
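BEAM is Cochlear's proprietary implementation, and its exact design is not given in the abstract. For orientation, the sketch below shows the textbook structure on which such two-microphone adaptive beamformers are commonly based: a Griffiths-Jim generalized sidelobe canceller with an NLMS-adapted noise canceller. All signals and parameters are illustrative assumptions, not the device's algorithm.

```python
import numpy as np

def gsc_nlms(front, rear, n_taps=32, mu=0.1, eps=1e-8):
    """Toy Griffiths-Jim generalized sidelobe canceller for two microphones.
    Assumes the target arrives in phase at both mics (frontal source), so
    the difference channel contains (mostly) interference."""
    fixed = 0.5 * (front + rear)   # fixed beamformer: passes the frontal target
    block = front - rear           # blocking matrix: cancels the frontal target
    w = np.zeros(n_taps)
    out = np.zeros_like(fixed)
    for n in range(n_taps, len(fixed)):
        u = block[n - n_taps:n][::-1]        # most recent reference samples
        y = w @ u                            # estimate of residual noise
        e = fixed[n] - y                     # enhanced output sample
        w += mu * e * u / (u @ u + eps)      # NLMS weight update
        out[n] = e
    return out

# Toy demo: frontal target in phase at both mics; side noise arrives at the
# rear mic two samples later (signals are synthetic stand-ins).
rng = np.random.default_rng(1)
target = rng.normal(0.0, 1.0, 16000)
noise = rng.normal(0.0, 1.0, 16000)
enhanced = gsc_nlms(target + noise, target + np.roll(noise, 2))
```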
Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.
Dillier, Norbert; Lai, Wai Kong
2015-06-11
The Nucleus® 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies had compared performance and usability of the CP810 with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested to be used in noisy environments, Zoom and Beam, for various sound field conditions using a standardized speech in noise matrix test (Oldenburg sentences test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal-to-noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions both Zoom and Beam could improve the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and back) is truly remarkable and comparable to the performance of normal hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may improve their speech recognition in noisy environments significantly using these directional microphone-processing options.
Grammar without Speech Production: The Case of Labrador Inuttitut Heritage Receptive Bilinguals
ERIC Educational Resources Information Center
Sherkina-Lieber, Marina; Perez-Leroux, Ana T.; Johns, Alana
2011-01-01
We examine morphosyntactic knowledge of Labrador Inuttitut by Inuit receptive bilinguals (RBs)--heritage speakers who are capable of comprehension, but produce little or no speech. A grammaticality judgment study suggests that RBs possess sensitivity to morphosyntactic violations, though to a lesser degree than fluent bilinguals. Low-proficiency…
Fractionated Stereotactic Radiotherapy of Vestibular Schwannomas Accelerates Hearing Loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmussen, Rune, E-mail: rune333@gmail.com; Claesson, Magnus; Stangerup, Sven-Eric
2012-08-01
Objective: To evaluate long-term tumor control and hearing preservation rates in patients with vestibular schwannoma treated with fractionated stereotactic radiotherapy (FSRT), comparing hearing preservation rates to an untreated control group. The relationship between radiation dose to the cochlea and hearing preservation was also investigated. Methods and Materials: Forty-two patients receiving FSRT between 1997 and 2008 with a minimum follow-up of 2 years were included. All patients received 54 Gy in 27-30 fractions during 5.5-6.0 weeks. Clinical and audiometry data were collected prospectively. From a 'wait-and-scan' group, 409 patients were selected as control subjects, matched by initial audiometric parameters. Radiation dose to the cochlea was measured using the original treatment plan and then related to changes in acoustic parameters. Results: Actuarial 2-, 4-, and 10-year tumor control rates were 100%, 91.5%, and 85.0%, respectively. Twenty-one patients had serviceable hearing before FSRT, 8 of whom (38%) retained serviceable hearing at 2 years after FSRT. No patients retained serviceable hearing after 10 years. At 2 years, hearing preservation rates in the control group were 1.8 times higher compared with the group receiving FSRT (P=.007). Radiation dose to the cochlea was significantly correlated to deterioration of the speech reception threshold (P=.03) but not to discrimination loss. Conclusion: FSRT accelerates the naturally occurring hearing loss in patients with vestibular schwannoma. Our findings, using fractionation of radiotherapy, parallel results using single-dose radiation. The radiation dose to the cochlea is correlated to hearing loss measured as the speech reception threshold.
Glennen, Sharon
2014-07-01
The author followed 56 internationally adopted children during the first 3 years after adoption to determine how and when they reached age-expected language proficiency in Standard American English. The influence of age of adoption was measured, along with the relationship between early and later language and speech outcomes. Children adopted from Eastern Europe at ages 12 months to 4 years, 11 months, were assessed 5 times across 3 years. Norm-referenced measures of receptive and expressive language and articulation were compared over time. In addition, mean length of utterance (MLU) was measured. Across all children, receptive language reached age-expected levels more quickly than expressive language. Children adopted at ages 1 and 2 "caught up" more quickly than children adopted at ages 3 and 4. Three years after adoption, there was no difference in test scores across age of adoption groups, and the percentage of children with language or speech delays matched population estimates. MLU was within the average range 3 years after adoption but significantly lower than other language test scores. Three years after adoption, age of adoption did not influence language or speech outcomes, and most children reached age-expected language levels. Expressive syntax as measured by MLU was an area of relative weakness.
Boyd, Paul J
2006-12-01
The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of the electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by the setting of the lower limit of the output ("programming threshold" or "PT"), to better understand the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation," which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (SD = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, the maplaw setting was found to have a significant effect on acoustic thresholds (raised as the maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that the setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.
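The abstract's explanation (a more linear maplaw raises acoustic thresholds, while a steeply compressive one makes the lower output limit nearly irrelevant) can be seen in the shape of a logarithmic compression function. The sketch below assumes the commonly described log-shaped maplaw with steepness parameter c; the exact function in the fitting software may differ.

```python
import math

def maplaw(x, c=500.0):
    """Logarithmic compression of a normalized envelope value x in [0, 1];
    larger c means stronger compression (assumed log-shaped maplaw)."""
    return math.log(1.0 + c * x) / math.log(1.0 + c)

def to_electric(x, thr, mcl, c=500.0):
    """Map a compressed envelope value onto the electric dynamic range
    [thr, mcl] of one electrode (units as used by the fitting software)."""
    return thr + (mcl - thr) * maplaw(x, c)

# With c = 500, even small inputs land well above the lower output limit,
# which is consistent with PT having little effect on speech recognition:
for x in (0.001, 0.01, 0.1, 1.0):
    print(f"x = {x:>5}: {maplaw(x):.2f} of the electric range")
```

With c = 500, an input at 1% of full scale is already mapped to roughly a third of the electric dynamic range, consistent with the finding that the precise PT setting matters little for speech recognition.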
The influence of informational masking in reverberant, multi-talker environments.
Westermann, Adam; Buchholz, Jörg M
2015-08-01
The relevance of informational masking (IM) in real-world listening is not well understood. In the literature, IM effects of up to 10 dB on measured speech reception thresholds (SRTs) are reported. However, these experiments typically employed simplified spatial configurations and speech corpora that magnified confusions. In this study, SRTs were measured with normal-hearing subjects in a simulated cafeteria environment. The environment was reproduced by a 41-channel 3D loudspeaker array. The target talker was 2 m in front of the listener, and masking talkers were either spread throughout the room or colocated with the target. Three types of maskers were realized: one with the same talker as the target (maximum IM), one with talkers different from the target, and one with unintelligible, noise-vocoded talkers (minimal IM). Overall, SRTs improved for the spatially distributed conditions compared to the colocated conditions. Within the spatially distributed conditions, there was no significant difference between thresholds with the different- and vocoded-talker maskers. Conditions with the same-talker masker were the only conditions with substantially higher thresholds, especially in the colocated conditions. These results suggest that IM related to target-masker confusions, at least for normal-hearing listeners, is of low relevance in real-life listening.
Improving Your Child's Listening and Language Skills: A Parent's Guide to Language Development.
ERIC Educational Resources Information Center
Johnson, Ruth; And Others
The parent's guide reviews normal speech and language development and discusses ways in which parents of young children with language problems facilitate that development. Terms such as speech, communication, and receptive and expressive language are defined, and stages in receptive/expressive language development are charted. Implications for…
The Relationship between Socio-Economic Status and Lexical Development
ERIC Educational Resources Information Center
Black, Esther; Peppe, Sue; Gibbon, Fiona
2008-01-01
The British Picture Vocabulary Scale, second edition (BPVS-II), a measure of receptive vocabulary, is widely used by speech and language therapists and researchers into speech and language disorders, as an indicator of language delay, but it has frequently been suggested that receptive vocabulary may be more associated with socio-economic status.…
An Experimental Study of Interference between Receptive and Productive Processes Involving Speech
ERIC Educational Resources Information Center
Goldman-Eisler, Frieda; Cohen, Michele
1975-01-01
Reports an experiment designed to throw light on the interference between the reception and production of speech by controlling the level of interference between decoding and encoding, using hesitancy as an indicator of interference. This proved effective in spotting the levels at which interference takes place. (Author/RM)
Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.
2014-01-01
The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490
Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen
2017-07-12
The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables-words of three or more syllables-are important to consider because unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different based on 2 measures of phonological awareness and measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group, they were at risk of literacy difficulties but that there was a cluster of participants at greater risk-those with both low polysyllable accuracy and poor phonological processing.
[Characteristics, advantages, and limits of matrix tests].
Brand, T; Wagener, K C
2017-03-01
Deterioration of communication abilities due to hearing problems is particularly relevant in listening situations with noise. Therefore, speech intelligibility tests in noise are required for audiological diagnostics and evaluation of hearing rehabilitation. This study analyzed the characteristics of matrix tests, which assess the 50% speech recognition threshold in noise, together with their advantages and limitations. Matrix tests are based on a matrix of 50 words (10 five-word sentences with the same grammatical structure). In the standard setting, 20 sentences are presented using an adaptive procedure estimating the individual 50% speech recognition threshold in noise. At present, matrix tests are available in 17 different languages, with high international comparability. The German-language matrix test (OLSA, male speaker) has a reference 50% speech recognition threshold of -7.1 (± 1.1) dB SNR. Before using a matrix test for the first time, the test person has to become familiar with the basic speech material using two training lists. Thereafter, matrix tests produce stable results even when repeated many times. Matrix tests are suitable for users of hearing aids and cochlear implants, particularly for assessment of benefit during the fitting process. Matrix tests can be administered in closed form and consequently with non-native listeners, even if the experimenter does not speak the test person's native language. Short versions of matrix tests are available for listeners with a shorter memory span, e.g., children.
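A minimal sketch of the adaptive idea: each five-word sentence is scored, and the SNR is driven toward the 50% point. Real matrix tests use a more refined level-control rule than this simple up-down track, and the simulated listener below is an assumption for demonstration only.

```python
import math
import random

random.seed(0)

def simulated_listener(snr_dB, srt=-7.1, slope=0.15):
    """Assumed logistic psychometric function for a single word."""
    return 1.0 / (1.0 + math.exp(-slope * 10.0 * (snr_dB - srt)))

snr = 0.0            # start clearly above threshold
track = []
for trial in range(20):
    p = simulated_listener(snr)
    n_correct = sum(random.random() < p for _ in range(5))  # score 5 words
    track.append(snr)
    step = 2.0 if trial < 4 else 1.0          # coarse steps first, then fine
    snr += -step if n_correct >= 3 else step  # drive towards 50% correct

srt_estimate = sum(track[4:]) / len(track[4:])  # average fine-step trials
print(f"estimated SRT: {srt_estimate:.1f} dB SNR")
```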
ERIC Educational Resources Information Center
Law, J.; Campbell, C.; Roulstone, S.; Adams, C.; Boyle, J.
2008-01-01
Background: Receptive language impairment (RLI) is one of the most significant indicators of negative sequelae for children with speech and language disorders. Despite this, relatively little is known about the most effective treatments for these children in the primary school period. Aims: To explore the relationship between the reported practice…
Dai, Chuanfu; Zhao, Zeqi; Zhang, Duo; Lei, Guanxiong
2018-01-01
Background The aim of this study was to explore the value of the spectral ripple discrimination test in speech recognition evaluation among a deaf (post-lingual) Mandarin-speaking population in China following cochlear implantation. Material/Methods The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 post-lingually deaf adult Mandarin speakers with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). The spectral ripple discrimination thresholds between the groups were compared. The correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. Results Spectral ripple discrimination thresholds did not correlate with age (r=−0.19; p=0.22), and there was no significant difference in spectral ripple discrimination thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p<0.001). Conclusions In a Mandarin Chinese speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by both gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates in post-lingually deaf Mandarin-speaking adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China. PMID:29806954
Barton-Hulsey, Andrea; Sevcik, Rose A; Romski, MaryAnn
2018-05-03
A number of intrinsic factors, including expressive speech skills, have been suggested to place children with developmental disabilities at risk for limited development of reading skills. This study examines the relationship between these factors, speech ability, and children's phonological awareness skills. A nonexperimental study design was used to examine the relationship between intrinsic skills of speech, language, print, and letter-sound knowledge and phonological awareness in 42 children with developmental disabilities between the ages of 48 and 69 months. Hierarchical multiple regression was conducted to determine whether speech ability accounted for a unique amount of variance in phonological awareness skill beyond what would be expected by developmental skills inclusive of receptive language and print and letter-sound knowledge. A range of skill in all areas of direct assessment was found. Children with limited speech were found to have emerging skills in print knowledge, letter-sound knowledge, and phonological awareness. Speech ability did not predict a significant amount of variance in phonological awareness beyond what would be expected by developmental skills of receptive language and print and letter-sound knowledge. Children with limited speech ability were found to have receptive language and letter-sound knowledge that supported the development of phonological awareness skills. This study provides implications for practitioners and researchers concerning the factors related to early reading development in children with limited speech ability and developmental disabilities.
The NTID speech recognition test: NSRT®.
Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A
2015-07-01
The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests, including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and in background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (SD = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.
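The item pool expansion relies on Rasch scaling, in which each item has a difficulty and each respondent an ability on a common logit scale. The sketch below shows the Rasch model and the item-selection idea that makes adaptive testing efficient; the item difficulties are hypothetical, not NSRT values.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability that a person of ability theta responds
    correctly to an item of difficulty b (both on a common logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Adaptive testing idea: present the item whose difficulty is closest to the
# current ability estimate, where the item carries the most information.
item_bank = [-2.0, -1.0, -0.3, 0.4, 1.1, 2.2]   # hypothetical difficulties
theta_hat = 0.5                                  # current ability estimate
next_item = min(item_bank, key=lambda b: abs(b - theta_hat))
print(f"next item difficulty: {next_item}, "
      f"P(correct) = {p_correct(theta_hat, next_item):.2f}")
```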
Segmental and Suprasegmental Perception in Children Using Hearing Aids.
Wenrich, Kaitlyn A; Davidson, Lisa S; Uchanski, Rosalie M
Suprasegmental perception (perception of stress, intonation, "how something is said" and "who says it") and segmental speech perception (perception of individual phonemes or perception of "what is said") are perceptual abilities that provide the foundation for the development of spoken language and effective communication. While there are numerous studies examining segmental perception in children with hearing aids (HAs), there are far fewer studies examining suprasegmental perception, especially for children with greater degrees of residual hearing. Examining the relation between acoustic hearing thresholds, and both segmental and suprasegmental perception for children with HAs, may ultimately enable better device recommendations (bilateral HAs, bimodal devices [one CI and one HA in opposite ears], bilateral CIs) for a particular degree of residual hearing. Examining both types of speech perception is important because segmental and suprasegmental cues are affected differentially by the type of hearing device(s) used (i.e., cochlear implant [CI] and/or HA). Additionally, suprathreshold measures, such as frequency resolution ability, may partially predict benefit from amplification and may assist audiologists in making hearing device recommendations. The purpose of this study is to explore the relationship between audibility (via hearing thresholds and speech intelligibility indices), and segmental and suprasegmental speech perception for children with HAs. A secondary goal is to explore the relationships among frequency resolution ability (via spectral modulation detection [SMD] measures), segmental and suprasegmental speech perception, and receptive language in these same children. A prospective cross-sectional design. Twenty-three children, ages 4 yr 11 mo to 11 yr 11 mo, participated in the study. Participants were recruited from pediatric clinic populations, oral schools for the deaf, and mainstream schools. Audiological history and hearing device information were collected from participants and their families. Segmental and suprasegmental speech perception, SMD, and receptive vocabulary skills were assessed. Correlations were calculated to examine the significance (p < 0.05) of relations between audibility and outcome measures. Measures of audibility and segmental speech perception are not significantly correlated, while low-frequency pure-tone average (unaided) is significantly correlated with suprasegmental speech perception. SMD is significantly correlated with all measures (measures of audibility, segmental and suprasegmental perception and vocabulary). Lastly, although age is not significantly correlated with measures of audibility, it is significantly correlated with all other outcome measures. The absence of a significant correlation between audibility and segmental speech perception might be attributed to overall audibility being maximized through well-fit HAs. The significant correlation between low-frequency unaided audibility and suprasegmental measures is likely due to the strong, predominantly low-frequency nature of suprasegmental acoustic properties. Frequency resolution ability, via SMD performance, is significantly correlated with all outcomes and requires further investigation; its significant correlation with vocabulary suggests that linguistic ability may be partially related to frequency resolution ability. Last, all of the outcome measures are significantly correlated with age, suggestive of developmental effects. American Academy of Audiology
Oral motor deficits in speech-impaired children with autism
Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha
2013-01-01
Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing for reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills, with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
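FADE's output is, in essence, the SNR at which simulated recognition crosses 50%. A minimal sketch of that final step, interpolating an SRT from scores obtained at fixed SNRs; the scores below are invented for illustration, not FADE output.

```python
import numpy as np

def srt_from_scores(snrs_dB, pct_correct, target=50.0):
    """Interpolate the SNR at which recognition crosses the target
    percentage; assumes scores increase monotonically with SNR."""
    return float(np.interp(target, pct_correct, snrs_dB))

# Hypothetical recognition scores from simulated recognition experiments
# at fixed SNRs (illustrative numbers only).
snrs = np.array([-15.0, -12.0, -9.0, -6.0, -3.0, 0.0])
scores = np.array([8.0, 22.0, 46.0, 71.0, 88.0, 96.0])
print(f"simulated SRT: {srt_from_scores(snrs, scores):.1f} dB SNR")
```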
Auinger, Alice Barbara; Riss, Dominik; Liepins, Rudolfs; Rader, Tobias; Keck, Tilman; Keintzel, Thomas; Kaider, Alexandra; Baumgartner, Wolf-Dieter; Gstoettner, Wolfgang; Arnoldner, Christoph
2017-07-01
It has been shown that patients with electric acoustic stimulation (EAS) perform better in noisy environments than patients with a cochlear implant (CI). One reason for this could be the preserved access to acoustic low-frequency cues including the fundamental frequency (F0). Therefore, our primary aim was to investigate whether users of EAS experience a release from masking with increasing F0 difference between the target talker and the masking talker. The study comprised 29 patients in three groups of subjects: EAS users, CI users and normal-hearing listeners (NH). All CI and EAS users were implanted with a MED-EL cochlear implant and had at least 12 months of experience with the implant. Speech perception was assessed with the Oldenburg sentence test (OLSA) using one sentence from the test corpus as the speech masker. The F0 in this masking sentence was shifted upwards by 4, 8, or 12 semitones. For each of these masker conditions the speech reception threshold (SRT) was assessed by adaptively varying the masker level while presenting the target sentences at a fixed level. A statistically significant improvement in speech perception was found for increasing difference in F0 between target sentence and masker sentence in EAS users (p = 0.038) and in NH listeners (p = 0.003). In CI users (classic CI or EAS users with electrical stimulation only) speech perception was independent of differences in F0 between target and masker. A release from masking with increasing difference in F0 between target and masking speech was only observed in listeners and configurations in which the low-frequency region was presented acoustically. Thus, the speech information contained in the low frequencies seems to be crucial for allowing listeners to separate multiple sources. By combining acoustic and electric information, EAS users even manage tasks as complicated as segregating the audio streams from multiple talkers. Preserving the natural code, like fine-structure cues in the low-frequency region, seems to be crucial to provide CI users with the best benefit. Copyright © 2017 Elsevier B.V. All rights reserved.
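For reference, the F0 shifts used for the masker map to simple frequency ratios: a shift of n semitones multiplies F0 by 2 to the power n/12. A small sketch of the factors involved, with an assumed 120-Hz base F0 for illustration:

```python
# A shift of n semitones corresponds to an F0 ratio of 2 ** (n / 12).
# The 120-Hz base F0 below is assumed purely for illustration.
for n in (4, 8, 12):
    ratio = 2 ** (n / 12)
    print(f"+{n:>2} semitones: F0 x {ratio:.3f} "
          f"(120 Hz -> {120 * ratio:.1f} Hz)")
```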
Davidson, Lisa S; Geers, Ann E; Brenner, Christine
2010-10-01
Updated cochlear implant technology and optimized fitting can have a substantial impact on speech perception. The effects of upgrades in processor technology and aided thresholds on word recognition at soft input levels and sentence recognition in noise were examined. We hypothesized that updated speech processors and lower aided thresholds would allow improved recognition of soft speech without compromising performance in noise. 109 teenagers who had used a Nucleus 22 cochlear implant since preschool were tested with their current speech processor(s) (101 unilateral and 8 bilateral): 13 used the Spectra, 22 the ESPrit 22, 61 the ESPrit 3G, and 13 the Freedom. The Lexical Neighborhood Test (LNT) was administered at 70 and 50 dB SPL, and the Bamford-Kowal-Bench (BKB) sentences were administered in quiet and in noise. Aided thresholds were obtained for frequency-modulated tones from 250 to 4,000 Hz. Results were analyzed using repeated measures analysis of variance. Aided thresholds for the Freedom/3G group were significantly lower (better) than for the Spectra/Sprint group. LNT scores at 50 dB SPL were significantly higher for the Freedom/3G group. No significant differences between the 2 groups were found for the LNT at 70 dB SPL or for sentences in quiet or in noise. Adolescents using updated processors that allowed aided detection thresholds of 30 dB HL or better performed best at soft levels. The BKB-in-noise results suggest that greater access to soft speech does not compromise listening in noise.
ERIC Educational Resources Information Center
Roberts, Joanne; Price, Johanna; Barnes, Elizabeth; Nelson, Lauren; Burchinal, Margaret; Hennon, Elizabeth A.; Moskowitz, Lauren; Edwards, Anne; Malkin, Cheryl; Anderson, Kathleen; Misenheimer, Jan; Hooper, Stephen R.
2007-01-01
Boys with fragile X syndrome with (n = 49) and without (n = 33) characteristics of autism spectrum disorder, boys with Down syndrome (n = 39), and typically developing boys (n = 41) were compared on standardized measures of receptive vocabulary, expressive vocabulary, and speech administered annually over 4 years. Three major findings emerged. Boys…
The spatial unmasking of speech: evidence for within-channel processing of interaural time delay.
Edmonds, Barrie A; Culling, John F
2005-05-01
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 μs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 μs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 μs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 μs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
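A sketch of how such band-specific ITDs can be imposed on a stimulus: split the signal at an assumed crossover, then delay each band by 500 μs in opposite ears. The crossover frequency, the filters, and the noise stand-in for speech are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
rng = np.random.default_rng(2)
signal = rng.normal(0.0, 0.1, fs)      # synthetic stand-in for target speech

# Split into low- and high-frequency bands (1-kHz crossover is assumed)
low = sosfilt(butter(4, 1000, btype='low', fs=fs, output='sos'), signal)
high = sosfilt(butter(4, 1000, btype='high', fs=fs, output='sos'), signal)

def delay(x, itd_s, fs=fs):
    """Delay a signal by a whole number of samples (approximating an ITD)."""
    n = int(round(itd_s * fs))          # 500 us -> about 22 samples at 44.1 kHz
    return np.concatenate([np.zeros(n), x[:-n]]) if n > 0 else x

# Opposite-sign ITDs per band: the low band leads in the left ear and the
# high band leads in the right ear.
left = low + delay(high, 500e-6)
right = delay(low, 500e-6) + high
stereo = np.stack([left, right], axis=1)  # two-channel stimulus
```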
Goverts, S Theo; Huysmans, Elke; Kramer, Sophia E; de Groot, Annette M B; Houtgast, Tammo
2011-12-01
Researchers have used the distortion-sensitivity approach in the psychoacoustical domain to investigate the role of auditory processing abilities in speech perception in noise (van Schijndel, Houtgast, & Festen, 2001; Goverts & Houtgast, 2010). In this study, the authors examined the potential applicability of the distortion-sensitivity approach for investigating the role of linguistic abilities in speech understanding in noise. The authors applied the distortion-sensitivity approach by measuring the processing of visually presented masked text in a condition with manipulated syntactic, lexical, and semantic cues and while using the Text Reception Threshold (George et al., 2007; Kramer, Zekveld, & Houtgast, 2009; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) method. Two groups that differed in linguistic abilities were studied: 13 native and 10 non-native speakers of Dutch, all typically hearing university students. As expected, the non-native subjects showed substantially reduced performance. The results of the distortion-sensitivity approach yielded differentiated results on the use of specific linguistic cues in the 2 groups. The results show the potential value of the distortion-sensitivity approach in studying the role of linguistic abilities in speech understanding in noise of individuals with hearing impairment.
Phase effects in masking by harmonic complexes: speech recognition.
Deroche, Mickael L D; Culling, John F; Chatterjee, Monita
2013-12-01
Harmonic complexes that generate highly modulated temporal envelopes on the basilar membrane (BM) mask a tone less effectively than complexes that generate relatively flat temporal envelopes, because the non-linear active gain of the BM selectively amplifies a low-level tone in the dips of a modulated masker envelope. The present study examines a similar effect in speech recognition. Speech reception thresholds (SRTs) were measured for a voice masked by harmonic complexes with partials in sine phase (SP) or in random phase (RP). The masker's fundamental frequency (F0) was 50, 100 or 200 Hz. SRTs were considerably lower for SP than for RP maskers at 50-Hz F0, but the two converged at 100-Hz F0, while at 200-Hz F0, SRTs were a little higher for SP than RP maskers. The results were similar whether the target voice was male or female and whether the masker's spectral profile was flat or speech-shaped. Although listening in the masker dips has been shown to play a large role for artificial stimuli such as Schroeder-phase complexes at high levels, it contributes weakly to speech recognition in the presence of harmonic maskers with different crest factors at more moderate sound levels (65 dB SPL). Copyright © 2013 Elsevier B.V. All rights reserved.
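For readers who want to reproduce the two masker classes compared above, here is a hedged sketch (not the study's stimulus code) that synthesizes harmonic complexes with sine-phase versus random-phase partials and quantifies their envelope modulation via the crest factor; duration, bandwidth, and sampling rate are illustrative assumptions.

```python
# Hedged sketch: sine-phase (SP) vs. random-phase (RP) harmonic complexes.
import numpy as np

def harmonic_complex(f0, fs=44100, dur=0.5, fmax=5000.0, random_phase=False,
                     rng=np.random.default_rng(0)):
    t = np.arange(int(dur * fs)) / fs
    n_partials = int(fmax // f0)
    x = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        phase = rng.uniform(0, 2 * np.pi) if random_phase else 0.0
        x += np.sin(2 * np.pi * k * f0 * t + phase)
    return x / np.max(np.abs(x))

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; higher = more modulated temporal envelope."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

for f0 in (50, 100, 200):
    sp = harmonic_complex(f0)                      # sine phase: peaky envelope
    rp = harmonic_complex(f0, random_phase=True)   # random phase: flatter envelope
    print(f"F0={f0} Hz  crest SP={crest_factor_db(sp):.1f} dB  "
          f"RP={crest_factor_db(rp):.1f} dB")
```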
Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2013-08-01
In this study, the authors assessed the influence of masking level (29% or 71% sentence perception) and test modality on the processing load during language perception as reflected by the pupil response. In addition, the authors administered a delayed cued stimulus recall test to examine whether processing load affected the encoding of the stimuli in memory. Participants performed speech and text reception threshold tests, during which the pupil response was measured. In the cued recall test, the first half of correctly perceived sentences was presented, and participants were asked to complete the sentences. Reading and listening span tests of working memory capacity were presented as well. Regardless of test modality, the pupil response indicated higher processing load in the 29% correct condition than in the 71% correct condition. Cued recall was better for the 29% condition. The consistent effect of masking level on the pupil response during listening and reading supports the validity of the pupil response as a measure of processing load during language perception. The absence of a relation between the pupil response and cued recall suggests that cued recall may not be directly related to processing load, as reflected by the pupil response.
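Pupillometry studies of this kind typically summarize each trial by its peak dilation relative to a pre-stimulus baseline. The sketch below is illustrative only (not the authors' pipeline); the baseline window and sampling rate are assumptions.

```python
# Hedged sketch: per-trial peak pupil dilation (PPD) re: a pre-stimulus baseline.
import numpy as np

def peak_pupil_dilation(trace, fs=60, baseline_s=1.0):
    """trace: pupil-diameter samples for one trial.
    Baseline = mean of the first baseline_s seconds (assumed window);
    returns the maximum dilation above baseline afterwards."""
    n0 = int(baseline_s * fs)
    baseline = np.mean(trace[:n0])
    return np.max(trace[n0:] - baseline)

# Toy trace in mm: 1 s of flat baseline, then a slow dilation.
trial = np.array([3.1] * 60 + [3.1 + 0.002 * i for i in range(120)])
print(f"PPD = {peak_pupil_dilation(trial):.3f} mm")
```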
Wardenga, Nina; Batsoulis, Cornelia; Wagener, Kirsten C; Brand, Thomas; Lenarz, Thomas; Maier, Hannes
2015-01-01
The aim of this study was to determine the relationship between hearing loss and speech reception threshold (SRT) in a fixed noise condition using the German Oldenburg sentence test (OLSA). After training with two easily audible lists of the OLSA, SRTs were determined monaurally with headphones at a fixed noise level of 65 dB SPL using a standard adaptive procedure, converging to 50% speech intelligibility. Data were obtained from 315 ears of 177 subjects with hearing losses ranging from -5 to 90 dB HL pure-tone average (PTA, 0.5, 1, 2, 3 kHz). Two domains were identified with a linear dependence of SRT on PTA. The SRT increased with a slope of 0.094 ± 0.006 dB SNR/dB HL (standard deviation (SD) of residuals = 1.17 dB) for PTAs < 47 dB HL and with a slope of 0.811 ± 0.049 dB SNR/dB HL (SD of residuals = 5.54 dB) for higher PTAs. The OLSA can be applied to subjects with a wide range of hearing losses. With a 65 dB SPL fixed noise presentation level, the SRT is determined by listening in noise for PTAs < ∼47 dB HL; above that, it is determined by listening in quiet.
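The reported two-domain dependence amounts to a piecewise-linear predictor of SRT from PTA. A minimal sketch follows, using the slopes and the ~47 dB HL breakpoint from the abstract; the intercept srt0 is a hypothetical value (the abstract does not report one), and continuity at the breakpoint is assumed.

```python
# Hedged sketch of the two-domain linear dependence of SRT on PTA.
def predict_srt(pta_db_hl, srt0=-7.1, breakpoint=47.0,
                slope_lo=0.094, slope_hi=0.811):
    """Predict OLSA SRT (dB SNR) from pure-tone average (dB HL).
    srt0 is an illustrative intercept, not a value from the abstract."""
    if pta_db_hl <= breakpoint:
        return srt0 + slope_lo * pta_db_hl
    srt_at_break = srt0 + slope_lo * breakpoint   # assume continuity at 47 dB HL
    return srt_at_break + slope_hi * (pta_db_hl - breakpoint)

for pta in (0, 30, 47, 70, 90):
    print(f"PTA {pta:2d} dB HL -> predicted SRT {predict_srt(pta):6.2f} dB SNR")
```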
Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
2016-01-01
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in various configurations of masking speech and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills and was completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use and was completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme; Olsen, Steen Østergaard
2014-02-01
Outcome measures can be used to improve the quality of rehabilitation by identifying and understanding which variables influence the outcome. This information can be used to improve outcomes for clients. In clinical practice, pure-tone audiometry, speech reception thresholds (SRTs), and speech discrimination scores (SDSs) in quiet or in noise are common assessments made prior to hearing aid (HA) fittings. It is not known whether SRT and SDS in quiet relate to HA outcome measured with the International Outcome Inventory for Hearing Aids (IOI-HA). The aim of the present study was to investigate the relationship between pure-tone average (PTA), SRT, and SDS in quiet and the IOI-HA in both first-time and experienced HA users. SRT and SDS were measured in a sample of HA users who also responded to the IOI-HA. Fifty-eight Danish-speaking adult HA users participated. The psychometric properties were evaluated and compared to previous studies using the IOI-HA. The associations and differences between the outcome scores and a number of descriptive variables (age, gender, fitted monaurally/binaurally with HA, first-time/experienced HA users, years of HA use, time since last HA fitting, best ear PTA, best ear SRT, or best ear SDS) were examined. A multiple forward stepwise regression analysis was conducted using scores on the separate IOI-HA items, the global score, and scores on the introspection and interaction subscales as dependent variables to examine whether the descriptive variables could predict these outcome measures. Scores on single IOI-HA items, the global score, and scores on the introspection (items 1, 2, 4, and 7) and interaction (items 3, 5, and 6) subscales closely resemble those previously reported. Multiple regression analysis showed that the best ear SDS predicts about 18-19% of the outcome on items 3 and 5 separately, and about 16% on the interaction subscale (sum of items 3, 5, and 6). The best ear SDS explains some of the variance displayed in the IOI-HA global score and the interaction subscale. The relation between SDS and the IOI-HA suggests that a poor unaided SDS might in itself be a limiting factor for HA rehabilitation efficacy and hence the IOI-HA outcome. The clinician could use this information to align the user's HA expectations with what is realistically within reach. American Academy of Audiology.
Lifetime leisure music exposure associated with increased frequency of tinnitus.
Moore, David R; Zobay, Oliver; Mackinnon, Robert C; Whitmer, William M; Akeroyd, Michael A
2017-04-01
Tinnitus has been linked to noise exposure, a common form of which is listening to music as a leisure activity. The relationship between tinnitus and type and duration of music exposure is not well understood. We conducted an internet-based population study that asked participants questions about lifetime music exposure and hearing, and included a hearing test involving speech intelligibility in noise, the High Frequency Digit Triplets Test. 4950 people aged 17-75 years completed all questions and the hearing test. Results were analyzed using multinomial regression models. High exposure to leisure music, hearing difficulty, increasing age and workplace noise exposure were independently associated with increased tinnitus. Three forms of music exposure (pubs/clubs, concerts, personal music players) did not differ in their relationship to tinnitus. More males than females reported tinnitus. The objective measure of speech reception threshold had only a minimal relationship with tinnitus. Self-reported hearing difficulty was more strongly associated with tinnitus, but 76% of people reporting usual or constant tinnitus also reported little or no hearing difficulty. Overall, around 40% of participants of all ages reported never experiencing tinnitus, while 29% reported sometimes, usually or constantly experiencing tinnitus that lasted more than 5 min. Together, the results suggest that tinnitus is much more common than hearing loss, but that there is little association between the two, especially among the younger adults disproportionately sampled in this study. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas
People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct and the PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test. Less fatigue and better hearing acuity were associated with a larger pupil dilation. To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.
Rajan, R; Cainer, K E
2008-06-23
In most everyday settings, speech is heard in the presence of competing sounds and understanding speech requires skills in auditory streaming and segregation, followed by identification and recognition, of the attended signals. Ageing leads to difficulties in understanding speech in noisy backgrounds. In addition to age-related changes in hearing-related factors, cognitive factors also play a role but it is unclear to what extent these are generalized or modality-specific cognitive factors. We examined how ageing in normal-hearing decade age cohorts from 20 to 69 years affected discrimination of open-set speech in background noise. We used two types of sentences of similar structural and linguistic characteristics but different masking levels (i.e. differences in signal-to-noise ratios required for detection of sentences in a standard masker) so as to vary sentence demand, and two background maskers (one causing purely energetic masking effects and the other causing energetic and informational masking) to vary load conditions. There was a decline in performance (measured as speech reception thresholds for perception of sentences in noise) in the oldest cohort for both types of sentences, but only in the presence of the more demanding informational masker. We interpret these results to indicate a modality-specific decline in cognitive processing, likely a decrease in the ability to use acoustic and phonetic cues efficiently to segregate speech from background noise, in subjects aged >60.
The RetroX auditory implant for high-frequency hearing loss.
Garin, P; Genard, F; Galle, C; Jamart, J
2004-07-01
The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients were satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observed a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improved by 9 dB. Speech audiometry in noise showed that intelligibility improved by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicated that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.
Gage, Nicole M; Eliashiv, Dawn S; Isenberg, Anna L; Fillmore, Paul T; Kurelowech, Lacey; Quint, Patti J; Chung, Jeffrey M; Otis, Shirley M
2011-06-01
Neuroimaging studies have shed light on cortical language organization, with findings implicating the left and right temporal lobes in speech processing converging to a left-dominant pattern. Findings highlight the fact that the state of theoretical language knowledge is ahead of current clinical language mapping methods, motivating a rethinking of these approaches. The authors used magnetoencephalography and multiple tasks in seven candidates for resective epilepsy surgery to investigate language organization. The authors scanned 12 control subjects to investigate the time course of bilateral receptive speech processes. Laterality indices were calculated for left and right hemisphere late fields ∼150 to 400 milliseconds. The authors report that (1) in healthy adults, speech processes activated superior temporal regions bilaterally converging to a left-dominant pattern, (2) in four of six patients, this was reversed, with bilateral processing converging to a right-dominant pattern, and (3) in three of four of these patients, receptive and expressive language processes were laterally discordant. Results provide evidence that receptive and expressive language may have divergent hemispheric dominance. Right-sided receptive language dominance in epilepsy patients emphasizes the need to assess both receptive and expressive language. Findings indicate that it is critical to use multiple tasks tapping separable aspects of language function to provide sensitive and specific estimates of language localization in surgical patients.
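The abstract says laterality indices were calculated for left and right hemisphere late fields but does not give the formula; the sketch below uses the common normalized-difference convention, which is an assumption, with illustrative amplitude inputs.

```python
# Hedged sketch: a conventional laterality index (LI) over left/right
# hemisphere response amplitudes. The (L - R) / (L + R) form is the common
# convention (positive = left-dominant, negative = right-dominant); the
# abstract does not specify the exact formula used.
def laterality_index(left_amp, right_amp):
    return (left_amp - right_amp) / (left_amp + right_amp)

print(laterality_index(12.0, 8.0))   # 0.2  -> left-dominant
print(laterality_index(6.0, 14.0))   # -0.4 -> right-dominant
```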
Koelewijn, Thomas; Versfeld, Niek J; Kramer, Sophia E
2017-10-01
For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai
2017-08-28
This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented. Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects: nine with mild and eight with moderate hearing loss. The results of the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is effectively beneficial to people with mild or moderate hearing loss in noisy environments. The satisfaction rating regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and that they were more satisfied with ADRO than with the linear scheme in noisy environments.
Gorgias on Madison Avenue: Sophistry and the Rhetoric of Advertising.
ERIC Educational Resources Information Center
Matcuk, Matt
Using sophistic theory and focusing on intersections in the practice and reception of sophistry and advertising, a study analyzed a contemporary advertising campaign. A number of extrinsic similarities between sophistic and advertising rhetoric exist: their commercial basis, their popular reception as dishonest speech, and the reception of both as…
Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa
2014-08-01
We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.
Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-02-01
To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (age range = 45-78 years) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to audiometric testing, the evaluation measures included the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
Saunders, Gabrielle H; Forsline, Anna
2006-06-01
Results of objective clinical tests (e.g., measures of speech understanding in noise) often conflict with subjective reports of hearing aid benefit and satisfaction. The Performance-Perceptual Test (PPT) is an outcome measure in which objective and subjective evaluations are made by using the same test materials, testing format, and unit of measurement (signal-to-noise ratio, S/N), permitting a direct comparison between measured and perceived ability to hear. Two variables are measured: a Performance Speech Reception Threshold in Noise (SRTN) for 50% correct performance and a Perceptual SRTN, which is the S/N at which listeners perceive that they can understand the speech material. A third variable is computed: the Performance-Perceptual Discrepancy (PPDIS); it is the difference between the Performance and Perceptual SRTNs and measures the extent to which listeners "misjudge" their hearing ability. Saunders et al. (2004) examined the relation between PPT scores and unaided hearing handicap. In this publication, the relations between the PPT, residual aided handicap, and hearing aid satisfaction are described. Ninety-four individuals between the ages of 47 and 86 yr participated. All had symmetrical sensorineural hearing loss and had worn binaural hearing aids for at least 6 wk before participating. All subjects underwent routine audiological examination and completed the PPT, the Hearing Handicap Inventory for the Elderly/Adults (HHIE/A), and the Satisfaction with Amplification in Daily Life questionnaire. Sixty-five subjects attended one research visit for participation in this study, and 29 attended a second visit to complete the PPT a second time. Performance and Perceptual SRTN and PPDIS scores were normally distributed and showed excellent test-retest reliability. Aided SRTNs were significantly better than unaided SRTNs; aided and unaided PPDIS values did not differ. Stepwise multiple linear regression showed that the PPDIS, the Performance SRTN, and age were significant predictors of scores on the HHIE/A, such that greater reported handicap is associated with underestimating hearing ability, poorer aided ability to understand speech in noise, and being younger. Scores on the Satisfaction with Amplification in Daily Life were not well explained by the PPT, age, or audiometric thresholds. When individuals were grouped by their HHIE/A scores, it was seen that individuals who report more handicap than expected based on their audiometric thresholds have a more negative PPDIS, i.e., underestimate their hearing ability, relative to individuals who report expected handicap, who in turn have a more negative PPDIS than individuals who report less handicap than expected. No such patterns were apparent for the Performance SRTN. The study showed the PPT to be a reliable outcome measure that can provide more information than a performance measure and/or a questionnaire measure alone, in that the PPDIS can provide the clinician with an explanation for discrepant objective and subjective reports of hearing difficulties. The finding that self-reported handicap is affected independently by both actual ability to hear and the (mis)perception of ability to hear underscores the difficulty clinicians encounter when trying to interpret outcome questionnaires. We suggest that this variable should be measured and taken into account when interpreting questionnaires and counseling patients.
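Since the PPDIS is defined above as the difference between the Performance and Perceptual SRTNs, it reduces to a one-line computation. The sketch below is illustrative; the sign convention (negative = underestimating one's hearing ability) is an assumption consistent with the description above.

```python
# Hedged sketch of the Performance-Perceptual Discrepancy (PPDIS):
# Performance SRTN (measured 50%-correct S/N) minus Perceptual SRTN
# (S/N at which the listener feels they can understand). Under the assumed
# sign convention, negative values mean the listener underestimates
# their hearing ability.
def ppdis(performance_srtn_db, perceptual_srtn_db):
    return performance_srtn_db - perceptual_srtn_db

# Listener measures -2.0 dB S/N but feels they need +1.5 dB S/N:
print(ppdis(-2.0, 1.5))   # -3.5 dB -> underestimates own hearing ability
```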
Role of masker predictability in the cocktail party problem
Jones, Gary L.; Litovsky, Ruth Y.
2008-01-01
In studies of the cocktail party problem, the number and locations of maskers are typically fixed throughout a block of trials, which leaves out uncertainty that exists in real-world environments. The current experiments examined whether there is (1) improved speech intelligibility and (2) increased spatial release from masking (SRM), as predictability of the number/locations of speech maskers is increased. In the first experiment, subjects identified a target word presented at a fixed level in the presence of 0, 1, or 2 maskers as predictability of the masker configuration ranged from 10% to 80%. The second experiment examined speech reception thresholds and SRM as (a) predictability of the masker configuration is increased from 20% to 80% and/or (b) the complexity of the listening environment is decreased. In the third experiment, predictability of the masker configuration was increased from 20% up to 100% while minimizing the onset delay between maskers and the target. All experiments showed no effect of predictability of the masker configuration on speech intelligibility or SRM. These results suggest that knowing the number and location(s) of maskers may not necessarily contribute significantly to solving the cocktail party problem, at least not when the location of the target is known. PMID:19206808
Speech recognition by bilateral cochlear implant users in a cocktail-party setting
Loizou, Philipos C.; Hu, Yi; Litovsky, Ruth; Yu, Gongqiang; Peters, Robert; Lake, Jennifer; Roland, Peter
2009-01-01
Unlike prior studies with bilateral cochlear implant users which considered only one interferer, the present study considered realistic listening situations wherein multiple interferers were present and in some cases originating from both hemifields. Speech reception thresholds were measured in bilateral users unilaterally and bilaterally in four different spatial configurations, with one and three interferers consisting of modulated noise or competing talkers. The data were analyzed in terms of binaural benefits including monaural advantage (better-ear listening) and binaural interaction. The total advantage (overall spatial release) received was 2–5 dB and was maintained with multiple interferers present. This advantage was dominated by the monaural advantage, which ranged from 1 to 6 dB and was largest when the interferers were mostly energetic. No binaural-interaction benefit was found in the present study with either type of interferer (speech or noise). While the total and monaural advantage obtained for noise interferers was comparable to that attained by normal-hearing listeners, it was considerably lower for speech interferers. This suggests that bilateral users are less capable of taking advantage of binaural cues, in particular, under conditions of informational masking. Furthermore, the use of noise interferers does not adequately reflect the difficulties experienced by bilateral users in real-life situations. PMID:19173424
A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.
Mi, Jing; Colburn, H Steven
2016-10-03
Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
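The core feature of the model described above (cancel the target by EC processing and use the input-to-output energy change as a target-dominance feature) can be sketched in a few lines. The following is a heavily simplified illustration under stated assumptions (a diotic frontal target, so equalization is the identity, and an arbitrary 6-dB threshold), not the published model.

```python
# Hedged, highly simplified sketch of the EC-based binary-mask idea.
import numpy as np
from scipy.signal import stft

def ec_binary_mask(left, right, fs=16000, thresh_db=6.0):
    """Mark time-frequency units where subtracting L - R (EC cancellation of
    a diotic frontal target) removes most of the energy, i.e., units that are
    target-dominated."""
    _, _, L = stft(left, fs=fs, nperseg=512)
    _, _, R = stft(right, fs=fs, nperseg=512)
    in_energy = (np.abs(L) ** 2 + np.abs(R) ** 2) / 2
    out_energy = np.abs(L - R) ** 2              # EC output: target cancelled
    gain_db = 10 * np.log10((in_energy + 1e-12) / (out_energy + 1e-12))
    return gain_db > thresh_db                   # big energy drop -> target unit

# Toy usage: diotic target plus decorrelated noise at each ear.
rng = np.random.default_rng(1)
target = rng.standard_normal(16000)
mask = ec_binary_mask(target + rng.standard_normal(16000),
                      target + rng.standard_normal(16000))
print(mask.shape, mask.mean())
```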
Neher, Tobias; Wagener, Kirsten C; Latzel, Matthias
2017-09-01
Hearing aid (HA) users can differ markedly in their benefit from directional processing (or beamforming) algorithms. The current study therefore investigated candidacy for different bilateral directional processing schemes. Groups of elderly listeners with symmetric (N = 20) or asymmetric (N = 19) hearing thresholds for frequencies below 2 kHz, a large spread in the binaural intelligibility level difference (BILD), and no difference in age, overall degree of hearing loss, or performance on a measure of selective attention took part. Aided speech reception was measured using virtual acoustics together with a simulation of a linked pair of completely occluding behind-the-ear HAs. Five processing schemes and three acoustic scenarios were used. The processing schemes differed in the tradeoff between signal-to-noise ratio (SNR) improvement and binaural cue preservation. The acoustic scenarios consisted of a frontal target talker presented against two speech maskers from ±60° azimuth or spatially diffuse cafeteria noise. For both groups, a significant interaction between BILD, processing scheme, and acoustic scenario was found. This interaction implied that, in situations with lateral speech maskers, HA users with BILDs larger than about 2 dB profited more from preserved low-frequency binaural cues than from greater SNR improvement, whereas for smaller BILDs the opposite was true. Audiometric asymmetry reduced the influence of binaural hearing. In spatially diffuse noise, the maximal SNR improvement was generally beneficial. N0Sπ detection performance at 500 Hz predicted the benefit from low-frequency binaural cues. Together, these findings provide a basis for adapting bilateral directional processing to individual and situational influences. Further research is needed to investigate their generalizability to more realistic HA conditions (e.g., with low-frequency vent-transmitted sound). Copyright © 2017 Elsevier B.V. All rights reserved.
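The candidacy pattern above suggests a simple fitting rule. The sketch below encodes it under assumptions: the ~2 dB BILD criterion comes from the abstract, while the scenario labels and scheme names are illustrative.

```python
# Hedged sketch of a candidacy rule implied by the findings above: with
# lateral speech maskers, listeners with BILD > ~2 dB profit more from
# binaural-cue preservation, otherwise from maximal SNR improvement;
# in diffuse noise, maximal SNR improvement is generally beneficial.
def choose_processing(bild_db, scenario):
    if scenario == "diffuse_noise":
        return "max-SNR beamformer"
    if scenario == "lateral_speech_maskers":
        return ("binaural-cue-preserving scheme" if bild_db > 2.0
                else "max-SNR beamformer")
    raise ValueError(f"unknown scenario: {scenario}")

print(choose_processing(4.5, "lateral_speech_maskers"))  # preserve binaural cues
print(choose_processing(1.0, "lateral_speech_maskers"))  # maximize SNR
```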
Sahli, A Sanem; Belgin, Erol
2017-07-01
Speech and language assessment is very important in the early diagnosis of children with hearing and speech disorders. The aim of this study was to determine the validity and reliability of the Turkish translation and adaptation of the Preschool Language Scale, 5th edition (PLS-5). The study was conducted on 1,320 children aged between 0 and 7 years 11 months: 1,044 had normal hearing, language, and speech development, while 276 had a receptive and/or expressive language disorder. After English-Turkish and Turkish-English translations of the PLS-5 were made by two experts with command of both languages, some test items were reorganized to reflect the grammatical features of Turkish and the cultural context of the country. A pilot study was conducted with 378 children. The test, revised in light of the pilot data, was administered to children chosen randomly with a stratified sampling technique from different regions of Turkey; 15 days later, the test was administered again to 120 of these children. Of the children retested, 98 of 103 who previously had typical development again showed normal language development, while 8 of 9 who previously had a language disorder still did (Kappa coefficient: 0.468, p < 0.001). Pearson correlation coefficients for the TPLS-5 standard scores were 0.937 for the IA raw score, 0.908 for the IED raw score, and 0.887 for the TDP; the corresponding coefficients for age equivalence were 0.871 (IA), 0.896 (IED), and 0.887 (TDP). The TPLS-5 is the first language test in Turkey that can evaluate the receptive and/or expressive language skills of children aged 0-7 years 11 months. The results show that the TPLS-5 is a valid and reliable language test for Turkish children. Copyright © 2017. Published by Elsevier B.V.
Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.
Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa
2016-07-01
The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. The aims were to investigate the differences in auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This was a prospective, descriptive, cross-sectional cohort study. We selected ten cochlear implant users, who were characterized by hearing thresholds, speech perception tests, and the Hearing Handicap Inventory for Adults. No significant differences were found in subject age, age at acquisition of hearing loss, etiology, duration of hearing deprivation, duration of cochlear implant use, or mean aided hearing threshold in relation to the change in speech coding strategy. There was no relationship between lack of handicap perception and improvement in speech perception for either speech coding strategy. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.
May-Mederake, Birgit; Shehata-Dieler, Wafaa
2013-01-01
Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit greater from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests, aided free-field, the Mainzer and Göttinger speech discrimination tests, Monosyllabic-Trochee-Polysyllabic (MTP), and Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German language speech development test (SETK), reception of grammar test (TROG-D) and active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. SETK showed that the speech development of these children was in general age appropriate. The data suggests that early hearing loss intervention benefits speech and language development and supports the trend towards early cochlear implantation. Furthermore, the data emphasizes the potential benefits associated with bilateral implantation.
Mesnildrey, Quentin; Hilkhuysen, Gaston; Macherey, Olivier
2016-02-01
Noise- and sine-carrier vocoders are often used to acoustically simulate the information transmitted by a cochlear implant (CI). However, sine waves fail to mimic the broad spread of excitation produced by a CI, and noise bands contain intrinsic modulations that are absent in CIs. The present study proposes pulse-spreading harmonic complexes (PSHCs) as an alternative acoustic carrier in vocoders. Sentence-in-noise recognition was measured in 12 normal-hearing subjects for noise-, sine-, and PSHC-vocoders. Consistent with the amount of intrinsic modulations present in each vocoder condition, the average speech reception threshold obtained with the PSHC-vocoder was higher than with sine-vocoding but lower than with noise-vocoding.
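For orientation, here is a hedged sketch of the generic channel vocoder that the carrier comparison above builds on (not the study's implementation, and without the PSHC carrier itself): band-filter the speech, extract each band's envelope, and reimpose it on a sine or noise-band carrier. Channel edges, filter order, and sampling rate are illustrative assumptions, and no post-modulation refiltering is applied.

```python
# Hedged sketch of a generic channel vocoder (simplified).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000

def vocode(x, edges=(100, 400, 1000, 2400, 6000), sine_carrier=False, fs=FS):
    t = np.arange(len(x)) / fs
    out = np.zeros_like(x)
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], 'bandpass', fs=fs, output='sos')
        env = np.abs(hilbert(sosfiltfilt(sos, x)))    # band envelope
        if sine_carrier:
            carrier = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)      # band-centre sine
        else:
            carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))  # noise band
        out += env * carrier                           # reimpose envelope on carrier
    return out

speech = np.random.randn(FS)        # stand-in for a speech waveform
noise_voc = vocode(speech)          # noise carrier: intrinsic modulations
sine_voc = vocode(speech, sine_carrier=True)
```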
Spencer, Caroline; Weber-Fox, Christine
2014-01-01
Purpose In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods Participants included 65 children: 25 children who do not stutter (CWNS) and 40 who stutter (CWS), recruited at age 3;9–5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), the Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), the Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), the Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and the Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results CWS-Per were less proficient than CWNS and CWS-Rec on measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
Hearing in middle age: a population snapshot of 40–69 year olds in the UK
Dawes, Piers; Fortnum, Heather; Moore, David R.; Emsley, Richard; Norman, Paul; Cruickshanks, Karen; Davis, Adrian; Edmondson-Jones, Mark; McCormack, Abby; Lutman, Mark; Munro, Kevin
2014-01-01
Objective To report population-based prevalence of hearing impairment based on speech recognition in noise testing in a large and inclusive sample of UK adults aged 40 to 69 years. The present study is the first to report such data. Prevalence of tinnitus and use of hearing aids is also reported. Design The research was conducted using the UK Biobank resource. The better-ear unaided speech reception threshold was measured adaptively using the Digit Triplet Test (n = 164,770). Self-report data on tinnitus, hearing aid use, noise exposure as well as demographic variables were collected. Results Overall, 10.7% of adults (95%CI 10.5–10.9%) had significant hearing impairment. Prevalence of tinnitus was 16.9% (95%CI 16.6–17.1%) and hearing aid use was 2.0% (95%CI 1.9–2.1%). Odds of hearing impairment increased with age, with a history of work- and music-related noise exposure, for lower socioeconomic background and for ethnic minority backgrounds. Males were at no higher risk of hearing impairment than females. Conclusion Around 1 in 10 adults aged 40 to 69 years have substantial hearing impairment. The reasons for excess risk of hearing impairment particularly for those from low socioeconomic and ethnic minority backgrounds require identification, as this represents a serious health inequality. The underutilization of hearing aids has altered little since the 1980s, and is a major cause for concern. PMID:24518430
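The better-ear SRT above was measured with an adaptive procedure. As a hedged illustration of how such adaptive SRT tracking works in general (the actual UK Biobank Digit Triplet Test procedure may differ in step rule and scoring), here is a minimal one-up/one-down staircase that converges near the 50%-correct point; step size, trial count, and the toy listener are assumptions.

```python
# Hedged sketch of a generic one-up/one-down adaptive SRT track: the SNR
# falls after a correct response and rises after an error, converging near
# 50% correct. Step size and trial count are illustrative.
import random

def adaptive_srt(respond, start_snr_db=0.0, step_db=2.0, n_trials=25):
    """respond(snr) -> True if the triplet was repeated correctly."""
    snr, track = start_snr_db, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step_db if respond(snr) else step_db
    return sum(track[-10:]) / 10      # SRT estimate: mean of last 10 trial SNRs

# Toy listener: logistic psychometric function with a true SRT of -8 dB SNR.
random.seed(0)
listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 8) / 4))
print(f"estimated SRT ~ {adaptive_srt(listener):.1f} dB SNR")
```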
Association of Age Related Macular Degeneration and Age Related Hearing Impairment
Ghasemi, Hassan; Pourakbari, Malihe Shahidi; Entezari, Morteza; Yarmohammadi, Mohammad Ebrahim
2016-01-01
Purpose: To evaluate the association between age-related macular degeneration (ARMD) and sensorineural hearing impairment (SHI). Methods: In this case-control study, the hearing status of 46 consecutive patients with ARMD was compared with that of 46 age-matched cases without clinical ARMD as a control group. In all patients, retinal involvement was confirmed by clinical examination, fluorescein angiography (FA), and optical coherence tomography (OCT). All participants were examined with an otoscope and underwent audiological tests including pure tone audiometry (PTA), speech reception threshold (SRT), speech discrimination score (SDS), tympanometry, reflex tests, and auditory brainstem response (ABR). Results: A significant (P = 0.009) association was present between ARMD, especially with exudative and choroidal neovascularization (CNV) components, and age-related hearing impairment primarily involving high frequencies. Patients had higher SRTs and lower SDSs than control subjects, beyond what would be anticipated from presbycusis. Similar results were detected in exudative, CNV, and scar patterns, supporting an association between late ARMD and SRT and SDS abnormalities. ABR showed significantly prolonged wave I and IV latency times in ARMD (P = 0.034 and 0.022, respectively). Average latency periods for wave I in geographic atrophy (GA) and CNV, and that for wave IV in drusen patterns of ARMD, were significantly higher than in controls (P = 0.030, 0.007 and 0.050, respectively). Conclusion: The association between ARMD and age-related SHI may be attributed to common anatomical components such as melanin in these two sensory organs. PMID:27195086
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
Law, J; Campbell, C; Roulstone, S; Adams, C; Boyle, J
2008-01-01
Receptive language impairment (RLI) is one of the most significant indicators of negative sequelae for children with speech and language disorders. Despite this, relatively little is known about the most effective treatments for these children in the primary school period. To explore the relationship between the reported practice of speech and language practitioners and the underlying rationales for the therapy that they provide, a phenomenological approach was adopted, drawing on the experiences of speech and language practitioners. Practitioners completed a questionnaire relating to their practice for a single child with receptive language impairment within the 5-11 age range, providing details and rationales for three recent therapy activities. The responses of 56 participants were coded. All the children described experienced marked receptive language impairments, in the main associated with expressive language difficulties and/or social communication problems. The relative homogeneity of the presenting symptoms in terms of test performance was not reflected in the highly differentiated descriptions of intervention. One of the key determinants of how therapists described their practice was the child's age. As the child develops, therapists appeared to shift from a 'skills acquisition' orientation to a 'meta-cognitive' orientation; that is, they move away from teaching specific linguistic behaviours towards teaching children strategies for thinking and using their language. A third of rationales referred to explicit theories, but only half of these referred to the work of specific authors. Many of these were theories of practice rather than theories of deficit, and of those that did cite specific theories, no fewer than 29 different authors were cited, many of whom might best be described as translators of existing theories rather than generators of novel theories. While theories of the deficit dominate the literature, they appear to play a relatively small part in the eclectic practice of speech and language therapists. Theories of therapy may develop relatively independently of theories of deficit. While this may not present a problem for the practitioner, whose principal focus is remediation, it may present a problem for the researcher developing intervention efficacy studies, where the theory of the deficit will need to be well defined in order to describe both the subgroup of children under investigation and the parameters of the deficit to be targeted in intervention.
Speech and language development in 2-year-old children with cerebral palsy.
Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine
2014-06-01
We examined early speech and language development in children who had cerebral palsy (CP). Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. The three groups identified were children not yet talking (44% of the sample), children whose talking abilities appeared to be emerging (41%), and established talkers (15%). Group differences were evident on all variables except receptive language skills. In all, 85% of the 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.
NASA Technical Reports Server (NTRS)
Creecy, R.
1974-01-01
A speech-modulated white-noise device is reported that conveys the rhythmic characteristics of a speech signal for intelligible reception by deaf persons. The signal is composed of random amplitudes and frequencies modulated by the speech envelope's characteristics of rhythm and stress. Time-intensity parameters of speech are conveyed through vibrotactile stimuli.
Word production inconsistency of Singaporean-English-speaking adolescents with Down Syndrome.
Wong, Betty; Brebner, Chris; McCormack, Paul; Butcher, Andy
2015-01-01
The nature of speech disorders in individuals with Down syndrome (DS) remains controversial despite various explanations put forth in the literature to account for the observed speech profiles. A high level of word production inconsistency in children with DS has led researchers to query whether the inconsistency continues into adolescence, and whether it stems from inconsistent phonological disorder (IPD) or childhood apraxia of speech (CAS). Of the studies that have been published, most suggest that the speech profile of individuals with DS is delayed, while a few recent studies suggest a combination of delayed and disordered patterns. However, no studies have explored the nature of word production inconsistency in this population, or the relationship between word production inconsistency, receptive vocabulary and severity of speech disorder. To investigate, in a pilot study, the extent of word production inconsistency in adolescents with DS and to examine the correlations between word production inconsistency, measures of receptive vocabulary, severity of speech disorder and oromotor skills in adolescents with DS. The participants were 32 adolescent native speakers of Singaporean English: 16 with DS and 16 typically developing (TD). The participants completed a battery of standardized speech and language assessments, including the Diagnostic Evaluation of Articulation and Phonology (DEAP) assessment. Results from each test were correlated to determine relationships. Qualitative analyses were also carried out on all the data collected. In this study, seven out of 16 participants with DS scored above 40% on word production inconsistency, a diagnostic criterion for IPD. In addition, all participants with DS performed poorly on the oromotor assessment of the DEAP. The overall speech profile observed did not exactly correspond with the cluster of symptoms observed in children with IPD or CAS. Word production inconsistency is a noticeable feature in the speech of individuals with DS. In addition, the speech profiles of individuals with DS consist of atypical and unusual errors alongside developmental errors. Significant correlations were found between the measures investigated, suggesting that speech disorder in DS is multifactorial. The results from this study will help to improve differential diagnosis of speech disorders and individualized treatment plans in the population with DS. © 2015 Royal College of Speech and Language Therapists.
Hearing loss in former prisoners of war of the Japanese.
Grossman, T W; Kerr, H D; Byrd, J C
1996-09-01
To describe the prevalence, degree, and types of hearing loss present in a group of older American veterans who had been prisoners of war of the Japanese. A descriptive study. A Veterans Affairs university hospital. Seventy-five male veterans, mean age 68 (+/- 3.6) years. Hearing aids were prescribed for eight veterans. Subjects were examined, and pure tone air and bone conduction, speech reception threshold, and speech discrimination were determined. Results were compared with age- and sex-matched controls from the largest recent American population study of hearing loss. 95% of subjects had been imprisoned longer than 33 months. Starvation conditions (100%), head trauma (85%), and trauma-related loss of consciousness (23%) were commonly reported. A total of 73% complained of hearing loss, and 29% (22/75) dated its onset to captivity. Most of those with the worst losses in hearing and speech discrimination were found in this subgroup. When the entire group was compared with published age- and sex-matched controls from the Framingham Study, no significant differences were found. We advocate screening examinations and long-term follow-up of populations with similar histories of starvation, head trauma, and torture.
[Receptive and expressive speech development in children with cochlear implant].
Streicher, B; Kral, K; Hahn, M; Lang-Roth, R
2015-04-01
This study's aim was to assess the language development of children with cochlear implants (CI), focusing on receptive and expressive language development as well as auditory memory skills. Grimm's language development test (SETK 3-5) evaluates receptive and expressive language development and auditory memory. Data from 49 children who received their implant within the first 3 years of life were compared with the norms of hearing children aged 3.0-3.5 years. According to age at implantation, the cohort was subdivided into 3 groups: cochlear implantation within the first 12 months of life (group 1), between the 13th and 24th month of life (group 2), and in the 25th month of life or later (group 3). Complete data for all SETK 3-5 subtests could be collected for 63% of the participants. A homogeneous profile across all subtests indicates a balanced receptive and expressive language development, which reduces the gap between hearing/language age and chronological age. Receptive and expressive language and auditory memory milestones are achieved earlier by children implanted within their first year of life than by later-implanted children. The Language Test for Children (SETK 3-5) is an appropriate procedure for the language assessment of children who received a CI. It can be used from age 3 on to obtain data on receptive and expressive language development and auditory memory. © Georg Thieme Verlag KG Stuttgart · New York.
Lalonde, Kaylah; Holt, Rachael Frush
2017-01-01
Purpose This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method Twenty typically developing 2-year-olds completed the newly modified toddler change/no-change procedure. Behavioral tests and parent report questionnaires were used to measure several cognitive and linguistic constructs. Stepwise linear regression was used to relate discrimination sensitivity to the cognitive and linguistic measures. In addition, discrimination results from the current experiment were compared with those from 2-year-old children tested in a previous experiment. Results Receptive vocabulary and working memory explained 56.6% of variance in discrimination performance. Performance on the modified toddler change/no-change procedure used in the current experiment did not differ from performance in a previous investigation, which used the original version of the procedure. Conclusions The relationship between speech discrimination and receptive vocabulary and working memory provides further evidence that the procedure is sensitive to the strength of perceptual representations. The role for working memory might also suggest that there are specific subject-related, nonsensory factors limiting the applicability of the procedure to children who have not reached the necessary levels of cognitive and linguistic development. PMID:24023371
Segal, Nili; Shkolnik, Mark; Kochba, Anat; Segal, Avichai; Kraus, Mordechai
2007-01-01
We evaluated the correlation of asymmetric hearing loss, in a random population of patients with mild to moderate sensorineural hearing loss, with several clinical factors such as age, sex, handedness, and noise exposure. We randomly selected, from 8 hearing institutes in Israel, 429 patients with sensorineural hearing loss of at least 30 dB at one frequency and a speech reception threshold not exceeding 30 dB. Patients with middle ear disease or retrocochlear disorders were excluded. The results of audiometric examinations were compared binaurally and in relation to the selected factors. The left ear's hearing threshold level was significantly higher than that of the right ear at all frequencies except 1.0 kHz (p < .05). One hundred fifty patients (35%) had asymmetric hearing loss (more than 10 dB difference between ears). In most of the patients (85%), the binaural difference in hearing threshold level, at any frequency, was less than 20 dB. Age, handedness, and sex were not found to be correlated with asymmetric hearing loss. Noise exposure was found to be correlated with asymmetric hearing loss.
Assessment of central auditory processing in a group of workers exposed to solvents.
Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan
2006-12-01
Despite having normal hearing thresholds and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds. A central auditory processing disorder may underlie these difficulties. To study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents performed more poorly than the control group and previously reported normative data on the majority of the tests.
Cox, Robyn M; Alexander, Genevieve C; Johnson, Jani; Rivera, Izel
2011-01-01
We investigated the prevalence of cochlear dead regions in listeners with hearing losses similar to those of many hearing aid wearers, and explored the impact of these dead regions on speech perception. Prevalence of dead regions was assessed using the Threshold Equalizing Noise test (TEN(HL)). Speech recognition was measured using high-frequency emphasis (HFE) Quick Speech In Noise (QSIN) test stimuli and low-pass filtered HFE QSIN stimuli. About one third of subjects tested positive for a dead region at one or more frequencies. Also, groups without and with dead regions both benefited from additional high-frequency speech cues. PMID:21522068
Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners
ERIC Educational Resources Information Center
Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie
2012-01-01
Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…
Nuesse, Theresa; Steenken, Rike; Neher, Tobias; Holube, Inga
2018-01-01
Elderly listeners are known to differ considerably in their ability to understand speech in noise. Several studies have addressed the underlying factors that contribute to these differences. These factors include audibility and age-related changes in supra-threshold auditory processing abilities, and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age-appropriate hearing or hearing impairment. To that end, speech recognition threshold (SRT) measurements were performed under several masking conditions that varied along the perceptual dimensions of dip listening, spatial separation, and informational masking. In addition, a neuropsychological test battery was administered, which included measures of verbal working and short-term memory, executive functioning, selective and divided attention, and lexical and semantic abilities. Age-matched groups of older adults with either age-appropriate hearing (ENH, n = 20) or aided hearing impairment (EHI, n = 21) participated. In repeated linear regression analyses, composite scores of cognitive test outcomes (derived using principal component analysis, PCA) were included to predict SRTs. These associations were different for the two groups. When hearing thresholds were controlled for, the composite cognitive factors were significantly associated with the SRTs for the ENH listeners. Whereas better lexical and semantic abilities were associated with lower (better) SRTs in this group, there was a negative association between attentional abilities and speech recognition in the presence of spatially separated speech-like maskers. For the EHI group, the pure-tone thresholds (averaged across 0.5, 1, 2, and 4 kHz) were significantly associated with the SRTs, despite the fact that all signals were amplified and therefore in principle audible. PMID:29867654
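The analysis pipeline this abstract describes, reducing a battery of cognitive scores to composite factors and regressing SRTs on them while controlling for pure-tone thresholds, can be illustrated in a few lines. This is a minimal sketch on synthetic data; the variable names and the choice of three components are assumptions of ours, not the authors' published specification.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-ins: z-scored cognitive test outcomes (listeners x tests),
    # pure-tone averages across 0.5-4 kHz, and measured SRTs in dB SNR.
    rng = np.random.default_rng(0)
    cog = rng.standard_normal((20, 8))
    pta = rng.uniform(5.0, 40.0, 20)
    srt = rng.normal(-5.0, 2.0, 20)

    composites = PCA(n_components=3).fit_transform(cog)  # composite cognitive factors
    X = np.column_stack([composites, pta])               # include PTA to control for audibility
    model = LinearRegression().fit(X, srt)
    print(model.coef_)  # association of each composite factor (and PTA) with the SRT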
Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey
2017-09-01
Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high-frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Reception of distorted speech.
DOT National Transportation Integrated Search
1973-12-01
Noise, either in the form of masking or in the form of distortion products, interferes with speech intelligibility. When the signal-to-noise ratio is bad enough, articulation can drop to unacceptably--even dangerously--low levels. However, listeners ...
Hauth, Christopher F; Brand, Thomas
2018-01-01
In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect "binaural sluggishness." In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization-cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.
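The masker construction described here, a noise whose interaural phase difference (IPD) is modulated sinusoidally over time, can be sketched compactly using the analytic signal. A minimal sketch, assuming plain Gaussian noise (the study used speech-shaped noise, which would require an additional spectral shaping step); parameter names and defaults are ours.

    import numpy as np
    from scipy.signal import hilbert

    def ipd_modulated_noise(duration=2.0, fs=44100, fmod=4.0, ipd_max=np.pi):
        # Stereo noise whose IPD varies as ipd_max * sin(2*pi*fmod*t);
        # fmod would be swept from 0.25 Hz to 64 Hz in the experiment.
        n = int(duration * fs)
        analytic = hilbert(np.random.randn(n))              # complex analytic signal
        t = np.arange(n) / fs
        phi = 0.5 * ipd_max * np.sin(2 * np.pi * fmod * t)  # half the IPD per ear
        left = np.real(analytic * np.exp(+1j * phi))        # +phi applied to left ear
        right = np.real(analytic * np.exp(-1j * phi))       # -phi applied to right ear
        return np.column_stack([left, right])

Multiplying the analytic signal by exp(j*phi) rotates the phase of every positive-frequency component by phi, so the two ear signals differ by the desired instantaneous IPD of 2*phi.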
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions: with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise conditions (65 dB). Results showed that the inattentive group displayed a higher speech recognition threshold than typically developing children and that the difference in speech recognition threshold disappeared when the children were exposed to noise at a supra-threshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
Firszt, Jill B; Reeder, Ruth M; Holden, Laura K
At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise, yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age- and sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different on speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all, measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered as potential cochlear implant candidates.
Potts, Lisa G; Skinner, Margaret W; Litovsky, Ruth A; Strube, Michael J; Kuk, Francis
2009-06-01
The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite, nonimplanted ear (bimodal hearing). This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. A repeated-measures correlational study was completed. Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140-degree loudspeaker array) was used. Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant-only and hearing aid-only conditions. Performance was also different between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, performance on the speech-recognition and localization tasks was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1-3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.
Magalhães, Ana Tereza de Matos; Goffi-Gomez, M Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2013-09-01
To identify the technological contributions of the newer version of the speech processor to the first generation of multichannel cochlear implant, and the satisfaction of users of the new technology. Among the new features available, we focused on the effect of the frequency allocation table, the T-SPL and C-SPL, and the preprocessing gain adjustments (adaptive dynamic range optimization). Prospective exploratory study. Cochlear implant center at a hospital. Cochlear implant users of the Spectra processor with closed-set speech recognition. Seventeen patients between the ages of 15 and 82 years, who had used their implants for more than 8 years, were selected. The intervention was the technology update of the speech processor for the Nucleus 22. To determine Freedom's contribution, threshold and speech perception tests were performed with the last map used with the Spectra and with the maps created for Freedom. To identify the effect of the frequency allocation table, both upgraded and converted maps were programmed. One map was programmed with 25 dB T-SPL and 65 dB C-SPL, and the other map with adaptive dynamic range optimization. To assess satisfaction, the SADL and APHAB questionnaires were used. All speech perception tests and all sound field thresholds were statistically better with the new speech processor; 64.7% of patients preferred maintaining the same frequency table that was suggested for the older processor. The sound field threshold was statistically significantly better at 500, 1,000, 1,500, and 2,000 Hz with 25 dB T-SPL/65 dB C-SPL. Regarding patients' satisfaction, there was a statistically significant improvement only in the subscales of speech-in-noise abilities and phone use. The new technology improved the performance of patients with the first generation of multichannel cochlear implant.
Developmental language and speech disability.
Spiel, G; Brunner, E; Allmayer, B; Pletz, A
2001-09-01
Speech disabilities (articulation deficits) and language disorders--expressive (vocabulary) and receptive (language comprehension)--are not uncommon in children. An overview of these, along with a global description of the impairment of communication as well as clinical characteristics of developmental language disorders, is presented in this article. The diagnostic schemes applied in the European and Anglo-American language areas, ICD-10 and DSM-IV, are explained and compared. Because of their strengths and weaknesses, an alternative classification of language and speech developmental disorders is proposed, which allows a differentiation between expressive and receptive language capabilities with regard to the semantic and the morphological/syntactic domains. Prevalence and comorbidity rates, psychosocial influences, biological factors and the interaction of biological and social factors are discussed. The necessity of using standardized examinations is emphasised. General logopaedic treatment paradigms, specific therapy concepts and an overview of prognosis are described.
Dynamics of infant cortical auditory evoked potentials (CAEPs) for tone and speech tokens.
Cone, Barbara; Whitaker, Richard
2013-07-01
Cortical auditory evoked potentials (CAEPs) to tones and speech sounds were obtained in infants to: (1) further knowledge of auditory development above the level of the brainstem during the first year of life; (2) establish CAEP input-output functions for tonal and speech stimuli as a function of stimulus level; and (3) elaborate the database establishing CAEPs in infants tested while awake using clinically relevant stimuli, thus providing methodology that would translate to pediatric audiological assessment. Hypotheses concerning CAEP development were that the latency and amplitude input-output functions would reflect immaturity in encoding stimulus level. In a second experiment, infants were tested with the same stimuli used to evoke the CAEPs. Thresholds for these stimuli were determined using observer-based psychophysical techniques. The hypothesis was that the behavioral thresholds would be correlated with CAEP input-output functions because of shared cortical response areas known to be active in sound detection. Thirty-six infants, between the ages of 4 and 12 months (mean = 8 months, SD = 1.8 months), and 9 young adults (mean age 21 years) with normal hearing were tested. First, CAEP amplitude and latency input-output functions were obtained for 4 tone bursts and 7 speech tokens. The tone-burst stimuli were 50-ms tokens of pure tones at 0.5, 1.0, 2.0 and 4.0 kHz. The speech sound tokens, /a/, /i/, /o/, /u/, /m/, /s/, and /∫/, were created from natural speech samples and were also 50 ms in duration. CAEPs were obtained for tone-burst and speech-token stimuli at 10 dB level decrements in descending order from 70 dB SPL. All CAEP tests were completed while the infants were awake and engaged in quiet play. For the second experiment, observer-based psychophysical methods were used to establish perceptual thresholds for the same speech-sound and tone tokens. Infant CAEP component latencies were prolonged by 100-150 ms in comparison to adults. CAEP latency-intensity input-output functions were steeper in infants compared to adults. CAEP amplitude growth functions with respect to stimulus SPL are adult-like at this age, particularly for the earliest component, P1-N1. Infant perceptual thresholds were elevated with respect to those found in adults. Furthermore, perceptual thresholds were higher, on average, than levels at which CAEPs could be obtained. When CAEP amplitudes were plotted with respect to perceptual threshold (dB SL), the infant CAEP amplitude growth slopes were steeper than in adults. Although CAEP latencies indicate immaturity in neural transmission at the level of the cortex, amplitude growth with respect to stimulus SPL is adult-like at this age, particularly for the earliest component, P1-N1. The latency and amplitude input-output functions may provide additional information as to how infants perceive stimulus level. The discrepancy between electrophysiologic and perceptual thresholds may be due to immaturity in perceptual temporal resolution abilities and the broad-band listening strategy employed by infants. The findings from the current study can be translated to the clinical setting. It is possible to use tonal or speech-sound tokens to evoke CAEPs in an awake, passively alert infant, and thus determine whether these sounds activate the auditory cortex. This could be beneficial in the verification of hearing aid or cochlear implant benefit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Hoth, S
2016-08-01
The Freiburg speech intelligibility test according to DIN 45621 was introduced around 60 years ago. For decades, and still today, the Freiburg test has been a standard whose relevance extends far beyond pure audiometry. It is used primarily to determine the speech perception threshold (based on two-digit numbers) and the ability to discriminate speech at suprathreshold presentation levels (based on monosyllabic nouns). Moreover, it serves to assess the degree of disability, the indication for and success of technical hearing aids (auxiliary aids directives), and the compensation for disability and handicap (Königstein recommendation). In differential audiological diagnostics, the Freiburg test contributes to the distinction between low- and high-frequency hearing loss, as well as to the identification of conductive, sensory, neural, and central disorders. Currently, the phonemic and perceptual balance of the monosyllabic test lists is subject to critical discussion. Obvious deficiencies exist for testing speech recognition in noise. In this respect, alternatives such as sentence or rhyme tests with closed-answer inventories are discussed.
Characterization of hearing loss in aged type II diabetics
Frisina, Susan T.; Mapes, Frances; Kim, SungHee; Frisina, D. Robert; Frisina, Robert D.
2009-01-01
Presbycusis – age-related hearing loss – is the number one communicative disorder and a significant chronic medical condition of the aged. Little is known about how type II diabetes, another prevalent age-related medical condition, and presbycusis interact. The present investigation aimed to comprehensively characterize the nature of hearing impairment in aged type II diabetics. Hearing tests measuring both peripheral (cochlea) and central (brainstem and cortex) auditory processing were utilized. The majority of differences between the hearing abilities of the aged diabetics and their age-matched controls were found in measures of inner ear function. For example, large differences were found in pure-tone audiograms, wideband noise and speech reception thresholds, and otoacoustic emissions. The greatest deficits tended to be at low frequencies. In addition, there was a strong tendency for diabetes to affect the right ear more than the left. One possible interpretation is that as one develops presbycusis, the right ear advantage is lost, and this decline is accelerated by diabetes. In contrast, auditory processing tests that measure both peripheral and central processing showed fewer declines between the elderly diabetics and the control group. Consequences of elevated blood sugar levels as possible underlying physiological mechanisms for the hearing loss are discussed. PMID:16309862
Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut
Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures of audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which makes it feasible to lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR obtained with live or filmed talkers, and it was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH listeners gain from presenting AV over unimodal (auditory or visual) sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similarly broad age distribution), lipreading, AV gain, and integration on a German matrix sentence test were assessed. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that, within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH listeners benefitted from AV over unimodal speech, as indexed by calculations of the measures visual enhancement and auditory enhancement (each p < 0.001). Both groups efficiently integrated complementary auditory and visual speech features, as indexed by calculations of the measure integration enhancement (each p < 0.005). Given the good agreement between results from the literature and the outcome of supplementing an existing validated auditory test with synthetic visual cues, the introduced method can be considered an interesting candidate for clinical and scientific applications to assess measures important for AV SR in a standardized manner. This could be beneficial for optimizing the diagnosis and treatment of individual listening and communication disorders, such as cochlear implantation.
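The enhancement measures named in this abstract are often computed as gains normalized by the available headroom. A minimal sketch under that assumption (a Sumby-and-Pollack-style normalization); the abstract does not give the study's exact formulas, which may differ.

    def visual_enhancement(av, a):
        # Gain from adding vision, scaled by the headroom above the
        # auditory-only score (scores in percent correct). One common
        # formulation only; division assumes a < 100.
        return (av - a) / (100.0 - a)

    def auditory_enhancement(av, v):
        # Same idea with the roles of the two modalities swapped.
        return (av - v) / (100.0 - v)

    print(visual_enhancement(av=75.0, a=40.0))  # 0.58: vision recovers ~58% of the headroom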
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Zhang, Xuyang; Gantz, Bruce; Froman, Rebecca; Olszewski, Carol
2007-06-01
The purposes of this study were to (a) examine the accuracy of cochlear implant recipients who use different types of devices and signal processing strategies on pitch ranking as a function of interval size and frequency range and (b) examine the relations between this pitch perception measure and demographic variables, melody recognition, and speech reception in background noise. One hundred fourteen cochlear implant users and 21 normal-hearing adults were tested on a pitch discrimination task (pitch ranking) that required them to determine the direction of pitch change as a function of base frequency and interval size. Three groups were tested: (a) long electrode cochlear implant users (N = 101); (b) short electrode users who received acoustic plus electrical stimulation (A+E) (N = 13); and (c) a normal-hearing (NH) comparison group (N = 21). Pitch ranking was tested at standard frequencies of 131 to 1048 Hz, and the size of the pitch-change intervals ranged from 1 to 4 semitones. A generalized linear mixed model (GLMM) was fit to predict pitch ranking and to determine whether group differences exist as a function of base frequency and interval size. Overall significance was assessed with chi-square tests and individual effects with t-tests. Pitch-ranking accuracy was correlated with demographic measures (age at time of testing, length of profound deafness, months of implant use), frequency difference limens, familiar melody recognition, and two measures of speech reception in noise. The long electrode recipients performed significantly more poorly on pitch discrimination than the NH and A+E groups. The A+E users performed similarly to the NH listeners as a function of interval size in the lower base-frequency range, but their pitch discrimination scores deteriorated slightly in the higher frequency range. The long electrode recipients, although less accurate than participants in the NH and A+E groups, tended to perform with greater accuracy within the higher frequency range. There were statistically significant correlations between pitch ranking and familiar melody recognition as well as with pure-tone frequency difference limens at 200 and 400 Hz. Low-frequency acoustic hearing improves pitch discrimination as compared with traditional, electric-only cochlear implants. These findings have implications for musical tasks such as familiar melody recognition.
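The pitch-ranking task varies comparison tones by 1 to 4 semitones above a base frequency; on the equal-tempered scale a semitone corresponds to a frequency factor of 2^(1/12). A small sketch of that arithmetic; the octave-spaced base frequencies listed below are an assumption consistent with the stated 131-1048 Hz range, not the study's published list.

    def interval_frequency(base_hz, semitones):
        # Frequency `semitones` above `base_hz` on the equal-tempered scale.
        return base_hz * 2.0 ** (semitones / 12.0)

    for base in (131, 262, 524, 1048):  # octaves spanning the stated range (assumed)
        print(base, [round(interval_frequency(base, k), 1) for k in (1, 2, 3, 4)])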
The relevance of receptive vocabulary in reading comprehension.
Nalom, Ana Flávia de Oliveira; Soares, Aparecido José Couto; Cárnio, Maria Silvia
2015-01-01
To characterize the performance of students in the 5th year of primary school, with and without indicatives of reading and writing disorders, in receptive vocabulary and in reading comprehension of sentences and texts, and to verify possible correlations between them. This study was approved by the Research Ethics Committee of the institution (no. 098/13). Fifty-two 5th-year students from two public primary schools, with and without indicatives of reading and writing disorders, participated in this study. After signing the informed consent and undergoing a speech therapy assessment for the application of the inclusion criteria, the students completed a standardized test of receptive vocabulary and reading comprehension. The data were analyzed statistically using the Kruskal-Wallis test, analysis of variance techniques, and Spearman's rank correlation coefficient, with the level of significance set at 0.05. A receiver operating characteristic (ROC) curve was constructed in which reading comprehension was considered the gold standard. The students without indicatives of reading and writing disorders presented better performance in all tests. No significant correlation was found between the tests that evaluated reading comprehension in either group. A correlation was found between reading comprehension of texts and receptive vocabulary in the group without indicatives. In the absence of indicatives of reading and writing disorders, a good range of vocabulary contributes strongly to proficient reading comprehension of texts.
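The ROC analysis described, with reading comprehension as the gold standard and receptive vocabulary as the predictor, can be illustrated as below. Synthetic data only; the study's scoring and cut-offs are not reproduced here.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(1)
    comprehension_ok = rng.integers(0, 2, 52)                       # gold standard (0/1)
    vocabulary = 70 + 10 * comprehension_ok + rng.normal(0, 8, 52)  # predictor score

    fpr, tpr, thresholds = roc_curve(comprehension_ok, vocabulary)
    print(roc_auc_score(comprehension_ok, vocabulary))  # discriminative power of vocabulary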
Rossi, N F; Giacheti, C M
2017-07-01
The Williams syndrome (WS) phenotype is described as unique and intriguing. The aim of this study was to investigate the associations between speech-language abilities, general cognitive functioning and behavioural problems in individuals with WS, considering age effects and the speech-language characteristics of WS sub-groups. The study's participants were 26 individuals with WS and their parents. General cognitive functioning was assessed with the Wechsler Intelligence Scale. The Peabody Picture Vocabulary Test, the Token Test and the Cookie Theft Picture test were used as speech-language measures. Five speech-language characteristics were evaluated from a 30-min conversation (clichés, echolalia, perseverative speech, exaggerated prosody and monotone intonation). The Child Behaviour Checklist (CBCL 6-18) was used to assess behavioural problems. Higher single-word receptive vocabulary and narrative vocabulary were negatively associated with CBCL T-scores for Social Problems, Aggressive Behaviour and Total Problems. Speech rate was negatively associated with the CBCL Withdrawn/Depressed T-score. Monotone intonation was associated with shy behaviour, as was exaggerated prosody with talkative behaviour. Individuals with WS who showed perseverative speech and exaggerated prosody presented higher scores on Thought Problems. Echolalia was significantly associated with lower Verbal IQ. No significant association was found between IQ and behaviour problems. Age-associated effects were observed only for the Aggressive Behaviour scale. The associations reported in the present study may represent an insightful background for future predictive studies of speech-language, cognition and behaviour problems in WS. © 2017 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Cochlear Implantation in the Very Young Child: Issues Unique to the Under-1 Population
Cosetti, Maura; Roland, J. Thomas
2010-01-01
Since the advent of cochlear implantation, candidacy criteria have slowly broadened to include increasingly younger patients. Spurred by evidence demonstrating both perioperative safety and significantly increased speech and language benefit with early auditory intervention, children younger than 12 months of age are now being successfully implanted at many centers. This review highlights the unique challenges involved in cochlear implantation in the very young child, specifically diagnosis and certainty of testing, anesthetic risk, surgical technique, intraoperative testing and postoperative programming, long-term safety, development of receptive and expressive language, and outcomes of speech perception. Overall, the current body of literature indicates that cochlear implantation prior to 1 year of age is both safe and efficacious. PMID:20483813
ERIC Educational Resources Information Center
Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen
2017-01-01
Purpose: The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables--words of three or more syllables--are important to consider because unlike…
ERIC Educational Resources Information Center
Overby, Megan S.; Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose: This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the…
Overall intelligibility, articulation, resonance, voice and language in a child with Nager syndrome.
Van Lierde, Kristiane M; Luyten, Anke; Mortier, Geert; Tijskens, Anouk; Bettens, Kim; Vermeersch, Hubert
2011-02-01
The purpose of this study was to provide a description of the language and speech (intelligibility, voice, resonance, articulation) of a 7-year-old Dutch-speaking boy with Nager syndrome. To reveal these features, comparison was made with an age- and gender-matched child with a similar palatal or hearing problem. Language was tested with an age-appropriate language test, namely the Dutch version of the Clinical Evaluation of Language Fundamentals. Regarding articulation, a phonetic inventory, phonetic analysis and phonological process analysis were performed. A nominal scale with four categories was used to judge the overall speech intelligibility. The voice and resonance assessment included videolaryngostroboscopy, a perceptual evaluation, acoustic analysis and nasometry. The most striking communication problems in this child were expressive and receptive language delay, moderately impaired speech intelligibility, the presence of phonetic and phonological disorders, resonance disorders and a high-pitched voice. The explanation for this pattern of communication is not completely straightforward. The language and phonological impairments, present only in the child with Nager syndrome, are not part of a more general developmental delay. The resonance disorders can be related to the cleft palate, but were not present in the child with the isolated cleft palate. One might assume that the cul-de-sac resonance, the much-decreased mandibular movement and the restricted tongue lifting are caused by the restricted jaw mobility and micrognathia. To what extent the suggested mandibular distraction osteogenesis in early childhood allows increased mandibular movement and better speech outcome with increased oral resonance is a subject for further research. According to the results of this study, speech and language management must focus on receptive and expressive language skills and linguistic conceptualization, correct phonetic placement and the modification of hypernasality and nasal emission. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Gray, Lincoln; Kesser, Bradley; Cole, Erika
2009-09-01
Unilateral hearing loss causes difficulty hearing in noise (the "cocktail party effect") due to the absence of redundancy, head-shadow, and binaural squelch. This study explores the emergence of the head-shadow and binaural squelch effects in children with unilateral congenital aural atresia undergoing surgery to correct their hearing deficit. Adding patients and data from a similar study previously published, we also evaluate covariates such as the age of the patient, surgical outcome, and complexity of the task that might predict the extent of binaural benefit--patients' ability to "use" their new ear--when understanding speech in noise. Patients with unilateral congenital aural atresia were tested for their ability to understand speech in noise before and again 1 month after surgery to repair their atresia. In a sound-attenuating booth, participants faced a speaker that produced speech signals with noise 90 degrees to the side of the normal (non-atretic) ear and again to the side of the atretic ear. The Hearing in Noise Test (HINT for adults or HINT-C for children) was used to estimate the patients' speech reception thresholds. The speech-in-noise test (SPIN) or the Pediatric Speech Intelligibility (PSI) test was used in the previous study. There was consistent improvement, averaging 5 dB regardless of age, in the ability to take advantage of head-shadow in understanding speech with noise to the side of the non-atretic (normal) ear. There was, in contrast, a strong negative linear effect of age (r² = .78, selecting patients over 8 years) on the emergence of binaural squelch to understand speech with noise to the side of the atretic ear. In patients over 8 years, this trend replicated over different studies and different tests. Children less than 8 years, however, showed less improvement on the HINT-C than on the PSI after surgery with noise toward their atretic ear (effect size = 3). No binaural result was correlated with the degree of hearing improvement after surgery. All patients are able to take advantage of a favorable signal-to-noise ratio in their newly opened ear, that is, with noise toward the side of the normal ear (but this physical, bilateral, head-shadow effect need not involve true central binaural processing). With noise toward the atretic ear, the emergence of binaural squelch replicates between the two studies for all but the youngest patients. Approximately 2 dB of binaural gain is lost for each decade that surgery is delayed, and zero (or poorer) binaural benefit is predicted after 38 years of age. Older adults do more poorly, possibly secondary to their long period of auditory deprivation. At the youngest ages, however, binaural results differ between open- and closed-set speech tests; the more complex hearing tasks may involve a greater cognitive load. Other cognitive abilities (late evoked potentials, grey matter in auditory cortex, and multitasking) show similar effects of age, peaking at the same late-teen/young-adult period. Longer follow-up is likely critical for the understanding of these data. Getting a new ear may be--like multitasking--challenging for the youngest and oldest subjects.
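The stated trend, roughly 2 dB of binaural (squelch) benefit lost per decade of delayed surgery with zero benefit predicted at about 38 years, implies a simple linear extrapolation. A sketch of that arithmetic only, not the authors' fitted regression; parameter names are ours.

    def predicted_squelch_benefit_db(age_years, zero_age=38.0, loss_per_decade=2.0):
        # Linear trend implied by the abstract; negative values correspond to
        # the "zero (or poorer)" benefit predicted beyond the zero-crossing age.
        return (zero_age - age_years) * loss_per_decade / 10.0

    print(predicted_squelch_benefit_db(18.0))  # ~4 dB of expected benefit at age 18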
Chamber of Commerce reception for Dr. Lucas
NASA Technical Reports Server (NTRS)
1986-01-01
Dr. William R. Lucas, Marshall's fourth Center Director (1974-1986), delivers a speech in front of a picture of the lunar landscape with Earth looming in the background while attending a Huntsville Chamber of Commerce reception honoring his achievements as Director of Marshall Space Flight Center (MSFC).
Lovett, Rosemary; Summerfield, Quentin; Vickers, Deborah
2013-06-01
The Toy Discrimination Test measures children's ability to discriminate spoken words. Previous assessments of reliability tested children with normal hearing or mild hearing impairment, and most studies used a version of the test without a masking sound. We assessed test-retest reliability for children with hearing impairment using maskers of broadband noise and two-talker babble. Stimuli were presented from a loudspeaker. The signal-to-noise ratio (SNR) was varied adaptively to estimate the speech-reception threshold (SRT) corresponding to 70.7% correct performance. Participants completed each masked condition twice. Fifty-five children with permanent hearing impairment participated, aged 3.0 to 6.3 years. Thirty-four children used acoustic hearing aids; 21 children used cochlear implants. For the noise masker, the within-subject standard deviation of SRTs was 2.4 dB, and the correlation between the first and second SRT was +0.73. For the babble masker, the corresponding values were 2.7 dB and +0.60. Reliability was similar for children with hearing aids and children with cochlear implants. The results can inform the interpretation of scores from individual children. If a child completes a condition twice in different listening situations (e.g. aided and unaided), a difference between scores ≥ 7.5 dB would be statistically significant (p < .05).
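The ≥7.5 dB criterion quoted above can be reproduced from the reported within-subject standard deviations, assuming the standard critical-difference formula for two independent measurements, CD = z · √2 · s_w. A minimal sketch:

```python
import math

def critical_difference(s_within: float, z: float = 1.96) -> float:
    """Smallest significant difference (p < .05) between two test results."""
    return z * math.sqrt(2.0) * s_within

print(round(critical_difference(2.4), 1))  # noise masker:  ~6.7 dB
print(round(critical_difference(2.7), 1))  # babble masker: ~7.5 dB
```

With the babble-masker value of 2.7 dB this gives 1.96 · √2 · 2.7 ≈ 7.5 dB, matching the figure in the abstract.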
De Ceulaer, Geert; Bestel, Julie; Mülder, Hans E; Goldbeck, Felix; de Varebeke, Sebastien Pierre Janssens; Govaerts, Paul J
2016-05-01
Roger is a digital adaptive multi-channel remote microphone technology that wirelessly transmits a speaker's voice directly to a hearing instrument or cochlear implant sound processor. Frequency hopping between channels, in combination with repeated broadcast, avoids the interference issues that limited earlier-generation FM systems. This study evaluated the benefit of the Roger Pen transmitter microphone in a multiple talker network (MTN) for cochlear implant users in a simulated noisy conversation setting. Twelve post-lingually deafened adult Advanced Bionics CII/HiRes 90K recipients were recruited. Subjects used a Naida CI Q70 processor with an integrated Roger 17 receiver. The test environment simulated four people having a meal in a noisy restaurant: the CI user (listener) and three companions (talkers) talking non-simultaneously in a diffuse field of multi-talker babble. Speech reception thresholds (SRTs) were determined without the Roger Pen, with one Roger Pen, and with three Roger Pens in an MTN. Using three Roger Pens in an MTN improved the SRT by 14.8 dB over using no Roger Pen, and by 13.1 dB over using a single Roger Pen (p < 0.0001). The Roger Pen in an MTN provided a statistically and clinically significant improvement in speech perception in noise for Advanced Bionics cochlear implant recipients. The integrated Roger 17 receiver made it easy for users of the Naida CI Q70 processor to take advantage of the Roger system. The listening advantage and ease of use should encourage more clinicians to recommend and fit Roger in adult cochlear implant patients.
Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul
2011-07-01
The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency-modulated (FM) fitting on speech perception in noise in children with normal hearing. Reception threshold for sentences (RTS) was measured in no-FM, monaural FM, and binaural FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT), with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varied across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide similar benefit to binaural use. Individual variation in the benefit of binaural over monaural FM suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to see if similar findings are obtained.
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating the spatial selectivity of neural excitation with cochlear implants and (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically decreased as the neural response to the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation) and spike-rate adaptation. The detection threshold for the low-rate stimulus correlated most strongly with the spread of the forward-masking patterns, and the correlation was reduced for long, high-rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated the five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with the high-threshold sites deactivated was better than performance with the subjects' everyday clinical map with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than with the clinical map, but the difference was not significant. These results suggest that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
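The site-selection step described above reduces to ranking electrodes by their low-rate monopolar thresholds and switching off the worst ones. A minimal sketch, with the electrode count and threshold values invented for illustration:

```python
import numpy as np

def sites_to_deactivate(thresholds_db: np.ndarray, n_remove: int = 5) -> list:
    """Indices of the n_remove electrodes with the highest detection thresholds."""
    return sorted(np.argsort(thresholds_db)[-n_remove:].tolist())

# 22 electrodes, as in a Nucleus array; threshold values here are made up.
thresholds = np.random.default_rng(0).normal(40.0, 5.0, size=22)
print(sites_to_deactivate(thresholds))
```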
Potts, Lisa G.; Skinner, Margaret W.; Litovsky, Ruth A.; Strube, Michael J; Kuk, Francis
2010-01-01
Background The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing). Purpose This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. Research Design A repeated-measures correlational study was completed. Study Sample Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. Intervention The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Data Collection and Analysis Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six–eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique, in that a speech stimulus presented from a variety of roaming azimuths (140 degree loudspeaker array) was used. Results Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant–only and hearing aid–only conditions. Performance was also different between these conditions when the location (i.e., side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, the speech-recognition and localization tasks were equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer for the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1–3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. Conclusions These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients, with measurable unaided hearing thresholds, be fit with a hearing aid. PMID:19594084
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken utterance is known to the recognizer. The work provides better mechanisms for using speech recognition in a literacy tutor application. Using a combination of score normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was established. In continuous speech, the system correctly accepted over 80 percent of correctly pronounced words while correctly rejecting over 80 percent of incorrectly pronounced words.
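A minimal sketch of the accept/reject decision implied above. The score normalization is represented here as a generic log-likelihood ratio between a forced alignment to the known text and a background model; the study's actual normalization and decoder are not specified in this abstract, so the names and the threshold are assumptions:

```python
def accept_word(aligned_score: float, background_score: float,
                threshold: float = 0.0) -> bool:
    """Accept a word as correctly pronounced if its normalized score is high.

    aligned_score: recognizer log-likelihood given the known (expected) text.
    background_score: log-likelihood under an unconstrained background model.
    """
    return (aligned_score - background_score) > threshold
```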
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Comparison of Audiological Findings in Patients with Vestibular Migraine and Migraine
Kırkım, Günay; Mutlu, Başak; Tanriverdizade, Tural; Keskinoğlu, Pembe; Güneri, Enis Alpin; Akdal, Gülden
2017-01-01
Objective The aim of this study was to investigate and compare the auditory findings in vestibular migraine (VM) and migraine patients without a history of vertigo. Methods This study was conducted on 44 patients diagnosed with definite VM and 31 patients diagnosed with migraine who were followed and treated between January 2011 and February 2015. In addition, 52 healthy subjects were included in this study as a control group. All participants underwent a detailed otorhinolaryngological examination followed by audiological evaluation, including pure tone audiometry, speech reception threshold, speech recognition score, and acoustic immittancemetry. Results In the VM group, there were 16 patients (36.4%) with tinnitus, while in the other groups we did not observe any patients with tinnitus. The rate of tinnitus in the VM group was significantly higher in comparison to the other groups (p<0.05). None of the groups had any patients with permanent or fluctuating sensorineural hearing loss. Conclusion We conclude that patients with VM should be closely and longitudinally followed up for the early detection of other otological symptoms and the possible occurrence of sensorineural hearing loss in the long term. PMID:29515927
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS), with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Determination of the Potential Benefit of Time-Frequency Gain Manipulation
Anzalone, Michael C.; Calandruccio, Lauren; Doherty, Karen A.; Carney, Laurel H.
2008-01-01
Objective The purpose of this study was to determine the maximum benefit provided by a time-frequency gain-manipulation algorithm for noise-reduction (NR) based on an ideal detector of speech energy. The amount of detected energy necessary to show benefit using this type of NR algorithm was examined, as well as the necessary speed and frequency resolution of the gain manipulation. Design NR was performed using time-frequency gain manipulation, wherein the gains of individual frequency bands depended on the absence or presence of speech energy within each band. Three different experiments were performed: (1) NR using ideal detectors, (2) NR with nonideal detectors, and (3) NR with ideal detectors and different processing speeds and frequency resolutions. All experiments were performed using the Hearing-in-Noise test (HINT). A total of 6 listeners with normal hearing and 14 listeners with hearing loss were tested. Results HINT thresholds improved for all listeners with NR based on the ideal detectors used in Experiment I. The nonideal detectors of Experiment II required detection of at least 90% of the speech energy before an improvement was seen in HINT thresholds. The results of Experiment III demonstrated that relatively high temporal resolution (<100 msec) was required by the NR algorithm to improve HINT thresholds. Conclusions The results indicated that a single-microphone NR system based on time-frequency gain manipulation improved the HINT thresholds of listeners. However, to obtain benefit in speech intelligibility, the detectors used in such a strategy were required to detect an unrealistically high percentage of the speech energy and to perform the gain manipulations on a fast temporal basis. PMID:16957499
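Conceptually, this kind of NR is close to an ideal binary mask: time-frequency cells that the (ideal) detector flags as containing speech energy are kept, and the rest are attenuated. Below is a minimal sketch under that interpretation; the band layout, detection threshold, and attenuation factor are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_tf_gain_nr(noisy, clean, fs, atten=0.1, floor_db=-40.0):
    """Attenuate time-frequency cells where the clean speech has no energy.

    The 'ideal detector' is simulated by inspecting the clean signal,
    which a real hearing device would of course not have access to.
    """
    _, _, noisy_tf = stft(noisy, fs, nperseg=256)
    _, _, clean_tf = stft(clean, fs, nperseg=256)
    speech_db = 20.0 * np.log10(np.abs(clean_tf) + 1e-12)
    speech_present = speech_db > speech_db.max() + floor_db  # within 40 dB of peak
    gains = np.where(speech_present, 1.0, atten)
    _, out = istft(noisy_tf * gains, fs, nperseg=256)
    return out
```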
ERIC Educational Resources Information Center
Ingvalson, Erin M.; Lansford, Kaitlin L.; Fedorova, Valeriya; Fernandez, Gabriel
2017-01-01
Purpose: Previous research has demonstrated equivocal findings related to the effect of listener age on intelligibility ratings of dysarthric speech. The aim of the present study was to investigate the mechanisms that support younger and older adults' perception of speech by talkers with dysarthria. Method: Younger and older adults identified…
ERIC Educational Resources Information Center
Erdener, Dogu; Burnham, Denis
2018-01-01
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger
2015-01-01
The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second-language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average a 3 dB better signal-to-noise ratio for the OLSA and a 6 dB better signal-to-noise ratio for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. the DTT for screening or the OLSA for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
Binaural Speech Understanding With Bilateral Cochlear Implants in Reverberation.
Kokkinakis, Kostas
2018-03-08
The purpose of this study was to investigate whether bilateral cochlear implant (CI) listeners who are fitted with clinical processors are able to benefit from binaural advantages under reverberant conditions. Another aim of this contribution was to determine whether the magnitude of each binaural advantage observed inside a highly reverberant environment differs significantly from the magnitude measured in a near-anechoic environment. Ten adults with postlingual deafness who are bilateral CI users fitted with either Nucleus 5 or Nucleus 6 clinical sound processors (Cochlear Corporation) participated in this study. Speech reception thresholds were measured in sound field in 2 different reverberation conditions (0.06 and 0.6 s) as a function of the listening condition (left, right, both) and the noise spatial location (left, front, right). The presence of the binaural effects of head-shadow, squelch, summation, and spatial release from masking in the 2 reverberation conditions tested was determined using nonparametric statistical analysis. In the bilateral population tested, when the ambient reverberation time was equal to 0.6 s, results indicated strong positive effects of head-shadow and a weaker spatial release from masking advantage, whereas binaural squelch and summation contributed no statistically significant benefit to bilateral performance under this acoustic condition. These findings are consistent with those of previous studies, which have demonstrated that head-shadow yields the most pronounced advantage in noise. The finding that spatial release from masking produced little to almost no benefit in bilateral listeners is consistent with the hypothesis that additive reverberation degrades spatial cues and negatively affects binaural performance. The magnitude of 4 different binaural advantages was measured in the same group of bilateral CI subjects fitted with clinical processors in 2 different reverberation conditions. The results of this work demonstrate the impeding effect of reverberation on binaural speech understanding. In addition, the results indicate that CI recipients who struggle in everyday listening environments are also likely to benefit less from their bilateral processors in highly reverberant environments.
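For reference, the four binaural advantages tested above are commonly computed as differences between SRTs (in dB SNR; lower is better) across listening and noise configurations. The sketch below uses one common set of formulations, with speech always from the front; the condition naming, and whether these match the exact contrasts used in the study, are assumptions:

```python
def binaural_effects(srt):
    """Four binaural advantages from SRTs keyed by (ears, noise_azimuth)."""
    return {
        # Head shadow: noise moves from the listening ear's side to the far side.
        "head_shadow": srt[("left", "left")] - srt[("left", "right")],
        # Squelch: adding the ear on the noise side to the shadowed ear.
        "squelch": srt[("left", "right")] - srt[("both", "right")],
        # Summation: adding the second ear with speech and noise both in front.
        "summation": srt[("left", "front")] - srt[("both", "front")],
        # Spatial release from masking: noise moves from front to the side.
        "srm": srt[("both", "front")] - srt[("both", "right")],
    }

# Invented SRT values (dB SNR), for illustration only.
example = {("left", "left"): -2.0, ("left", "right"): -8.0,
           ("left", "front"): -5.0, ("both", "front"): -6.0,
           ("both", "right"): -9.0}
print(binaural_effects(example))
```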
Effect of signal to noise ratio on the speech perception ability of older adults
Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh
2016-01-01
Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise test (SPIN) was conducted on 25 elderly participants who had bilateral low-mid frequency normal hearing thresholds, at three SNRs in the presence of ipsilateral white noise. Participants were selected by convenience sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA, and the Pearson correlation index were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores between paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence or at the three SNRs in either ear (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal hearing at low-mid frequencies. These results support the critical role of SNR in speech perception ability in the elderly. Furthermore, our results revealed that normal-hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
Visual-motor integration performance in children with severe specific language impairment.
Nicola, K; Watter, P
2016-09-01
This study investigated (1) the visual-motor integration (VMI) performance of children with severe specific language impairment (SLI), and any effect of age, gender, socio-economic status and concomitant speech impairment; and (2) the relationship between language and VMI performance. It was hypothesized that children with severe SLI would present with VMI problems irrespective of gender and socio-economic status, but that VMI deficits would be more pronounced in younger children and in those with concomitant speech impairment. Furthermore, it was hypothesized that there would be a relationship between VMI and language performance, particularly for receptive scores. Children enrolled between 2000 and 2008 in a school dedicated to children with severe speech-language impairments were included if they met the criteria for severe SLI, with or without concomitant speech impairment, as verified by a government organization. Results from all initial standardized language and VMI assessments found during a retrospective review of chart files were included. The final study group included 100 children (76 males), from 4 to 14 years of age, with mean language scores at least 2 SD below the mean. For VMI performance, 52% of the children scored below -1 SD, with 25% of the total group scoring more than 1.5 SD below the mean. Age, gender and the addition of a speech impairment did not impact VMI performance; however, children living in disadvantaged suburbs scored significantly better than children residing in advantaged suburbs. The receptive language score of the Clinical Evaluation of Language Fundamentals was the only score associated with and able to predict VMI performance. A small subgroup of children with severe SLI will also have poor VMI skills. The best predictor of poor VMI is the receptive language score on the Clinical Evaluation of Language Fundamentals. Children with poor receptive language performance may benefit from VMI assessment and multidisciplinary management.
Evaluation of auditory functions for Royal Canadian Mounted Police officers.
Vaillancourt, Véronique; Laroche, Chantal; Giguère, Christian; Beaulieu, Marc-André; Legault, Jean-Pierre
2011-06-01
Auditory fitness for duty (AFFD) testing is an important element in assessing workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, the validity of this approach can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations, like the Royal Canadian Mounted Police (RCMP), are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are intended to help the RCMP make more informed decisions regarding AFFD in officers wearing hearing aids. The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, with the effects most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and least considerable in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B localization errors in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Combined with previous findings, these results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and to identify best practices in hearing aid fittings for optimal functional hearing abilities.
Casto, Kristen L; Cho, Timothy H
2012-09-01
This case report describes the in-flight speech intelligibility evaluation of an aircraft crewmember with pure tone audiometric thresholds that exceed the U.S. Army's flight standards. Results of in-flight speech intelligibility testing highlight the inability to predict functional auditory abilities from pure tone audiometry and underscore the importance of conducting validated functional hearing evaluations to determine aviation fitness-for-duty.
Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.
2016-01-01
Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750
Carter, Julie Anne; Murira, Grace; Gona, Joseph; Tumaini, Judy; Lees, Janet; Neville, Brian George; Newton, Charles Richard
2013-01-01
This study sought to adapt a battery of Western speech and language assessment tools to a rural Kenyan setting. The tool was developed for children whose first language was KiGiryama, a Bantu language. A total of 539 Kenyan children participated (males = 271, females = 268; ethnicity 100% KiGiryama). Data were collected from 303 children admitted to hospital with severe malaria and 206 age-matched children recruited from the village communities. The language assessments were based upon the Content, Form and Use (C/F/U) model. The assessment was based upon adapted versions of the Peabody Picture Vocabulary Test, the Test for the Reception of Grammar, the Renfrew Action Picture Test, the Pragmatics Profile of Everyday Communication Skills in Children, the Test of Word Finding, and language-specific tests of lexical semantics and higher-level language. Preliminary measures of construct validity suggested that the theoretical assumptions behind the construction of the assessments were appropriate, and re-test and inter-rater reliability scores were acceptable. These findings illustrate the potential to adapt Western speech and language assessments to other languages and settings, particularly those in which there is a paucity of standardised tools. PMID:24294109
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
2016-01-01
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty, modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. PMID:27604782
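Plomp's A+D description, used above as the blueprint for individualization, characterizes impairment with two dB-valued parameters: attenuation A raises the SRT in quiet, while distortion D raises the SRT at every noise level. One common way to express the resulting SRT-versus-noise-level curve is to power-add the quiet and noise-dominated asymptotes, as in the sketch below; the reference values and parameters here are illustrative assumptions, not FADE's fitted numbers:

```python
import numpy as np

def plomp_srt(noise_level_db, A=20.0, D=3.0,
              srt_quiet_normal=20.0, snr_crit_normal=-7.0):
    """SRT (dB) vs. noise level under a Plomp-style A+D model."""
    quiet_asymptote = srt_quiet_normal + A + D               # A and D raise SRT in quiet
    noise_asymptote = noise_level_db + snr_crit_normal + D   # only D hurts in noise
    return 10.0 * np.log10(10.0 ** (quiet_asymptote / 10.0)
                           + 10.0 ** (noise_asymptote / 10.0))

for level in (0.0, 30.0, 50.0, 70.0):
    print(level, round(float(plomp_srt(level)), 1))
```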
Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.
Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn
2017-01-01
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting that each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
Continuous multiword recognition performance of young and elderly listeners in ambient noise
NASA Astrophysics Data System (ADS)
Sato, Hiroshi
2005-09-01
Hearing threshold shift due to aging is known to be a dominant factor degrading speech recognition performance in noisy conditions. On the other hand, the cognitive factors of aging that relate to speech recognition performance in various speech-to-noise conditions are not well established. In this study, two kinds of speech test were performed to examine how working memory load relates to speech recognition performance. One is a word recognition test with high-familiarity, four-syllable Japanese words (single-word test). In this test, each word was presented to listeners, who were asked to write the word down on paper with enough time to answer. In the other test, five words were presented in succession, and listeners were asked to write them down only after all five words had been presented (multiword test). Both tests were done at various speech-to-noise ratios under 50-dBA Hoth spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to scores on both tests; (2) scores on the single-word test are well correlated with those on the multiword test; and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in the condition where scores on the single-word test reach their ceiling.
Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning
ERIC Educational Resources Information Center
Nguyen, Thi My Hang; Webb, Stuart
2017-01-01
This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…
ERIC Educational Resources Information Center
Ebbels, Susan H.; Maric, Nataša; Murphy, Aoife; Turner, Gail
2014-01-01
Background: Little evidence exists for the effectiveness of therapy for children with receptive language difficulties, particularly those whose difficulties are severe and persistent. Aims: To establish the effectiveness of explicit speech and language therapy with visual support for secondary school-aged children with language impairments…
Wireless and acoustic hearing with bone-anchored hearing devices.
Bosman, Arjan J; Mylanus, Emmanuel A M; Hol, Myrthe K S; Snik, Ad F M
2015-07-01
The efficacy of wireless connectivity in bone-anchored hearing was studied by comparing the wireless and acoustic performance of the Ponto Plus sound processor from Oticon Medical with the acoustic performance of its predecessor, the Ponto Pro. Nineteen subjects with more than two years' experience with a bone-anchored hearing device were included. Thirteen subjects were fitted unilaterally and six bilaterally. Subjects served as their own controls. First, subjects were tested with the Ponto Pro processor. After a four-week acclimatization period, performance with the Ponto Plus processor was measured. In the laboratory, wireless and acoustic input levels were made equal. In daily life, equal settings of wireless and acoustic input were used when watching TV; however, when using the telephone, the acoustic input was reduced by 9 dB relative to the wireless input. Speech scores for microphone input with the Ponto Pro and for both input modes of the Ponto Plus processor were essentially equal when equal input levels of wireless and microphone inputs were used. Only the TV condition showed a statistically significant (p < 5%) lower speech reception threshold for wireless relative to microphone input. In real life, evaluation of speech quality, speech intelligibility in quiet and in noise, and annoyance from ambient noise when using a landline phone, using a mobile telephone, and watching TV showed a clear preference (p < 1%) for the Ponto Plus system with streamer over the microphone input. Due to the small number of respondents with a landline phone (N = 7), the result for noise annoyance was only significant at the 5% level. Equal input levels for acoustic and wireless inputs result in equal speech scores, showing (near) equivalence of acoustic and wireless sound transmission with the Ponto Pro and Ponto Plus. The default 9-dB difference between microphone and wireless input when using the telephone results in a substantial wireless benefit. The preference for wirelessly transmitted audio when watching TV can be attributed to the relatively poor sound quality of backward-facing loudspeakers in flat-screen TVs. The ratio of wireless and acoustic input can easily be set to the user's preference with the streamer's volume control.
Camilleri, Bernard; Botting, Nicola
2013-01-01
Children's low scores on vocabulary tests are often erroneously interpreted as reflecting poor cognitive and/or language skills. It may be necessary to incorporate the measurement of word-learning ability into estimates of children's lexical abilities. The aim was to explore the reliability and validity of the Dynamic Assessment of Word Learning (DAWL), a new dynamic assessment of receptive vocabulary. A dynamic assessment (DA) of word-learning ability was developed and adopted within a nursery school setting with 15 children aged between 3;07 and 4;03, ten of whom had been referred to speech and language therapy. A number of quantitative measures were derived from the DA procedure, including measures of children's ability to identify the targeted items and to generalize to a second exemplar, as well as measures of children's ability to retain the targeted items. Internal, inter-rater and test-retest reliability of the DAWL was established, as well as correlational measures of concurrent and predictive validity. The DAWL was found to provide both quantitative and qualitative information that could be used to improve the accuracy of differential diagnosis and the understanding of the processes underlying the child's performance. The latter can be used to design more individualized interventions.
Speech and Oral Motor Profile after Childhood Hemispherectomy
ERIC Educational Resources Information Center
Liegeois, Frederique; Morgan, Angela T.; Stewart, Lorna H.; Cross, J. Helen; Vogel, Adam P.; Vargha-Khadem, Faraneh
2010-01-01
Hemispherectomy (disconnection or removal of an entire cerebral hemisphere) is a rare surgical procedure used for the relief of drug-resistant epilepsy in children. After hemispherectomy, contralateral hemiplegia persists whereas gross expressive and receptive language functions can be remarkably spared. Motor speech deficits have rarely been…
A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.
Carlin, Michael A; Elhilali, Mounya
2015-12-01
One of the hallmarks of sound processing in the brain is the ability of the nervous system to adapt to changing behavioral demands and surrounding soundscapes. It can dynamically shift sensory and cognitive resources to focus on relevant sounds. Neurophysiological studies indicate that this ability is supported by adaptively retuning the shapes of cortical spectro-temporal receptive fields (STRFs) to enhance features of target sounds while suppressing those of task-irrelevant distractors. Because an important component of human communication is the ability of a listener to dynamically track speech in noisy environments, the solution obtained by auditory neurophysiology implies a useful adaptation strategy for speech activity detection (SAD). SAD is an important first step in a number of automated speech processing systems, and performance is often reduced in highly noisy environments. In this paper, we describe how task-driven adaptation is induced in an ensemble of neurophysiological STRFs, and show how speech-adapted STRFs reorient themselves to enhance spectro-temporal modulations of speech while suppressing those associated with a variety of nonspeech sounds. We then show how an adapted ensemble of STRFs can better detect speech in unseen noisy environments compared to an unadapted ensemble and a noise-robust baseline. Finally, we use a stimulus reconstruction task to demonstrate how the adapted STRF ensemble better captures the spectrotemporal modulations of attended speech in clean and noisy conditions. Our results suggest that a biologically plausible adaptation framework can be applied to speech processing systems to dynamically adapt feature representations for improving noise robustness.
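The detection stage described above can be summarized compactly: filter a spectrogram with each STRF in an ensemble, pool the rectified responses, and threshold the pooled score frame by frame. Below is a minimal sketch with a random (unadapted) ensemble standing in for the neurophysiological STRFs; all shapes and the threshold are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def strf_sad(spectrogram, strfs, threshold):
    """Frame-wise speech/nonspeech decisions from an ensemble of STRFs."""
    score = np.zeros(spectrogram.shape[1])
    for strf in strfs:
        response = fftconvolve(spectrogram, strf, mode="same")
        score += np.maximum(response, 0.0).mean(axis=0)  # rectify, pool over freq
    return (score / len(strfs)) > threshold

rng = np.random.default_rng(1)
spec = rng.random((64, 200))                        # fake 64-band spectrogram
ensemble = [rng.standard_normal((8, 5)) for _ in range(10)]
print(strf_sad(spec, ensemble, threshold=0.5)[:10])
```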
Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T
2002-10-01
Since autumn 1998, the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for the evaluation of speech and hearing development after cochlear implantation in childhood. After a study of the relevant literature, suitable materials were checked for practical applicability, modified, and provided with criteria for administration and discontinuation. For data acquisition, observation forms for the preparation of a PC version were developed. The evaluation set contains forms for master data, with supplements relating to postoperative processes. The hearing tests check supra-threshold hearing with loudness scaling for children; speech comprehension in quiet (Mainz and Göttingen Tests for Speech Comprehension in Childhood) and phonemic differentiation (Oldenburg Rhyme Test for Children); the central auditory processes of detection, discrimination, identification, and recognition (a modification of the "Frankfurt Functional Hearing Test for Children"); and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development comprise phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), and communication and pragmatics (observation forms). The modified MAIS and MUSS questionnaires are integrated. The evaluation set serves quality assurance and permits factor analysis as well as controls for regularity through the multicenter comparison of long-term developmental trends after cochlear implantation.
NASA Technical Reports Server (NTRS)
Leibfritz, Gilbert H.; Larson, Howard K.
1987-01-01
Compact speech synthesizer useful traveling companion to speech-handicapped. User simply enters statement on keyboard, and synthesizer converts statement into spoken words. Battery-powered and housed in briefcase, easily carried on trips. Unit used on telephones and in face-to-face communication. Synthesizer consists of microcomputer with memory-expansion module, speech-synthesizer circuit, batteries, recharger, dc-to-dc converter, and telephone amplifier. Components, commercially available, fit neatly in 17- by 13- by 5-in. briefcase. Weighs about 20 lb (9 kg) and operates and recharges from ac receptacle.
Study of operator's information in the course of complicated and combined activity
NASA Astrophysics Data System (ADS)
Krylova, N. V.; Bokovikov, A. K.
1982-08-01
Speech characteristics of operators performing control and observation operations, information reception, transmission and processing, and decision making when exposed to the real stress of parachute jumps were investigated. Form and content speech characteristics whose variations reflect the level of operators' adaptation to stressful activities are reported.
Some Generalization and Follow-Up Measures on Autistic Children in Behavior Therapy.
ERIC Educational Resources Information Center
Lovaas, O. Ivar; And Others
Reported was a behavior therapy program emphasizing language training for 20 autistic children who variously exhibited apparent sensory deficit, severe affect isolation, self-stimulatory behavior, mutism, echolalic speech, absence of receptive speech and social and self-help behaviors, and self-destructive tendencies. The treatment emphasized…
Auditory verbal habilitation is associated with improved outcome for children with cochlear implant.
Percy-Smith, Lone; Tønning, Tenna Lindbjerg; Josvassen, Jane Lignel; Mikkelsen, Jeanette Hølledig; Nissen, Lena; Dieleman, Eveline; Hallstrøm, Maria; Cayé-Thomasen, Per
2018-01-01
To study the impact of (re)habilitation strategy on speech-language outcomes for early cochlear-implanted children enrolled in different intervention programmes post implant. Data relate to a total of 130 children representing two pediatric cohorts consisting of 94 and 36 subjects, respectively. The two cohorts had different speech and language intervention following cochlear implantation, i.e. standard habilitation vs. auditory verbal (AV) intervention. Three tests of speech and language were applied, covering the language areas of receptive and productive vocabulary and language understanding. Children in AV intervention outperformed children in standard habilitation on all three tests of speech and language. When the effect of intervention was adjusted for other covariates, children in AV intervention still had higher odds of performing at age-equivalent speech and language levels. Compared to standard intervention, AV intervention is associated with improved outcomes for children with CI. Based on this finding, we recommend that all children with hearing impairment be offered this intervention; it is, therefore, highly relevant when national boards of health and social affairs recommend basing habilitation on principles from AV practice. It should be noted that a minority of children use spoken language with sign support. For this group it is, however, still important that educational services provide auditory skills training.
Pfiffner, Flurin; Kompis, Martin; Stieger, Christof
2009-10-01
To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful for estimating the postoperative outcome with BAHA from preoperative data. Retrospective case review. Tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds, as well as speech understanding using German 2-digit numbers and monosyllabic words, were measured and analyzed. Main outcome measures were the correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds, as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with the BC threshold of the better ear (correlation coefficients r² = 0.237 to 0.419, p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.
Vision impairment and dual sensory problems in middle age
Dawes, Piers; Dickinson, Christine; Emsley, Richard; Bishop, Paul; Cruickshanks, Karen; Edmondson-Jones, Mark; McCormack, Abby; Fortnum, Heather; Moore, David R.; Norman, Paul; Munro, Kevin
2014-01-01
Purpose Vision and hearing impairments are known to increase in middle age. In this study we describe the prevalence of vision impairment and dual sensory impairment in UK adults aged 40 to 69 years in a very large and recently ascertained data set. The associations between vision impairment, age, sex, socioeconomic status, and ethnicity are reported. Methods This research was conducted using the UK Biobank Resource, with subsets of UK Biobank data analysed with respect to self-report of eye problems and glasses use. Better-eye visual acuity with habitually worn refractive correction was assessed with a logMAR chart (n = 116,682). Better-ear speech reception threshold was measured with an adaptive speech in noise test, the Digit Triplet Test (n = 164,770). Prevalence estimates were weighted with respect to UK 2001 Census data. Results Prevalence of mild visual impairment and low vision was estimated at 15.2% (95% CI 14.9–15.5%) and 0.9% (95% CI 0.8–1.0%), respectively. Use of glasses was 88.0% (95% CI 87.9–88.1%). The prevalence of dual sensory impairment was 3.1% (95% CI 3.0–3.2%) and there was a nine-fold increase in the prevalence of dual sensory problems between the youngest and oldest age groups. Older adults, those from low socioeconomic and ethnic minority backgrounds were most at risk for vision problems. Conclusions Mild vision impairment is common in middle aged UK adults, despite widespread use of spectacles. Possible barriers to optometric care for those from low socioeconomic and ethnic minority backgrounds may require attention. A higher than expected prevalence of dual impairment suggests that hearing and vision problems share common causes. Optometrists should consider screening for hearing problems, particularly among older adults. PMID:24888710
Language Sampling for Preschoolers With Severe Speech Impairments
Ragsdale, Jamie; Bustos, Aimee
2016-01-01
Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110
Language Sampling for Preschoolers With Severe Speech Impairments.
Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee
2016-11-01
The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.
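A minimal sketch of two of the language-sample measures named in the two records above: MLU in words and percentage of comprehensible words. It assumes a transcript convention in which unintelligible words are coded as "xxx"; the studies' actual transcription and coding rules are not reproduced here.

```python
def mlu_in_words(utterances):
    """Mean length of utterance in words across a language sample."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

def percent_comprehensible(utterances, unintelligible_code="xxx"):
    """Percentage of words not coded as unintelligible."""
    words = [w for u in utterances for w in u.split()]
    clear = [w for w in words if w != unintelligible_code]
    return 100.0 * len(clear) / len(words)

sample = ["want xxx cookie", "mommy go xxx", "I want that one"]
print(f"MLU = {mlu_in_words(sample):.2f} words")
print(f"comprehensible words = {percent_comprehensible(sample):.1f}%")
```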
Rapid tuning shifts in human auditory cortex enhance speech intelligibility
Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.
2016-01-01
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed an STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed 'perceptual enhancement' in understanding speech. PMID:27996965
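A compact sketch of the general STRF-estimation idea behind the analysis above: regress a time-lagged stimulus spectrogram onto a neural response with a ridge penalty. Everything here is synthetic (random spectrogram, simulated response), and the wrap-around introduced by np.roll at the edges is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_f, n_lags = 2000, 16, 10            # time bins, frequency channels, lags
spec = rng.normal(size=(n_t, n_f))          # stand-in stimulus spectrogram

# Lagged design matrix: each row holds the recent stimulus history
X = np.hstack([np.roll(spec, lag, axis=0) for lag in range(n_lags)])
true_strf = rng.normal(size=n_f * n_lags)   # "ground truth" filter
y = X @ true_strf + rng.normal(scale=5.0, size=n_t)  # noisy simulated response

lam = 10.0                                   # ridge penalty
strf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
strf = strf.reshape(n_lags, n_f)             # lags x frequency, ready to plot
print(strf.shape)
```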
Rader, Tobias; Fastl, Hugo; Baumann, Uwe
2013-01-01
The aim of the study was to measure and compare speech perception in users of electric-acoustic stimulation (EAS) supported by a hearing aid in the unimplanted ear and in bilateral cochlear implant (CI) users under different noise and sound field conditions. Gap listening was assessed by comparing performance in unmodulated and modulated Comité Consultatif International Téléphonique et Télégraphique (CCITT) noise conditions, and binaural interaction was investigated by comparing single source and multisource sound fields. Speech perception in noise was measured using a closed-set sentence test (Oldenburg Sentence Test, OLSA) in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources and a single source in frontal position (S0N0). Speech simulating noise (Fastl-noise), CCITT-noise (continuous), and OLSA-noise (pseudo continuous) served as noise sources with different temporal patterns. Speech tests were performed in two groups of subjects who were using either EAS (n = 12) or bilateral CIs (n = 10). All subjects in the EAS group were fitted with a high-power hearing aid in the opposite ear (bimodal EAS). The average group scores for monosyllables in quiet were 68.8% (EAS) and 80.5% (bilateral CI). A group of 22 listeners with normal hearing served as controls to compare and evaluate potential gap listening effects in implanted patients. Average speech reception thresholds in the EAS group were significantly lower than those for the bilateral CI group in all test conditions (CCITT 6.1 dB, p = 0.001; Fastl-noise 5.4 dB, p < 0.01; Oldenburg-(OL)-noise 1.6 dB, p < 0.05). Bilateral CI and EAS user groups showed a significant improvement of 4.3 dB (p = 0.004) and 5.4 dB (p = 0.002) between S0N0 and MSNF sound field conditions, respectively, which signifies advantages caused by binaural interaction in both groups. Performance in the control group showed a significant gap listening effect with a difference of 6.5 dB between modulated and unmodulated noise in S0N0, and a difference of 3.0 dB in MSNF. The ability to "glimpse" into short temporal masker gaps was absent in both groups of implanted subjects. Combined EAS in one ear supported by a hearing aid on the contralateral ear provided significantly improved speech perception compared with bilateral cochlear implantation. Although the scores for monosyllabic words in quiet were higher in the bilateral CI group, the EAS group performed better in different noise and sound field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Neither bilateral CI nor bimodal EAS users benefited from short temporal masker gaps; therefore, the better performance of the EAS group in modulated noise conditions could be explained by the improved transmission of fundamental frequency cues in the lower-frequency region of acoustic hearing, which might foster the grouping of auditory objects.
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics (STEM) academic and career objectives.
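An illustrative sketch of pause insertion between spoken-math tokens. The boundary words, pause duration, and SSML-style markup are assumptions for demonstration; the study derived its pause placement from recordings of math teachers, which is not modeled here.

```python
def insert_pauses(tokens, pause_ms=250,
                  boundary_words=("plus", "minus", "equals", "over")):
    """Insert an SSML-style break after assumed utterance-boundary words."""
    out = []
    for tok in tokens:
        out.append(tok)
        if tok in boundary_words:
            out.append(f'<break time="{pause_ms}ms"/>')
    return " ".join(out)

# "x squared plus two x equals zero" rendered with pauses at operators
print(insert_pauses("x squared plus two x equals zero".split()))
```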
Finke, Mareike; Strauß-Schier, Angelika; Kludt, Eugen; Büchner, Andreas; Illg, Angelika
2017-05-01
Treatment with cochlear implants (CIs) in single-sided deaf individuals started less than a decade ago. CIs can successfully reduce incapacitating tinnitus on the deaf ear and allow, to some extent, the restoration of binaural hearing. Until now, systematic evaluations of subjective CI benefit in post-lingually single-sided deaf individuals and analyses of speech intelligibility outcome for the CI in isolation have been lacking. For the prospective part of this study, the Bern Benefit in Single-Sided Deafness Questionnaire (BBSS) was administered to 48 single-sided deaf CI users to evaluate the subjectively perceived CI benefit across different listening situations. In the retrospective part, speech intelligibility outcome with the CI up to 12 months post-activation was compared between 100 single-sided deaf CI users and 125 bilaterally implanted CI users (2nd implant). The positive median ratings in the BBSS differed significantly from zero for all items, suggesting that most individuals with single-sided deafness rate their CI as beneficial across listening situations. The speech perception scores in quiet and noise improved significantly over time in both groups of CI users. Speech intelligibility with the CI in isolation was significantly better in bilaterally implanted CI users (2nd implant) compared to the scores obtained from single-sided deaf CI users. Our results indicate that CI users with single-sided deafness can reach open-set speech understanding with their CI in isolation, encouraging the extension of the CI indication to individuals with normal hearing on the contralateral ear. Compared with the performance reached with bilateral CI users' second implant, however, speech understanding with the CI in isolation is poorer, indicating an aural preference for, and dominance of, the normal-hearing ear. The results from the BBSS suggest good satisfaction with the CI across several listening situations. Copyright © 2017 Elsevier B.V. All rights reserved.
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves, fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
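A compact rendering of Plomp's Attenuation-plus-Distortion (A+D) picture that FADE uses as a blueprint: in quiet, the SRT is elevated by both the attenuation and distortion components, while in sufficiently loud noise only the distortion component remains. The reference values below are hypothetical, and this is the textbook A+D relation, not the FADE simulation itself.

```python
def srt_quiet(srt_quiet_normal_db, attenuation_db, distortion_db):
    """SRT in quiet (dB SPL): raised by both the A and D components."""
    return srt_quiet_normal_db + attenuation_db + distortion_db

def srt_noise(srt_noise_normal_db, distortion_db):
    """SRT in loud noise (dB SNR): raised by the D component only."""
    return srt_noise_normal_db + distortion_db

# Hypothetical listener: A = 30 dB, D = 4 dB
print(srt_quiet(20.0, 30.0, 4.0))   # 54.0 dB SPL in quiet
print(srt_noise(-7.0, 4.0))         # -3.0 dB SNR in noise
```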
Cued Speech Transliteration: Effects of Accuracy and Lag Time on Message Intelligibility
ERIC Educational Resources Information Center
Krause, Jean C.; Lopez, Katherine A.
2017-01-01
This paper is the second in a series concerned with the level of access afforded to students who use educational interpreters. The first paper (Krause & Tessler, 2016) focused on factors affecting accuracy of messages produced by Cued Speech (CS) transliterators (expression). In this study, factors affecting intelligibility (reception by deaf…
ERIC Educational Resources Information Center
Waring, Rebecca; Eadie, Patricia; Liow, Susan Rickard; Dodd, Barbara
2017-01-01
While little is known about why children make speech errors, it has been hypothesized that cognitive-linguistic factors may underlie phonological speech sound disorders. This study compared the phonological short-term and phonological working memory abilities (using immediate memory tasks) and receptive vocabulary size of 14 monolingual preschool…
ERIC Educational Resources Information Center
Darley, Frederic L., Ed.
Proceedings of a conference of scientists specializing in language processes and neurophysiological mechanisms are reported, with the aim of stimulating a cross-over of interest and research in the central brain phenomena (reception, understanding, retention, integration, formulation, and expression) as they relate to speech and language. Eighteen research reports…
Comprehension: an overlooked component in augmented language development.
Sevcik, Rose A
2006-02-15
Despite the importance of children's receptive skills as a foundation for later productive word use, the role of receptive language traditionally has received very limited attention, since the focus in linguistic development has centered on language production. For children with significant developmental disabilities and communication impairments, augmented language systems have been devised as a tool for both language input and output. The role of both speech and symbol comprehension skills is emphasized in this paper. Data collected from two longitudinal studies of children and youth with severe disabilities and limited speech serve as illustrations in this paper. The acquisition and use of the System for Augmenting Language (SAL) was studied in home and school settings. Communication behaviors of the children and youth and their communication partners were observed and language assessment measures were collected. Two patterns of symbol learning and achievement--beginning and advanced--were observed. Extant speech comprehension skills brought to the augmented language learning task impacted the participants' patterns of symbol learning and use. Though often overlooked, the importance of speech and symbol comprehension skills was underscored in the studies described. Future areas for research are identified.
Detection Performance of an Operator Using Lofar
1976-04-01
Chang, Jiwon; Ryou, Namhyung; Jun, Hyung Jin; Hwang, Soon Young; Song, Jae-Jun; Chae, Sung Won
2016-01-01
Objectives In the present study, we aimed to determine the effect of both active and passive smoking on the prevalence of hearing impairment and on hearing thresholds in different age groups through the analysis of data collected from the Korea National Health and Nutrition Examination Survey (KNHANES). Study Design Cross-sectional epidemiological study. Methods The KNHANES is an ongoing population study that started in 1998. We included a total of 12,935 participants aged ≥19 years in the KNHANES, from 2010 to 2012, in the present study. Pure-tone audiometric (PTA) testing was conducted, and the frequencies tested were 0.5, 1, 2, 3, 4, and 6 kHz. Smoking status was categorized into three groups: current smoking, passive smoking, and non-smoking. Results In the current smoking group, the prevalence of speech-frequency bilateral hearing impairment was increased at ages 40-69, and the rate of high-frequency bilateral hearing impairment was elevated at ages 30-79. When we investigated the impact of smoking on hearing thresholds, we found that the current smoking group had significantly increased hearing thresholds compared to the passive smoking and non-smoking groups, across all ages in both speech-relevant and high frequencies. The passive smoking group did not have an elevated prevalence of either speech-frequency bilateral hearing impairment or high-frequency bilateral hearing impairment, except in the 40s age group. However, the passive smoking group had higher hearing thresholds than the non-smoking group in the 30s and 40s age groups. Conclusion Current smoking was associated with hearing impairment at both speech-relevant and high frequencies across all ages. However, except in the 40s age group, passive smoking was not related to hearing impairment at either speech-relevant or high frequencies. PMID:26756932
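A sketch of the pure-tone-average computation that typically underlies "speech-frequency" and "high-frequency" hearing impairment classifications in survey analyses like the one above. The frequency sets and the 25 dB HL cutoff follow common conventions and are assumptions here; the study's exact definitions may differ.

```python
import numpy as np

def pta(thresholds_db, freqs_hz, use_freqs_hz):
    """Pure-tone average over a chosen set of frequencies."""
    return float(np.mean([thresholds_db[freqs_hz.index(f)] for f in use_freqs_hz]))

freqs = [500, 1000, 2000, 3000, 4000, 6000]   # Hz, as tested in KNHANES
ear = [15, 20, 25, 30, 40, 45]                # dB HL, hypothetical

speech_pta = pta(ear, freqs, [500, 1000, 2000, 3000])
high_pta = pta(ear, freqs, [3000, 4000, 6000])
print(f"speech-frequency PTA = {speech_pta:.1f} dB HL, impaired: {speech_pta > 25}")
print(f"high-frequency PTA = {high_pta:.1f} dB HL, impaired: {high_pta > 25}")
```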
Sound frequency affects speech emotion perception: results from congenital amusia
Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche
2015-01-01
Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
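A sketch of the low-pass filtering manipulation used to isolate pitch and prosody cues in the emotional speech stimuli. The 500 Hz cutoff and filter order are illustrative assumptions; SciPy's Butterworth design and zero-phase filtering stand in for whatever tooling the authors used.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16_000                               # sampling rate, Hz
speech = np.random.randn(fs)              # stand-in for a 1-s spoken statement

cutoff_hz = 500                           # keeps F0/prosody, removes formant detail
sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
low_passed = sosfiltfilt(sos, speech)     # zero-phase, no temporal smearing
print(low_passed.shape)
```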
Assessment of directionality performances: comparison between Freedom and CP810 sound processors.
Razza, Sergio; Albanese, Greta; Ermoli, Lucilla; Zaccone, Monica; Cristofari, Eliana
2013-10-01
To compare speech recognition in noise for the Nucleus Freedom and CP810 sound processors using different directional settings among those available in the SmartSound portfolio. Single-subject, repeated measures study. Tertiary care referral center. Thirty-one monaurally and binaurally implanted subjects (24 children and 7 adults) were enrolled. They were all experienced Nucleus Freedom sound processor users and achieved a 100% open-set word recognition score in quiet listening conditions. Each patient was fitted with the Freedom and the CP810 processor. The program setting incorporated Adaptive Dynamic Range Optimization (ADRO) and adopted the directional algorithm BEAM (both devices) and ZOOM (only on the CP810). Speech reception threshold (SRT) was assessed in a free-field layout, with disyllabic word lists and interfering multilevel babble noise, in the 3 different pre-processing configurations. On average, the CP810 significantly improved patients' SRTs compared with the Freedom sound processor after 1 hour of use. In contrast, no significant difference was observed in patients' SRTs between the BEAM and ZOOM algorithms fitted in the CP810 processor. The results suggest that the hardware developments achieved in the design of the CP810 provide an immediate and relevant directional advantage compared with the previous-generation Freedom device.
Ionizing Radiation and the Ear
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borsanyi, Steven J.
The effects of ionizing radiation on the ears of 100 patients were studied in the course of treatment of malignant head and neck tumors by teleradiation using Co-60. Early changes consisted of radiation otitis media and a transient vasculitis of the vessels of the inner ear, resulting in hearing loss, tinnitus, and temporary recruitment. While no permanent changes were detected microscopically in the cochlea or labyrinth shortly after the completion of radiation, late changes sometimes occurred in the temporal bone as a result of an obliterating endarteritis. The late changes were separate entities caused primarily by obliterating endarteritis and alterations in the collagen. Radiation affected the hearing of individuals selectively. When a hearing threshold shift did occur, the shift was not great. The 4000 cps frequency showed a greater deficit in hearing capacity during the tests, while the area least affected appeared to be in the region of 2000 cps. The shift in speech reception was not significant and was correlated with the over-all change in response to pure tones. Discrimination did not appear to be affected. Proper shielding of the ear with lead during radiation, when possible, eliminated most complications. (H.R.D.)
Spencer, Caroline; Weber-Fox, Christine
2014-09-01
In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Participants were 65 children: 25 who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9 to 5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Results suggest that phonological and speech articulation abilities in the preschool years should be considered, along with other predictive factors, as part of a comprehensive risk assessment for the development of chronic stuttering. At the end of this activity the reader will be able to: (1) describe the current status of nonlinguistic and linguistic predictors for recovery and persistence of stuttering; (2) summarize current evidence regarding the potential value of consonant cluster articulation and nonword repetition abilities in helping to predict stuttering outcome in preschool children; (3) discuss the current findings in relation to potential implications for theories of developmental stuttering; (4) discuss the current findings in relation to potential considerations for the evaluation and treatment of developmental stuttering. Copyright © 2014 Elsevier Inc. All rights reserved.
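A minimal sketch of the binary logistic regression named above: predicting eventual recovery (1) versus persistence (0) from preschool BBTOP-CI and NRT scores. The scores and labels below are fabricated solely to make the example run.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [BBTOP-CI score, NRT proficiency]; all values fabricated
X = np.array([[85, 70], [92, 80], [60, 55], [75, 65],
              [95, 85], [58, 50], [88, 78], [65, 60]])
y = np.array([1, 1, 0, 1, 1, 0, 1, 0])    # 1 = recovered, 0 = persisted

model = LogisticRegression().fit(X, y)
# Predicted [P(persist), P(recover)] for a new child's scores
print(model.predict_proba([[70, 62]]))
```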
Accurate or assumed: visual learning in children with ASD.
Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl
2015-10-01
Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented with a series of videos, via an eye tracker, in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.
ERIC Educational Resources Information Center
Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela
2015-01-01
Background: Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. Aims: To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the…
Optimization of programming parameters in children with the advanced bionics cochlear implant.
Baudhuin, Jacquelyn; Cadieux, Jamie; Firszt, Jill B; Reeder, Ruth M; Maxson, Jerrica L
2012-05-01
Cochlear implants provide access to soft intensity sounds and therefore improved audibility for children with severe-to-profound hearing loss. Speech processor programming parameters, such as threshold (or T-level), input dynamic range (IDR), and microphone sensitivity, contribute to the recipient's program and influence audibility. When soundfield thresholds obtained through the speech processor are elevated, programming parameters can be modified to improve soft sound detection. Adult recipients show improved detection for low-level sounds when T-levels are set at raised levels and show better speech understanding in quiet when wider IDRs are used. Little is known about the effects of parameter settings on detection and speech recognition in children using today's cochlear implant technology. The overall study aim was to assess optimal T-level, IDR, and sensitivity settings in pediatric recipients of the Advanced Bionics cochlear implant. Two experiments were conducted. Experiment 1 examined the effects of two T-level settings on soundfield thresholds and detection of the Ling 6 sounds. One program set T-levels at 10% of most comfortable levels (M-levels) and another at 10 current units (CUs) below the level judged as "soft." Experiment 2 examined the effects of IDR and sensitivity settings on speech recognition in quiet and noise. Participants were 11 children 7-17 yr of age (mean 11.3) implanted with the Advanced Bionics High Resolution 90K or CII cochlear implant system who had speech recognition scores of 20% or greater on a monosyllabic word test. Two T-level programs were compared for detection of the Ling sounds and frequency modulated (FM) tones. Differing IDR/sensitivity programs (50/0, 50/10, 70/0, 70/10) were compared using Ling and FM tone detection thresholds, CNC (consonant-vowel nucleus-consonant) words at 50 dB SPL, and Hearing in Noise Test for Children (HINT-C) sentences at 65 dB SPL in the presence of four-talker babble (+8 signal-to-noise ratio). Outcomes were analyzed using a paired t-test and a mixed-model repeated measures analysis of variance (ANOVA). T-levels set 10 CUs below "soft" resulted in significantly lower detection thresholds for all six Ling sounds and FM tones at 250, 1000, 3000, 4000, and 6000 Hz. When comparing programs differing by IDR and sensitivity, a 50 dB IDR with a 0 sensitivity setting showed significantly poorer thresholds for low frequency FM tones and voiced Ling sounds. Analysis of group mean scores for CNC words in quiet or HINT-C sentences in noise indicated no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise; the optimal program differed across participants. In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selection of programming parameters in both young and older children. American Academy of Audiology.
Liu, Chang; Jin, Su-Hyun
2015-11-01
This study investigated whether native listeners processed speech differently from non-native listeners in a speech detection task. Detection thresholds of Mandarin Chinese and Korean vowels and non-speech sounds in noise, frequency selectivity, and the nativeness of Mandarin Chinese and Korean vowels were measured for Mandarin Chinese- and Korean-native listeners. The two groups of listeners exhibited similar non-speech sound detection and frequency selectivity; however, the Korean listeners had better detection thresholds of Korean vowels than Chinese listeners, while the Chinese listeners performed no better at Chinese vowel detection than the Korean listeners. Moreover, thresholds predicted from an auditory model highly correlated with behavioral thresholds of the two groups of listeners, suggesting that detection of speech sounds not only depended on listeners' frequency selectivity, but also might be affected by their native language experience. Listeners evaluated their native vowels with higher nativeness scores than non-native listeners. Native listeners may have advantages over non-native listeners when processing speech sounds in noise, even without the required phonetic processing; however, such native speech advantages might be offset by Chinese listeners' lower sensitivity to vowel sounds, a characteristic possibly resulting from their sparse vowel system and their greater cognitive and attentional demands for vowel processing.
Impact of a Moving Noise Masker on Speech Perception in Cochlear Implant Users
Weissgerber, Tobias; Rader, Tobias; Baumann, Uwe
2015-01-01
Objectives Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users. Design Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal-hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved in the shape of a half-circle from one ear to the contralateral ear. Noise was presented in either of two conditions: continuous or modulated. Results SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal-hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming improved SRT in both CI user groups substantially, by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type. Conclusions CI users showed SRM comparable to that of normal-hearing subjects. In listening situations of everyday life with spatial separation of source and masker, directional microphones significantly improved speech perception, with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit. PMID:25970594
Auditory Temporal Acuity Probed With Cochlear Implant Stimulation and Cortical Recording
Kirby, Alana E.
2010-01-01
Cochlear implants stimulate the auditory nerve with amplitude-modulated (AM) electric pulse trains. Pulse rates >2,000 pulses per second (pps) have been hypothesized to enhance transmission of temporal information. Recent studies, however, have shown that higher pulse rates impair phase locking to sinusoidal AM in the auditory cortex and impair perceptual modulation detection. Here, we investigated the effects of high pulse rates on the temporal acuity of transmission of pulse trains to the auditory cortex. In anesthetized guinea pigs, signal-detection analysis was used to measure the thresholds for detection of gaps in pulse trains at rates of 254, 1,017, and 4,069 pps and in acoustic noise. Gap-detection thresholds decreased by an order of magnitude with increases in pulse rate from 254 to 4,069 pps. Such a pulse-rate dependence would likely influence speech reception through clinical speech processors. To elucidate the neural mechanisms of gap detection, we measured recovery from forward masking after a 196.6-ms pulse train. Recovery from masking was faster at higher carrier pulse rates and masking increased linearly with current level. We fit the data with a dual-exponential recovery function, consistent with a peripheral and a more central process. High-rate pulse trains evoked less central masking, possibly due to adaptation of the response in the auditory nerve. Neither gap detection nor forward masking varied with cortical depth, indicating that these processes are likely subcortical. These results indicate that gap detection and modulation detection are mediated by two separate neural mechanisms. PMID:19923242
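A sketch of fitting the dual-exponential recovery-from-forward-masking function described above with scipy.optimize.curve_fit. The functional form (a fast plus a slow recovery component) follows the paper's description; the amplitudes, time constants, and data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_exp(t, a_fast, tau_fast, a_slow, tau_slow):
    """Masking decays as the sum of a fast and a slow exponential."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(1, 300, 60)                   # ms after masker offset
rng = np.random.default_rng(1)
masking = dual_exp(t, 12, 10, 5, 120) + rng.normal(0, 0.3, t.size)

popt, _ = curve_fit(dual_exp, t, masking, p0=[10, 20, 5, 100])
print(f"fast tau = {popt[1]:.1f} ms, slow tau = {popt[3]:.1f} ms")
```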
Lee, Soo Jung; Park, Kyung Won; Kim, Lee-Suk; Kim, HyangHee
2016-06-01
Along with auditory function, cognitive function contributes to speech perception in the presence of background noise. Older adults with cognitive impairment might, therefore, have more difficulty perceiving speech-in-noise than their peers who have normal cognitive function. We compared the effects of noise level and cognitive function on speech perception in patients with amnestic mild cognitive impairment (aMCI), cognitively normal older adults, and cognitively normal younger adults. We studied 14 patients with aMCI and 14 age-, education-, and hearing threshold-matched cognitively intact older adults as experimental groups, and 14 younger adults as a control group. We assessed speech perception with monosyllabic word and sentence recognition tests at four noise levels: quiet condition and signal-to-noise ratio +5 dB, 0 dB, and -5 dB. We also evaluated the aMCI group with a neuropsychological assessment. Controlling for hearing thresholds, we found that the aMCI group scored significantly lower than both the older adults and the younger adults only when the noise level was high (signal-to-noise ratio -5 dB). At signal-to-noise ratio -5 dB, both older groups had significantly lower scores than the younger adults on the sentence recognition test. The aMCI group's sentence recognition performance was related to their executive function scores. Our findings suggest that patients with aMCI have more problems communicating in noisy situations in daily life than do their cognitively healthy peers and that older listeners with more difficulties understanding speech in noise should be considered for testing of neuropsychological function as well as hearing.
Speech training alters tone frequency tuning in rat primary auditory cortex
Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.
2013-01-01
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364
Benefit of the UltraZoom beamforming technology in noise in cochlear implant users.
Mosnier, Isabelle; Mathias, Nathalie; Flament, Jonathan; Amar, Dorith; Liagre-Callies, Amelie; Borel, Stephanie; Ambert-Dahan, Emmanuèle; Sterkers, Olivier; Bernardeschi, Daniele
2017-09-01
The objectives of the study were to demonstrate the audiological and subjective benefits of the adaptive UltraZoom beamforming technology available in the Naída CI Q70 sound processor, in cochlear-implanted adults upgraded from a previous-generation sound processor. Thirty-four adults aged between 21 and 89 years (mean 53 ± 19) were prospectively included. Nine subjects were unilaterally implanted, 11 bilaterally and 14 were bimodal users. The mean duration of cochlear implant use was 7 years (range 5-15 years). Subjects were tested in quiet with monosyllabic words and in noise with the adaptive French Matrix test in the best-aided conditions. The test setup contained a signal source in front of the subject and three noise sources at +/-90° and 180°. The noise was presented at a fixed level of 65 dB SPL and the level of the speech signal was varied to obtain the speech reception threshold (SRT). During the upgrade visit, subjects were tested with the Harmony and with the Naída CI sound processors in omnidirectional microphone configuration. After a take-home phase of 2 months, tests were repeated with the Naída CI processor with and without UltraZoom. Subjective assessment of the sound quality in daily environments was recorded using the APHAB questionnaire. No difference in performance was observed in quiet between the two processors. The Matrix test in noise was possible in the 21 subjects with the better performance. No difference was observed between the two processors for performance in noise when using the omnidirectional microphone. At the follow-up session, the median SRT with the Naída CI processor with UltraZoom was -4 dB compared to -0.45 dB without UltraZoom. The use of UltraZoom improved the median SRT by 3.6 dB (p < 0.0001, Wilcoxon paired test). When looking at the APHAB outcome, improvement was observed for speech understanding in noisy environments (p < 0.01) and in aversive situations (p < 0.05) in the group of 21 subjects who were able to perform the Matrix test in noise, and for speech understanding in noise (p < 0.05) in the group of 13 subjects with the poorest performance, who were not able to perform the Matrix test in noise. The use of UltraZoom beamforming technology, available on the new sound processor Naída CI, improves speech performance in difficult and realistic noisy conditions when the cochlear implant user needs to focus on the person speaking at the front. Using the APHAB questionnaire, a subjective benefit for listening in background noise was also observed in subjects with good performance as well as in those with poor performance. This study highlighted the importance of upgrading CI recipients to new technology and of including assessment in noise and subjective feedback evaluation as part of the process.
Speech Recognition Thresholds for Multilingual Populations.
ERIC Educational Resources Information Center
Ramkissoon, Ishara
2001-01-01
This article traces the development of speech audiometry in the United States and reports on the current status, focusing on the needs of a multilingual population in terms of measuring speech recognition threshold (SRT). It also discusses sociolinguistic considerations, alternative SRT stimuli for second language learners, and research on using…
ERIC Educational Resources Information Center
Lee, Andrew H.; Lyster, Roy
2016-01-01
This study investigated the effects of different types of corrective feedback (CF) provided during second language (L2) speech perception training. One hundred Korean learners of L2 English, randomly assigned to five groups (n = 20 per group), participated in eight computer-assisted perception training sessions targeting two minimal pairs of…
ERIC Educational Resources Information Center
Charlesworth, Dacia
2010-01-01
Invention deals with the content of a speech, arrangement involves placing the content in an order that is most strategic, style focuses on selecting linguistic devices, such as metaphor, to make the message more appealing, memory assists the speaker in delivering the message correctly, and delivery ideally enables great reception of the message.…
Adapting a receptive vocabulary test for preschool-aged Greek-speaking children.
Okalidou, Areti; Syrika, Asimina; Beckman, Mary E; Edwards, Jan R
2011-01-01
Receptive vocabulary is an important measure for language evaluations. Therefore, norm-referenced receptive vocabulary tests are widely used in several languages. However, a receptive vocabulary test has not yet been normed for Modern Greek. To adapt an American English vocabulary test, the Receptive One-Word Picture Vocabulary Test-II (ROWPVT-II), for Modern Greek for use with Greek-speaking preschool children. The list of 170 English words on ROWPVT-II was adapted by (1) developing two lists (A and B) of Greek words that would match either the target English word or another concept corresponding to one of the pictured objects in the four-picture array; and (2) determining a developmental order for the chosen Greek words for preschool-aged children. For the first task, adult word frequency measures were used to select the words for the Greek wordlist. For the second task, 427 children, 225 boys and 202 girls, ranging in age from 2;0 years to 5;11 years, were recruited from urban and suburban areas of Greece. A pilot study of the two word lists was performed with the aim of comparing an equal number of list A and list B responses for each age group and deriving a new developmental list order. The relative difficulty of each Greek word item, that is, its accuracy score, was calculated by taking the average proportion of correct responses across ages for that word. Subsequently, the word accuracy scores in the two lists were compared via regression analysis, which yielded a highly significant relationship (R² = 0.97; p < 0.0001) and a few outlier pairs (via residuals). Further analysis used the original relative ranking order along with the derived ranking order from the average accuracy scores of the two lists in order to determine which word item from the two lists was a better fit. Finally, new starting levels (basals) were established for preschool ages. The revised word list can serve as the basis for adapting a receptive vocabulary test for Greek preschool-aged children. Further steps need to be taken when testing larger numbers of 2;0 to 5;11-year-old children on the revised word list for determination of norms. This effort will facilitate early identification and remediation of language disorders in Modern Greek-speaking children. © 2010 Royal College of Speech & Language Therapists.
Thresholds of information leakage for speech security outside meeting rooms.
Robinson, Matthew; Hopkins, Carl; Worrall, Ken; Jackson, Tim
2014-09-01
This paper describes an approach to provide speech security outside meeting rooms where a covert listener might attempt to extract confidential information. Decision-based experiments are used to establish a relationship between an objective measurement of the Speech Transmission Index (STI) and a subjective assessment relating to the threshold of information leakage. This threshold is defined for a specific percentage of English words that are identifiable with a maximum safe vocal effort (e.g., "normal" speech) used by the meeting participants. The results demonstrate that it is possible to quantify an offset that links STI with a specific threshold of information leakage which describes the percentage of words identified. The offsets for male talkers are shown to be approximately 10 dB larger than for female talkers. Hence for speech security it is possible to determine offsets for the threshold of information leakage using male talkers as the "worst case scenario." To define a suitable threshold of information leakage, the results show that a robust definition can be based upon 1%, 2%, or 5% of words identified. For these percentages, results are presented for offset values corresponding to different STI values in a range from 0.1 to 0.3.
NASA Astrophysics Data System (ADS)
Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui
2012-04-01
Automatic speech processing systems are widely used in everyday life such as mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal and one that is more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement, which may lead to a new path in speech processing. In the proposed technique, at first, discrete BWT is applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than the existing algorithms at lower input SNR due to modified soft level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task under noisy environments. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.
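A simplified stand-in for the thresholding stage of the TADBWT scheme: ordinary discrete wavelet decomposition with a Daubechies wavelet and soft thresholding via PyWavelets. The bionic (auditory-model) transform and the time-varying adaptation factor of the actual method are not modeled; this shows only the generic wavelet soft-thresholding skeleton.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.4 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db8", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
enhanced = pywt.waverec(denoised, "db8")
print(enhanced.shape)
```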
Hearing loss in children with otitis media with effusion: a systematic review.
Cai, Ting; McPherson, Bradley
2017-02-01
Otitis media with effusion (OME) is the presence of non-purulent inflammation in the middle ear. Hearing impairment is frequently associated with OME. Pure tone audiometry and speech audiometry are two of the most commonly utilised auditory assessments and provide valuable behavioural and functional estimates of hearing loss. This paper was designed to review and analyse the effects of the presence of OME on children's listening abilities. A systematic and descriptive review. Twelve articles reporting frequency-specific pure tone thresholds and/or speech perception measures in children with OME were identified using the PubMed, Ovid, Web of Science, ProQuest and Google Scholar search platforms. The hearing loss related to OME averages 18-35 dB HL. The air-conduction configuration is roughly flat, with a slight elevation at 2000 Hz and a nadir at 8000 Hz. Both speech-in-quiet and speech-in-noise perception have been found to be impaired. OME imposes a series of disadvantages on hearing sensitivity and speech perception in children. Further studies investigating the full range of frequency-specific pure tone thresholds, and that adopt standardised speech test materials, are advocated to evaluate hearing-related disabilities with greater comprehensiveness, comparability and enhanced consideration of their real-life implications.
Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.
Waaramaa, Teija; Palo, Pertti; Kankare, Elina
2015-12-01
Vocal emotions are expressed either by speech or singing. The difference is that in singing the pitch is predetermined while in speech it may vary freely. It was of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors. Given their profession, actors have to be able to express emotions both by speech and singing. Electroglottogram and acoustic analyses of emotional utterances embedded in expressions of freely varying vowels [a:], [i:], [u:] (96 samples) and mono-pitched protracted vowels (96 samples) were studied. Contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels. Three different threshold levels were used in order to evaluate their effects on emotions. Genders were studied separately. The results suggested significant gender differences for CQEGG 80% threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree, when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, and listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions.
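A minimal sketch of the threshold-based contact quotient (CQEGG) computation: at a given threshold level (35%, 55%, or 80% of the cycle's peak-to-peak amplitude), CQEGG is the fraction of the glottal period during which the EGG signal exceeds that level. The waveform below is a crude synthetic stand-in for one EGG cycle.

```python
import numpy as np

def cq_egg(cycle, threshold_level):
    """Contact quotient: fraction of the cycle above the criterion level."""
    lo, hi = cycle.min(), cycle.max()
    level = lo + threshold_level * (hi - lo)
    return float(np.mean(cycle > level))

t = np.linspace(0, 1, 200, endpoint=False)
egg_cycle = np.sin(2 * np.pi * t) ** 3      # synthetic single glottal cycle
for lvl in (0.35, 0.55, 0.80):
    print(f"CQEGG at {int(lvl * 100)}% threshold: {cq_egg(egg_cycle, lvl):.2f}")
```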
Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam
2018-01-01
The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on the suppression of noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise signals. To overcome this problem, the proposed algorithm uses bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and the cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. Each threshold is generated by the entropy and cross-correlation approaches in the decomposed bands using wavelet packet decomposition. The performance of the proposed method was evaluated in MATLAB simulations using objective quality measures: PESQ, RMSE, correlation, and SNR. To verify the method's feasibility, we compared the air- and bone-conduction speech signals and their spectra. The results confirm the high performance of the proposed method, which makes it well suited to future applications in communication devices and in noisy environments such as construction sites and military operations.
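A hedged sketch of the two ingredients named above: Shannon entropy of wavelet-packet subband coefficients and cross-correlation with the bone-conduction reference. How the paper combines them into per-band thresholds is simplified here into one assumed rule (shrink a band more when its entropy is high and the frame correlates weakly with the bone-conduction signal).

```python
import numpy as np
import pywt

def shannon_entropy(c):
    """Shannon entropy of a subband's normalized coefficient energies."""
    p = c**2 / np.sum(c**2)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
air = rng.normal(size=1024)                      # air-conduction frame (stand-in)
bone = 0.5 * air + 0.1 * rng.normal(size=1024)   # correlated bone-conduction frame

r = np.corrcoef(air, bone)[0, 1]                 # frame-level cross-correlation
wp = pywt.WaveletPacket(air, "db4", maxlevel=3)
for node in wp.get_level(3, "natural"):
    thr = shannon_entropy(node.data) * (1.0 - abs(r))  # assumed combination rule
    node.data = pywt.threshold(node.data, thr, mode="soft")
print(wp.reconstruct(update=False).shape)
```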
Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.
Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga
2015-11-01
Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners completed one of two adaptive methods, which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and threshold estimation strategy resulted in a practical method for measuring the time compression that yields 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
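A sketch of an adaptive track over speech rate rather than SNR, as in the test above: after a correct trial the speech is compressed further (harder), after an error it is slowed (easier). The simple 1-down/1-up rule and fixed step shown here converge near 50% correct; the actual procedure used the step sizes of Versfeld and Dreschler (2002) and a maximum-likelihood threshold estimate, neither of which is reproduced.

```python
def adapt_time_compression(responses, start=0.70, step=0.05, lo=0.30, hi=1.00):
    """responses: booleans (True = sentence repeated correctly).
    Returns the time-compression factors presented (1.0 = natural rate);
    smaller factors mean faster speech."""
    track, tc = [], start
    for correct in responses:
        track.append(tc)
        tc += -step if correct else step   # harder after correct, easier after error
        tc = min(max(tc, lo), hi)
    return track

trials = [True, True, False, True, False, False, True, True]
track = adapt_time_compression(trials)
print(track, "-> crude threshold:", sum(track[2:]) / len(track[2:]))
```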
Fostick, Leah; Babkoff, Harvey; Zukerman, Gil
2014-06-01
To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Fifty-five sleep-deprived young adults were compared with 29 aging adults (older than 60 years) and with 18 young controls on auditory temporal order judgment (TOJ) and on speech perception tasks (Experiment 1). The sleep deprived were also compared with 51 dyslexic readers and with the young controls on TOJ and phonological awareness tasks (One-Minute Test for Pseudowords, Phoneme Deletion, Pig Latin, and Spoonerism; Experiment 2). Sleep deprivation resulted in longer TOJ thresholds, poorer speech perception, and poorer nonword reading compared with controls. The TOJ thresholds of the sleep deprived were comparable to those of the aging adults, but their pattern of speech performance differed. They also performed better on TOJ and phonological awareness than dyslexic readers. A variety of linguistic skills are affected by sleep deprivation. The comparison of sleep-deprived individuals with other groups with known difficulties in these linguistic skills might suggest that different groups exhibit common difficulties.
Gifford, René H; Revit, Lawrence J
2010-01-01
Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested "Everyday," "Noise," and "Focus" preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments. American Academy of Audiology.
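The adaptive SRT measurement referred to here and in several of the following abstracts typically follows a simple one-down/one-up rule that converges on 50% sentence intelligibility. A minimal sketch, with illustrative step size and trial count rather than the study's exact protocol:

def adaptive_srt(present_sentence, n_trials=20, start_snr=10.0, step=2.0):
    """present_sentence(snr_db) -> True if the listener repeats it correctly."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = present_sentence(snr)
        track.append(snr)
        snr += -step if correct else step  # harder after success, easier after error
    tail = track[len(track) // 2:]  # discard the initial approach to convergence
    return sum(tail) / len(tail)    # mean SNR of later trials estimates the SRT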
Effectiveness of the Directional Microphone in the Baha® Divino™
Oeding, Kristi; Valente, Michael; Kerckhoff, Jessica
2010-01-01
Background Patients with unilateral sensorineural hearing loss (USNHL) experience great difficulty listening to speech in noisy environments. A directional microphone (DM) could potentially improve speech recognition in this difficult listening environment. It is well known that DMs in behind-the-ear (BTE) and custom hearing aids can provide a greater signal-to-noise ratio (SNR) in comparison to an omnidirectional microphone (OM) to improve speech recognition in noise for persons with hearing impairment. Studies examining the DM in bone anchored auditory osseointegrated implants (Baha), however, have been mixed, with little to no benefit reported for the DM compared to an OM. Purpose The primary purpose of this study was to determine if there are statistically significant differences in the mean reception threshold for sentences (RTS in dB) in noise between the OM and DM in the Baha® Divino™. The RTS of these two microphone modes was measured utilizing two loudspeaker arrays (speech from 0° and noise from 180° or a diffuse eight-loudspeaker array) and with the better ear open or closed with an earmold impression and noise attenuating earmuff. Subjective benefit was assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) to compare unaided and aided (Divino OM and DM combined) problem scores. Research Design A repeated measures design was utilized, with each subject counterbalanced to each of the eight treatment levels for three independent variables: (1) microphone (OM and DM), (2) loudspeaker array (180° and diffuse), and (3) better ear (open and closed). Study Sample Sixteen subjects with USNHL currently utilizing the Baha were recruited from Washington University’s Center for Advanced Medicine and the surrounding area. Data Collection and Analysis Subjects were tested at the initial visit if they entered the study wearing the Divino or after at least four weeks of acclimatization to a loaner Divino. The RTS was determined utilizing Hearing in Noise Test (HINT) sentences in the R-Space™ system, and subjective benefit was determined utilizing the APHAB. A three-way repeated measures analysis of variance (ANOVA) and a paired samples t-test were utilized to analyze results of the HINT and APHAB, respectively. Results Results revealed statistically significant differences within microphone (p < 0.001; directional advantage of 3.2 dB), loudspeaker array (p = 0.046; 180° advantage of 1.1 dB), and better ear conditions (p < 0.001; open ear advantage of 4.9 dB). Results from the APHAB revealed statistically and clinically significant benefit for the Divino relative to unaided on the subscales of Ease of Communication (EC) (p = 0.037), Background Noise (BN) (p < 0.001), and Reverberation (RV) (p = 0.005). Conclusions The Divino’s DM provides a statistically significant improvement in speech recognition in noise compared to the OM for subjects with USNHL. Therefore, it is recommended that audiologists consider selecting a Baha with a DM to provide improved speech recognition performance in noisy listening environments. PMID:21034701
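For readers wanting to reproduce this style of analysis, here is a minimal sketch of a three-way repeated-measures ANOVA on long-format RTS data using statsmodels. The column names and input file are hypothetical, and the abstract does not state which software the authors used.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one RTS (dB) per subject and condition,
# with factors mic (OM/DM), array (180/diffuse), and ear (open/closed)
df = pd.read_csv("rts_long_format.csv")
result = AnovaRM(df, depvar="rts", subject="subject",
                 within=["mic", "array", "ear"]).fit()
print(result)  # F and p values for each main effect and interaction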
Negative blood oxygen level dependent signals during speech comprehension.
Rodriguez Moreno, Diana; Schiff, Nicholas D; Hirsch, Joy
2015-05-01
Speech comprehension studies have generally focused on the isolation and function of regions with positive blood oxygen level dependent (BOLD) signals with respect to a resting baseline. Although regions with negative BOLD signals in comparison to a resting baseline have been reported in language-related tasks, their relationship to regions of positive signals is not fully appreciated. Based on the emerging notion that the negative signals may represent an active function in language tasks, the authors test the hypothesis that negative BOLD signals during receptive language are more associated with comprehension than content-free versions of the same stimuli. Regions associated with comprehension of speech were isolated by comparing responses to passive listening to natural speech to two incomprehensible versions of the same speech: one that was digitally time reversed and one that was muffled by removal of high frequencies. The signal polarity was determined by comparing the BOLD signal during each speech condition to the BOLD signal during a resting baseline. As expected, stimulation-induced positive signals relative to resting baseline were observed in the canonical language areas with varying signal amplitudes for each condition. Negative BOLD responses relative to resting baseline were observed primarily in frontoparietal regions and were specific to the natural speech condition. However, the BOLD signal remained indistinguishable from baseline for the unintelligible speech conditions. Variations in connectivity between brain regions with positive and negative signals were also specifically related to the comprehension of natural speech. These observations of anticorrelated signals related to speech comprehension are consistent with emerging models of cooperative roles represented by BOLD signals of opposite polarity.
Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Smits, Cas
2017-11-20
This study determined the effect of hearing loss and English-speaking competency on the South African English digits-in-noise hearing test to evaluate its suitability for use across native (N) and non-native (NN) speakers. A prospective cross-sectional cohort study of N and NN English adults with and without sensorineural hearing loss compared pure-tone air conduction thresholds to the speech reception threshold (SRT) recorded with the smartphone digits-in-noise hearing test. A rating scale was used for NN English listeners' self-reported competence in speaking English. This study consisted of 454 adult listeners (164 male, 290 female; range 16 to 90 years), of whom 337 listeners had a best ear four-frequency pure-tone average (4FPTA; 0.5, 1, 2, and 4 kHz) of ≤25 dB HL. A linear regression model identified three predictors of the digits-in-noise SRT, namely, 4FPTA, age, and self-reported English-speaking competence. The NN group with poor self-reported English-speaking competence (≤5/10) performed significantly (p < 0.01) poorer than the N and NN (≥6/10) groups on the digits-in-noise test. Screening characteristics of the test improved with separate cutoff values depending on English-speaking competence for the N and NN groups (≥6/10) and NN group alone (≤5/10). Logistic regression models, which include age in the analysis, showed a further improvement in sensitivity and specificity for both groups (area under the receiver operating characteristic curve, 0.962 and 0.903, respectively). Self-reported English-speaking competence had a significant influence on the SRT obtained with the smartphone digits-in-noise test. A logistic regression approach considering SRT, self-reported English-speaking competence, and age as predictors of best ear 4FPTA >25 dB HL showed that the test can be used as an accurate hearing screening tool for N and NN English speakers. The smartphone digits-in-noise test, therefore, allows testing in a multilingual population familiar with English digits using dynamic cutoff values that can be chosen according to self-reported English-speaking competence and age.
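A minimal sketch of the screening model described, assuming a long-format results table: logistic regression predicting best-ear 4FPTA > 25 dB HL from the digits-in-noise SRT, self-reported English-speaking competence, and age, evaluated by the area under the ROC curve. Column names and the input file are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("din_results.csv")          # hypothetical file
X = df[["srt_db", "english_competence", "age"]]
y = (df["best_ear_4fpta"] > 25).astype(int)  # 1 = hearing loss

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"area under ROC curve: {auc:.3f}")    # the paper reports up to 0.962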
Peter, Beate
2013-01-01
This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation) in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and SSD subtypes defined by motor speeds as a corollary of expressive language abilities. PMID:22411590
Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.
2012-01-01
Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460
Arrabito, G R; McFadden, S M; Crabtree, R B
2001-07-01
Auditory speech thresholds were measured in this study. Subjects were required to discriminate a female voice recording of three-digit numbers in the presence of diotic speech babble. The voice stimulus was spatialized at 11 static azimuth positions on the horizontal plane using three different head-related transfer functions (HRTFs) measured on individuals who did not participate in this study. The diotic presentation of the voice stimulus served as the control condition. The results showed that two of the HRTFs performed similarly and had significantly lower auditory speech thresholds than the third HRTF. All three HRTFs yielded significantly lower auditory speech thresholds compared with the diotic presentation of the voice stimulus, with the largest difference at 60 degrees azimuth. The practical implications of these results suggest that lower headphone levels of the communication system in military aircraft can be achieved without sacrificing intelligibility, thereby lessening the risk of hearing loss.
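The spatialization step described above amounts to convolving the mono voice recording with the left- and right-ear head-related impulse responses measured for the desired azimuth. A minimal sketch, with the HRIR arrays assumed to be given:

import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    # Convolution imprints the interaural time and level differences
    # of one static azimuth onto the mono recording
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # stereo for headphone playback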
Lopez Valdes, Alejandro; Mc Laughlin, Myles; Viani, Laura; Walshe, Peter; Smith, Jaclyn; Zeng, Fan-Gang; Reilly, Richard B.
2014-01-01
Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listener's speech perception, resulting in large performance differences. Non-speech based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and active response that can be difficult or even impossible in special patient populations. Here, a completely objective cortical evoked potential based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential response to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p<0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users. PMID:24599314
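A minimal sketch of the neural threshold rule described: ripple density is increased until the evoked responses to standard and inverted stimuli no longer differ significantly. The data layout and the alpha level are illustrative assumptions.

import numpy as np
from scipy.stats import ttest_ind

def neural_ripple_threshold(responses, alpha=0.05):
    """responses: dict mapping ripples-per-octave to a pair of arrays
    (per-trial evoked amplitudes for standard and inverted stimuli)."""
    for rpo in sorted(responses):
        standard, inverted = responses[rpo]
        _, p = ttest_ind(standard, inverted)
        if p >= alpha:           # responses indistinguishable: limit reached
            return rpo
    return max(responses)        # discriminated every density tested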
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Improving speech perception in noise for children with cochlear implants.
Gifford, René H; Olund, Amy P; Dejong, Melissa
2011-10-01
Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. To assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Single subject, repeated measures design. Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Speech reception thresholds (SRT) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects' everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity control (ASC). Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance, in percent correct, was assessed at a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean degree of improvement in the SRT with the addition of ASC to ADRO was 3.5 dB for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments accompanying the diagnosis of severe-to-profound hearing loss. SmartSound strategies currently available in latest generation Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program. American Academy of Audiology.
Venezia, Jonathan H.; Hickok, Gregory; Richards, Virginia M.
2016-01-01
Speech intelligibility depends on the integrity of spectrotemporal patterns in the signal. The current study is concerned with the speech modulation power spectrum (MPS), which is a two-dimensional representation of energy at different combinations of temporal and spectral (i.e., spectrotemporal) modulation rates. A psychophysical procedure was developed to identify the regions of the MPS that contribute to successful reception of auditory sentences. The procedure, based on the two-dimensional image classification technique known as “bubbles” (Gosselin and Schyns (2001). Vision Res. 41, 2261–2271), involves filtering (i.e., degrading) the speech signal by removing parts of the MPS at random, and relating filter patterns to observer performance (keywords identified) over a number of trials. The result is a classification image (CImg) or “perceptual map” that emphasizes regions of the MPS essential for speech intelligibility. This procedure was tested using normal-rate and 2×-time-compressed sentences. The results indicated: (a) CImgs could be reliably estimated in individual listeners in relatively few trials, (b) CImgs tracked changes in spectrotemporal modulation energy induced by time compression, though not completely, indicating that “perceptual maps” deviated from physical stimulus energy, and (c) the bubbles method captured variance in intelligibility not reflected in a common modulation-based intelligibility metric (spectrotemporal modulation index or STMI). PMID:27586738
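A minimal sketch of the classification-image computation at the heart of the bubbles method: each trial's random binary MPS mask is tallied according to whether keywords were identified, and regions whose presence predicts success stand out in the difference of the two mask averages. Array shapes are illustrative assumptions.

import numpy as np

def classification_image(masks, correct):
    """masks: (n_trials, n_spectral, n_temporal) binary MPS filters;
    correct: (n_trials,) booleans (keywords identified or not)."""
    masks = np.asarray(masks, float)
    correct = np.asarray(correct, bool)
    # MPS regions whose presence predicts success stand out positively
    return masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)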
Change in Psychosocial Health Status Over 5 Years in Relation to Adults' Hearing Ability in Noise.
Stam, Mariska; Smit, Jan H; Twisk, Jos W R; Lemke, Ulrike; Smits, Cas; Festen, Joost M; Kramer, Sophia E
The aim of this study was to establish the longitudinal relationship between hearing ability in noise and psychosocial health outcomes (i.e., loneliness, anxiety, depression, distress, and somatization) in adults aged 18 to 70 years. An additional objective was to determine whether a change in hearing ability in noise over a period of 5 years was associated with a change in psychosocial functioning. Subgroup effects for a range of factors were investigated. Longitudinal data of the web-based Netherlands Longitudinal Study on Hearing (NL-SH) (N = 508) were analyzed. The ability to recognize speech in noise (i.e., the speech-reception-threshold [SRTn]) was measured with an online digit triplet test at baseline and at 5-year follow-up. Psychosocial health status was assessed by online questionnaires. Multiple linear regression analyses and longitudinal statistical analyses (i.e., generalized estimating equations) were performed. Poorer SRTn was associated longitudinally with more feelings of emotional and social loneliness. For participants with a high educational level, the longitudinal association between SRTn and social loneliness was significant. Changes in hearing ability and loneliness appeared significantly associated only for specific subgroups: those with stable pattern of hearing aid nonuse (increased emotional and social loneliness), who entered matrimony (increased social loneliness), and low educational level (less emotional loneliness). No significant longitudinal associations were found between hearing ability and anxiety, depression, distress, or somatization. Hearing ability in noise was longitudinally associated with loneliness. Decline in hearing ability in noise was related to increase in loneliness for specific subgroups of participants. One of these subgroups included participants whose hearing deteriorated over 5 years, but who continued to report nonuse of hearing aids. This is an important and alarming finding that needs further investigation.
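A minimal sketch of a generalized estimating equations analysis of the kind described, relating social loneliness to the SRTn across measurement occasions. The variable names, input file, and exchangeable working correlation are illustrative assumptions.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per subject per wave
# (baseline and 5-year follow-up), with SRTn and loneliness scores
df = pd.read_csv("nl_sh_long_format.csv")
model = smf.gee("social_loneliness ~ srtn + age", groups="subject", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())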
A Study of the Combined Use of a Hearing Aid and Tactual Aid in an Adult with Profound Hearing Loss
ERIC Educational Resources Information Center
Reed, Charlotte M.; Delhorne, Lorraine A.
2006-01-01
This study examined the benefits of the combined use of a hearing aid and tactual aid to supplement lip-reading in the reception of speech and for the recognition of environmental sounds in an adult with profound hearing loss. Speech conditions included lip-reading alone (L), lip-reading + tactual aid (L+TA), lip-reading + hearing aid (L+HA), and…
Potts, Lisa G; Kolb, Kelly A
2014-04-01
Difficulty understanding speech in the presence of background noise is a common report among cochlear implant (CI) recipients. Several speech-processing options designed to improve speech recognition, especially in noise, are currently available in the Cochlear Nucleus CP810 speech processor. These include adaptive dynamic range optimization (ADRO), autosensitivity control (ASC), Beam, and Zoom. The purpose of this study was to evaluate CI recipients' speech-in-noise recognition to determine which currently available processing option or options resulted in best performance in a simulated restaurant environment. Experimental study with one study group. The independent variable was speech-processing option, and the dependent variable was the reception threshold for sentences score. Thirty-two adult CI recipients. Eight processing options were tested: Beam, Beam + ASC, Beam + ADRO, Beam + ASC + ADRO, Zoom, Zoom + ASC, Zoom + ADRO, and Zoom + ASC + ADRO. Participants repeated Hearing in Noise Test sentences presented at a 0° azimuth, with R-Space restaurant noise presented from a 360° eight-loudspeaker array at 70 dB sound pressure level. A one-way repeated-measures analysis of variance was used to analyze differences in Beam options, Zoom options, and Beam versus Zoom options. Among the Beam options, Beam + ADRO was significantly poorer than Beam only, Beam + ASC, and Beam + ASC + ADRO. A 1.6-dB difference was observed between the best (Beam only) and poorest (Beam + ADRO) options. Among the Zoom options, Zoom only and Zoom + ADRO were significantly poorer than Zoom + ASC. A 2.2-dB difference was observed between the best (Zoom + ASC) and poorest (Zoom only) options. The comparison between Beam and Zoom options showed one significant difference, with Zoom only significantly poorer than Beam only. No significant difference was found between the other Beam and Zoom options (Beam + ASC vs Zoom + ASC, Beam + ADRO vs Zoom + ADRO, and Beam + ASC + ADRO vs Zoom + ASC + ADRO). The best processing option varied across subjects, with an almost equal number of participants performing best with a Beam option (n = 15) compared with a Zoom option (n = 17). There were no significant demographic or audiological moderating variables for any option. The results showed no significant differences between adaptive directionality (Beam) and fixed directionality (Zoom) when ASC was active in the R-Space environment. This finding suggests that noise-reduction processing is extremely valuable in loud semidiffuse environments in which the effectiveness of directional filtering might be diminished. However, there was no significant difference between the Beam-only and Beam + ASC options, which is most likely related to the additional noise cancellation performed by the Beam option (i.e., two-stage directional filtering and noise cancellation). In addition, the processing options with ADRO resulted in the poorest performances. This could be related to how the CI recipients were programmed or the loud noise level used in this study. The best processing option varied across subjects, but the majority performed best with directional filtering (Beam or Zoom) in combination with ASC. Therefore in a loud semidiffuse environment, the use of either Beam + ASC or Zoom + ASC is recommended. American Academy of Audiology.
Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily
2017-11-01
This case-control study investigated binaural hearing performance in schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions using the Malay version of the Hearing in Noise Test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS than healthy controls (p=0.018). Large effect sizes were found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicating substantial differences between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted in spatial release from masking on the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
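For reference, the effect sizes reported here are Cohen's d values; a minimal sketch of the computation with a pooled standard deviation for two independent groups:

import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd  # e.g. d = 1.07 for RTS in quiet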
Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva
2017-03-01
Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments, AP tests and completed the (modified) Amsterdam Inventory for Auditory Disability and Hearing Handicap Inventory for Elderly questionnaires. Nine out of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. The SRM significantly improved when babble was spatially separated from target speech, while the patients had the FM systems in their ears compared to without the FM systems. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems, when noise was spatially separated from the speech signal by 90° compared with unaided listening. Personal FM systems are feasible in stroke patients, and may be of benefit in just under 20% of this population, who are not eligible for conventional hearing aids.
Ekström, Seth-Reino; Borg, Erik
2011-01-01
The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.
Individual differences in children’s private speech: The role of imaginary companions
Davis, Paige E.; Meins, Elizabeth; Fernyhough, Charles
2013-01-01
Relations between children’s imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N = 148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech. PMID:23978382
Perceptual learning for speech in noise after application of binary time-frequency masks
Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.
2013-01-01
Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56 and 0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners, with the greatest improvement observed for the same materials used in training. PMID:23464038
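A minimal sketch of ideal binary TF mask construction as described: with prior access to the clean speech and the noise, keep only the short-time Fourier transform cells whose local SNR exceeds the threshold criterion, then resynthesize the masked mixture. The -6 dB default is an illustrative choice; the study deliberately varied the criterion.

import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask_mixture(speech, noise, fs, criterion_db=-6.0):
    _, _, S = stft(speech, fs)   # requires the clean speech: hence "ideal"
    _, _, N = stft(noise, fs)
    local_snr = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = local_snr > criterion_db        # keep speech-dominated cells only
    _, out = istft((S + N) * mask, fs)     # STFT is linear: S + N is the mixture
    return out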
Hostile reception greets Bottomley Congress speech.
1990-04-04
Health Minister Virginia Bottomley's attempts to persuade delegates that there are 'exciting' opportunities for nurses in the Government's plans for the health service failed, as she faced growing hostility from the audience at RCN Congress last week.
Development and preliminary evaluation of a new test of ongoing speech comprehension.
Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina
2016-01-01
The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test. Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids. Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test. This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes.
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting state EEG power, and above-threshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery to measure not only traditional pure-tone thresholds, but also above individual thresholds of temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting state EEG from all participants as well as data on the intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors for auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks, and in SiN, despite having normal peripheral hearing as signaled by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA was not different between the two age groups. Age modulated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question how age-related cortical thinning and intrinsic EEG architecture relates to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency band power relates to the extent of age-related central hearing loss in older adults. The results are discussed within the current frameworks of speech processing and aging.
Role of the middle ear muscle apparatus in mechanisms of speech signal discrimination
NASA Technical Reports Server (NTRS)
Moroz, B. S.; Bazarov, V. G.; Sachenko, S. V.
1980-01-01
A method of impedance reflexometry was used to examine 101 students with hearing impairment in order to clarify the interrelation between speech discrimination and the state of the middle ear muscles. The ability to discriminate speech signals depends to some extent on the functional state of the intraaural muscles. Speech discrimination was greatly impaired when the stapedial muscle acoustic reflex (AR) was absent, when stimulation thresholds were low, and when the growth of reflex amplitude was very small. Discrimination was not impeded when the AR was present, relative thresholds were high, and reflex amplitude increased normally in response to speech signals of increasing intensity.
Watson, Rose Mary; Pennington, Lindsay
2015-01-01
Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental designs and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. To describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were: dysphagia, alternative and augmentative communication (AAC)/interaction and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year children and young people with CP and communication disorders received from their service. The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. Lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. © 2015 The Authors International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and Language Therapists.
Grose, John H; Buss, Emily; Hall, Joseph W
2017-01-01
The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.
Bierer, Julie Arenberg; Nye, Amberly D
2014-01-01
Objective: The objective of the present study, performed in cochlear implant listeners, was to examine how the level of current required to detect single-channel electrical pulse trains relates to loudness perception on the same channel. The working hypothesis was that channels with relatively high thresholds, when measured with a focused current pattern, interface poorly to the auditory nerve. For such channels a smaller dynamic range between perceptual threshold and the most comfortable loudness would result, in part, from a greater sensitivity to changes in electrical field spread compared to low-threshold channels. The narrower range of comfortable listening levels may have important implications for speech perception.
Design: Data were collected from eight adult cochlear implant listeners implanted with the HiRes90k cochlear implant (Advanced Bionics Corp.). The partial tripolar (pTP) electrode configuration, consisting of one intracochlear active electrode, two flanking electrodes carrying a fraction (σ) of the return current, and an extracochlear ground, was used for stimulation. Single-channel detection thresholds and most comfortable listening levels were acquired using the most focused pTP configuration possible (σ ≥ 0.8) to identify three channels for further testing, those with the highest, median, and lowest thresholds, for each subject. Threshold, equal-loudness contours (at 50% of the monopolar dynamic range), and loudness growth functions were measured for each of these three test channels using various partial tripolar fractions.
Results: For all test channels, thresholds increased as the electrode configuration became more focused. The rate of increase with the focusing parameter σ was greatest for the high-threshold channel compared to the median- and low-threshold channels. The 50% equal-loudness contours exhibited similar rates of increase in level across test channels and subjects. Additionally, test channels with the highest thresholds had the narrowest dynamic ranges (for σ ≥ 0.5) and steepest growth of loudness functions for all electrode configurations.
Conclusions: Together with previous studies using focused stimulation, the results suggest that auditory responses to electrical stimuli at both threshold and suprathreshold current levels are not uniform across the electrode array of individual cochlear implant listeners. Specifically, the steeper growth of loudness and thus smaller dynamic ranges observed for high-threshold channels are consistent with a degraded electrode-neuron interface, which could stem from lower numbers of functioning auditory neurons or a relatively large distance between the neurons and electrodes. These findings may have potential implications for how stimulation levels are set during the clinical mapping procedure, particularly for speech-processing strategies that use focused electrical fields. PMID:25036146
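To make the stimulation geometry above concrete, the current split in a partial tripolar configuration can be written down directly: the two flanking electrodes together return a fraction σ of the active electrode's current, and the extracochlear ground returns the remainder. A minimal sketch follows; the function name and units are ours, for illustration only.

def partial_tripolar_currents(active_ua, sigma):
    """Current split for a partial tripolar (pTP) configuration.
    sigma = 0 reduces to monopolar stimulation; sigma = 1 is full tripolar."""
    assert 0.0 <= sigma <= 1.0
    flank_each = -sigma * active_ua / 2.0       # the flanks share the fraction sigma
    extracochlear = -(1.0 - sigma) * active_ua  # the remainder returns to the ground
    return {"active": active_ua, "flank_each": flank_each,
            "extracochlear": extracochlear}

# A 100 uA pulse at sigma = 0.8, the most focused setting used in the study
print(partial_tripolar_currents(100.0, 0.8))
# {'active': 100.0, 'flank_each': -40.0, 'extracochlear': -20.0}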
Improved Objective Measurements for Speech Quality Testing
1985-01-01
criterion with the possible exception of the channel vocoder, for which the spread in subjective responses was slightly less than desired. [Figure and table content from the scanned original is garbled beyond recovery at this point.] Parameter III specifies the threshold between objectively interrupted and non-interrupted speech. In the formula specifying RATIO, mf is the index of the
Houser, Dorian S; Finneran, James J
2006-09-01
Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.
Ebbels, Susan H; Marić, Nataša; Murphy, Aoife; Turner, Gail
2014-01-01
Little evidence exists for the effectiveness of therapy for children with receptive language difficulties, particularly those whose difficulties are severe and persistent. The aim was to establish the effectiveness of explicit speech and language therapy with visual support for secondary school-aged children with language impairments, focusing on comprehension of coordinating conjunctions, in a randomized controlled trial with an assessor blind to group status. Fourteen participants (aged 11;3-16;1) with severe receptive and expressive language impairments (RELI) (mean standard scores: CELF4 ELS = 48, CELF4 RLS = 53 and TROG-2 = 57), but higher non-verbal (Matrices = 83) and visual perceptual skills (Test of Visual Perceptual Skills (TVPS) = 86), were randomly assigned to two groups: therapy versus waiting controls. In Phase 1, the therapy group received eight 30-min individual sessions of explicit teaching with visual support (Shape Coding) with their usual speech and language therapist (SLT). In Phase 2, the waiting controls received the same therapy. The participants' comprehension was tested pre-, post-Phase 1 and post-Phase 2 therapy on (1) a specific test of the targeted conjunctions, (2) the TROG-2 and (3) a test of passives. After Phase 1, the therapy group showed significantly more progress than the waiting controls on the targeted conjunctions (d = 1.6) and overall TROG-2 standard score (d = 1.4). The two groups did not differ on the passives test. After Phase 2, the waiting controls made similar progress to those in the original therapy group, who maintained their previous progress. Neither group showed progress on passives. When the two groups were combined, significant progress was found on the specific conjunctions (d = 1.3) and TROG-2 raw (d = 1.1) and standard scores (d = 0.9). Correlations showed that no measures taken (including Matrices and TVPS) correlated significantly with progress on the targeted conjunctions or the TROG-2. Four hours of Shape Coding therapy led to significant gains in comprehension of coordinating conjunctions, which were maintained after 4 months. Given the significant progress at a group level and the lack of reliable predictors of progress, this approach could be offered to other children with similar difficulties to the participants. However, the intervention was delivered one-to-one by speech and language therapists, so the effectiveness of this therapy method with other methods of delivery remains to be evaluated. © 2013 Royal College of Speech and Language Therapists.
Using the structure of natural scenes and sounds to predict neural response properties in the brain
NASA Astrophysics Data System (ADS)
Deweese, Michael
2014-03-01
The natural scenes and sounds we encounter in the world are highly structured. The fact that animals and humans are so efficient at processing these sensory signals compared with the latest algorithms running on the fastest modern computers suggests that our brains can exploit this structure. We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus (MGBv) and primary auditory cortex (A1), and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. We have also developed a biologically-inspired neural network model of primary visual cortex (V1) that can learn a sparse representation of natural scenes using spiking neurons and strictly local plasticity rules. The representation learned by our model is in good agreement with measured receptive fields in V1, demonstrating that sparse sensory coding can be achieved in a realistic biological setting.
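The sparse representation described above seeks codes in which few model neurons are active at once. One standard way to compute such a code, given a dictionary of features, is iterative soft-thresholding (ISTA) on an L1-penalized reconstruction objective. The sketch below uses a random dictionary in place of the study's learned speech features, so it illustrates the principle rather than the published model.

import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding.
    D: (features, atoms) dictionary with unit-norm columns; x: (features,) input,
    e.g. a spectrogram patch."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 40]         # signal built from two atoms
a = ista_sparse_code(D, x)
print(int(np.sum(np.abs(a) > 1e-3)))       # only a small fraction of the 128 units are active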
Caversaccio, Marco
2014-01-01
Objective: To compare hearing and speech understanding between a new, non-skin-penetrating Baha system (Baha Attract) and the current Baha system using a skin-penetrating abutment.
Methods: Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then by adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise.
Results: The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz, increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths.
Conclusion: The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314
Samango-Sprouse, Carole; Lawson, Patrick; Sprouse, Courtney; Stapleton, Emily; Sadeghin, Teresa; Gropman, Andrea
2016-05-01
Kleefstra syndrome (KS) is a rare neurogenetic disorder most commonly caused by deletion in the 9q34.3 chromosomal region and is associated with intellectual disabilities, severe speech delay, and motor planning deficits. To our knowledge, this is the first patient (PQ, a 6-year-old female) with a 9q34.3 deletion who has near normal intelligence and developmental dyspraxia with childhood apraxia of speech (CAS). At age 6, the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) revealed a Verbal IQ of 81 and Performance IQ of 79. The Beery-Buktenica Test of Visual Motor Integration, 5th Edition (VMI) indicated severe visual motor deficits: VMI = 51; Visual Perception = 48; Motor Coordination < 45. On the Receptive One Word Picture Vocabulary Test-R (ROWPVT-R), she had standard scores of 96 and 99, in contrast to Expressive One Word Picture Vocabulary Test-R (EOWPVT-R) standard scores of 73 and 82, revealing a discrepancy between vocabulary domains on both evaluations. The Preschool Language Scale-4 (PLS-4) at PQ's first evaluation revealed a significant difference between auditory comprehension and expressive communication, with standard scores of 78 and 57, respectively, further supporting the presence of CAS. This patient's near normal intelligence expands the phenotypic profile as well as the prognosis associated with KS. The identification of CAS in this patient provides a novel explanation for the previously reported speech delay and expressive language disorder. Further research is warranted on the impact of CAS on intelligence and behavioral outcome in KS. Therapeutic and prognostic implications are discussed. © 2016 Wiley Periodicals, Inc.
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
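The slopes quoted above are exponents of a power law, threshold ∝ size^k, so k appears as a straight-line slope in log-log coordinates. A minimal sketch of estimating such a slope from threshold-versus-size data; the numbers are invented for illustration.

import numpy as np

sizes = np.array([0.5, 1.0, 2.0, 4.0])   # hypothetical target sizes (deg)
thresholds = 0.2 * sizes ** -0.5         # synthetic data lying on a -1/2 power law

slope, intercept = np.polyfit(np.log10(sizes), np.log10(thresholds), 1)
print(round(slope, 2))                   # -0.5, the detection-regime slope reported above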
Outcomes of Late Implantation in Usher Syndrome Patients.
Hoshino, Ana Cristina H; Echegoyen, Agustina; Goffi-Gomez, Maria Valéria Schmidt; Tsuji, Robinson Koji; Bento, Ricardo Ferreira
2017-04-01
Introduction: Usher syndrome (US) is an autosomal recessive disorder characterized by hearing loss and progressive visual impairment. Some deaf Usher syndrome patients learn to communicate using sign language. During adolescence, as they start losing vision, they are usually referred for cochlear implantation as a salvage for their new condition. Is late implantation beneficial to these children?
Objective: The objective of this study is to describe the outcomes of US patients who received cochlear implants at a later age.
Methods: This is a retrospective study of ten patients diagnosed with US1. We collected pure-tone thresholds and speech perception test results from before and one year after implantation.
Results: Average age at implantation was 18.9 years (5-49). Aided average thresholds were 103 dB HL and 35 dB HL before and one year after implantation, respectively. Speech perception could be measured preoperatively in only four patients, who scored 13.3%, 26.67%, and 46% on vowel tests and 56% on a four-choice test. All patients except one had some kind of communication. Two were bilingual. After one year of using the device, seven patients were able to perform the speech tests (from four-choice to closed-set sentences) and three patients abandoned the use of the implant.
Conclusion: We observed that detection of sounds can be achieved with late implantation, but speech recognition is only possible in patients with previous hearing stimulation, since it depends on the development of hearing skills and the maturation of the auditory pathways.
Spatial Release from Masking in Children: Effects of Simulated Unilateral Hearing Loss
Corbin, Nicole E.; Buss, Emily; Leibold, Lori J.
2016-01-01
Objectives: The purpose of this study was twofold: 1) to determine the effect of an acute simulated unilateral hearing loss on children's spatial release from masking in two-talker speech and speech-shaped noise, and 2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker.
Design: Participants were 12 children (8.7 to 10.9 yrs) and 11 adults (18.5 to 30.4 yrs) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or −90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or −90°. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions.
Results: All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the simulated-unilateral-hearing-loss conditions, a positive spatial release from masking was observed only when the masker was presented ipsilateral to the simulated unilateral hearing loss. In the speech-shaped noise masker, spatial release from masking in the no-plug condition was similar to that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. In contrast, in the two-talker speech masker, spatial release from masking in the no-plug condition was much larger than that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. When either masker was presented contralateral to the simulated unilateral hearing loss, spatial release from masking was negative. This pattern of results was observed for both children and adults, although children performed more poorly overall.
Conclusions: Children and adults with normal bilateral hearing experience greater spatial release from masking for a two-talker speech than a speech-shaped noise masker. Testing in a two-talker speech masker revealed listening difficulties in the presence of disrupted binaural input that were not observed in a speech-shaped noise masker. This procedure offers promise for the assessment of spatial release from masking in children with permanent unilateral hearing loss. PMID:27787392
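As defined above, spatial release from masking is simply the co-located threshold minus the spatially separated threshold, so positive values indicate a benefit from separation. A trivial sketch, with invented values:

def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Positive result: the listener benefits from spatial separation (dB)."""
    return srt_colocated_db - srt_separated_db

# e.g., a -2 dB co-located SRT vs a -8 dB separated SRT gives 6 dB of release
print(spatial_release_from_masking(-2.0, -8.0))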
Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal
The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices.
Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) auditory and speech tests, (2) residual hearing, (3) electrical stimulation parameters, and (4) correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes.
The children with isolated AN performed equally well to the children with SNHL on auditory and speech recognition tests in both quiet and noise. More children in the AN group than the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, lower comfortable level and dynamic range, and lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds.
Prelingual children with isolated AN who fail to show the expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a similar pattern to children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution of electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.
Engaged listeners: shared neural processing of powerful political speeches
Häcker, Frank E. K.; Honey, Christopher J.; Hasson, Uri
2015-01-01
Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners’ brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages. PMID:25653012
1986-07-08
Dr. William R. Lucas, Marshall's fourth Center Director (1974-1986), delivers a speech in front of a picture of the lunar landscape with Earth looming in the background while attending a Huntsville Chamber of Commerce reception honoring his achievements as Director of Marshall Space Flight Center (MSFC).
Development and preliminary evaluation of a new test of ongoing speech comprehension
Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina
2016-01-01
Objective: The overall goal of this work is to create new speech perception tests that more closely resemble real-world communication and offer an alternative or complement to the commonly used sentence recall test.
Design: We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test.
Study Sample: Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids.
Results: Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test.
Conclusions: This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes. PMID:26158403
Longitudinal predictors of aided speech audibility in infants and children
McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Bentler, Ruth; Holte, Lenore; Roush, Patricia; Oleson, Jacob; Van Buren, John; Moeller, Mary Pat
2015-01-01
Objectives: Amplification is a core component of early intervention for children who are hard of hearing (CHH), but hearing aids (HAs) have unique effects that may be independent from other components of the early intervention process, such as caregiver training or speech and language intervention. The specific effects of amplification are rarely described in studies of developmental outcomes. The primary purpose of this manuscript is to quantify aided speech audibility during the early childhood years and examine the factors that influence audibility with amplification for children in the Outcomes of Children with Hearing Loss (OCHL) study.
Design: Participants were 288 children with permanent hearing loss who were followed as part of the OCHL study. All of the children in this analysis had bilateral hearing loss and wore air-conduction behind-the-ear HAs. At every study visit, hearing thresholds were measured using developmentally appropriate behavioral methods. Data were obtained for a total of 1043 audiometric evaluations across all subjects for the first four study visits. In addition, the aided audibility of speech through the HA was assessed using probe microphone measures. Hearing thresholds and aided audibility were analyzed. Repeated-measures analyses of variance were conducted to determine if patterns of thresholds and aided audibility were significantly different between ears (left vs. right) or across the first four study visits. Furthermore, a cluster analysis was performed based on the aided audibility at entry into the study, aided audibility at the child's final visit, and change in aided audibility between these two intervals to determine if there were different patterns of longitudinal aided audibility within the sample.
Results: Eighty-four percent of children in the study had stable audiometric thresholds during the study, defined as threshold changes <10 dB for any single study visit. There were no significant differences in hearing thresholds, aided audibility, or deviation of the HA fitting from prescriptive targets between ears or across test intervals for the first four visits. Approximately 35% of the children in the study had aided audibility that was below the average for the normative range for the Speech Intelligibility Index (SII) based on degree of hearing loss. The cluster analysis of longitudinal aided audibility revealed three distinct groups of children: a group with consistently high aided audibility throughout the study, a group with decreasing audibility during the study, and a group with consistently low aided audibility.
Conclusions: The current results indicated that approximately 65% of children in the study had adequate aided audibility of speech and stable hearing during the study period. Limited audibility was associated with greater degrees of hearing loss and larger deviations from prescriptive targets. Studies of developmental outcomes will help to determine how aided audibility affects developmental outcomes in CHH. PMID:26731156
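The cluster analysis above groups children by three longitudinal features: aided audibility at entry, audibility at the final visit, and the change between the two. A minimal sketch of that grouping with k-means follows; the SII values are invented, and the abstract does not state which clustering algorithm the authors used.

import numpy as np
from sklearn.cluster import KMeans

# Columns: SII at entry, SII at final visit, change between visits (invented values)
features = np.array([
    [0.80, 0.82, 0.02],   # consistently high audibility
    [0.75, 0.55, -0.20],  # decreasing audibility
    [0.45, 0.47, 0.02],   # consistently low audibility
    [0.78, 0.80, 0.02],
    [0.72, 0.50, -0.22],
    [0.40, 0.42, 0.02],
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)  # three groups, mirroring the high / decreasing / low pattern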
Dickson, Kirstin; Marshall, Marjorie; Boyle, James; McCartney, Elspeth; O'Hare, Anne; Forbes, John
2009-01-01
The study is the first within-trial cost analysis of direct versus indirect and individual versus group modes of speech-and-language therapy for children with primary language impairment. The aim was to compare the short-run resource consequences of the four interventions alongside the effects achieved, measured by standardized scores on a test of expressive and receptive language. The study design was a cost analysis integrated within a randomized controlled trial using a 2×2 factorial design (direct/indirect versus individual/group therapy) together with a control group that received usual levels of community-based speech-and-language therapy. Research interventions were delivered in school settings in Scotland, UK. Participants were children aged between 6 and 11 years, attending a mainstream school, with standard scores on the Clinical Evaluation of Language Fundamentals (CELF-III(UK)) of less than -1.25 standard deviations (SD) (receptive and/or expressive) and non-verbal IQ on the Wechsler Abbreviated Scale of Intelligence (WASI) above 75, with no reported hearing loss, no moderate/severe articulation/phonology/dysfluency problems, and not otherwise requiring individual work with a speech-and-language therapist. The intervention involved speech-and-language therapists and speech-and-language therapy assistants working with individual children or small groups of children. A therapy manual was constructed to assist the choice of procedures and activities for intervention. The cost analysis focused on the salary and travel costs associated with each mode of intervention. The cumulative distribution of total costs arising from the time of randomization to post-intervention assessment was estimated. Arithmetic mean costs were compared and reported with their 95% confidence intervals. The results of the intention-to-treat analysis revealed that there were no significant post-intervention differences between direct and indirect modes of therapy, or between individual and group modes, on any of the primary language outcome measures. The cost analysis identified indirect therapy, particularly indirect group therapy, as the least costly of the intervention modes, with direct individual therapy as the most costly option. The programme cost of providing therapy in practice over 30 weeks could represent between 30% and 75% of the total gross revenue spend per pupil in primary school, depending on the choice of assistant-led group therapy or therapist-led individual therapy. This study suggests that speech-and-language therapy assistants can act as effective surrogates for speech-and-language therapists in delivering cost-effective services to children with primary language impairment. The resource gains from adopting a group-based approach may ensure that effective therapy is provided to more children in a more efficient way.
Ten-year follow-up of a consecutive series of children with multichannel cochlear implants.
Uziel, Alain S; Sillon, Martine; Vieu, Adrienne; Artieres, Françoise; Piron, Jean-Pierre; Daures, Jean-Pierre; Mondain, Michel
2007-08-01
To assess a consecutive series of children, more than 10 years after implantation, with regard to speech perception, speech intelligibility, receptive language level, and academic/occupational status. A prospective longitudinal study. Pediatric referral center for cochlear implantation. Eighty-two prelingually deafened children received the Nucleus multichannel cochlear implant. Cochlear implantation with the Cochlear Nucleus CI22 implant. The main outcome measures were the open-set Phonetically Balanced Kindergarten word test, discrimination of sentences in noise, connected discourse tracking (CDT) using voice and telephone, speech intelligibility rating (SIR), vocabulary knowledge measured using the Peabody Picture Vocabulary Test (Revised), academic performance in French language, foreign language, and mathematics, and academic/occupational status. After 10 years of implant experience, 79 children (96%) reported that they always wear the device; 79% (65 of 82 children) could use the telephone. The mean scores were 72% for the Phonetically Balanced Kindergarten word test, 44% for word recognition in noise, 55.3 words per minute for the CDT, and 33 words per minute for the CDT via telephone. Thirty-three children (40%) developed speech intelligible to the average listener (SIR 5), and 22 (27%) developed speech intelligible to a listener with little experience of deaf persons' speech (SIR 4). The measures of vocabulary showed that most (76%) of the children who received implants scored below the median value of their normally hearing peers. Age at implantation was the most important factor influencing postimplant outcomes. Regarding educational/vocational status, 6 subjects attend universities, 3 already have a professional activity, 14 are currently at high school level, 32 are at junior high school level, 6 additional children are enrolled in a special unit for children with disability, and 3 children are still attending elementary schools. Seventeen are in further noncompulsory education studying a range of subjects at vocational level. This long-term report shows that many profoundly hearing-impaired children using cochlear implants can develop functional levels of speech perception and production, attain age-appropriate oral language, develop competency in a language other than their primary language, and achieve satisfactory academic performance.
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
2015-01-01
As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.
Neural representation of consciously imperceptible speech sound differences.
Allen, J; Kraus, N; Bradlow, A
2000-10-01
The concept of subliminal perception has been a subject of interest and controversy for decades. Of interest in the present investigation was whether a neurophysiologic index of stimulus change could be elicited to speech sound contrasts that were consciously indiscriminable. The stimuli were chosen on the basis of each individual subject's discrimination threshold. The speech stimuli (which varied along an F3 onset frequency continuum from /da/ to /ga/) were synthesized so that the acoustical properties of the stimuli could be tightly controlled. Subthreshold and suprathreshold stimuli were chosen on the basis of behavioral ability demonstrated during psychophysical testing. A significant neural representation of stimulus change, reflected by the mismatch negativity response, was obtained in all but 1 subject in response to subthreshold stimuli. Grand average responses differed significantly from responses obtained in a control condition consisting of physiologic responses elicited by physically identical stimuli. Furthermore, responses to suprathreshold stimuli (close to threshold) did not differ significantly from subthreshold responses with respect to latency, amplitude, or area. These results suggest that neural representation of consciously imperceptible stimulus differences occurs and that this representation occurs at a preattentive level.
Mackersie, Carol; Boothroyd, Arthur; Lithgow, Alexandra
2018-06-11
The objective was to determine the self-adjusted output response and speech intelligibility index (SII) in individuals with mild to moderate hearing loss and to measure the effects of prior hearing aid experience. Thirteen hearing aid users and 13 nonusers, with similar group-mean pure-tone thresholds, listened to prerecorded and preprocessed sentences spoken by a man. Starting with a generic level and spectrum, participants adjusted (1) overall level, (2) high-frequency boost, and (3) low-frequency cut. Participants took a speech perception test after an initial adjustment before making a final adjustment. The three self-selected parameters, along with individual thresholds and real-ear-to-coupler differences, were used to compute output levels and SIIs for the starting and two self-adjusted conditions. The values were compared with the NAL-NL2 threshold-based prescription and, for the hearing aid users, with the performance of their existing hearing aids. All participants were able to complete the self-adjustment process. The generic starting condition provided outputs (between 2 and 8 kHz) and SIIs that were significantly below those prescribed by NAL-NL2. Both groups increased SII to values that were not significantly different from prescription. The hearing aid users, but not the nonusers, increased high-frequency output and SII significantly after taking the speech perception test. Seventeen of the 26 participants (65%) met an SII criterion of 60% under the generic starting condition. The proportion increased to 23 out of 26 (88%) after the final self-adjustment. Of the 13 hearing aid users, 8 (62%) met the 60% criterion with their existing hearing aids. With the final self-adjustment, 12 out of 13 (92%) met this criterion. The findings support the conclusion that user self-adjustment of basic amplification characteristics can be both feasible and effective, with or without prior hearing aid experience.
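At its core, the SII referred to above is importance-weighted audibility summed over frequency bands. The sketch below shows only that core computation, assuming four illustrative bands and equal band-importance weights; it omits the level-distortion and other corrections of the full ANSI S3.5 procedure.

def speech_intelligibility_index(speech_levels, noise_levels, importances):
    """Simplified SII: per-band audibility ((SNR + 15) / 30, clamped to 0..1)
    weighted by band importance. Not the full ANSI S3.5 standard."""
    assert abs(sum(importances) - 1.0) < 1e-6
    sii = 0.0
    for s, n, w in zip(speech_levels, noise_levels, importances):
        audibility = min(max(((s - n) + 15.0) / 30.0, 0.0), 1.0)
        sii += w * audibility
    return sii

# Example: four bands with equal importance and mixed SNRs -> 0.75
print(round(speech_intelligibility_index(
    [60, 55, 50, 45], [45, 45, 45, 45], [0.25, 0.25, 0.25, 0.25]), 2))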
Speech processing in children with functional articulation disorders.
Gósy, Mária; Horváth, Viktória
2015-03-01
This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.
Speech concerns at 5 years and adult educational and mental health outcomes.
Muir, Colette; O'Callaghan, Michael J; Bor, William; Najman, Jake M; Williams, Gail M
2011-07-01
To determine if parent-reported speech concerns at 5 years predict poorer educational and mental health outcomes at 21 years, independent of social context and the child's receptive language, behaviour and motor concerns at 5 years, and to determine if these adult outcomes are mediated by school performance at 14 years. Information on speech concerns at 5 years and on outcomes at 21 years was available for 3193 participants from a birth cohort of 7223 infants. At 5 years, child behaviour was measured using a behavioural checklist, and at 21 years by the Young Adult Self-Report. The Peabody Picture Vocabulary Test-Revised at 5 years was not available for all children. Maternal mental health and social information at 5 years and educational outcomes at 14 and 21 years were collected prospectively by questionnaire. Potential confounding and mediating factors were analysed using logistic regression. Children with speech concerns were less likely to have completed secondary school (P < 0.01) or to have gained better overall position (OP) scores (P < 0.001); OP scores rank students in Queensland applying for tertiary entrance. There was no association with mental health outcomes. Findings were independent of maternal and social factors and motor concerns, though attenuated by behaviour and Peabody Picture Vocabulary Test-Revised scores. In the model adjusted for these factors, any concerns predicted an OP score of 1-11 (odds ratio 0.58; 95% confidence interval 0.42, 0.79), though when academic functioning at 14 was included, no associations were significant. Maternal-reported speech concerns at 5 years predict poorer educational, though not adult mental health, outcomes. © 2011 The Authors. Journal of Paediatrics and Child Health © 2011 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
John, Andrew B; Kreisman, Brian M
2017-09-01
Extended high-frequency (EHF) audiometry is useful for evaluating ototoxic exposures and may relate to speech recognition, localisation and hearing aid benefit. There is a need to determine whether common clinical practice for EHF audiometry using tone and noise stimuli is reliable. We evaluated equivalence and compared test-retest (TRT) reproducibility for audiometric thresholds obtained using pure tones and narrowband noise (NBN) from 0.25 to 16 kHz. Thresholds and test-retest reproducibility for stimuli in the conventional (0.25-6 kHz) and EHF (8-16 kHz) frequency ranges were compared in a repeated-measures design. A total of 70 ears of adults with normal hearing. Thresholds obtained using NBN were significantly lower than thresholds obtained using pure tones from 0.5 to 16 kHz, but not 0.25 kHz. Good TRT reproducibility (within 2 dB) was observed for both stimuli at all frequencies. Responses at the lower limit of the presentation range for NBN centred at 14 and 16 kHz suggest unreliability for NBN as a threshold stimulus at these frequencies. Thresholds in the conventional and EHF ranges showed good test-retest reproducibility, but differed between stimulus types. Care should be taken when comparing pure-tone thresholds with NBN thresholds especially at these frequencies.
Simultaneous Communication Supports Learning in Noise by Cochlear Implant Users
Blom, Helen C.; Marschark, Marc; Machmer, Elizabeth
2017-01-01
Objectives: This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant users in noisy contexts.
Methods: Forty-eight college students who were active cochlear implant users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments.
Results: When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance.
Discussion: Students who are cochlear implant users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those cochlear implant users who know sign language, the redundancy of speech and signs potentially can offset the reduced fidelity of spoken language in noise.
Conclusion: Accompanying spoken language with signs can benefit learners who are cochlear implant users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined. PMID:28010675
Speech as a pilot input medium
NASA Technical Reports Server (NTRS)
Plummer, R. P.; Coler, C. R.
1977-01-01
The speech recognition system under development is a trainable pattern classifier based on a maximum-likelihood technique. An adjustable uncertainty threshold allows the rejection of borderline cases for which the probability of misclassification is high. The syntax of the command language spoken may be used as an aid to recognition, and the system adapts to changes in pronunciation if feedback from the user is available. Words must be separated by 0.25-second gaps. The system runs in real time on a minicomputer (PDP 11/10) and was tested on 120,000 speech samples from 10- and 100-word vocabularies. The results of these tests were 99.9% correct recognition for a vocabulary consisting of the ten digits, and 99.6% recognition for a 100-word vocabulary of flight commands, with a 5% rejection rate in each case. With no rejection, the recognition accuracies for the same vocabularies were 99.5% and 98.6%, respectively.
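The abstract describes a maximum-likelihood classifier that rejects borderline tokens via an uncertainty threshold. Below is a minimal sketch of that decision rule, assuming diagonal-Gaussian class likelihoods and a posterior-probability threshold; the feature model, function names, and numbers are ours, since the abstract does not specify the system's statistics.

import numpy as np

def classify_with_rejection(x, means, variances, priors, threshold=0.95):
    """Maximum-likelihood word classification with an uncertainty threshold.
    Returns the winning class index, or None when the posterior of the best
    class falls below `threshold` (a borderline token likely to be misheard)."""
    log_post = []
    for m, v, p in zip(means, variances, priors):
        # Diagonal-Gaussian log-likelihood plus log prior
        ll = -0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
        log_post.append(ll + np.log(p))
    log_post = np.array(log_post)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                      # normalized posterior probabilities
    best = int(np.argmax(post))
    return best if post[best] >= threshold else None

# Two-word vocabulary with well-separated feature means
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
variances = [np.ones(2), np.ones(2)]
priors = [0.5, 0.5]
print(classify_with_rejection(np.array([0.1, -0.2]), means, variances, priors))  # 0
print(classify_with_rejection(np.array([1.5, 1.5]), means, variances, priors))   # None (rejected)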
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Frizelle, Pauline; Harte, Jennifer; O'Sullivan, Kathleen; Fletcher, Paul; Gibbon, Fiona
2017-01-01
The information-carrying word (ICW) level, a receptive language measure, is used extensively by speech and language therapists in the UK and Ireland. Despite this, it has never been validated via its relationship to any other relevant measures. This study aims to validate the ICW measure by investigating the relationship between the receptive ICW score of children with specific language impairment (SLI) and their performance on standardized memory and language assessments. Twenty-seven children with SLI, aged between 5;07 and 8;11, completed a sentence comprehension task in which the instructions gradually increased in number of ICWs. The children also completed subtests from the Working Memory Test Battery for Children and the Clinical Evaluation of Language Fundamentals-4. Results showed that there was a significant positive relationship between both language and memory measures and children's ICW score. While both receptive and expressive language were significant in their contribution to children's ICW score, the contribution of memory was solely determined by children's working memory ability. The ICW score is in fact a valid measure of the language ability of children with SLI. However, therapists should also be cognisant of its strong association with working memory when using this construct in assessment or intervention methods.
Scherer, Nancy J; Baker, Shauna; Kaiser, Ann; Frey, Jennifer R
2018-01-01
Objective: This study compares the early speech and language development of children with cleft palate with or without cleft lip who were adopted internationally with that of children born in the United States.
Design: Prospective longitudinal description of early speech and language development between 18 and 36 months of age.
Participants: This study compares four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were adopted internationally with four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were born in the United States, matched for age, gender, and cleft type across three time points over 10 to 12 months.
Main Outcome Measures: Children's speech-language skills were analyzed using standardized tests, parent surveys, language samples, and single-word phonological assessments to determine differences between the groups.
Results: The mean scores for the children in the internationally adopted group were lower than those of the group born in the United States at all three time points for expressive language and speech sound production measures. Examination of matched pairs demonstrated observable differences for two of the four pairs. No differences were observed in cognitive performance and receptive language measures.
Conclusions: The results suggest a cumulative effect of later palate repair and/or a variety of health and environmental factors associated with the children's early circumstances that persists to age 3 years. Early intervention to address the trajectory of speech and language is warranted. Given the findings from this small pilot study, a larger study of the long-term speech and language development of children who are internationally adopted and have cleft palate with or without cleft lip is recommended.
Sequencing Stories in Spanish and English.
ERIC Educational Resources Information Center
Steckbeck, Pamela Meza
The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…
A cascaded neuro-computational model for spoken word recognition
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2010-03-01
In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
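A minimal Python sketch may make the cascaded architecture concrete. This is an illustrative toy, not the authors' implementation: a pre-lexical layer of radial basis functions labels each incoming feature frame with the index of its winning receptive field, and a lexical layer matches the resulting label sequence against stored word templates. All dimensions, names, and templates here are hypothetical.

```python
import numpy as np

def rbf_winner(frame, centers, width=1.0):
    """Pre-lexical layer: return the label (index) of the winning RBF."""
    d2 = np.sum((centers - frame) ** 2, axis=1)
    return int(np.argmax(np.exp(-d2 / (2.0 * width ** 2))))

def lexical_match(labels, templates):
    """Lexical layer: score each word template against the label sequence."""
    scores = {}
    for word, tmpl in templates.items():
        n = min(len(labels), len(tmpl))
        scores[word] = sum(a == b for a, b in zip(labels, tmpl)) / n
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 12))       # 8 receptive fields over 12-dim features
frames = rng.normal(size=(5, 12))        # 5 consecutive input frames
labels = [rbf_winner(f, centers) for f in frames]
templates = {"cat": [3, 3, 1, 1, 6], "cab": [3, 3, 0, 0, 6]}  # toy label templates
word, scores = lexical_match(labels, templates)
```

Because every word template is scored in the same pass, such a structure naturally supports the parallel access to word candidates during lexical decision that the abstract describes.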
Effects of low-dose topiramate on language function in children with migraine
Han, Seung-A; Yang, Eu Jeen; Kong, Younghwa; Joo, Chan-Uhng
2017-01-01
Purpose: This study aimed to verify the safety of low-dose topiramate with respect to language development in pediatric patients with migraine. Methods: Thirty newly diagnosed pediatric patients with migraine who needed topiramate were enrolled and assessed twice with standard language tests, including the Test of Language Problem Solving Abilities (TOPs), the Receptive and Expressive Vocabulary Test, the Urimal Test of Articulation and Phonology, and computerized speech laboratory analysis. Data were collected before treatment, and topiramate monotherapy was sustained for at least 3 months. The mean follow-up period was 4.3±2.7 months. The mean topiramate dosage was 0.9 mg/kg/day. Results: The patients' mean age was 144.1±42.3 months (male-to-female ratio, 9:21). None of the language parameters of the TOPs changed significantly after topiramate treatment: determining cause, from 15.0±4.4 to 15.4±4.8 (P>0.05); making inference, from 17.6±5.6 to 17.5±6.6 (P>0.05); predicting, from 11.5±4.5 to 12.3±4.0 (P>0.05); and total TOPs score, from 44.1±13.4 to 45.3±13.6 (P>0.05). The total mean length of utterance in words during the test decreased from 44.1±13.4 to 45.3±13.6 (P<0.05). The Receptive and Expressive Vocabulary Test results changed from 97.7±22.1 to 96.3±19.9 months and from 81.8±23.4 to 82.3±25.4 months, respectively (P>0.05). In the articulation and phonology evaluation, changes in speech pitch and energy were not significant, and none of the vowel tests showed significant differences. Conclusion: No significant change was found in language-speaking ability; however, the number of words used in utterances decreased. Therefore, topiramate should be used cautiously in children with migraine. PMID:28861114
Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P
2006-01-01
A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training improved knowledge of word meanings substantially. Performance on speech perception and reading aloud was significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.
Jones, Gary L.; Ho Won, Jong; Drennan, Ward R.; Rubinstein, Jay T.
2013-01-01
Cochlear implant (CI) users can achieve remarkable speech understanding, but there is great variability in outcomes that is only partially accounted for by age, residual hearing, and duration of deafness. Results might be improved with the use of psychophysical tests to predict which sound processing strategies offer the best potential outcomes. In particular, the spectral-ripple discrimination test offers a time-efficient, nonlinguistic measure that is correlated with perception of both speech and music by CI users. Features that make this “one-point” test time-efficient, and thus potentially clinically useful, are also connected to controversy within the CI field about what the test measures. The current work examined the relationship between thresholds in the one-point spectral-ripple test, in which stimuli are presented acoustically, and interaction indices measured under the controlled conditions afforded by direct stimulation with a research processor. Results of these studies include the following: (1) within individual subjects there were large variations in the interaction index along the electrode array, (2) interaction indices generally decreased with increasing electrode separation, and (3) spectral-ripple discrimination improved with decreasing mean interaction index at electrode separations of one, three, and five electrodes. These results indicate that spectral-ripple discrimination thresholds can provide a useful metric of the spectral resolution of CI users. PMID:23297914
Engaged listeners: shared neural processing of powerful political speeches.
Schmälzle, Ralf; Häcker, Frank E K; Honey, Christopher J; Hasson, Uri
2015-08-01
Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
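Inter-subject correlation (ISC) analysis, the core measure here, quantifies how similarly a brain region's response time course unfolds across listeners. A minimal leave-one-out sketch in Python, on synthetic data (the study's preprocessing and region selection are not shown):

```python
import numpy as np

def leave_one_out_isc(ts):
    """ts: (n_subjects, n_timepoints) array of one region's time courses.
    Correlate each subject with the mean of all others, then average."""
    n = ts.shape[0]
    rs = [np.corrcoef(ts[i], np.delete(ts, i, axis=0).mean(axis=0))[0, 1]
          for i in range(n)]
    return float(np.mean(rs))

rng = np.random.default_rng(1)
stimulus_driven = rng.normal(size=500)                 # shared, stimulus-locked signal
powerful = stimulus_driven + 0.6 * rng.normal(size=(12, 500))
weak = stimulus_driven + 1.5 * rng.normal(size=(12, 500))
print(leave_one_out_isc(powerful), leave_one_out_isc(weak))  # stronger coupling first
```

A higher ISC indicates responses more tightly locked to the stimulus across the audience, which is the sense in which powerful speeches "take control" of listeners' brain responses.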
Development of a Voice Activity Controlled Noise Canceller
Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini
2012-01-01
In this paper, a variable-threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing appreciable amounts of the actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds on full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance of the proposed approach is reported in terms of signal-to-noise ratio improvement as well as mean square error (MSE) convergence of the ANC. The new approach showed improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational load of the adaptive process is reduced, since the output of the adaptive filter is calculated only during non-speech periods. PMID:22778667
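The control structure can be sketched in a few lines of Python. This is a simplified stand-in, not the paper's system: a frame-wise NLMS canceller whose weights adapt only when fixed energy and zero-crossing thresholds declare non-speech (the paper instead adapts those thresholds from the canceller's residual, which this sketch omits, and all threshold values below are illustrative).

```python
import numpy as np

def zcr(frame):
    """Zero-crossing rate of a frame, in crossings per sample."""
    return np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0

def vad_anc(primary, reference, frame_len=160, order=32, mu=0.5,
            e_thresh=0.02, z_thresh=0.15):
    """Two-sensor NLMS noise canceller gated by a simple energy/ZCR VAD."""
    w = np.zeros(order)
    out = np.array(primary, dtype=float)
    for s in range(0, len(primary) - frame_len + 1, frame_len):
        fp = primary[s:s + frame_len]
        fr = reference[s:s + frame_len]
        speech = fp.var() > e_thresh and zcr(fp) < z_thresh
        for n in range(order, frame_len):
            x = fr[n - order:n][::-1]           # most recent reference samples
            e = fp[n] - w @ x                   # residual = enhanced output
            out[s + n] = e
            if not speech:                      # adapt only in noise-only frames
                w += mu * e * x / (x @ x + 1e-8)
    return out
```

Gating adaptation this way keeps the filter from cancelling the speech itself, and skipping updates during speech frames lowers the average computational cost, consistent with the abstract's claims.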
Effect of increased IIDR in the nucleus freedom cochlear implant system.
Holden, Laura K; Skinner, Margaret W; Fourakis, Marios S; Holden, Timothy A
2007-10-01
The objective of this study was to evaluate the effect of the increased instantaneous input dynamic range (IIDR) in the Nucleus Freedom cochlear implant (CI) system on recipients' ability to perceive soft speech and speech in noise. Ten adult Freedom CI recipients participated. Two maps differing in IIDR were placed on each subject's processor at initial activation. The IIDR was set to 30 dB for one map and 40 dB for the other. Subjects used both maps for at least one month prior to speech perception testing. Results revealed significantly higher scores for words (50 dB SPL), for sentences in background babble (65 dB SPL), and significantly lower sound field threshold levels with the 40 compared to the 30 dB IIDR map. Ceiling effects may have contributed to non-significant findings for sentences in quiet (50 dB SPL). The Freedom's increased IIDR allows better perception of soft speech and speech in noise.
The performance-perceptual test and its relationship to unaided reported handicap.
Saunders, Gabrielle H; Forsline, Anna; Fausti, Stephen A
2004-04-01
Measurement of hearing aid outcomes is necessary for demonstration of treatment efficacy, third-party payment, and cost-benefit analysis. Outcomes are usually measured with hearing-related questionnaires and/or tests of speech recognition. However, results from these two types of test often conflict. In this paper, we provide data from a new test measure, known as the Performance-Perceptual Test (PPT), in which subjective and performance aspects of hearing in noise are measured using the same test materials and procedures. A Performance Speech Reception Threshold (SRTN) and a Perceptual SRTN are measured using the Hearing In Noise Test materials and adaptive procedure. A third variable, the discrepancy between these two SRTNs, is also computed. It measures the accuracy with which subjects assess their own hearing ability and is referred to as the Performance-Perceptual Discrepancy (PPDIS). One hundred seven subjects between 24 and 83 yr of age took part. Thirty-three subjects had normal hearing, while the remaining seventy-four had symmetrical sensorineural hearing loss. Of the subjects with impaired hearing, 24 wore hearing aids and 50 did not. All subjects underwent routine audiological examination and completed the PPT and the Hearing Handicap Inventory for the Elderly/Adults on two occasions, between 1 and 2 wk apart. The PPT was conducted for unaided listening with the masker level set to 50, 65, and 80 dB SPL. PPT data show that the subjects with normal hearing have significantly better Performance and Perceptual SRTNs at each test level than the subjects with impaired hearing but that PPDIS values do not differ between the groups. Test-retest reliability for the PPT is excellent (r-values > 0.93 for all conditions). Stepwise multiple regression analysis showed that the Performance SRTN, the PPDIS, and age explain 40% of the variance in reported handicap (Hearing Handicap Inventory for the Elderly/Adults scores). More specifically, poorer performance, underestimation of hearing ability and younger age result in greater reported handicap, and vice versa. Reported handicap consists of a performance component and a (mis)perception component, as measured by the Performance SRTN and the PPDIS respectively. The PPT should thus prove to be a valuable tool for better understanding why some individuals complain of hearing difficulties but have only a mild hearing loss or conversely report few difficulties in the presence of substantial impairment. The measure would thus seem to provide both an explanation and a counseling tool for patients for whom there is a mismatch between reported and measured hearing difficulties.
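Since the PPDIS is simply the gap between the two SRTNs, it can be computed directly; the sign convention below is an assumption for illustration (the paper defines the discrepancy operationally).

```python
def ppdis(perceptual_srtn_db, performance_srtn_db):
    """Performance-Perceptual Discrepancy: how far self-assessed ability
    (Perceptual SRTN) departs from measured ability (Performance SRTN).
    Positive = listener underestimates their own hearing ability (assumed)."""
    return perceptual_srtn_db - performance_srtn_db

# A listener who actually performs at -2 dB SNR but rates themselves at +1 dB:
print(ppdis(1.0, -2.0))   # 3.0 dB of underestimation -> greater reported handicap
```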
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners, and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors compared to controls. Demonstrating higher pitch direction identification thresholds than controls for both speech syllables and piano tones, amusics nevertheless produced native lexical tones with comparable pitch trajectories and intelligibility as controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production. PMID:27475178
Gieseler, Anja; Tahden, Maike A. S.; Thiel, Christiane M.; Wagener, Kirsten C.; Meis, Markus; Colonius, Hans
2017-01-01
Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with measures Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had a predictive value in both groups, whereas Age and PTA only emerged significant in the group of hearing aid NU. PMID:28270784
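The stepwise selection at the heart of the main analysis can be sketched as a greedy forward search over predictors. This simplified Python stand-in ranks predictors by R² gain and omits the significance-based entry/removal criteria of a full stepwise procedure; all data and variable names below are synthetic.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def forward_stepwise(X, y, names, n_keep=3):
    """Greedily add the predictor giving the largest R^2 improvement."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < n_keep:
        r2, j = max((r_squared(X[:, chosen + [j]], y), j) for j in remaining)
        chosen.append(j)
        remaining.remove(j)
        print(f"added {names[j]}: cumulative R^2 = {r2:.2f}")
    return [names[j] for j in chosen]

rng = np.random.default_rng(3)
names = ["PTA", "Age", "Verbal_IQ", "SES", "Loudness"]   # illustrative labels
X = rng.normal(size=(438, 5))
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.25 * X[:, 2] + rng.normal(size=438)
forward_stepwise(X, y, names)
```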
NASA Astrophysics Data System (ADS)
Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don
2003-10-01
While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, older subjects have more difficulty than younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics used to measure speech intelligibility, including the Speech Intelligibility Index (SII), consider only an absolute threshold when accounting for age-related hearing loss. These metrics therefore tend to overestimate performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
On the Etiology of Listening Difficulties in Noise Despite Clinically Normal Audiograms
2017-01-01
Many people with difficulties following conversations in noisy settings have “clinically normal” audiograms, that is, tone thresholds better than 20 dB HL from 0.1 to 8 kHz. This review summarizes the possible causes of such difficulties, and examines established as well as promising new psychoacoustic and electrophysiologic approaches to differentiate between them. Deficits at the level of the auditory periphery are possible even if thresholds remain around 0 dB HL, and become probable when they reach 10 to 20 dB HL. Extending the audiogram beyond 8 kHz can identify early signs of noise-induced trauma to the vulnerable basal turn of the cochlea, and might point to “hidden” losses at lower frequencies that could compromise speech reception in noise. Listening difficulties can also be a consequence of impaired central auditory processing, resulting from lesions affecting the auditory brainstem or cortex, or from abnormal patterns of sound input during developmental sensitive periods and even in adulthood. Such auditory processing disorders should be distinguished from (cognitive) linguistic deficits, and from problems with attention or working memory that may not be specific to the auditory modality. Improved diagnosis of the causes of listening difficulties in noise should lead to better treatment outcomes, by optimizing auditory training procedures to the specific deficits of individual patients, for example. PMID:28002080
Bone conduction reception: head sensitivity mapping.
McBride, Maranda; Letowski, Tomasz; Tran, Phuong
2008-05-01
This study sought to identify skull locations that are highly sensitive to bone conduction (BC) auditory signal reception and could be used in the design of military radio communication headsets. In Experiment 1, pure tone signals were transmitted via BC to 11 skull locations of 14 volunteers seated in a quiet environment. In Experiment 2, the same signals were transmitted via BC to nine skull locations of 12 volunteers seated in an environment with 60 decibels of white background noise. Hearing threshold levels for each signal per location were measured. In the quiet condition, the condyle had the lowest mean threshold for all signals followed by the jaw angle, mastoid and vertex. In the white noise condition, the condyle also had the lowest mean threshold followed by the mastoid, vertex and temple. Overall results of both experiments were very similar and implicated the condyle as the most effective location.
Cat colour vision: one cone process or several?
Daw, N. W.; Pearlman, A. L.
1969-01-01
1. Peripheral mechanisms that might contribute to colour vision in the cat have been investigated by recording from single units in the lateral geniculate and optic tract. Evidence is presented that the input to these cells comes from a single class of cones with a single spectral sensitivity. 2. In cats with pupils dilated a background level of 10-30 cd/m2 was sufficient to saturate the rod system for all units. When the rods were saturated, the spectral sensitivity of all units peaked at 556 nm; this was true both for centre and periphery of the receptive field. The spectral sensitivity curve was slightly narrower than the Dartnall nomogram. It could not be shifted by chromatic adaptation with red, green, blue or yellow backgrounds. 3. In the mesopic range (0·1-10 cd/m2), the threshold could be predicted in terms of two mechanisms, a cone mechanism with spectral sensitivity peaking at 556 nm, and a rod mechanism with spectral sensitivity at 500 nm. The mechanisms were separated and their increment threshold curves measured by testing with one colour against a background of another colour. All units had input from both rods and cones. The changeover from rods to cones occurred at the same level of adaptation in both centre and periphery of the receptive field. Threshold for the cones was between 0·04 and 0·25 cd/m2 with the pupil dilated, for a spot covering the centre of the receptive field. 4. None of the results was found to vary between lateral geniculate and optic tract, with layer in the lateral geniculate, or with distance from area centralis in the visual field. 5. The lack of evidence for more than one cone type suggests that colour discrimination in the cat may be a phenomenon of mesopic vision, based on differences in spectral sensitivity of the rods and a single class of cones. PMID:5767891
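The two-mechanism account in point 3 lends itself to a compact numerical illustration. The Python sketch below treats the predicted threshold at each wavelength as governed by whichever mechanism, rod (peak 500 nm) or cone (peak 556 nm), is more sensitive at the current adaptation level; the Gaussian-in-wavelength sensitivities are rough stand-ins for the true pigment nomograms, and all numbers are illustrative.

```python
import numpy as np

wavelengths = np.linspace(420, 680, 27)        # test wavelengths, nm

def log_sensitivity(wl, peak_nm, bandwidth_nm=60.0):
    """Crude log10 relative spectral sensitivity (Gaussian stand-in)."""
    return -((wl - peak_nm) / bandwidth_nm) ** 2

def predicted_log_threshold(wl, rod_attenuation):
    """Two-mechanism envelope: detection is governed by whichever of the
    rod (500 nm) or cone (556 nm) mechanism is more sensitive."""
    rod = log_sensitivity(wl, 500.0) - rod_attenuation
    cone = log_sensitivity(wl, 556.0) - 1.0    # cones less sensitive in dim light
    return -np.maximum(rod, cone)

mesopic = predicted_log_threshold(wavelengths, rod_attenuation=0.5)
photopic = predicted_log_threshold(wavelengths, rod_attenuation=10.0)  # rods saturated
print(wavelengths[np.argmin(photopic)])        # peak sensitivity near 556 nm
```

Once the rods are saturated, the predicted curve collapses onto the single 556 nm cone mechanism, matching the finding that chromatic adaptation could not shift the spectral sensitivity.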
The Role of the Arcuate Fasciculus in Conduction Aphasia
ERIC Educational Resources Information Center
Bernal, Byron; Ardila, Alfredo
2009-01-01
In aphasia literature, it has been considered that a speech repetition defect represents the main constituent of conduction aphasia. Conduction aphasia has frequently been interpreted as a language impairment due to lesions of the arcuate fasciculus (AF) that disconnect receptive language areas from expressive ones. Modern neuroradiological…
Narrative Skills in Children with Selective Mutism: An Exploratory Study
ERIC Educational Resources Information Center
McInnes, Alison; Fung, Daniel; Manassis, Katharina; Fiksenbaum, Lisa; Tannock, Rosemary
2004-01-01
Selective mutism (SM) is a rare and complex disorder associated with anxiety symptoms and speech-language deficits; however, the nature of these language deficits has not been studied systematically. A novel cross-disciplinary assessment protocol was used to assess anxiety and nonverbal cognitive, receptive language, and expressive narrative…
Effects of Amplification, Speechreading, and Classroom Environments on Reception of Speech
ERIC Educational Resources Information Center
Blair, James C.
1977-01-01
Two sources of amplification (binaural ear-level hearing aids and RF auditory training units with environmental microphones on) were compared for 18 hard-of-hearing students (7 to 14 years old) in "ideal" and "typical" classroom noise levels, with and without visual speechreading cues provided. (Author/IM)
Probing the Electrode–Neuron Interface With Focused Cochlear Implant Stimulation
Bierer, Julie Arenberg
2010-01-01
Cochlear implants are highly successful neural prostheses for persons with severe or profound hearing loss who gain little benefit from hearing aid amplification. Although implants are capable of providing important spectral and temporal cues for speech perception, performance on speech tests is variable across listeners. Psychophysical measures obtained from individual implant subjects can also be highly variable across implant channels. This review discusses evidence that such variability reflects deviations in the electrode–neuron interface, which refers to an implant channel's ability to effectively stimulate the auditory nerve. It is proposed that focused electrical stimulation is ideally suited to assess channel-to-channel irregularities in the electrode–neuron interface. In implant listeners, it is demonstrated that channels with relatively high thresholds, as measured with the tripolar configuration, exhibit broader psychophysical tuning curves and smaller dynamic ranges than channels with relatively low thresholds. Broader tuning implies that frequency-specific information intended for one population of neurons in the cochlea may activate more distant neurons, and a compressed dynamic range could make it more difficult to resolve intensity-based information, particularly in the presence of competing noise. Degradation of both types of cues would negatively affect speech perception. PMID:20724356
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
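The "partialling out" step mentioned above can be made concrete with a residual-based partial correlation, a standard formulation; the data and variable names in this Python sketch are synthetic, not the study's.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation between x and y after regressing both on the covariates
    (columns of `covariates`), i.e., correlating the residuals."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(2)
ability = rng.normal(size=60)                        # latent shared factor
speech = ability + 0.7 * rng.normal(size=60)         # e.g., HINT score
env_sounds = ability + 0.7 * rng.normal(size=60)     # environmental sound score
basic_auditory = rng.normal(size=(60, 2))            # e.g., gap detection, temporal order

print(np.corrcoef(speech, env_sounds)[0, 1])             # raw correlation
print(partial_corr(speech, env_sounds, basic_auditory))  # changes little here
```

If the raw and partial correlations stay similar, as the study reports, the speech-environmental sound link is not mediated by the covaried basic auditory abilities.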
[Clinical diagnosis of Treacher Collins syndrome and the efficacy of using BAHA].
Wang, Y B; Chen, X W; Wang, P; Fan, X M; Fan, Y; Liu, Q; Gao, Z Q
2017-04-20
Objective: To evaluate the efficacy of softband or implanted BAHA in patients with Treacher Collins syndrome (TCS). Method: Six patients with TCS were studied. The Teber scoring system was used to evaluate the degree of deformity. Air and bone conduction auditory thresholds were assessed by auditory brainstem response (ABR). The infant-toddler meaningful auditory integration scale (IT-MAIS) was used to assess auditory development at three time points: baseline, 3 months, and 6 months. The hearing threshold and speech recognition score were measured under unaided and aided conditions. Result: The average deformity score was 14.0±0.6. The TCOF1 gene was tested in two patients. The bone conduction hearing threshold of the patients was (18.0±4.5) dBnHL and the air conduction hearing threshold was (70.5±7.0) dBnHL. The IT-MAIS total, detection, and perception scores improved significantly after wearing softband BAHA and approached normal levels in the 2 patients under 2 years old. The hearing thresholds of the 6 patients in unaided and softband BAHA conditions were (65.8±3.8) dBHL and (30.0±3.2) dBHL (P<0.01), respectively, and with 1 implanted BAHA the threshold was 15 dBHL. The speech recognition scores of 3 patients in unaided and softband BAHA conditions were (31.7±3.5)% and (86.0±1.7)% (P<0.05), respectively, and with 1 implanted BAHA the score was 96%. Conclusion: Whenever a patient is diagnosed with TCS on the basis of clinical manifestations and genetic testing, the BAHA system can help rehabilitate hearing toward a normal condition. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
The role of the supplementary motor area for speech and language processing.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2016-09-01
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work suggesting that the SMA has various superordinate control functions during speech communication and language reception, which are particularly relevant in cases of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper), whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to the motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions, among others, comprise attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context tracking. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ruffin, Chad V.; Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.
2013-01-01
This study investigated long-term speech and language outcomes in 51 prelingually deaf children, adolescents, and young adults who received cochlear implants (CIs) prior to 7 years of age and used their implants for at least 7 years. Average speech perception scores were similar to those found in prior research with other samples of experienced CI users. Mean language test scores were lower than norm-referenced scores from nationally representative normal-hearing, typically-developing samples, although a majority of the CI users scored within one standard deviation of the normative mean or higher on the Peabody Picture Vocabulary Test, Fourth Edition (63%) and Clinical Evaluation of Language Fundamentals, Fourth Edition (69%). Speech perception scores were negatively associated with a meningitic etiology of hearing loss, older age at implantation, poorer pre-implant unaided pure tone average thresholds, lower family income, and the use of Total Communication. Users of CIs for 15 years or more were more likely to have these characteristics and were more likely to score lower on measures of speech perception compared to users of CIs for 14 years or less. The aggregation of these risk factors in the > 15 years of CI use subgroup accounts for their lower speech perception scores and may stem from more conservative CI candidacy criteria in use at the beginning of pediatric cochlear implantation. PMID:23988907
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background: A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose: The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design: The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers), and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample: Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis: STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results: STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions: Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
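An STM stimulus of the kind described (a "moving ripple" imposed on a multi-octave noise-like carrier) can be synthesized as a dense tone complex whose log-frequency spectral envelope drifts over time. The Python sketch below is a common textbook construction, not the study's exact stimulus; parameter names and default values are illustrative.

```python
import numpy as np

def ripple_stimulus(dur_s=1.0, fs=44100, f0=354.0, octaves=4.0, n_tones=200,
                    rate_hz=4.0, density_cpo=2.0, depth=0.5, seed=0):
    """Moving spectrotemporal ripple: tones spaced evenly in log frequency,
    each amplitude-modulated so that the spectral envelope (density_cpo
    cycles/octave) drifts at rate_hz. `depth` is the modulation depth (0..1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    x = np.linspace(0.0, octaves, n_tones)        # tone positions, in octaves
    freqs = f0 * 2.0 ** x
    phases = rng.uniform(0, 2 * np.pi, n_tones)   # randomize carrier phases
    sig = np.zeros_like(t)
    for f, xi, ph in zip(freqs, x, phases):
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cpo * xi))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))              # normalize to +/-1

stim = ripple_stimulus(rate_hz=12.0, density_cpo=1.0, depth=0.2)
```

Threshold estimation then reduces to finding the smallest `depth` at which a listener can distinguish this stimulus from an unmodulated version.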
Rana, Baljeet; Buchholz, Jörg M; Morgan, Catherine; Sharma, Mridula; Weller, Tobias; Konganda, Shivali Appaiah; Shirai, Kyoko; Kawano, Atsushi
2017-01-01
Binaural hearing helps normal-hearing listeners localize sound sources and understand speech in noise. However, it is not fully understood how far this is the case for bilateral cochlear implant (CI) users. To determine the potential benefits of bilateral over unilateral CIs, speech comprehension thresholds (SCTs) were measured in seven Japanese bilateral CI recipients using Helen test sentences (translated into Japanese) against a two-talker speech interferer presented from the front (co-located with the target speech), ipsilateral to the first-implanted ear (at +90° or -90°), and spatially symmetric at ±90°. Spatial release from masking was calculated as the difference between co-located and spatially separated SCTs. Localization was assessed in the horizontal plane by presenting either male or female speech or both simultaneously. All measurements were performed bilaterally and unilaterally (with the first-implanted ear) inside a loudspeaker array. Both SCTs and spatial release from masking improved with bilateral CIs, demonstrating mean bilateral benefits of 7.5 dB in the spatially asymmetric and 3 dB in the spatially symmetric speech mixture. Localization performance varied strongly between subjects but was clearly improved with bilateral over unilateral CIs, with the mean localization error reduced by 27°. Surprisingly, adding a second talker had only a negligible effect on localization.
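Spatial release from masking (SRM) is simple arithmetic on the two thresholds, as in this short sketch; the dB values are invented for illustration, not the study's raw data.

```python
def spatial_release_from_masking(sct_colocated_db, sct_separated_db):
    """SRM: improvement in speech comprehension threshold when the masker
    is moved away from the target (positive = spatial benefit)."""
    return sct_colocated_db - sct_separated_db

# Hypothetical bilateral listener: threshold drops when the masker moves to +90 deg.
print(spatial_release_from_masking(2.0, -5.5))   # 7.5 dB of release
```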
Temporal plasticity in auditory cortex improves neural discrimination of speech sounds
Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.
2017-01-01
Background: Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis: We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods: VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results: Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion: This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520
Duarte, Alexandre Scalli Mathias; Ng, Ronny Tah Yen; de Carvalho, Guilherme Machado; Guimarães, Alexandre Caixeta; Pinheiro, Laiza Araujo Mohana; Costa, Everardo Andrade da; Gusmão, Reinaldo Jordão
2015-01-01
The clinical evaluation of subjects with occupational noise exposure has been difficult due to the discrepancy between auditory complaints and auditory test results. This study aimed to evaluate the contralateral acoustic reflex thresholds of workers exposed to high levels of noise and to compare these results with the subjects' auditory complaints. This retrospective clinical study evaluated 364 workers between 1998 and 2005; their contralateral acoustic reflexes were compared to auditory complaints, age, and noise exposure time by chi-squared, Fisher's, and Spearman's tests. The workers' ages ranged from 18 to 50 years (mean = 39.6), and noise exposure time from one to 38 years (mean = 17.3). We found that 15.1% (55) of the workers had bilateral hearing loss, 38.5% (140) had bilateral tinnitus, 52.8% (192) had abnormal sensitivity to loud sounds, and 47.2% (172) had speech recognition impairment. The variables hearing loss, speech recognition impairment, tinnitus, age group, and noise exposure time showed no relationship with acoustic reflex thresholds; however, all complaints demonstrated a statistically significant relationship with Metz recruitment at 3000 and 4000 Hz bilaterally. Overall, there was no significant relationship between auditory complaints and acoustic reflexes. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.
Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V
2012-01-01
Prolonged exposure to high levels of occupational noise can damage hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. The consequences of cochlear hearing loss for speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers in the age ranges of 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years and their non-noise-exposed counterparts (n = 30 in each age group). All participants, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-samples t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties hearing in adverse listening conditions.
Langereis, Margreet; Vermeulen, Anneke
2015-06-01
This study aimed to evaluate the long-term effects of CI on the auditory, language, educational, and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition were analyzed after 60 months of CI use. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment, and wellbeing. The auditory speech perception skills of children with CI in mainstream education enable them to acquire language and educational levels comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language, and educational attainments of children in deaf education remained extremely poor. Furthermore, more children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language, and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Cupples, Linda; Ching, Teresa Yc; Button, Laura; Seeto, Mark; Zhang, Vicky; Whitfield, Jessica; Gunnourie, Miriam; Martin, Louise; Marnane, Vivienne
2017-09-12
This study investigated the factors influencing the 5-year language, speech, and everyday functioning of children with congenital hearing loss. Standardised tests, including the PLS-4, PPVT-4, and DEAP, were directly administered to children. Parent reports on language (CDI) and everyday functioning (PEACH) were collected. Regression analyses were conducted to examine the influence of a range of demographic variables on outcomes. Participants were 339 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children's average receptive and expressive language scores were approximately 1 SD below the mean of typically developing children, and scores on speech production and everyday functioning were more than 1 SD below. Regression models accounted for 23-70% of the variance in scores across different tests. Earlier CI switch-on and higher non-verbal ability were associated with better outcomes in most domains. Earlier HA fitting and use of oral communication were associated with better outcomes on directly administered language assessments. Severity of hearing loss and maternal education influenced outcomes of children with HAs. The presence of additional disabilities affected outcomes of children with CIs. The findings provide strong evidence for the benefits of early HA fitting and early CI for improving children's outcomes.
ERIC Educational Resources Information Center
Wong, Simpson W. L.; Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Waye, Mary M. Y.; Bishop, Dorothy V. M.
2014-01-01
This twin study examined the relative contributions of genes and environment on 2nd language reading acquisition of Chinese-speaking children learning English. We examined whether specific skills-visual word recognition, receptive vocabulary, phonological awareness, phonological memory, and speech discrimination-in the 1st and 2nd languages have…
Dual Diathesis-Stressor Model of Emotional and Linguistic Contributions to Developmental Stuttering
ERIC Educational Resources Information Center
Walden, Tedra A.; Frankel, Carl B.; Buhr, Anthony P.; Johnson, Kia N.; Conture, Edward G.; Karrass, Jan M.
2012-01-01
This study assessed emotional and speech-language contributions to childhood stuttering. A dual diathesis-stressor framework guided this study, in which both linguistic requirements and skills, and emotion and its regulation, are hypothesized to contribute to stuttering. The language diathesis consists of expressive and receptive language skills.…
Sign Language Echolalia in Deaf Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Shield, Aaron; Cooley, Frances; Meier, Richard P.
2017-01-01
Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…
Bone Conduction Systems for Full-Face Respirators: Speech Intelligibility Analysis
2014-04-01
Acoustical considerations for secondary uses of government facilities
NASA Astrophysics Data System (ADS)
Evans, Jack B.
2003-10-01
Government buildings are, by their nature, public and multi-functional. Whether in meetings, presentations, document processing, work instructions, or dispatch, speech communications are critical. Full-time-occupancy facilities may require sleep or rest areas adjacent to active spaces. Rooms designed for some other primary use may be used for public assembly, receptions, or meetings. In addition, environmental noise impacts to or from the building should be considered, especially adjacent to hospitals, hotels, apartments, or other noise-sensitive urban land uses. Acoustical criteria and design parameters for reverberation, background noise, and sound isolation should enhance speech intelligibility and privacy. This presentation looks at unusual spaces and unexpected uses of spaces with regard to room acoustics and noise control. Examples of various spaces will be discussed, including an atrium used for reception and assembly, a multi-jurisdictional (911) emergency control center, frequent or long-duration use of emergency generators, renovations of historically significant buildings, and the juxtaposition of acoustically incompatible functions. Brief case histories of acoustical requirements, constraints, and design solutions will be presented, including acoustical measurements, plan illustrations, and photographs. Acoustical criteria for secondary functional uses of spaces will be proposed.
Prosody perception and musical pitch discrimination in adults using cochlear implants.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2015-07-01
This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (Profiling Elements of Prosody in Speech-Communication), the adult paralanguage subtest of the DANVA 2 (Diagnostic Analysis of Nonverbal Accuracy 2), and the contour and interval subtests of the MBEA (Montreal Battery of Evaluation of Amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but did not differ from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.
Dunn, Camille C; Walker, Elizabeth A; Oleson, Jacob; Kenworthy, Maura; Van Voorst, Tanya; Tomblin, J. Bruce; Ji, Haihong; Kirk, Karen I; McMurray, Bob; Hanson, Marlan; Gantz, Bruce J
2013-01-01
Objectives Few studies have examined the long-term effect of age at implantation on outcomes using multiple data points in children with cochlear implants. The goal of this study was to determine if age at implantation has a significant, lasting impact on speech perception, language, and reading performance for children with prelingual hearing loss. Design A linear mixed model framework was utilized to determine the effect of age at implantation on speech perception, language, and reading abilities in 83 children with prelingual hearing loss who received cochlear implants by age 4. The children were divided into two groups based on their age at implantation: 1) under 2 years of age and 2) between 2 and 3.9 years of age. Differences in model-specified mean scores between groups were compared at annual intervals from 5 to 13 years of age for speech perception, and from 7 to 11 years of age for language and reading. Results After controlling for communication mode, device configuration, and pre-operative pure-tone average, there was no significant effect of age at implantation for receptive language by 8 years of age, expressive language by 10 years of age, or reading by 7 years of age. In terms of speech perception outcomes, significance varied between 7 and 13 years of age, with no significant difference in speech perception scores between groups at ages 7, 11 and 13 years. Children who utilized oral communication (OC) demonstrated significantly higher speech perception scores than children who used total communication (TC). OC users tended to have higher expressive language scores than TC users, although this did not reach significance. There was no significant difference between OC and TC users for receptive language or reading scores. Conclusions Speech perception, language, and reading performance continue to improve over time for children implanted before 4 years of age. The current results indicate that the effect of age at implantation diminishes with time, particularly for higher-order skills such as language and reading. Some children who receive CIs after the age of 2 years have the capacity to approximate the language and reading skills of their earlier-implanted peers, suggesting that additional factors may moderate the influence of age at implantation on outcomes over time. PMID:24231628
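The linear mixed model analysis described above can be sketched roughly as follows. This is a minimal illustration assuming a long-format table with hypothetical column names (speech_perception, implant_group, preop_pta, etc.); it is not the authors' actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ci_outcomes.csv")  # hypothetical extract: one row per child per annual test age

# Fixed effects for implant-age group, test age, and the covariates named in
# the abstract; a random intercept per child handles the repeated measures.
model = smf.mixedlm(
    "speech_perception ~ implant_group * test_age + comm_mode + device_config + preop_pta",
    data=df,
    groups=df["child_id"],
)
print(model.fit().summary())
```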
Psychometrically equivalent bisyllabic words for speech recognition threshold testing in Vietnamese.
Harris, Richard W; McPherson, David L; Hanson, Claire M; Eggett, Dennis L
2017-08-01
This study identified, digitally recorded, edited and evaluated 89 bisyllabic Vietnamese words with the goal of identifying homogeneous words that could be used to measure the speech recognition threshold (SRT) in native talkers of Vietnamese. Native male and female talker productions of 89 Vietnamese bisyllabic words were recorded, edited and then presented at intensities ranging from -10 to 20 dB HL. Logistic regression was used to identify the best words for measuring the SRT. Forty-eight words were selected and digitally edited to have 50% intelligibility at a level equal to the mean pure-tone average (PTA) for normally hearing participants (5.2 dB HL). Twenty normally hearing native Vietnamese participants listened to and repeated bisyllabic Vietnamese words at intensities ranging from -10 to 20 dB HL. A total of 48 male and female talker recordings of bisyllabic words with steep psychometric functions (>9.0%/dB) were chosen for the final bisyllabic SRT list. Only words homogeneous with respect to threshold audibility, with steep psychometric function slopes, were chosen for the final list. Digital recordings of bisyllabic Vietnamese words are now available for use in measuring the SRT for patients whose native language is Vietnamese.
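As a rough illustration of this kind of psychometric analysis, the sketch below fits a two-parameter logistic function to per-word intelligibility data and reports the 50% point and the midpoint slope in %/dB (the steepness criterion above was >9.0%/dB). The data are synthetic and the fit is generic; this is not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, l50, k):
    """Proportion of correct repetitions at a given presentation level (dB HL)."""
    return 1.0 / (1.0 + np.exp(-k * (level_db - l50)))

levels = np.arange(-10, 22, 2)             # presentation levels, dB HL
p_correct = logistic(levels, 5.2, 0.42)    # synthetic observations for illustration

(l50, k), _ = curve_fit(logistic, levels, p_correct, p0=[0.0, 0.3])
slope_pct_per_db = 100 * k / 4             # logistic slope at the 50% point
print(f"50% point: {l50:.1f} dB HL, slope: {slope_pct_per_db:.1f} %/dB")
```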
Xue, L J; Yang, A C; Chen, H; Huang, W X; Guo, J J; Liang, X Y; Chen, Z Q; Zheng, Q L
2017-11-20
Objective: To examine how incorporating weighted high-frequency hearing threshold values affects the diagnosis and grading of occupational noise-induced deafness, in order to provide a theoretical basis for revising its diagnostic criteria. Methods: A retrospective study was conducted of cases diagnosed with occupational noise-induced deafness at the Guangdong province hospital for occupational disease prevention and treatment from January 2016 to January 2017. Based on the results of three hearing tests per case, each separated by more than 3 days, the best threshold at each frequency was obtained. Following the 2007 edition of the diagnostic criteria for occupational noise-induced deafness, chi-square tests, t tests and analysis of variance (SPSS 21.0) were used to compare speech-frequency means and high-frequency weighted values across age groups, noise-exposure groups and diagnostic classifications. Results: 1. A total of 168 cases met the study criteria: 154 male and 14 female, with a mean age of 41.18 ± 6.07 years. 2. Weighting in high-frequency thresholds increased the diagnosis rate relative to the pure speech-frequency mean: weighting 4 kHz increased it by 13.69% (χ²=9.880, P=0.002), 6 kHz by 15.47% (χ²=9.985, P=0.002) and 4 kHz+6 kHz by 15.47% (χ²=9.985, P=0.002); these differences were statistically significant. The diagnosis rates under the different high-frequency weightings did not differ between the sexes. 3. Participants were divided into a ≤40-years group (group A) and a 40-50-years group (group B). In both groups, diagnosis rates were significantly higher with weighted 4 kHz (group A χ²=3.380, P=0.050; group B χ²=4.054, P=0.032), weighted 6 kHz (group A χ²=6.362, P=0.012; group B χ²=4.054, P=0.032) and weighted 4 kHz+6 kHz (group A χ²=6.362, P=0.012; group B χ²=4.054, P=0.032) than with the speech-frequency average. There was no significant difference between the age groups (χ²=2.265, P=0.944). 4. Comparing the better ear's speech-frequency mean and the various high-frequency weighted values across working-years groups, the >10-years group had significantly higher thresholds than the 3-5-years group (F=2.271, P=0.001) and the 6-10-years group (F=1.563, P=0.046). All high-frequency weighted values were higher than the pure speech-frequency mean; the 4 kHz+6 kHz weighting showed the largest difference, an average increase of 2.83 dB. 5. For the mild, moderate and severe grades, the diagnosis rate with high-frequency weighting was higher than with pure speech frequency.
For diagnoses of mild occupational noise-induced deafness, all weightings except 3 kHz (χ²=3.117, P=0.077) yielded significantly higher diagnosis rates than the pure speech-frequency mean: 4 kHz (χ²=10.835, P=0.001), 6 kHz (χ²=9.985, P=0.002), 3 kHz+4 kHz (χ²=6.315, P=0.012), 3 kHz+6 kHz (χ²=6.315, P=0.012), 4 kHz+6 kHz (χ²=9.985, P=0.002) and 3 kHz+4 kHz+6 kHz (χ²=7.667, P=0.002). No significant differences were found for the moderate and severe grades (P>0.05). Conclusion: Incorporating weighted high-frequency hearing thresholds increases the diagnosis rate of occupational noise-induced deafness; the weighted 4 kHz, 6 kHz and 4 kHz+6 kHz values affect the result most strongly, with the 4 kHz+6 kHz weighting having the greatest effect on the diagnosis.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
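The two-step correction can be sketched along the following lines: a gain threshold derived from a null distribution of STAs (e.g., from circularly shifted spike trains), followed by a cluster mass threshold on contiguous surviving pixels. The window length, alpha levels, and permutation scheme here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import label

def spike_triggered_average(spec, spikes, n_lags=20):
    """Average the n_lags spectrogram frames (freq x time) preceding each spike."""
    sta = np.zeros((spec.shape[0], n_lags))
    spike_bins = np.flatnonzero(spikes)
    spike_bins = spike_bins[spike_bins >= n_lags]
    for t in spike_bins:
        sta += spec[:, t - n_lags:t]
    return sta / max(len(spike_bins), 1)

def two_step_threshold(sta, null_stas, gain_alpha=0.05, mass_alpha=0.05):
    """null_stas: array (n_perm, freq, n_lags) of STAs from shifted spike trains."""
    gain_crit = np.quantile(np.abs(null_stas), 1 - gain_alpha)  # step 1: gain threshold
    labels, n = label(np.abs(sta) > gain_crit)                  # clusters of surviving pixels
    # Null distribution of the largest cluster mass under the same gain threshold
    null_masses = []
    for null in null_stas:
        nl, nn = label(np.abs(null) > gain_crit)
        masses = [np.abs(null[nl == i]).sum() for i in range(1, nn + 1)]
        null_masses.append(max(masses) if masses else 0.0)
    mass_crit = np.quantile(null_masses, 1 - mass_alpha)        # step 2: cluster mass threshold
    out = np.zeros_like(sta)
    for i in range(1, n + 1):
        if np.abs(sta[labels == i]).sum() > mass_crit:
            out[labels == i] = sta[labels == i]
    return out
```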
Speech and neurology-chemical impairment correlates
NASA Astrophysics Data System (ADS)
Hayre, Harb S.
2002-05-01
Speech correlates of alcohol/drug impairment and their neurological basis are presented, with suggestions for further research on impairment from poly drug/medicine/inhalant/chew use/abuse and on prediagnosis of many neuro- and endocrine-related disorders. Nerve cells all over the body detect chemical entry by smoking, injection, drinking, chewing, or skin absorption, and transmit neurosignals to their corresponding cerebral subsystems, which in turn affect the speech centers (Broca's and Wernicke's areas) and the motor cortex. For instance, gustatory cells in the mouth, cranial and spinal nerve cells in the skin, and cilia/olfactory neurons in the nose are the intake-sensing nerve cells. Alcohol depression and brain cell damage were detected from telephone speech using IMPAIRLYZER-TM, and the results of these studies were presented at the 1996 ASA meeting in Indianapolis and the 2001 German Acoustical Society (DEGA) conference in Hamburg, Germany, respectively. Speech-based chemical impairment measure results were presented at the 2001 meeting of the ASA in Chicago. New data on neurotolerance-based chemical impairment for alcohol, drugs, and medicine will be presented, and shown not to fully support the NIDA-SAMHSA drug and alcohol thresholds used in the drug-testing domain.
Results of FM-TV threshold reduction investigation for the ATS-F TRUST experiment
NASA Technical Reports Server (NTRS)
Brown, J. P.
1972-01-01
An investigation of threshold effects in FM TV was initiated to determine whether any simple, low-cost techniques were available that could reduce the subjective video threshold, applicable to low-cost community TV reception via satellite. Two methods of reducing these effects were examined: the use of standard video pre-emphasis, and the use of an additional circuit to blank the picture tube during the retrace period.
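For illustration only, a first-order digital pre-emphasis filter of the kind that boosts the high-frequency content of a signal before FM modulation might look like the sketch below; the report would have used an analog pre-emphasis network, and the coefficient here is an arbitrary assumption.

```python
import numpy as np

def pre_emphasis(x, coeff=0.95):
    """High-boost filter before FM modulation: y[n] = x[n] - coeff * x[n-1]."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - coeff * x[:-1]
    return y
```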
Schramm, Bianka; Bohnert, Andrea; Keilmann, Annerose
2010-07-01
This study had two aims: (1) to document the auditory and lexical development of children who are deaf and received the first cochlear implant (CI) by the age of 16 months and the second CI by the age of 31 months, and (2) to compare these children's results with those of children with normal hearing (NH). This longitudinal study included five children with NH and five with sensorineural deafness. All children in the second group were observed for 36 months after the first fitting of the cochlear implant. The auditory development of the CI group was documented every 3 months up to the age of two years, in both hearing age and chronological age, and for the NH group in chronological age. The language development of each NH child was assessed at 12, 18, 24 and 36 months of chronological age. Children with CIs were examined at the same intervals in chronological and hearing age. In both groups, children showed individual patterns of auditory and language development. The children with CIs developed differently in the amount of receptive and expressive vocabulary compared with the NH control group. Three children in the CI group needed almost 6 months to make gains in speech development consistent with what would be expected for their chronological age. Overall, receptive and expressive development in all children of the implanted group increased with their hearing age. These results indicate that early identification and early implantation are advisable to give children with sensorineural hearing loss a realistic chance to develop satisfactory expressive and receptive vocabulary, and also to develop stable phonological, morphological and syntactical skills for school life. On the basis of these longitudinal data, we will be able to develop new diagnostic tools that enable clinicians to assess a child's progress in hearing and speech development. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Fluent aphasia in children: definition and natural history.
Klein, S K; Masur, D; Farber, K; Shinnar, S; Rapin, I
1992-01-01
We compared the course of a preschool child we followed for 4 years with published reports of 24 children with fluent aphasia. Our patient spoke fluently within 3 weeks of the injury. She was severely anomic and made many semantic paraphasic errors. Unlike other children with fluent aphasia, her prosody of speech was impaired initially, and her spontaneous language was dominated by stock phrases. Residual deficits include chronic impairment of auditory comprehension, repetition, and word retrieval. She has more disfluencies in spontaneous speech 4 years after her head injury than acutely. School achievement in reading and mathematics remains below age level. Attention to the timing of recovery of fluent speech and to the characteristics of receptive and expressive language over time will permit more accurate description of fluent aphasia in childhood.
Communication in a noisy environment: Perception of one's own voice and speech enhancement
NASA Astrophysics Data System (ADS)
Le Cocq, Cecile
Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. As a consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. This communication problem is in fact a double one: first, hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, physiological noises at low frequencies are better perceived; second, the perception of one's own voice is modified. In order to better understand this phenomenon, results from the literature are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave exciting the buccal cavity does not directly excite the external ear or the rest of the body. Measurement of the hearing threshold with open and occluded ear was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the internal ear. Speech intelligibility from others is altered both by the high sound levels of noisy industrial environments and by the attenuation of the speech signal due to hearing protectors. A possible solution to this problem is to denoise the speech signal and transmit it under the hearing protector. Many denoising techniques are available and are often used for denoising speech in telecommunications. In the framework of this thesis, denoising by wavelet thresholding is considered. A first study of "classical" wavelet denoising techniques is conducted in order to evaluate their performance in noisy industrial environments. The tested speech signals are degraded by industrial noises over a wide range of signal-to-noise ratios. The denoised speech signals are evaluated with four criteria. A large database is obtained and analyzed with a selection algorithm designed for this purpose. This first study identified the influence of the different parameters of the wavelet denoising method on its quality, and identified the "classical" method giving the best performance in terms of denoising quality. It also generated ideas for designing a new thresholding rule suitable for speech wavelet denoising in a noisy industrial environment. In a second study, this new thresholding rule is presented and evaluated. Its performance is better than that of the best "classical" method found in the first study when the signal-to-noise ratio of the speech signal is between -10 dB and 15 dB.
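A minimal sketch of the "classical" wavelet-thresholding family the thesis evaluates is given below, using the universal soft threshold. The wavelet, decomposition level, and threshold rule are generic textbook choices, not the thesis's new rule.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(noisy, wavelet="db8", level=5):
    """Soft-threshold the detail coefficients and reconstruct the signal."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    # Noise scale estimated from the finest detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(noisy)))  # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(noisy)]
```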
Salomon, G; Parving, A
1985-01-01
It is reasoned that for compensation or epidemiological studies an evaluation of hearing disability and the concomitant handicap must include the ability to perceive visual cues. A scaling procedure for hearing- and audiovisual communication handicap is presented. The procedure deviates in two ways from previous handicap assessments: (1) It is based on individual self-assessment of semantic speech perception but can be implemented by means of professional audiological test procedures. (2) The system does not make use of pure-tone auditory thresholds as a predominant audiological principle, but is based on speech perception. The interrelationship between auditory and audiovisual handicap is evaluated. A total score including audio- and audiovisual perception handicap is proposed and a suggestion for disability percentages is presented.
Leite, Renata Aparecida; Wertzner, Haydée Fiszbein; Gonçalves, Isabela Crivellaro; Magliaro, Fernanda Cristina Leite; Matas, Carla Gentile
2014-03-01
This study investigated whether neurophysiologic responses (auditory evoked potentials) differ between typically developing children and children with phonological disorders, and whether these responses are modified in children with phonological disorders after speech therapy. The participants included 24 typically developing children (Control Group, mean age: eight years and ten months) and 23 children clinically diagnosed with phonological disorders (Study Group, mean age: eight years and eleven months). Additionally, 12 study group children were enrolled in speech therapy (Study Group 1), and 11 were not (Study Group 2). The subjects underwent the following procedures: conventional audiological evaluation, auditory brainstem response, auditory middle-latency response, and P300 assessments. All participants presented with normal hearing thresholds. The study group 1 subjects were reassessed after 12 speech therapy sessions, and the study group 2 subjects were reassessed 3 months after the initial assessment. Electrophysiological results were compared between the groups. Latency differences were observed between the control and study groups on the auditory brainstem response and P300 tests. Additionally, the P300 responses improved in the study group 1 children after speech therapy. The findings suggest that children with phonological disorders have impaired auditory brainstem and cortical pathways that may benefit from speech therapy.
Free Speech for Public Employees: The Supreme Court Strikes a New Balance.
ERIC Educational Resources Information Center
Bernheim, Emily
1986-01-01
In "Connick vs. Myers" the Supreme Court applied a threshold requirement to an employee's First Amendment protection of free speech: speech must be related to public concerns as determined by the content, form, and context of a given statement. Discusses applications of this decision to lower court cases. (MLF)
Mary Watson, Rose; Pennington, Lindsay
2015-01-01
Background Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. Aims To describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. Methods & Procedures An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Outcomes & Results Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were: dysphagia, augmentative and alternative communication (AAC)/interaction, and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year that children and young people with CP and communication disorders received from their service. Conclusions & Implications The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. The lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data, and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. PMID:25652139
Tai, Yihsin; Husain, Fatima T
2018-04-01
Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were only found at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies, that take into account both hearing and cognitive abilities of patients, are needed to better parse out the effect of tinnitus in the absence of hearing impairment.
Etter, Nicole M; Mckeon, Patrick O; Dressler, Emily V; Andreatta, Richard D
2017-05-03
Current theoretical models suggest the importance of a bidirectional relationship between sensation and production in the vocal tract for maintaining lifelong speech skills. The purpose of this study was to assess age-related changes in orofacial skilled force production and to begin defining the orofacial perception-action relationship in healthy adults. Low-level orofacial force control measures (reaction time, rise time, peak force, mean hold force (N) and force hold SD) were collected from 60 adults (19-84 years). Non-parametric Kruskal-Wallis tests were performed to identify differences in force measures across group demographics. Non-parametric Spearman's rank correlations were computed to compare force measures against previously published sensory data from the same cohort of participants. Significant group differences in force control were found for age, sex, speech usage and smoking status. Significant correlations were identified between labial vibrotactile thresholds and several low-level force control measures collected during step and ramp-and-hold conditions. These findings demonstrate age-related alterations in orofacial force production. Furthermore, the correlational analysis suggests that as vibrotactile detection thresholds increase, the ability to maintain low-level force control accuracy decreases. Possible clinical applications and treatment consequences of these findings for speech disorders in the ageing population are provided.
Buchholz, Jörg M.; Morgan, Catherine; Sharma, Mridula; Weller, Tobias; Konganda, Shivali Appaiah; Shirai, Kyoko; Kawano, Atsushi
2017-01-01
Binaural hearing helps normal-hearing listeners localize sound sources and understand speech in noise. However, it is not fully understood how far this is the case for bilateral cochlear implant (CI) users. To determine the potential benefits of bilateral over unilateral CIs, speech comprehension thresholds (SCTs) were measured in seven Japanese bilateral CI recipients using Helen test sentences (translated into Japanese) in a two-talker speech interferer presented from the front (co-located with the target speech), ipsilateral to the first-implanted ear (at +90° or −90°), and spatially symmetric at ±90°. Spatial release from masking was calculated as the difference between co-located and spatially separated SCTs. Localization was assessed in the horizontal plane by presenting either male or female speech or both simultaneously. All measurements were performed bilaterally and unilaterally (with the first implanted ear) inside a loudspeaker array. Both SCTs and spatial release from masking were improved with bilateral CIs, demonstrating mean bilateral benefits of 7.5 dB in spatially asymmetric and 3 dB in spatially symmetric speech mixture. Localization performance varied strongly between subjects but was clearly improved with bilateral over unilateral CIs with the mean localization error reduced by 27°. Surprisingly, adding a second talker had only a negligible effect on localization. PMID:28752811
Relation between speech-in-noise threshold, hearing loss and cognition from 40-69 years of age.
Moore, David R; Edmondson-Jones, Mark; Dawes, Piers; Fortnum, Heather; McCormack, Abby; Pierzycki, Robert H; Munro, Kevin J
2014-01-01
Healthy hearing depends on sensitive ears and adequate brain processing. Essential aspects of both hearing and cognition decline with advancing age, but it is largely unknown how one influences the other. The current standard measure of hearing, the pure-tone audiogram is not very cognitively demanding and does not predict well the most important yet challenging use of hearing, listening to speech in noisy environments. We analysed data from UK Biobank that asked 40-69 year olds about their hearing, and assessed their ability on tests of speech-in-noise hearing and cognition. About half a million volunteers were recruited through NHS registers. Respondents completed 'whole-body' testing in purpose-designed, community-based test centres across the UK. Objective hearing (spoken digit recognition in noise) and cognitive (reasoning, memory, processing speed) data were analysed using logistic and multiple regression methods. Speech hearing in noise declined exponentially with age for both sexes from about 50 years, differing from previous audiogram data that showed a more linear decline from <40 years for men, and consistently less hearing loss for women. The decline in speech-in-noise hearing was especially dramatic among those with lower cognitive scores. Decreasing cognitive ability and increasing age were both independently associated with decreasing ability to hear speech-in-noise (0.70 and 0.89 dB, respectively) among the population studied. Men subjectively reported up to 60% higher rates of difficulty hearing than women. Workplace noise history associated with difficulty in both subjective hearing and objective speech hearing in noise. Leisure noise history was associated with subjective, but not with objective difficulty hearing. Older people have declining cognitive processing ability associated with reduced ability to hear speech in noise, measured by recognition of recorded spoken digits. Subjective reports of hearing difficulty generally show a higher prevalence than objective measures, suggesting that current objective methods could be extended further.
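The two regression strands described above could be sketched along the following lines; the file and column names are hypothetical placeholders, not the UK Biobank field names.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("biobank_hearing_extract.csv")  # hypothetical extract

# Logistic regression: odds of self-reported hearing difficulty
logit = smf.logit(
    "reports_difficulty ~ age + sex + work_noise + leisure_noise", data=df
).fit()

# Multiple regression: speech-in-noise threshold (dB) on age and cognition
ols = smf.ols("srt_db ~ age + cognitive_score + sex", data=df).fit()

print(logit.summary())
print(ols.summary())
```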
Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng
2014-12-01
To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used their cochlear implants for a period of 12-84 months. We divided the children into two groups: those who underwent implantation before 24 months of age and those who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many of them. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted after 24 months. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
[Diagnosis of psychogenic hearing disorders in childhood].
Kothe, C; Fleischer, S; Breitfuss, A; Hess, M
2003-11-01
In comparison with organic hearing loss, which is commonly reported, non-organic hearing loss is under-represented in the literature. The audiological results of 20 patients, aged between 6 and 17 years (mean 11.3), with psychogenic hearing disturbances were analysed prospectively. In 17 cases the disturbance was bilateral and in three cases unilateral. In no case was the result of an objective hearing test abnormal, while hearing thresholds of between 30 and 100 dB were recorded in single-ear pure-tone audiograms. In 12 cases, single-ear speech audiograms were unremarkable. Suprathreshold tests, such as the dichotic discrimination test or the speech audiogram in background noise, could lead to a clearer diagnosis in cases of severe psychogenic auditory impairment. In half of the patients, a conflict situation in the school or family was evident. After this conflict was treated, hearing ability returned to normal. Six patients showed no improvement.
ERIC Educational Resources Information Center
Ebbels, Susan H.; Wright, Lisa; Brockbank, Sally; Godfrey, Caroline; Harris, Catherine; Leniston, Hannah; Neary, Kate; Nicoll, Hilary; Nicoll, Lucy; Scott, Jackie; Maric, Nataša
2017-01-01
Background: Evidence of the effectiveness of therapy for older children with (developmental) language disorder (DLD), and particularly those with receptive language impairments, is very limited. The few existing studies have focused on particular target areas, but none has looked at a whole area of a service. Aims: To establish whether for…
An Investigation of Cultural Differences in Perceived Speaker Effectiveness.
ERIC Educational Resources Information Center
Masterson, John T.; Watson, Norman H.
Culture is a powerful force that may affect the reception and acceptance of communication. To determine whether culture has an effect on perceived speaker effectiveness, students in one introductory speech communication class at a Florida university, and in a variety of courses at a university in the Bahamas, were given an 80-item questionnaire that was…
Does Hearing Several Speakers Reduce Foreign Word Learning?
ERIC Educational Resources Information Center
Ludington, Jason Darryl
2016-01-01
Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…
Communicative Development in Twins with Discordant Histories of Recurrent Otitis Media.
ERIC Educational Resources Information Center
Hemmer, Virginia Hoey; Ratner, Nan Bernstein
1994-01-01
The communicative abilities of six sets of same-sex, preschool dizygotic twins were examined. In each dyad, one sibling had a strong history of recurrent otitis media (ROM) but the other twin did not. History of ROM was associated with lowered receptive vocabulary, with no consistent effects detected in expressive speech and language tasks.…
ERIC Educational Resources Information Center
Stamou, Anastasia G.; Maroniti, Katerina; Griva, Eleni
2015-01-01
Considering the role of popular cultural texts in shaping sociolinguistic reality, it makes sense to explore how children actually receive those texts and what conceptualisations of sociolinguistic diversity they form through those texts. Therefore, the aim of the present study was to examine Greek young children's views on sociolinguistic…
ECoG Gamma Activity during a Language Task: Differentiating Expressive and Receptive Speech Areas
ERIC Educational Resources Information Center
Towle, Vernon L.; Yoon, Hyun-Ah; Castelle, Michael; Edgar, J. Christopher; Biassou, Nadia M.; Frim, David M.; Spire, Jean-Paul; Kohrman, Michael H.
2008-01-01
Electrocorticographic (ECoG) spectral patterns obtained during language tasks from 12 epilepsy patients (age: 12-44 years) were analyzed in order to identify and characterize cortical language areas. ECoG from 63 subdural electrodes (500 Hz/channel) chronically implanted over frontal, parietal and temporal lobes were examined. Two language tasks…
ERIC Educational Resources Information Center
Yu, Jennifer W.; Wei, Xin; Wagner, Mary
2014-01-01
This study used propensity score techniques on data from the National Longitudinal Transition Study-2 to assess the causal relationship between speech and behavior-based support services and rates of social communication among high school students with Autism Spectrum Disorder (ASD). Findings indicate that receptive language problems were…
Assessing the Relationship between Prosody and Reading Outcomes in Children Using the PEPS-C
ERIC Educational Resources Information Center
Lochrin, Margaret; Arciuli, Joanne; Sharma, Mridula
2015-01-01
This study investigated the relationship between both receptive and expressive prosody and each of three reading outcomes: accuracy of reading aloud words, accuracy of reading aloud nonwords, and comprehension. Participants were 63 children aged 7 to 12 years. To assess prosody, we used the Profiling Elements of Prosody in Speech Communication…
Relationship between Auditory and Cognitive Abilities in Older Adults
Sheft, Stanley
2015-01-01
Objective The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423
2013-01-01
Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection and, thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), and also from normal controls, utilizing multi-channel reference-free recordings assisted by discrete source analysis, provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from the bilateral posterior superior temporal gyri and the immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track the status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174
Spatial Release From Masking in 2-Year-Olds With Normal Hearing and With Bilateral Cochlear Implants
Hess, Christi L.; Misurelli, Sara M.; Litovsky, Ruth Y.
2018-01-01
This study evaluated spatial release from masking (SRM) in 2- to 3-year-old children who are deaf and were implanted with bilateral cochlear implants (BiCIs), and in age-matched normal-hearing (NH) toddlers. Here, we examined whether early activation of bilateral hearing has the potential to promote SRM that is similar to age-matched NH children. Listeners were 13 NH toddlers and 13 toddlers with BiCIs, ages 27 to 36 months. Speech reception thresholds (SRTs) were measured for target speech in front (0°) and for competitors that were either Colocated in front (0°) or Separated toward the right (+90°). SRM was computed as the difference between SRTs in the front versus in the asymmetrical condition. Results show that SRTs were higher in the BiCI than NH group in all conditions. Both groups had higher SRTs in the Colocated and Separated conditions compared with Quiet, indicating masking. SRM was significant only in the NH group. In the BiCI group, the group effect of SRM was not significant, likely limited by the small sample size; however, all but two children had SRM values within the NH range. This work shows that to some extent, the ability to use spatial cues for source segregation develops by age 2 to 3 in NH children and is attainable in most of the children in the BiCI group. There is potential for the paradigm used here to be used in clinical settings to evaluate outcomes of bilateral hearing in very young children. PMID:29761735
What Makes a Caseload (Un) Manageable? School-Based Speech-Language Pathologists Speak
ERIC Educational Resources Information Center
Katz, Lauren A.; Maag, Abby; Fallon, Karen A.; Blenkarn, Katie; Smith, Megan K.
2010-01-01
Purpose: Large caseload sizes and a shortage of speech-language pathologists (SLPs) are ongoing concerns in the field of speech and language. This study was conducted to identify current mean caseload size for school-based SLPs, a threshold at which caseload size begins to be perceived as unmanageable, and variables contributing to school-based…
Lindblad, Ann-Cathrine; Rosenhall, Ulf; Olofsson, Åke; Hagerman, Björn
2014-01-01
The aim of the investigation was to study whether dysfunctions associated with the cochlea or its regulatory system can be found, and possibly explain hearing problems, in subjects with normal or near-normal audiograms. The design was a prospective study of subjects recruited from the general population. The included subjects were persons with auditory problems who had normal, or near-normal, pure tone hearing thresholds, who could be included in one of three subgroups: teachers (Education); people working with music (Music); and people with moderate or negligible noise exposure (Other). A fourth group included people with poorer pure tone hearing thresholds and a history of severe occupational noise (Industry). In total, N = 193. The following hearing tests were used: pure tone audiometry with Békésy technique; transient evoked otoacoustic emissions and distortion product otoacoustic emissions, without and with contralateral noise; psychoacoustical modulation transfer function; forward masking; speech recognition in noise; and tinnitus matching. A questionnaire about occupations, noise exposure, stress/anxiety, muscular problems, medication, and heredity was given to the participants. Forward masking results were significantly worse for Education and Industry than for the other groups, possibly associated with the inner hair cell area. Forward masking results were significantly correlated with louder matched tinnitus. For many subjects, speech recognition in noise in the left ear did not improve in the normal way when the listening level was increased. Subjects hypersensitive to loud sound had significantly better speech recognition in noise at the lower test level than subjects who were not hypersensitive. Self-reported stress/anxiety was similar for all groups. In conclusion, hearing dysfunctions were found in subjects with tinnitus and other auditory problems combined with normal or near-normal pure tone thresholds. The teachers, mostly regarded as a group exposed to noise below risk levels, had dysfunctions almost identical to those of the more exposed Industry group.
Anderson, Elizabeth S; Nelson, David A; Kreft, Heather; Nelson, Peggy B; Oxenham, Andrew J
2011-07-01
Spectral ripple discrimination thresholds were measured in 15 cochlear-implant users with broadband (350-5600 Hz) and octave-band noise stimuli. The results were compared with spatial tuning curve (STC) bandwidths previously obtained from the same subjects. Spatial tuning curve bandwidths did not correlate significantly with broadband spectral ripple discrimination thresholds but did correlate significantly with ripple discrimination thresholds when the rippled noise was confined to an octave-wide passband, centered on the STC's probe electrode frequency allocation. Ripple discrimination thresholds were also measured for octave-band stimuli in four contiguous octaves, with center frequencies from 500 Hz to 4000 Hz. Substantial variations in thresholds with center frequency were found in individuals, but no general trends of increasing or decreasing resolution from apex to base were observed in the pooled data. Neither ripple nor STC measures correlated consistently with speech measures in noise and quiet in the sample of subjects in this study. Overall, the results suggest that spectral ripple discrimination measures provide a reasonable measure of spectral resolution that correlates well with more direct, but more time-consuming, measures of spectral resolution, but that such measures do not always provide a clear and robust predictor of performance in speech perception tasks. © 2011 Acoustical Society of America
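For orientation, a rippled-spectrum noise band of the general kind used in such discrimination tasks can be synthesized by imposing a sinusoidal (in log frequency) spectral envelope on random-phase noise, as in the sketch below. The ripple density, depth, and passband here are illustrative, not the study's stimulus parameters.

```python
import numpy as np

def rippled_noise(fs=22050, dur=0.5, f_lo=350.0, f_hi=5600.0,
                  ripples_per_octave=1.0, ripple_phase=0.0, depth_db=30.0):
    """Random-phase noise whose band spectrum ripples sinusoidally on a log-f axis."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)  # position above the band edge, in octaves
    env_db = 0.5 * depth_db * np.sin(2 * np.pi * ripples_per_octave * octaves + ripple_phase)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, 2.0 * np.pi, band.sum())
    spec[band] = 10 ** (env_db / 20.0) * np.exp(1j * phases)
    return np.fft.irfft(spec, n)
```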
Nittrouer, Susan; Lowenstein, Joanna H
2007-02-01
It has been reported that children and adults weight differently the various acoustic properties of the speech signal that support phonetic decisions. This finding is generally attributed to the fact that the amount of weight assigned to various acoustic properties by adults varies across languages, and that children have not yet discovered the mature weighting strategies of their own native languages. But an alternative explanation exists: Perhaps children's auditory sensitivities for some acoustic properties of speech are poorer than those of adults, and children cannot categorize stimuli based on properties to which they are not keenly sensitive. The purpose of the current study was to test that hypothesis. Edited-natural, synthetic-formant, and sine wave stimuli were all used, and all were modeled after words with voiced and voiceless final stops. Adults and children (5 and 7 years of age) listened to pairs of stimuli in 5 conditions: 2 involving a temporal property (1 with speech and 1 with nonspeech stimuli) and 3 involving a spectral property (1 with speech and 2 with nonspeech stimuli). An AX discrimination task was used in which a standard stimulus (A) was compared with all other stimuli (X) equal numbers of times (method of constant stimuli). Adults and children had similar difference thresholds (i.e., 50% point on the discrimination function) for 2 of the 3 sets of nonspeech stimuli (1 temporal and 1 spectral), but children's thresholds were greater for both sets of speech stimuli. Results are interpreted as evidence that children's auditory sensitivities are adequate to support weighting strategies similar to those of adults, and so observed differences between children and adults in speech perception cannot be explained by differences in auditory perception. Furthermore, it is concluded that listeners bring expectations to the listening task about the nature of the signals they are hearing based on their experiences with those signals.
Brockmeyer, Alison M; Potts, Lisa G
2011-02-01
Difficulty understanding in background noise is a common complaint of cochlear implant (CI) recipients. Programming options are available to improve speech recognition in noise for CI users, including automatic dynamic range optimization (ADRO), autosensitivity control (ASC), and a two-stage adaptive beamforming algorithm (BEAM). However, the processing option that results in the best speech recognition in noise is unknown. In addition, laboratory measures of these processing options often show greater degrees of improvement than reported by participants in everyday listening situations. To address this issue, Compton-Conley and colleagues developed a test system to replicate a restaurant environment. The R-SPACE™ consists of eight loudspeakers positioned in a 360-degree arc and utilizes a recording of background noise made at a restaurant. The present study measured speech recognition in the R-SPACE with four processing options: standard dual-port directional (STD), ADRO, ASC, and BEAM. A repeated-measures, within-subject design was used to evaluate the four different processing options at two noise levels. Participants were twenty-seven unilateral and three bilateral adult Nucleus Freedom CI recipients. The participants' everyday program (with no additional processing) was used as the STD program. ADRO, ASC, and BEAM were added individually to the STD program to create a total of four programs. Participants repeated Hearing in Noise Test sentences presented at 0 degrees azimuth with R-SPACE restaurant noise at two noise levels, 60 and 70 dB SPL. The reception threshold for sentences (RTS) was obtained for each processing condition and noise level. In 60 dB SPL noise, BEAM processing resulted in the best RTS, with a significant improvement over STD and ADRO processing. In 70 dB SPL noise, ASC and BEAM processing had significantly better mean RTSs compared to STD and ADRO processing. Comparison of noise levels showed that STD and BEAM processing resulted in significantly poorer RTSs in 70 dB SPL noise compared to the performance with these processing conditions in 60 dB SPL noise. Bilateral participants demonstrated a bilateral improvement compared to the better monaural condition for both noise levels and all processing conditions, except ASC in 60 dB SPL noise. The results of this study suggest that the use of processing options that utilize noise reduction, like those available in ASC and BEAM, improves a CI recipient's ability to understand speech in noise in listening situations similar to those experienced in the real world. The choice of the best processing option is dependent on the noise level, with BEAM best at moderate noise levels and ASC best at loud noise levels for unilateral CI recipients. Therefore, multiple noise programs or a combination of processing options may be necessary to provide CI users with the best performance in a variety of listening situations. American Academy of Audiology.
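An RTS of the kind reported here is typically tracked adaptively: the SNR moves down after a correctly repeated sentence and up after a miss, converging on 50% intelligibility. A sketch of a 1-up/1-down track, with a simulated listener standing in for a participant (the step size, trial count, and scoring function are illustrative stand-ins, not the exact HINT rules):

```python
import random

def adaptive_rts(score_sentence, n_sentences=20, start_snr=0.0, step=2.0):
    """1-up/1-down adaptive track converging on 50% sentence intelligibility.
    score_sentence(snr) stands in for presenting one sentence at the given
    SNR and scoring the listener's repetition as correct/incorrect."""
    snr = start_snr
    levels = [snr]
    for _ in range(n_sentences):
        snr += -step if score_sentence(snr) else step
        levels.append(snr)
    return sum(levels[4:]) / len(levels[4:])   # average, skipping the approach phase

# Simulated listener whose true RTS is 5 dB SNR:
rts = adaptive_rts(lambda snr: random.random() < 1 / (1 + 10 ** ((5.0 - snr) / 4.0)))
```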
Kaplan, Peter S.; Danko, Christina M.; Kalinka, Christina J.; Cejka, Anna M.
2014-01-01
Infants of mothers who varied in symptoms of depression were tested at 4 and 12 months of age for their ability to associate a segment of an unfamiliar non-depressed mother’s infant-directed speech (IDS) with a face. At 4 months, all infants learned the voice-face association. At 12 months, despite the fact that none of the mothers were still clinically depressed, infants of mothers with chronically elevated self-reported depressive symptoms, and infants of mothers with elevated self-reported depressive symptoms at 4 months but not 12 months, on average did not learn the association. For infants of mothers diagnosed with depression in remission, learning at 12 months was negatively correlated with the postpartum duration of the mother’s depressive episode. At neither age did extent of pitch modulation in the IDS segments correlate with infant learning. However, learning scores at 12 months correlated significantly with concurrent maternal reports of infant receptive language development. The roles of the duration and timing of maternal depressive symptoms are discussed. PMID:22721737
He, Shuman; Grose, John H; Teagle, Holly F B; Woodard, Jennifer; Park, Lisa R; Hatch, Debora R; Buchman, Craig A
2013-01-01
This study aimed (1) to investigate the feasibility of recording the electrically evoked auditory event-related potential (eERP), including the onset P1-N1-P2 complex and the electrically evoked auditory change complex (EACC) in response to temporal gaps, in children with auditory neuropathy spectrum disorder (ANSD); and (2) to evaluate the relationship between these measures and speech-perception abilities in these subjects. Fifteen ANSD children who are Cochlear Nucleus device users participated in this study. For each subject, the speech-processor microphone was bypassed and the eERPs were elicited by direct stimulation of one mid-array electrode (electrode 12). The stimulus was a train of biphasic current pulses 800 msec in duration. Two basic stimulation conditions were used to elicit the eERP. In the no-gap condition, the entire pulse train was delivered uninterrupted to electrode 12, and the onset P1-N1-P2 complex was measured relative to the stimulus onset. In the gapped condition, the stimulus consisted of two pulse train bursts, each being 400 msec in duration, presented sequentially on the same electrode and separated by one of five gaps (i.e., 5, 10, 20, 50, and 100 msec). Open-set speech-perception ability of these subjects with ANSD was assessed using the phonetically balanced kindergarten (PBK) word lists presented at 60 dB SPL, using monitored live voice in a sound booth. The eERPs were recorded from all subjects with ANSD who participated in this study. There were no significant differences in test-retest reliability, root mean square amplitude or P1 latency for the onset P1-N1-P2 complex between subjects with good (>70% correct on PBK words) and poorer speech-perception performance. In general, the EACC showed less mature morphological characteristics than the onset P1-N1-P2 response recorded from the same subject. There was a robust correlation between the PBK word scores and the EACC thresholds for gap detection. Subjects with poorer speech-perception performance showed larger EACC thresholds in this study. These results demonstrate the feasibility of recording eERPs from implanted children with ANSD, using direct electrical stimulation. Temporal-processing deficits, as demonstrated by large EACC thresholds for gap detection, might account in part for the poor speech-perception performances observed in a subgroup of implanted subjects with ANSD. This finding suggests that the EACC elicited by changes in temporal continuity (i.e., gap) holds promise as a predictor of speech-perception ability among implanted children with ANSD.
Hemispheric Lateralization of Motor Thresholds in Relation to Stuttering
Alm, Per A.; Karlsson, Ragnhild; Sundberg, Madeleine; Axelson, Hans W.
2013-01-01
Stuttering is a complex speech disorder. Previous studies indicate a tendency towards an elevated motor threshold for the left hemisphere, as measured using transcranial magnetic stimulation (TMS). This may reflect a monohemispheric motor system impairment. The purpose of the study was to investigate the relative side-to-side difference (asymmetry) and the absolute levels of motor threshold for the hand area, using TMS in adults who stutter (n = 15) and in controls (n = 15). In accordance with the hypothesis, the groups differed significantly regarding the relative side-to-side difference of finger motor threshold (p = 0.0026), with the stuttering group showing a higher motor threshold of the left hemisphere in relation to the right. The absolute level of the finger motor threshold for the left hemisphere also differed between the groups (p = 0.049). The obtained results, together with previous investigations, provide support for the hypothesis that stuttering tends to be related to left hemisphere motor impairment, and possibly to a dysfunctional state of bilateral speech motor control. PMID:24146930
Bierer, Julie Arenberg
2007-03-01
The efficacy of cochlear implants is limited by spatial and temporal interactions among channels. This study explores the spatially restricted tripolar electrode configuration and compares it to bipolar and monopolar stimulation. Measures of threshold and channel interaction were obtained from nine subjects implanted with the Clarion HiFocus-I electrode array. Stimuli were biphasic pulses delivered at 1020 pulses/s. Threshold increased from monopolar to bipolar to tripolar stimulation and was most variable across channels with the tripolar configuration. Channel interaction, quantified by the shift in threshold between single- and two-channel stimulation, occurred for all three configurations but was largest for the monopolar and simultaneous conditions. The threshold shifts with simultaneous tripolar stimulation were slightly smaller than with bipolar and were not as strongly affected by the timing of the two channel stimulation as was monopolar. The subjects' performances on clinical speech tests were correlated with channel-to-channel variability in tripolar threshold, such that greater variability was related to poorer performance. The data suggest that tripolar channels with high thresholds may reveal cochlear regions of low neuron survival or poor electrode placement.
Jackson, Emily; Leitao, Suze; Claessen, Mary
2016-01-01
Children with specific language impairment (SLI) often experience word-learning difficulties, which are suggested to originate in the early stage of word learning: fast mapping. Some previous research indicates significantly poorer fast mapping capabilities in children with SLI compared with typically developing (TD) counterparts, with a range of methodological factors impacting on the consistency of this finding. Research has explored key issues that might underlie fast mapping difficulties in children with SLI, with strong theoretical support but little empirical evidence for the role of phonological short-term memory (STM). Additionally, further research is required to explore the influence of receptive vocabulary on fast mapping capabilities. Understanding the factors associated with fast mapping difficulties that are experienced by children with SLI may lead to greater theoretically driven word-learning intervention. To investigate whether children with SLI demonstrate significant difficulties with fast mapping, and to explore the related factors. It was hypothesized that children with SLI would score significantly lower on a fast mapping production task compared with TD children, and that phonological STM and receptive vocabulary would significantly predict fast mapping production scores in both groups of children. Twenty-three children with SLI (mean = 64.39 months, SD = 4.10 months) and 26 TD children (mean = 65.92 months, SD = 2.98) were recruited from specialist language and mainstream schools. All participants took part in a unique, interactive fast-mapping task whereby nine novel objects with non-word labels were presented and production accuracy was assessed. A non-word repetition test and the Peabody Picture Vocabulary Test-Fourth Edition (PPVT-IV) were also administered as measures of phonological STM capacity and receptive vocabulary, respectively. Results of the fast-mapping task indicated that children with SLI had significantly poorer fast mapping production scores than TD children. Scores from the non-word repetition task were also significantly lower for the SLI group, revealing reduced phonological STM capacity. Phonological STM capacity and receptive vocabulary emerged as significant predictors of fast mapping performance when the group data were combined in a multiple regression analysis. These results suggest that the word-learning difficulties experienced by children with SLI may originate at the fast mapping stage, and that phonological STM and receptive vocabulary significantly predict fast mapping ability. These findings contribute to the theoretical understanding of word-learning difficulties in children with SLI and may inform lexical learning intervention. © 2015 Royal College of Speech and Language Therapists.
Masked speech perception across the adult lifespan: Impact of age and hearing impairment.
Goossens, Tine; Vercammen, Charlotte; Wouters, Jan; van Wieringen, Astrid
2017-02-01
As people grow older, speech perception difficulties become highly prevalent, especially in noisy listening situations. Moreover, it is assumed that speech intelligibility is more affected in the event of background noises that induce a higher cognitive load, i.e., noises that result in informational versus energetic masking. There is ample evidence showing that speech perception problems in aging persons are partly due to hearing impairment and partly due to age-related declines in cognition and suprathreshold auditory processing. In order to develop effective rehabilitation strategies, it is indispensable to know how these different degrading factors act upon speech perception. This implies disentangling effects of hearing impairment versus age and examining the interplay between both factors in different background noises of everyday settings. To that end, we investigated open-set sentence identification in six participant groups: a young (20-30 years), middle-aged (50-60 years), and older cohort (70-80 years), each including persons who had normal audiometric thresholds up to at least 4 kHz, on the one hand, and persons who were diagnosed with elevated audiometric thresholds, on the other hand. All participants were screened for (mild) cognitive impairment. We applied stationary and amplitude-modulated speech-weighted noise, which are two types of energetic maskers, and unintelligible speech, which causes informational masking in addition to energetic masking. By means of these different background noises, we could look into speech perception performance in listening situations with a low and high cognitive load, respectively. Our results indicate that, even when audiometric thresholds are within normal limits up to 4 kHz, irrespective of threshold elevations at higher frequencies, and there is no indication of even mild cognitive impairment, masked speech perception declines by middle age and decreases further into older age. The impact of hearing impairment is as detrimental for young and middle-aged as it is for older adults. When the background noise becomes cognitively more demanding, there is a larger decline in speech perception, due to age or hearing impairment. Hearing impairment seems to be the main factor underlying speech perception problems in background noises that cause energetic masking. However, in the event of informational masking, which induces a higher cognitive load, age appears to explain a significant part of the communicative impairment as well. We suggest that the degrading effect of age is mediated by deficiencies in temporal processing and central executive functions. This study may contribute to the improvement of auditory rehabilitation programs aiming to prevent aging persons from missing out on conversations, which, in turn, will improve their quality of life. Copyright © 2016 Elsevier B.V. All rights reserved.
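Maskers of the two energetic types used here can be generated by spectrally shaping white noise toward a speech-like long-term spectrum and, for the modulated condition, imposing slow sinusoidal amplitude modulation. A sketch under assumed values (the spectral slope and the 8 Hz modulation rate are placeholders, not the study's exact long-term average speech spectrum or modulation settings):

```python
import numpy as np
from scipy.signal import firwin2, lfilter

def speech_weighted_noise(fs=16000, dur=2.0, mod_hz=None):
    """White noise filtered toward a speech-like long-term spectrum
    (flat below 500 Hz, then falling roughly 9 dB/octave, an assumption),
    optionally 100% sinusoidally amplitude-modulated at mod_hz."""
    freqs = np.array([0.0, 500.0, 1000.0, 2000.0, 4000.0, fs / 2])
    gains_db = np.array([0.0, 0.0, -9.0, -18.0, -27.0, -33.0])
    taps = firwin2(513, freqs / (fs / 2), 10.0 ** (gains_db / 20.0))
    x = lfilter(taps, [1.0], np.random.randn(int(fs * dur)))
    if mod_hz is not None:
        t = np.arange(x.size) / fs
        x = x * (1.0 + np.sin(2 * np.pi * mod_hz * t))  # 100% AM
    return x / np.max(np.abs(x))

stationary = speech_weighted_noise()
modulated = speech_weighted_noise(mod_hz=8.0)
```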
Eckert, Mark A; Matthews, Lois J; Dubno, Judy R
2017-01-01
Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.
Spatial encoding in spinal sensorimotor circuits differs in different wild type mice strains
Thelin, Jonas; Schouenborg, Jens
2008-01-01
Background Previous studies in the rat have shown that the spatial organisation of the receptive fields of the nociceptive withdrawal reflex (NWR) system is functionally adapted through experience-dependent mechanisms, termed somatosensory imprinting, during postnatal development. Here we wanted to clarify (1) whether mice exhibit a similar spatial encoding of sensory input to the NWR as previously found in the rat, and (2) whether mouse strains with a poor learning capacity in various behavioural tests, associated with deficient long-term potentiation (LTP), also exhibit poor adaptation of the NWR. The organisation of the NWR system was examined in two adult wild-type mouse strains with normal LTP in the hippocampus and two adult wild-type mouse strains exhibiting deficient LTP, and the results were compared to previous findings in the rat. Receptive fields of reflexes in single hindlimb muscles were mapped with CO2 laser heat pulses. Results While the spatial organisation of the nociceptive receptive fields in mice with normal LTP was very similar to that in rats, the LTP-impaired strains exhibited receptive fields of NWRs with aberrant sensitivity distributions. However, no difference was found in NWR thresholds or onset C-fibre latencies, suggesting that the mechanisms determining general reflex sensitivity and somatosensory imprinting are different. Conclusion Our results thus confirm that sensory encoding in mouse and rat NWR is similar, provided that mouse strains with a good learning capability are studied, and raise the possibility that LTP-like mechanisms are involved in somatosensory imprinting. PMID:18495020
Speech, Language, and Cognition in Preschool Children with Epilepsy
ERIC Educational Resources Information Center
Selassie, G. Rejno-Habte; Viggedal, G.; Olsson, I.; Jennische, M.
2008-01-01
We studied expressive and receptive language, oral motor ability, attention, memory, and intelligence in 20 6-year-old children with epilepsy (14 females, six males; mean age 6y 5mo, range 6y-6y 11mo) without learning disability, cerebral palsy (CP), and/or autism, and in 30 reference children without epilepsy (18 females, 12 males; mean age 6y…
ERIC Educational Resources Information Center
Binger, Cathy; Kent-Walsh, Jennifer; King, Marika; Mansfield, Lindsay
2017-01-01
Purpose: This study investigated the early rule-based sentence productions of 3- and 4-year-old children with severe speech disorders who used single-meaning graphic symbols to communicate. Method: Ten 3- and 4-year-olds requiring the use of augmentative and alternative communication, who had largely intact receptive language skills, received…
Two Studies of the Syntactic Knowledge of Young Children. A Preliminary Report.
ERIC Educational Resources Information Center
Smith, Carlota S.
This paper deals with two experiments whose purposes are to investigate the linguistic competence of young children and their receptivity to adult speech. In the free response experiment, imperative sentences were presented to 1 1/2- to 2 1/2-year-olds. The sentences were minimal (a single noun), telegraphic, or full adult sentences. The youngest…
Communication and Reception in Teaching: The Age of Image "versus" the "Weight" of Words
ERIC Educational Resources Information Center
Bradea, Adela
2015-01-01
Contemporary culture is mainly a culture of the image. We get our information by seeing. Images can be examined freely, while reading is driven by the necessity of traversing the whole text. The image seems more appropriate than the text when trying to communicate easily and quickly. The speech calls for articulated language, expressed through a symbolic…
Using Animated Language Software with Children Diagnosed with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Mulholland, Rita; Pete, Ann Marie; Popeson, Joanne
2008-01-01
We examined the impact of using an animated software program (Team Up With Timo) on the expressive and receptive language abilities of five children ages 5-9 in a self-contained Learning and Language Disabilities class. We chose to use Team Up With Timo (Animated Speech Corporation) because it allows the teacher to personalize the animation for…
The Effect of Remote Masking on the Reception of Speech by Young School-Age Children
ERIC Educational Resources Information Center
Youngdahl, Carla L.; Healy, Eric W.; Yoho, Sarah E.; Apoux, Frédéric; Holt, Rachael Frush
2018-01-01
Purpose: Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's…
Normal Language Skills and Normal Intelligence in a Child with de Lange Syndrome.
ERIC Educational Resources Information Center
Cameron, Thomas H.; Kelly, Desmond P.
1988-01-01
The subject of this case report is a two-year, seven-month-old girl with de Lange syndrome, normal intelligence, and age-appropriate language skills. She demonstrated initial delays in gross motor skills and in receptive and expressive language but responded well to intensive speech and language intervention, as well as to physical therapy.…
The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers
Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren
2012-01-01
Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374
Speech and motor disturbances in Rett syndrome.
Bashina, V M; Simashkova, N V; Grachev, V V; Gorbachevskaya, N L
2002-01-01
Rett syndrome is a severe, genetically determined disease of early childhood which produces a defined clinical phenotype in girls. The main clinical manifestations include lesions affecting speech functions, involving both expressive and receptive speech, as well as motor functions, producing apraxia of the arms and profound abnormalities of gait in the form of ataxia-apraxia. Most investigators note that patients have variability in the severity of derangement to large motor acts and in the damage to fine hand movements and speech functions. The aims of the present work were to study disturbances of speech and motor functions over 2-5 years in 50 girls aged 12 months to 14 years with Rett syndrome and to analyze the correlations between these disturbances. The results of comparing clinical data and EEG traces supported the stepwise involvement of frontal and parietal-temporal cortical structures in the pathological process. The ability to organize speech and motor activity is affected first, with subsequent development of lesions to gnostic functions, which are in turn followed by derangement of subcortical structures and the cerebellum and later by damage to structures in the spinal cord. A clear correlation was found between the severity of lesions to motor and speech functions and neurophysiological data: the higher the level of preservation of elements of speech and motor functions, the smaller were the contributions of theta activity and the greater the contributions of alpha and beta activities to the EEG. The possible pathogenetic mechanisms underlying the motor and speech disturbances in Rett syndrome are discussed.
Wilson, Richard H
2011-01-01
Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting. The aim was to examine the relations among three audiometric measures (pure-tone thresholds, word-recognition performance in quiet, and word-recognition performance in multitalker babble) for veterans seeking remediation for their hearing loss. The design was retrospective and descriptive. The participants were 3430 veterans who, for the most part, were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). The data were collected in the course of a 60 min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Overall, the 1000-8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants, with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct by 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1-3% better than the LE. Of the 3291 who completed the WIN on both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinical instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that word-in-noise tasks should be considered the "stress test" for auditory function. American Academy of Audiology.
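When the SNR steps are equally spaced and performance spans floor to ceiling, the Spearman-Kärber estimate of the 50% point reduces to a simple closed form. A sketch (the word counts in the example are made up, and the function assumes performance near 0% at the lowest SNR and near 100% at the highest):

```python
import numpy as np

def spearman_karber_50(snrs_db, prop_correct):
    """Spearman-Kärber estimate of the 50% correct point (in dB SNR).
    Assumes equally spaced SNRs and performance spanning 0 to 1;
    proportions are clamped and made monotonic first."""
    snrs = np.asarray(snrs_db, dtype=float)
    p = np.clip(np.asarray(prop_correct, dtype=float), 0.0, 1.0)
    order = np.argsort(snrs)
    snrs, p = snrs[order], p[order]
    p = np.maximum.accumulate(p)            # enforce monotonicity
    d = snrs[1] - snrs[0]                   # step size in dB
    return snrs[-1] + d / 2.0 - d * p.sum()

# Hypothetical words correct out of 5 at each of the seven SNRs:
snrs = [0, 4, 8, 12, 16, 20, 24]
correct = np.array([0, 0, 1, 2, 4, 5, 5])
print(spearman_karber_50(snrs, correct / 5.0))   # about 12.4 dB SNR
```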
Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan
2008-12-01
In this study we present the case of a 71-year-old female patient with sensorineural hearing loss fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting, with difficulties in understanding speech in background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss, with threshold values of 51 and 50 dB HL in the right and left ears, respectively. The Dichotic Digit Test was administered in a divided-attention mode and with attention focused on each ear. The results on this test were consistent with a Central Auditory Processing Disorder.
Musical duplex perception: perception of figurally good chords with subliminal distinguishing tones.
Hall, M D; Pastore, R E
1992-08-01
In a variant of duplex perception with speech, phoneme perception is maintained when distinguishing components are presented below the intensities required for separate detection, forming the basis for the claim that a phonetic module takes precedence over nonspeech processing. This finding was replicated with music chords (C major and minor) created by mixing a piano fifth with a sinusoidal distinguishing tone (E or E flat). Individual threshold intensities for detecting E or E flat in the context of the fixed piano tones were established, and chord discrimination thresholds defined by distinguishing-tone intensity were determined. Experiment 2 verified masked detection thresholds and subliminal chord identification for experienced musicians. Accurate chord perception was maintained at distinguishing-tone intensities nearly 20 dB below the threshold for separate detection. The speech and music findings are argued to demonstrate general perceptual principles.
[The endpoint detection of cough signal in continuous speech].
Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang
2010-06-01
The endpoint detection of cough signals in continuous speech was studied in order to improve the efficiency and accuracy of manual and computer-based automatic cough recognition. First, the short-time zero-crossing rate (ZCR) was used to identify suspicious coughs, and a threshold for short-time energy was derived from the acoustic characteristics of coughs. The short-time energy was then combined with the short-time ZCR to implement endpoint detection of coughs in continuous speech. To evaluate the method, the actual number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI); the recordings were then analyzed by the automatic endpoint-detection program under Matlab 7.0. Comparison of the two results showed an error rate of 2.18% for undetected coughs, while 98.13% of the noise, silence, and speech was removed. The method of setting the short-time energy threshold is robust, and the endpoint-detection program removes most speech and noise while maintaining a low error rate.
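The two features at the heart of this scheme are straightforward to compute frame by frame. A minimal sketch of the feature extraction plus a crude two-feature detector (the threshold values and the ZCR acceptance range are placeholders; the paper derives its energy threshold from cough acoustics):

```python
import numpy as np

def short_time_features(x, fs, frame_ms=25.0, hop_ms=10.0):
    """Frame-wise short-time energy and zero-crossing rate (ZCR)."""
    frame = int(fs * frame_ms / 1000.0)
    hop = int(fs * hop_ms / 1000.0)
    n_frames = 1 + max(0, (len(x) - frame) // hop)
    energy = np.empty(n_frames)
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        seg = np.asarray(x[i * hop:i * hop + frame], dtype=float)
        energy[i] = np.sum(seg ** 2)
        zcr[i] = np.mean(np.abs(np.diff(np.sign(seg)))) / 2.0
    return energy, zcr

def cough_candidate_frames(energy, zcr, e_thresh, z_lo, z_hi):
    """Flag frames as cough candidates when short-time energy exceeds its
    threshold and the ZCR falls inside an empirically chosen cough-like
    range, a simplified stand-in for the paper's combined decision."""
    return (energy > e_thresh) & (zcr >= z_lo) & (zcr <= z_hi)
```

Runs of adjacent flagged frames then give the cough start and end points.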
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2016-01-01
Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were <24 months old at fMRI scanning and <36 months old at first implantation. A silent-background fMRI acquisition method was used to acquire fMRI data during auditory stimulation. A voxel-based analysis technique was utilized to generate z maps showing significant contrast in brain activation between auditory stimulation conditions (spoken narratives and narrow band noise). CELF-P2 and ESP were administered 2 years after implantation. Because most participants reached a ceiling on ESP, a voxel-wise regression analysis was performed between preimplant fMRI activation and postimplant CELF-P2 scores alone. Age at implantation and preimplant hearing thresholds were controlled in this regression analysis. Four brain regions were found to be significantly correlated with CELF-P2 scores. These clusters of positive correlation encompassed the temporo-parieto-occipital junction, areas in the prefrontal cortex and the cingulate gyrus. For the story versus silence contrast, CELF-P2 core language score demonstrated significant positive correlation with activation in the right angular gyrus (r = 0.95), left medial frontal gyrus (r = 0.94), and left cingulate gyrus (r = 0.96). For the narrow band noise versus silence contrast, the CELF-P2 core language score exhibited significant positive correlation with activation in the left angular gyrus (r = 0.89; for all clusters, corrected p < 0.05). Four brain regions related to language function and attention were identified that correlated with CELF-P2. Children with better oral language performance postimplant displayed greater activation in these regions preimplant. The results suggest that despite auditory deprivation, these regions remain receptive to gains in oral language development in children with hearing loss who receive early intervention via cochlear implantation. The present study suggests that oral language outcome following cochlear implantation may be predicted by preimplant fMRI with auditory stimulation using natural speech.
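Controlling covariates such as age at implantation and preimplant thresholds in a correlation analysis is often done by residualization. A sketch of that idea (random stand-in data; the study's actual voxel-wise pipeline may differ in detail):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_correlation(activation, scores, covariates):
    """Correlation between cluster activation and language scores after
    regressing the covariates (e.g., age at implantation, preimplant
    hearing thresholds) out of both variables. Residualization is one
    standard way to 'control for' covariates."""
    def residualize(v):
        fit = LinearRegression().fit(covariates, v)
        return v - fit.predict(covariates)
    return np.corrcoef(residualize(activation), residualize(scores))[0, 1]

# Illustrative call with random stand-in data for 11 children:
rng = np.random.default_rng(1)
r = adjusted_correlation(rng.normal(size=11), rng.normal(size=11),
                         rng.normal(size=(11, 2)))
```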
Narrative skills in children with selective mutism: an exploratory study.
McInnes, Alison; Fung, Daniel; Manassis, Katharina; Fiksenbaum, Lisa; Tannock, Rosemary
2004-11-01
Selective mutism (SM) is a rare and complex disorder associated with anxiety symptoms and speech-language deficits; however, the nature of these language deficits has not been studied systematically. A novel cross-disciplinary assessment protocol was used to assess anxiety and nonverbal cognitive, receptive language, and expressive narrative abilities in 7 children with SM and a comparison group of 7 children with social phobia (SP). The children with SM produced significantly shorter narratives than children with SP, despite showing normal nonverbal cognitive and receptive language abilities. The findings suggest that SM may involve subtle expressive language deficits that may influence academic performance and raise additional questions for further research. The assessment procedure developed for this study may be potentially useful for language clinicians.
Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.
2013-01-01
This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. PMID:24098273
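The analysis strategy described here (reduce many correlated measures to a few factors, then regress a speech-understanding factor on them) can be sketched compactly. The study used principal-components factor analysis; plain PCA on standardized measures is a close stand-in, and the data below are random placeholders for the study's 98 listeners and their test scores:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Random stand-ins: 98 listeners x 23 predictor measures and one
# composite speech-understanding score. The real study derived six
# factors and added age, ESI, and TRT as further predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(98, 23))
y = rng.normal(size=98)

factors = PCA(n_components=6).fit_transform(StandardScaler().fit_transform(X))
model = LinearRegression().fit(factors, y)
print("proportion of variance accounted for:", model.score(factors, y))
```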
Aubret, Fabien; Bonnet, Xavier; Shine, Richard; Lourdais, Olivier
2002-09-01
Reproduction is energetically expensive for both sexes, but the magnitude of expenditure and its relationship to reproductive success differ fundamentally between males and females. Males allocate relatively little to gamete production and, thus, can reproduce successfully with only minor energy investment. In contrast, females of many species experience high fecundity-independent costs of reproduction (such as migration to nesting sites), so they need to amass substantial energy reserves before initiating reproductive activity. Thus, we expect that the relationship between energy reserves and the intensity of reproductive behavior involves a threshold effect in females, but a gradual (or no) effect in males. We tested this prediction using captive vipers (Vipera aspis), dividing both males and females into groups of high versus low body condition. Snakes from each group were placed together and observed for reproductive behavior; sex-steroid levels were also measured. As predicted, females in below-average body condition had very low estradiol levels and did not show sexual receptivity, whereas males of all body condition indices had significant testosterone levels and displayed active courtship. Testosterone levels and courtship intensity increased gradually (i.e., no step function) with body condition in males, but high estradiol levels and sexual receptivity were seen only in females with body reserves above a critical threshold. Copyright 2002 Elsevier Science (USA)
Lodeiro-Fernández, Leire; Lorenzo-López, Laura; Maseda, Ana; Núñez-Naveira, Laura; Rodríguez-Villamil, José Luis; Millán-Calenti, José Carlos
2015-01-01
Purpose The possible relationship between audiometric hearing thresholds and cognitive performance on language tests was analyzed in a cross-sectional cohort of older adults aged ≥65 years (N=98) with different degrees of cognitive impairment. Materials and methods Participants were distributed into two groups according to Reisberg's Global Deterioration Scale (GDS): a normal/predementia group (GDS scores 1–3) and a moderate/moderately severe dementia group (GDS scores 4 and 5). Hearing loss (pure-tone audiometry) and receptive and production-based language function (Verbal Fluency Test, Boston Naming Test, and Token Test) were assessed. Results The dementia group achieved significantly lower scores than the predementia group in all language tests. A moderate negative correlation between hearing loss and verbal comprehension was observed in the total sample (r=−0.298; P<0.003) and in the predementia group (r=−0.363; P<0.007). However, no significant relationship between hearing loss and verbal fluency and naming scores was observed, regardless of cognitive impairment. Conclusion In the predementia group, reduced hearing level partially explains comprehension performance but not language production. In the dementia group, hearing loss cannot be considered as an explanatory factor of poor receptive and production-based language performance. These results suggest that cognitive rather than purely auditory problems underlie the language impairment observed in these older adults. PMID:25914528
The Effect of Remote Masking on the Reception of Speech by Young School-Age Children.
Youngdahl, Carla L; Healy, Eric W; Yoho, Sarah E; Apoux, Frédéric; Holt, Rachael Frush
2018-02-15
Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults. Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups. It was found that speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects. In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.
Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the time data were collected. Method Participants (N = 250), selected from the Templin Archive (Templin, 2004), varied on prekindergarten SSPS. Participants' real word spellings in Grade 3 were evaluated using a metric of linguistic knowledge, the Computerized Spelling Sensitivity System (Masterson & Apel, 2013). Relationships between kindergarten speech error types and later spellings also were explored. Results Prekindergarten children in the lowest SSPS subgroup (7th percentile) scored poorest among articulatory subgroups on both individual spelling elements (phonetic elements, junctures, and affixes) and acceptable spelling (using relatively more omissions and illegal spelling patterns). Within the 7th percentile subgroup, there were no statistically significant spelling differences between those with mostly atypical speech sound errors and those with mostly typical speech sound errors. Conclusions Findings were consistent with predictions from dual-route models of spelling that SSPS is one of many variables associated with spelling skill and that children with impaired SSPS are at risk for spelling difficulty. PMID:26380965
NASA Astrophysics Data System (ADS)
Tallal, Paula; Miller, Steve L.; Bedi, Gail; Byma, Gary; Wang, Xiaoqin; Nagarajan, Srikantan S.; Schreiner, Christoph; Jenkins, William M.; Merzenich, Michael M.
1996-01-01
A speech processing algorithm was developed to create more salient versions of the rapidly changing elements in the acoustic waveform of speech that have been shown to be deficiently processed by language-learning impaired (LLI) children. LLI children received extensive daily training, over a 4-week period, with listening exercises in which all speech was translated into this synthetic form. They also received daily training with computer "games" designed to adaptively drive improvements in temporal processing thresholds. Significant improvements in speech discrimination and language comprehension abilities were demonstrated in two independent groups of LLI children.
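The two-stage manipulation described here (prolonging speech and making its rapidly changing elements more salient) can be roughly sketched with standard signal-processing tools. This is only a loose analogue under stated assumptions: the 3-30 Hz envelope band, the 20 dB boost, and the use of librosa's phase-vocoder time stretch are all stand-ins, not the authors' actual algorithm:

```python
import numpy as np
import librosa
from scipy.signal import butter, sosfiltfilt, hilbert

def emphasize_transitions(y, sr, stretch=0.67, boost_db=20.0, lo=3.0, hi=30.0):
    """(1) Slow the speech down without changing pitch, then
    (2) amplify fast (3-30 Hz) envelope fluctuations, which carry
    rapid acoustic transitions. Illustrative parameters throughout."""
    y = librosa.effects.time_stretch(y, rate=stretch)   # rate < 1 prolongs
    env = np.abs(hilbert(y))                            # amplitude envelope
    sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
    fast = sosfiltfilt(sos, env)                        # fast envelope component
    ratio = np.clip(fast / (env + 1e-9), 0.0, 1.0)
    gain = 1.0 + (10.0 ** (boost_db / 20.0) - 1.0) * ratio
    return y * gain

# Toy input in place of a recorded utterance:
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
out = emphasize_transitions(y, sr)
```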
Hess, Christi; Zettler-Greeley, Cynthia; Godar, Shelly P; Ellis-Weismer, Susan; Litovsky, Ruth Y
2014-01-01
Growing evidence suggests that children who are deaf and use cochlear implants (CIs) can communicate effectively using spoken language. Research has reported that age of implantation and length of experience with the CI play an important role in predicting a child's linguistic development. In recent years, the increase in the number of children receiving bilateral CIs (BiCIs) has led to interest in new variables that may also influence the development of hearing, speech, and language abilities, such as length of bilateral listening experience and the length of time between the implantation of the two CIs. One goal of the present study was to determine how a cohort of children with BiCIs performed on standardized measures of language and nonverbal cognition. This study examined the relationship between performance on language and nonverbal intelligence quotient (IQ) tests and the ages at implantation of the first CI and second CI. This study also examined whether early bilateral activation is related to better language scores. Children with BiCIs (n = 39; ages 4 to 9 years) were tested on two standardized measures, the Test of Language Development and the Leiter International Performance Scale-Revised, to evaluate their expressive/receptive language skills and nonverbal IQ/memory. Hierarchical regression analyses were used to evaluate whether BiCI hearing experience predicts language performance. While large intersubject variability existed, on average, almost all the children with BiCIs scored within or above normal limits on measures of nonverbal cognition. Expressive and receptive language scores were highly variable, less likely to be above the normative mean, and did not correlate with Length of first CI Use, defined as length of auditory experience with one cochlear implant, or Length of second CI Use, defined as length of auditory experience with two cochlear implants. All children in the present study had BiCIs. Most IQ scores were either at or above those found in the general population of typically hearing children. However, there was greater variability in their performance on a standardized test of expressive and receptive language. This cohort of children, who are mainstreamed in schools at age-appropriate grades, whose mothers' education is high, and whose families' socioeconomic status is high, had, on average, language scores within the same range as the normative sample of hearing children. Further research identifying the predictors that contribute to the high variability in both expressive and receptive language scores in children with BiCIs will provide useful information that can aid in clinical management and decision making.
Schlund, M W
2000-10-01
Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patients' abilities to follow procedural instructions. This article describes the effects of jointly applying behavioral and psychophysical methods to improve two severely brain-injured survivors' attending and reporting on the presentation of auditory test stimuli. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings could be conducted and estimates of auditory thresholds obtained.
Enhanced perception of pitch changes in speech and music in early blind adults.
Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie
2018-06-12
It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated whether pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and whether pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Within the blind group, discrimination thresholds were smaller for musical stimuli than for speech stimuli, replicating previous findings in sighted participants; however, this effect was not found in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds compared to speech sounds. This effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
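Adaptive discrimination paradigms for difference limens are commonly implemented as transformed up-down staircases. A sketch of a 2-down/1-up track converging on the 70.7% correct point (Levitt, 1971), with a simulated listener in place of a participant (starting value, step factor, and reversal counts are illustrative, not the study's settings):

```python
import random

def frequency_dl(respond, start_hz=16.0, factor=2.0, n_reversals=8):
    """Transformed 2-down/1-up staircase for a pitch-direction difference
    limen. respond(delta_hz) stands in for one trial's response."""
    delta, n_correct, last_dir, reversals = start_hz, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(delta):
            n_correct += 1
            if n_correct < 2:
                continue                      # need two correct to step down
            n_correct, direction = 0, -1
            new_delta = delta / factor
        else:
            n_correct, direction = 0, +1
            new_delta = delta * factor
        if last_dir and direction != last_dir:
            reversals.append(delta)           # record the level at a reversal
        last_dir, delta = direction, new_delta
    return sum(reversals[-6:]) / len(reversals[-6:])

# Simulated listener whose true limen is near 4 Hz:
def respond(delta_hz, true_dl=4.0):
    return random.random() < 1.0 - 0.5 * 2.0 ** (-delta_hz / true_dl)

print(frequency_dl(respond))
```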
ERIC Educational Resources Information Center
Emerson, Anne; Dearden, Jackie
2013-01-01
A 10-year-old boy with autism was part of an evaluation of an innovative intervention focused on improving communication skills. His school was using the minimal speech approach (Potter and Whittaker, 2001) with all children in accordance with government guidance. The pupil's receptive language had not been formally assessed due to his lack of…
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
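The reported effect amounts to a simple operation on a measured TRF: a rigid translation along the intensity axis whose size grows linearly with noise level above the critical level. A sketch (the 1 dB/dB slope is an assumption corresponding to a pure translation, i.e., the noise acting like attenuation of the tone):

```python
import numpy as np

def shift_trf(trf, intensity_step_db, noise_db, critical_db, slope_db_per_db=1.0):
    """Translate a tonal receptive field (rows = increasing sound level,
    cols = frequency) upward along the intensity axis by an amount that
    grows linearly with the noise level above the critical level."""
    shift_db = slope_db_per_db * max(0.0, noise_db - critical_db)
    k = int(round(shift_db / intensity_step_db))
    out = np.zeros_like(trf)
    if k < trf.shape[0]:
        out[k:] = trf[:trf.shape[0] - k]      # responses now require +shift_db
    return out
```

Because the whole field moves as a unit, the preferred characteristic frequency is preserved while the responsive frequency range at any fixed stimulus level narrows, as described above.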
Brainstem transcription of speech is disrupted in children with autism spectrum disorders
Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina
2009-01-01
Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) to specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively-elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
2013-09-01
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingually deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments that had been validated and standardized for the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language developed significantly in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when teaching hearing, language, and speech skills to children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators before and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Receptive and Productive Vocabulary Sizes of L2 Learners
ERIC Educational Resources Information Center
Webb, Stuart
2008-01-01
This study investigated the relationship between receptive and productive vocabulary size. The experimental design expanded upon earlier methodologies by using equivalent receptive and productive test formats with different receptive and productive target words to provide more accurate results. Translation tests were scored at two levels of…
Lucero, Jorge C.; Koenig, Laura L.; Lourenço, Kelem G.; Ruty, Nicolas; Pelorson, Xavier
2011-01-01
This paper examines an updated version of a lumped mucosal wave model of the vocal fold oscillation during phonation. Threshold values of the subglottal pressure and the mean (DC) glottal airflow for the oscillation onset are determined. Depending on the nonlinear characteristics of the model, an oscillation hysteresis phenomenon may occur, with different values for the oscillation onset and offset thresholds. The threshold values depend on the oscillation frequency, but the occurrence of the hysteresis is independent of it. The results are tested against pressure data collected from a mechanical replica of the vocal folds, and oral airflow data collected from speakers producing intervocalic /h/. In the human speech data, observed differences between voice onset and offset may be attributed to variations in voice pitch, with a very small or nonexistent hysteresis effect. PMID:21428520
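One practical way to quantify the onset/offset hysteresis discussed here is to sweep a control parameter (e.g., subglottal pressure) up and then down while tracking oscillation amplitude, and read off the pressures at which the amplitude crosses a criterion in each direction. The sketch below is a generic illustration under that assumption, using synthetic data; it is not the analysis pipeline used in the paper.

```python
import numpy as np

def onset_offset_pressures(pressure, amplitude, criterion=0.05):
    """Return (onset, offset) pressures from an up-then-down sweep."""
    peak = int(np.argmax(pressure))                 # turning point of the sweep
    up_p, up_a = pressure[:peak + 1], amplitude[:peak + 1]
    down_p, down_a = pressure[peak:], amplitude[peak:]
    onset = up_p[np.argmax(up_a >= criterion)]      # first crossing going up
    offset = down_p[np.argmax(down_a < criterion)]  # first drop going down
    return onset, offset

# Example: synthetic sweep in which oscillation starts at 0.8 kPa on the way
# up but only stops at about 0.6 kPa on the way down (hysteresis of ~0.2 kPa).
p_up = np.linspace(0.0, 1.5, 151)
p_down = p_up[::-1]
a_up = np.where(p_up >= 0.8, 0.1, 0.0)
a_down = np.where(p_down >= 0.6, 0.1, 0.0)
onset, offset = onset_offset_pressures(np.concatenate([p_up, p_down]),
                                       np.concatenate([a_up, a_down]))
print(onset, offset)  # ~0.8, ~0.6
```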
Neuroscience-inspired computational systems for speech recognition under noisy conditions
NASA Astrophysics Data System (ADS)
Schafer, Phillip B.
Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes advantage of the neural representation's invariance in noise. The scheme centers on a speech similarity measure based on the longest common subsequence between spike sequences. The combined encoding and decoding scheme outperforms a benchmark system in extremely noisy acoustic conditions. Finally, I consider methods for decoding spike representations of continuous speech. To help guide the alignment of templates to words, I design a syllable detection scheme that robustly marks the locations of syllabic nuclei. The scheme combines SVM-based training with a peak selection algorithm designed to improve noise tolerance. By incorporating syllable information into the ASR system, I obtain strong recognition results in noisy conditions, although the performance in noiseless conditions is below the state of the art. The work presented here constitutes a novel approach to the problem of ASR that can be applied in the many challenging acoustic environments in which we use computer technologies today. The proposed spike-based processing methods can potentially be exploited in efficient hardware implementations and could significantly reduce the computational costs of ASR. The work also provides a framework for understanding the advantages of spike-based acoustic coding in the human brain.
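The longest-common-subsequence (LCS) similarity named in this abstract is a classic dynamic-programming algorithm. A minimal Python sketch follows; representing each spike sequence as an ordered list of integer neuron labels is an assumption for illustration, and the dissertation's actual encoding and normalization may differ.

```python
def lcs_length(a: list[int], b: list[int]) -> int:
    """Classic O(len(a)*len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def spike_similarity(a: list[int], b: list[int]) -> float:
    """Normalize LCS length so that identical sequences score 1.0."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))

# Example: a noisy utterance preserves most of the template's spike order.
template = [3, 1, 4, 1, 5, 9, 2, 6]
noisy    = [3, 4, 1, 5, 9, 6]
print(spike_similarity(template, noisy))  # 0.75
```

Because the LCS tolerates insertions and deletions, spikes dropped or added by noise reduce the score only gradually, which is what makes this measure attractive for template matching in noise.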
Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.
Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T
2013-09-01
Relatively little is known about the neurobiological basis of speech disorders, although genetic determinants are increasingly recognized. The first gene identified for primary speech disorder was FOXP2, found in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases with a severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequence alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of each variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in two of three family members with stuttering, and also in the mother, who had oral motor impairment. This variant was considered a benign polymorphism, as it was predicted to be non-pathogenic with in silico tools and was found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.
Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia.
Morgan, Angela T; Haaften, Leenke van; van Hulst, Karen; Edley, Carol; Mei, Cristina; Tan, Tiong Yang; Amor, David; Fisher, Simon E; Koolen, David A
2018-01-01
Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and the application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males; 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0-27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from the onset of first words (on average at 2;5 to 3;5 [years;months] of age). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign language) were critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding 'double hit' of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably, however, speech prognosis was positive: apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.
Right Ear Advantage of Speech Audiometry in Single-sided Deafness.
Wettstein, Vincent G; Probst, Rudolf
2018-04-01
Postlingual single-sided deafness (SSD) is defined as normal hearing in one ear and severely impaired hearing in the other ear. A right ear advantage and dominance of the left hemisphere are well-established findings in individuals with normal hearing and speech processing. Therefore, it seems plausible that a right ear advantage would also exist in patients with SSD. The audiometric database was searched to identify patients with SSD. Results from the German monosyllabic Freiburg word test and the four-syllabic number test in quiet were evaluated, and results from right-sided SSD were compared with those from left-sided SSD. Statistical calculations were done with the Mann-Whitney U test. Four hundred and six patients with SSD were identified, 182 with right-sided and 224 with left-sided SSD. The two groups had similar pure-tone thresholds without significant differences. All test parameters of speech audiometry had better values for right ears (SSD left) when compared with left ears (SSD right). Statistically significant results (p < 0.05) were found for a weighted score (social index, 98.2 ± 4% right and 97.5 ± 4.7% left, p = 0.026), for word understanding at 60 dB SPL (95.2 ± 8.7% right and 93.9 ± 9.1% left, p = 0.035), and for the level at which 100% understanding was reached (61.5 ± 10.1 dB SPL right and 63.8 ± 11.1 dB SPL left, p = 0.022) on a performance-level function. A right ear advantage in speech audiometry was found in patients with SSD in this retrospective study of audiometric test results.
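For illustration, the group comparison reported here (right-sided versus left-sided SSD scores, Mann-Whitney U test) can be reproduced in a few lines with scipy. The score arrays below are random placeholders generated from the reported group sizes, means, and standard deviations, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical word-understanding scores at 60 dB SPL (percent correct).
# Left-sided SSD means the right (normal) ear is tested, and vice versa.
ssd_left  = rng.normal(95.2, 8.7, size=224).clip(0, 100)  # right ears tested
ssd_right = rng.normal(93.9, 9.1, size=182).clip(0, 100)  # left ears tested

stat, p = mannwhitneyu(ssd_left, ssd_right, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3f}")
```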
Assessing cognitive functioning in females with Rett syndrome by eye-tracking methodology.
Ahonniska-Assa, Jaana; Polack, Orli; Saraf, Einat; Wine, Judy; Silberg, Tamar; Nissenkorn, Andreea; Ben-Zeev, Bruria
2018-01-01
While many individuals with severe developmental impairments learn to communicate with augmentative and alternative communication (AAC) devices, a significant number of individuals show major difficulties in the effective use of AAC. Recent technological innovations, such as eye-tracking technology (ETT), aim to improve the transparency of communication and may also enable a more valid cognitive assessment. The aim of this study was to investigate whether ETT in forced-choice tasks can enable children with very severe motor and speech impairments to respond consistently, allowing a more reliable evaluation of their language comprehension. Participants were 17 girls with Rett syndrome (mean age = 6 years 6 months). Their ability to respond by eye gaze was first practiced with computer games using ETT. Afterwards, their receptive vocabulary was assessed using the Peabody Picture Vocabulary Test-4 (PPVT-4). Target words were presented orally and participants responded by focusing their eyes on the chosen picture. Remarkable differences between the participants in receptive vocabulary were demonstrated using ETT. The verbal comprehension abilities of 32% of the participants ranged from low-average to mild cognitive impairment, and the other 68% of the participants showed moderate to severe impairment. Younger age at the time of assessment was associated with higher receptive vocabulary scores. The use of ETT seems to make the communicational signals of children with severe motor and communication impairments more easily understood. Early practice with ETT may improve the quality of communication and enable more reliable conclusions in learning and assessment sessions. Copyright © 2017 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
Shetty, Hemanth Narayan; Koonoor, Vishal
2016-11-01
Past research has reported that repeated occurrences of otitis media (OM) at an early age have a negative impact on speech perception at a later age. This motivates documenting the contributions of temporal and spectral processing to speech perception in noise in normal and atypical groups. The present study evaluated the relation between speech perception in noise and temporal and spectral processing abilities in children from normal and atypical groups. The study included two experiments. In the first experiment, temporal resolution and frequency discrimination were evaluated, using measures of the temporal modulation transfer function and a frequency discrimination test, in listeners from the normal group and from three subgroups of the atypical group (with a history of OM): a) fewer than four episodes, b) four to nine episodes, and c) more than nine episodes between the chronological ages of 6 months and 2 years. In the second experiment, SNR-50 (the signal-to-noise ratio yielding 50% correct recognition) was evaluated for each group of study participants. All participants had normal hearing and middle ear status at the time of testing. Results demonstrated that children in the atypical group had significantly poorer modulation detection thresholds, peak sensitivity, and bandwidth, and poorer frequency discrimination at each F0, than normal-hearing listeners. Furthermore, there was a significant correlation between measures of temporal resolution, frequency discrimination, and speech perception in noise. This implies that the atypical groups have significant impairment in extracting envelope as well as fine-structure cues from the signal. The results support the idea that episodes of OM before 2 years of age can produce periods of sensory deprivation that alter temporal and spectral skills, which in turn has negative consequences for speech perception in noise. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
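SNR-50 denotes the signal-to-noise ratio at which half of the speech material is recognized correctly. A common way to estimate it is an adaptive one-down/one-up staircase, which converges on the 50% point of the psychometric function; the sketch below illustrates that idea under an assumed step size, trial count, and scoring callback, and is not necessarily the procedure used in this study.

```python
from statistics import mean

def track_snr50(score_trial, start_snr_db=0.0, step_db=2.0, n_trials=20):
    """Run a 1-down/1-up staircase; estimate SNR-50 from the later trials."""
    snr = start_snr_db
    history = []
    for _ in range(n_trials):
        correct = score_trial(snr)      # True if the listener responds correctly
        history.append(snr)
        snr += -step_db if correct else step_db  # harder after a hit, easier after a miss
    return mean(history[4:])            # discard the initial approach trials

# Example with a simulated listener whose true SNR-50 is -5 dB.
import random
print(track_snr50(lambda snr: random.random() < 0.5 + 0.05 * (snr + 5.0)))
```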
Advantages of binaural amplification to acceptable noise level of directional hearing aid users.
Kim, Ja-Hee; Lee, Jae Hee; Lee, Ho-Ki
2014-06-01
The goal of the present study was to examine whether Acceptable Noise Levels (ANLs) would be lower (indicating greater acceptance of noise) in the binaural than in the monaural listening condition, and whether the meaningfulness of background speech noise would affect ANLs for directional microphone hearing aid users. In addition, relationships between individual binaural benefits in ANLs and demographic information were investigated. Fourteen hearing aid users (mean age, 64 years) participated in experimental testing. For the ANL calculation, listeners' most comfortable listening levels and maximum acceptable background noise levels were measured. Using Korean ANL material, ANLs of all participants were evaluated under monaural and binaural amplification in counterbalanced order. The ANLs were also compared across five types of competing speech noise, consisting of 1- through 8-talker background speech maskers. Seven young normal-hearing listeners (mean age, 27 years) completed the same measurements as a pilot test. The results demonstrated that directional hearing aid users accepted more noise (lower ANLs) with binaural amplification than with monaural amplification, regardless of the type of competing speech. When the background speech noise became more meaningful, hearing-impaired listeners accepted less noise (higher ANLs), revealing that the ANL depends on the intelligibility of the competing speech. Individual binaural advantages in ANLs were significantly greater for listeners with longer experience of hearing aids, but were not related to age or hearing thresholds. Binaural directional microphone processing allowed hearing aid users to accept a greater amount of background noise, which may in turn improve hearing aid success. Informational masking substantially influenced background noise acceptance. Given the significant association between ANLs and duration of hearing aid use, ANL measurement can be useful for clinical counseling of binaural hearing aid candidates or unsuccessful users.
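The ANL metric used throughout this abstract is conventionally computed as the most comfortable listening level (MCL) minus the highest background noise level (BNL) the listener accepts while following speech. A minimal sketch, with illustrative values rather than data from the study:

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL in dB: MCL minus BNL; lower values mean greater noise acceptance."""
    return mcl_db - bnl_db

# Hypothetical example of a binaural advantage on the ANL.
monaural_anl = acceptable_noise_level(mcl_db=65.0, bnl_db=55.0)  # 10 dB
binaural_anl = acceptable_noise_level(mcl_db=65.0, bnl_db=59.0)  # 6 dB
print(monaural_anl - binaural_anl)  # 4 dB binaural advantage
```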
Fogerty, Daniel; Ahlstrom, Jayne B.; Bologna, William J.; Dubno, Judy R.
2015-01-01
This study investigated how single-talker modulated noise impacts consonant and vowel cues to sentence intelligibility. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners completed speech recognition tests. All listeners received spectrally shaped speech matched to their individual audiometric thresholds to ensure sufficient audibility, with the exception of a second younger listener group, who received spectral shaping matched to the mean audiogram of the hearing-impaired listeners. Results demonstrated minimal declines in intelligibility for older listeners with normal hearing and more evident declines for older hearing-impaired listeners, possibly related to impaired temporal processing. A correlational analysis suggests a common underlying ability to process information during vowels that is predictive of speech-in-modulated-noise abilities, whereas the ability to use consonant cues appears specific to the particular characteristics of the noise and interruption. Performance declines for older listeners were mostly confined to the consonant conditions. Spectral shaping accounted for the primary contributions of audibility. However, comparison with the young spectral controls, who received identical spectral shaping, suggests that this procedure may reduce wideband temporal modulation cues through frequency-specific amplification that affects high-frequency consonants more than low-frequency vowels. These spectral changes may impact speech intelligibility in certain modulation masking conditions. PMID:26093436
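As a rough illustration of the spectral-shaping manipulation described here, the sketch below applies audiogram-based frequency-specific gain to a signal via FFT filtering, so that higher-threshold frequency regions receive more amplification. The half-gain rule and the example audiogram are assumptions for illustration; the study's actual shaping prescription is not specified in the abstract.

```python
import numpy as np

def spectrally_shape(signal: np.ndarray, fs: float,
                     audiogram_hz: np.ndarray,
                     thresholds_db: np.ndarray) -> np.ndarray:
    """Apply frequency-specific gain (here, half the threshold in dB)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain_db = 0.5 * np.interp(freqs, audiogram_hz, thresholds_db)  # half-gain rule
    spectrum *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a sloping high-frequency loss receives more high-frequency gain.
fs = 16000.0
audiogram_hz = np.array([250, 500, 1000, 2000, 4000, 8000], dtype=float)
thresholds_db = np.array([10, 15, 25, 40, 55, 65], dtype=float)
noise = np.random.default_rng(1).standard_normal(16000)
shaped = spectrally_shape(noise, fs, audiogram_hz, thresholds_db)
```

Note that boosting high frequencies more than low frequencies alters the wideband envelope, which is consistent with the authors' caution that shaping may reduce wideband temporal modulation cues.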