ERIC Educational Resources Information Center
Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo
2007-01-01
Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise without introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments, and that it also improved lateralization thresholds in the anechoic environment while leaving lateralization thresholds in the reverberant environments unaffected. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
The effect of guessing on the speech reception thresholds of children.
Moodley, A
1990-01-01
Speech audiometry is an essential part of the assessment of hearing-impaired children and is now widely used throughout the United Kingdom. Although instructions are universally agreed to be an important aspect of administering any form of audiometric test, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used in the study. It is argued that the type of instruction normally used for Speech Reception Threshold testing may not provide sufficient control for guessing, and the implications of this are examined using data obtained in the study.
ERIC Educational Resources Information Center
Besser, Jana; Zekveld, Adriana A.; Kramer, Sophia E.; Rönnberg, Jerker; Festen, Joost M.
2012-01-01
Purpose: In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. Method: The…
Simulated Critical Differences for Speech Reception Thresholds
ERIC Educational Resources Information Center
Pedersen, Ellen Raben; Juhl, Peter Møller
2017-01-01
Purpose: Critical differences state by how much 2 test results have to differ in order to be significantly different. Critical differences for discrimination scores have been available for several decades, but they do not exist for speech reception thresholds (SRTs). This study presents and discusses how critical differences for SRTs can be…
Sinex, Donal G.
2013-01-01
Binary time-frequency (TF) masks can be applied to separate speech from noise. Previous studies have shown that with appropriate parameters, ideal TF masks can extract highly intelligible speech even at very low speech-to-noise ratios (SNRs). Two psychophysical experiments provided additional information about the dependence of intelligibility on the frequency resolution and threshold criteria that define the ideal TF mask. Listeners identified AzBio sentences in noise, before and after application of TF masks. Masks generated with 8 or 16 frequency bands per octave supported nearly perfect identification. Word recognition accuracy was slightly lower and more variable with 4 bands per octave. When TF masks were generated with a local threshold criterion of 0 dB SNR, the mean speech reception threshold was −9.5 dB SNR, compared to −5.7 dB for unprocessed sentences in noise. Speech reception thresholds decreased by about 1 dB per dB of additional decrease in the local threshold criterion. The information reported here about the dependence of speech intelligibility on frequency and level parameters is relevant to the development of non-ideal TF masks for clinical applications such as speech processing for hearing aids.
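The ideal binary mask lends itself to a compact implementation. The sketch below is a minimal illustration, assuming separate access to the clean speech and the noise (which is what makes the mask "ideal") and using uniform STFT bins in place of the bands-per-octave filterbank of the study; `criterion_db` plays the role of the local threshold criterion.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, criterion_db=0.0, nperseg=512):
    """Keep only time-frequency bins whose local SNR exceeds the criterion."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)

    local_snr_db = 10 * np.log10((np.abs(S) ** 2 + 1e-12) /
                                 (np.abs(N) ** 2 + 1e-12))
    mask = local_snr_db > criterion_db  # binary decision per TF bin

    # Apply the mask to the mixture and resynthesize
    _, masked = istft((S + N) * mask, fs=fs, nperseg=nperseg)
    return masked

# Toy example: a 1-kHz tone standing in for speech, in white noise
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 1000 * t)
noise = 1.26 * np.random.randn(fs)  # roughly -5 dB SNR relative to the tone
cleaned = ideal_binary_mask(speech, noise, fs)
```

Lowering `criterion_db` below 0 dB retains more of the mixture; per the abstract, each dB of decrease in the local criterion lowered the speech reception threshold by about 1 dB.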
Zhou, Ning
2017-03-01
The study examined whether the benefit of deactivating stimulation sites estimated to have broad neural excitation was attributable to improved spectral resolution in cochlear implant users. The subjects' spatial neural excitation pattern was estimated by measuring low-rate detection thresholds across the array [see Zhou (2016). PLoS One 11, e0165476]. Spectral resolution, as assessed by spectral-ripple discrimination thresholds, significantly improved after deactivation of five high-threshold sites. The magnitude of improvement in spectral-ripple discrimination thresholds predicted the magnitude of improvement in speech reception thresholds after deactivation. The results suggest that a smaller number of relatively independent channels provides a better outcome than using all channels that might interact.
Accuracy of cochlear implant recipients in speech reception in the presence of background music.
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
2012-12-01
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias
2017-12-01
Objective: To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness after intervention with a cochlear implant (CI) in the deaf ear. Study design: Nonrandomized, open, prospective patient series. Setting: Tertiary referral university hospital. Patients: Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear who received a CI. All patients were experienced CI users. Intervention: Unilateral cochlear implantation. Main outcome measures: Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness, with acoustic stimuli radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. Results: In the aided condition, most patients showed slightly improved speech reception scores compared with the unaided condition. For 8 of the 11 subjects, the SRT improved by 0.37 to 1.70 dB; the other 3 subjects showed deteriorations of 1.22 to 3.24 dB. The median localization error decreased significantly, by 12.9 degrees, compared with the unaided condition. Conclusions: CI in single-sided deafness is an effective treatment for improving auditory localization accuracy. Speech reception in complex noise conditions improved to a lesser extent, in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems mandatory to advance speech perception in demanding listening situations in SSD patients.
Processing load induced by informational masking is related to linguistic abilities.
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Rönnberg, Jerker; Kramer, Sophia E
2012-01-01
It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but rather in less effortful listening in the aided than in the unaided condition. Before such a benefit can be assessed, it must be known how processing load during listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing; the present study examined this relationship. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech than during fluctuating noise, independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from their positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception but also partly explain the higher processing load in complex listening conditions.
Corthals, Paul
2008-01-01
The aim of the present study is to construct a simple method for visualizing and quantifying the audibility of speech on the audiogram and to predict speech intelligibility. The proposed method involves a series of indices on the audiogram form reflecting the sound pressure level distribution of running speech. The indices that coincide with a patient's pure tone thresholds reflect speech audibility and give evidence of residual functional hearing capacity. Two validation studies were conducted among sensorineurally hearing-impaired participants (n = 56 and n = 37, respectively) to investigate the relation with speech recognition ability and hearing disability. The potential of the new audibility indices as predictors for speech reception thresholds is comparable to the predictive potential of the ANSI 1968 articulation index and the ANSI 1997 speech intelligibility index. The sum of indices or a weighted combination can explain considerable proportions of variance in speech reception results for sentences in quiet free field conditions. The proportions of variance that can be explained in questionnaire results on hearing disability are less, presumably because the threshold indices almost exclusively reflect message audibility and much less the psychosocial consequences of hearing deficits. The outcomes underpin the validity of the new audibility indexing system, even though the proposed method may be better suited for predicting relative performance across a set of conditions than for predicting absolute speech recognition performance.
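The idea of threshold-based audibility indexing can be illustrated with a toy calculation in the spirit of the classical articulation index (this is not the authors' index set; the speech levels and band weights below are hypothetical placeholders): for each audiometric frequency, estimate what fraction of the assumed 30-dB dynamic range of running speech remains above the pure-tone threshold, then combine the fractions with band-importance weights.

```python
import numpy as np

# Hypothetical speech peak levels (dB HL) and importance weights per band;
# the paper defines its own indices on the audiogram form.
freqs_hz = np.array([250, 500, 1000, 2000, 4000])
speech_peaks_db = np.array([55, 58, 54, 48, 44])
band_weights = np.array([0.10, 0.20, 0.30, 0.25, 0.15])  # sums to 1

def audibility_index(thresholds_db, peaks_db=speech_peaks_db,
                     weights=band_weights, dyn_range_db=30.0):
    """Importance-weighted fraction of the speech dynamic range that lies
    above the pure-tone threshold in each band (result between 0 and 1)."""
    audible = np.clip((peaks_db - thresholds_db) / dyn_range_db, 0.0, 1.0)
    return float(np.sum(weights * audible))

# Example: a sloping high-frequency loss removes most of the 2-4 kHz range
print(audibility_index(np.array([20, 25, 40, 55, 70])))
```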
Do Older Listeners With Hearing Loss Benefit From Dynamic Pitch for Speech Recognition in Noise?
Shen, Jing; Souza, Pamela E
2017-10-12
Dynamic pitch, the variation in the fundamental frequency of speech, aids older listeners' speech perception in noise. It is unclear, however, whether some older listeners with hearing loss benefit from strengthened dynamic pitch cues for recognizing speech in certain noise scenarios, and how this relative benefit may be associated with individual factors. We first examined older individuals' relative benefit from natural versus strong dynamic pitch for speech recognition in noise. Further, we report the individual factors of the 2 groups of listeners who benefit differently from natural and strong dynamic pitch. Speech reception thresholds of 13 older listeners with mild-moderate hearing loss were measured using target speech with 3 levels of dynamic pitch strength. An individual's ability to benefit from dynamic pitch was defined as the speech reception threshold difference between speech with and without dynamic pitch cues. The relative benefit of natural versus strong dynamic pitch varied across individuals. However, this relative benefit remained consistent for the same individuals across background noises with temporal modulation. The listeners who benefited more from strong dynamic pitch reported better subjective speech perception abilities. Strong dynamic pitch may thus be more beneficial than natural dynamic pitch for some older listeners recognizing speech in noise, particularly when the noise has temporal modulation.
Neher, Tobias
2017-02-01
To scrutinize the binaural contribution to speech-in-noise reception, four groups of elderly participants with or without audiometric asymmetry below 2 kHz and with or without near-normal binaural intelligibility level difference (BILD) completed tests of monaural and binaural phase sensitivity as well as cognitive function. Groups did not differ in age, overall degree of hearing loss, or cognitive function. Analyses revealed an influence of BILD status but not audiometric asymmetry on monaural phase sensitivity, strong correlations between monaural and binaural detection thresholds, and monaural and binaural, but not cognitive, contributions to the BILD. Furthermore, the N0Sπ threshold at 500 Hz predicted BILD performance effectively.
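For readers unfamiliar with the notation: in N0Sπ the noise is identical at both ears while the tone is presented with inverted polarity at one ear; in N0S0 both noise and tone are diotic. Generating such stimuli is straightforward. A minimal sketch, with levels chosen arbitrarily for illustration:

```python
import numpy as np

fs = 44100
dur = 0.5
t = np.arange(int(fs * dur)) / fs

noise = 0.1 * np.random.randn(t.size)       # same noise at both ears (N0)
tone = 0.02 * np.sin(2 * np.pi * 500 * t)   # 500-Hz target signal

# Rows are left/right ear signals
n0s0 = np.stack([noise + tone, noise + tone])   # tone in phase at both ears
n0spi = np.stack([noise + tone, noise - tone])  # tone polarity inverted at one ear
```

The binaural masking level difference is the improvement in detection threshold for the N0Sπ configuration relative to N0S0.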
Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2014-03-01
A recent pupillometry study on adults with normal hearing indicates that the pupil response during speech perception (cognitive processing load) is strongly affected by the type of speech masker. The current study extends these results by recording the pupil response in 32 participants with hearing impairment (mean age 59 yr) while they were listening to sentences masked by fluctuating noise or a single talker. Efforts were made to improve audibility of all sounds by means of spectral shaping. Additionally, participants performed tests measuring verbal working memory capacity, inhibition of interfering information in working memory, and linguistic closure. The results showed worse speech reception thresholds for speech masked by single-talker speech compared to fluctuating noise. In line with previous results for participants with normal hearing, the pupil response was larger when listening to speech masked by a single talker compared to fluctuating noise. Regression analysis revealed that larger working memory capacity and better inhibition of interfering information were related to better speech reception thresholds, but these variables did not account for inter-individual differences in the pupil response. In conclusion, people with hearing impairment show more cognitive load during speech processing when there is interfering speech than in fluctuating noise.
Decreased Speech-In-Noise Understanding in Young Adults with Tinnitus
Gilles, Annick; Schlee, Winny; Rabau, Sarah; Wouters, Kristien; Fransen, Erik; Van de Heyning, Paul
2016-01-01
Objectives: Young people are often exposed to high music levels, which puts them at greater risk of developing noise-induced symptoms such as hearing loss, hyperacusis, and tinnitus, of which the latter is the symptom most often perceived by young adults. Although subclinical neural damage has been demonstrated in animal experiments, the human correlate remains under debate. Controversy exists on the underlying condition of young adults with normal hearing thresholds and noise-induced tinnitus (NIT) due to leisure noise. The present study aimed to assess differences in audiological characteristics between noise-exposed adolescents with and without NIT. Methods: A group of 87 young adults with a history of recreational noise exposure was investigated by use of the following tests: otoscopy, impedance measurements, pure-tone audiometry including high frequencies, transient and distortion product otoacoustic emissions, speech-in-noise testing with continuous and modulated noise (amplitude-modulated at 15 Hz), auditory brainstem responses (ABR), and questionnaires. Nineteen students reported NIT due to recreational noise exposure, and their measures were compared to those of the non-tinnitus subjects. Results: No significant differences between tinnitus and non-tinnitus subjects could be found for hearing thresholds, otoacoustic emissions, and ABR results. Tinnitus subjects had significantly worse speech reception in noise compared to non-tinnitus subjects, for sentences embedded in steady-state noise (mean speech reception threshold (SRT) scores, respectively, −5.77 and −6.90 dB SNR; p = 0.025) as well as for sentences embedded in 15-Hz AM noise (mean SRT scores, respectively, −13.04 and −15.17 dB SNR; p = 0.013). In both groups speech reception was significantly improved during 15-Hz AM noise compared to the steady-state noise condition (p < 0.001). However, the modulation masking release was not affected by the presence of NIT. Conclusions: Young adults with and without NIT did not differ regarding audiometry, OAE, and ABR results. However, tinnitus patients showed decreased speech-in-noise reception. The results are discussed in light of previous findings suggesting that NIT may occur in the absence of measurable peripheral damage, as reflected in the speech-in-noise deficits of tinnitus subjects.
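The 15-Hz amplitude-modulated masker used in such speech-in-noise tests can be generated by imposing a sinusoidal envelope on a steady noise. A minimal sketch, assuming plain Gaussian noise in place of the actual speech-shaped masker and 100% modulation depth:

```python
import numpy as np

fs = 44100
dur = 3.0
t = np.arange(int(fs * dur)) / fs

carrier = np.random.randn(t.size)                  # stand-in for speech-shaped noise
envelope = 1.0 + 1.0 * np.sin(2 * np.pi * 15 * t)  # 15-Hz sinusoidal AM, depth m = 1
am_noise = carrier * envelope

# Rescale so the modulated and steady maskers are presented at equal RMS level
am_noise *= np.sqrt(np.mean(carrier ** 2) / np.mean(am_noise ** 2))
```

The envelope minima are the "dips" that allow listeners to glimpse the target speech, which is why SRTs are lower (better) in the AM condition for both groups.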
Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina
2015-01-01
There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] that was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2-kHz tone-in-broadband-noise simultaneous masking as a function of tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and the speech reception threshold of the German matrix sentence test in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and to obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.
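Two of the feature sets named here, logarithmically scaled Mel-spectrograms and Mel-frequency cepstral coefficients, can be computed with standard tools. A sketch using librosa; the parameter values are illustrative, not those of the study:

```python
import numpy as np
import librosa

# Any mono signal will do for illustration; here, one second of a 440-Hz tone
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t).astype(np.float32)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
log_mel = np.log(mel + 1e-10)                       # log-scaled Mel-spectrogram
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # Mel-frequency cepstral coeffs
```

Both representations are purely spectral per frame; the Gabor filter bank features favored in the study additionally capture spectro-temporal modulation, which is one candidate for the across-frequency processing the authors found crucial.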
The effect of noise-induced hearing loss on the intelligibility of speech in noise
NASA Astrophysics Data System (ADS)
Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.
1981-06-01
Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10%, respectively). Hearing loss for speech in quiet measured with respect to the median speech reception threshold for normal ears ranged from 1.8 dB to 13.4 dB. For speech in noise the numbers are 1.2 dB to 7.0 dB which means that the subjects with noise-induced hearing loss need a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap conceived as a reduced ability to understand speech and tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.
2014-01-01
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources.
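A minimal sketch of the phase-difference principle such an algorithm relies on (an illustration, not the published implementation): assuming for simplicity a broadside microphone pair, sound from straight ahead reaches both microphones in phase, so STFT bins whose inter-microphone phase difference exceeds what a near-frontal source could produce are attenuated.

```python
import numpy as np
from scipy.signal import stft, istft

def phase_difference_filter(left, right, fs, spacing_m=0.01,
                            pass_deg=20.0, atten=0.1, nperseg=256):
    """Attenuate time-frequency bins whose inter-microphone phase
    difference implies an off-axis source (broadside pair assumed)."""
    c = 343.0  # speed of sound in m/s
    f, _, L = stft(left, fs=fs, nperseg=nperseg)
    _, _, R = stft(right, fs=fs, nperseg=nperseg)

    # Largest phase difference a source within +/- pass_deg can produce
    max_delay = spacing_m * np.sin(np.deg2rad(pass_deg)) / c
    allowed = 2 * np.pi * f[:, None] * max_delay

    dphi = np.abs(np.angle(L * np.conj(R)))  # measured phase difference per bin
    gain = np.where(dphi <= allowed, 1.0, atten)

    _, out = istft(0.5 * (L + R) * gain, fs=fs, nperseg=nperseg)
    return out
```

With 1-cm spacing the inter-microphone delays are tiny (tens of microseconds), which keeps the phase differences unambiguous across the speech band and is what allows the pair to fit inside a behind-the-ear capsule.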
Iwasaki, Satoshi; Usami, Shin-Ichi; Takahashi, Haruo; Kanda, Yukihiko; Tono, Tetsuya; Doi, Katsumi; Kumakawa, Kozo; Gyo, Kiyofumi; Naito, Yasushi; Kanzaki, Sho; Yamanaka, Noboru; Kaga, Kimitaka
2017-07-01
To report on the safety and efficacy of an investigational active middle ear implant (AMEI) in Japan, and to compare results to preoperative results with a hearing aid. Prospective study conducted in Japan in which 23 Japanese-speaking adults suffering from conductive or mixed hearing loss received a VIBRANT SOUNDBRIDGE with implantation at the round window. Postoperative thresholds, speech perception results (word recognition scores, speech reception thresholds, signal-to-noise ratio [SNR]), and quality of life questionnaires at 20 weeks were compared with preoperative results with all patients receiving the same, best available hearing aid (HA). Statistically significant improvements in postoperative AMEI-aided thresholds (1, 2, 4, and 8 kHz) and on the speech reception thresholds and word recognition scores tests, compared with preoperative HA-aided results, were observed. On the SNR, the subjects' mean values showed statistically significant improvement, with -5.7 dB SNR for the AMEI-aided mean and -2.1 dB SNR for the preoperative HA-assisted mean. The APHAB quality of life questionnaire also showed statistically significant improvement with the AMEI. Results with the AMEI applied to the round window exceeded those of the best available hearing aid in speech perception as well as quality of life questionnaires. There were minimal adverse events or changes to patients' residual hearing.
Hochmuth, Sabine; Kollmeier, Birger; Brand, Thomas; Jürgens, Tim
2015-01-01
To compare speech reception thresholds (SRTs) in noise using matrix sentence tests in four languages: German, Spanish, Russian, and Polish. The four tests were composed of equivalent five-word sentences and were all designed and optimized using the same principles. Six stationary speech-shaped noises and three non-stationary noises were used as maskers. Forty native listeners with normal hearing participated: 10 for each language. SRTs were about 3 dB higher for the German and Spanish tests than for the Russian and Polish tests when stationary noise was used that matched the long-term frequency spectrum of the respective speech test materials. This general SRT difference was also observed for the other stationary noises. The within-test variability across noise conditions differed between languages. About 56% of the observed variance was predicted by the speech intelligibility index. The observed SRT benefit in fluctuating noise was similar for all tests, with a slightly smaller benefit for the Spanish test. Of the stationary noises employed, noise with the same spectrum as the speech yielded the most effective masking. SRT differences across languages and noises could be attributed in part to spectral differences. These findings demonstrate the feasibility, and the limits, of comparing audiological results across languages.
Schoof, Tim; Rosen, Stuart
2014-01-01
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to speech-in-noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not in SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed), although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise, as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed.
Dietz, Mathias; Hohmann, Volker; Jürgens, Tim
2015-01-01
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types.
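The simulated listener types combine two standard manipulations: a noise vocoder to mimic cochlear implant processing and a low-pass filter to mimic residual low-frequency acoustic hearing. A minimal sketch of both (channel count, band edges, and cutoff are illustrative, not the study's parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocoder(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Crude noise vocoder: analysis bands log-spaced between lo and hi Hz;
    each band's Hilbert envelope re-modulates band-limited noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
        band = filtfilt(b, a, x)
        env = np.abs(hilbert(band))
        carrier = filtfilt(b, a, np.random.randn(x.size))
        out += env * carrier
    return out

def residual_acoustic(x, fs, cutoff=500.0):
    """Low-pass filter simulating residual low-frequency acoustic hearing."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)

# Simulated electric-acoustic listening: vocoded signal plus low-pass signal
fs = 16000
x = np.random.randn(fs)  # stand-in for a speech signal
eas = noise_vocoder(x, fs) + residual_acoustic(x, fs)
```

Applying these processors independently to left and right ear signals yields the bilateral, unilateral, and combined listener types the study compares.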
Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.
2014-01-01
This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution.
Cognitive abilities relate to self-reported hearing disability.
Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E
2013-10-01
In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
Haumann, Sabine; Hohmann, Volker; Meis, Markus; Herzke, Tobias; Lenarz, Thomas; Büchner, Andreas
2012-01-01
Owing to technological progress and a growing body of clinical experience, indication criteria for cochlear implants (CI) are being extended to less severe hearing impairments. It is, therefore, worth reconsidering these indication criteria by introducing novel testing procedures. The diagnostic evidence collected will be evaluated. The investigation includes postlingually deafened adults seeking a CI. Prior to surgery, speech perception tests [Freiburg Speech Test and Oldenburg sentence (OLSA) test] were performed unaided and aided using the Oldenburg Master Hearing Aid (MHA) system. Linguistic skills were assessed with the visual Text Reception Threshold (TRT) test, and general state of health, socio-economic status (SES) and subjective hearing were evaluated through questionnaires. After surgery, the speech tests were repeated aided with a CI. To date, 97 complete data sets are available for evaluation. Statistical analyses showed significant correlations between postsurgical speech reception threshold (SRT) measured with the adaptive OLSA test and pre-surgical data such as the TRT test (r=−0.29), SES (r=−0.22) and (if available) aided SRT (r=0.53). The results suggest that new measures and setups such as the TRT test, SES and speech perception with the MHA provide valuable extra information regarding indication for CI.
NASA Astrophysics Data System (ADS)
Oxenham, Andrew J.; Rosengard, Peninah S.; Braida, Louis D.
2004-05-01
Cochlear damage can lead to a reduction in the overall amount of peripheral auditory compression, presumably due to outer hair cell (OHC) loss or dysfunction. The perceptual consequences of functional OHC loss include loudness recruitment and reduced dynamic range, poorer frequency selectivity, and poorer effective temporal resolution. These in turn may lead to a reduced ability to make use of spectral and temporal fluctuations in background noise when listening to a target sound, such as speech. We tested the effect of OHC function on speech reception in hearing-impaired listeners by comparing psychoacoustic measures of cochlear compression and sentence recognition in a variety of noise backgrounds. In line with earlier studies, we found weak (nonsignificant) correlations between the psychoacoustic tasks and speech reception thresholds in quiet or in steady-state noise. However, when spectral and temporal fluctuations were introduced in the masker, speech reception improved to an extent that was well predicted by the psychoacoustic measures. Thus, our initial results suggest a strong relationship between measures of cochlear compression and the ability of listeners to take advantage of spectral and temporal masker fluctuations in recognizing speech. [Work supported by NIH Grants Nos. R01DC03909, T32DC00038, and R01DC00117.]
Rader, T
2015-02-01
Cochlear implantation with the aim of hearing preservation for combined electric-acoustic stimulation (EAS) is the therapy of choice for patients with residual low-frequency hearing. Preserved residual acoustic hearing has a positive effect on speech intelligibility in difficult noise conditions. The goal of this study was to assess speech reception thresholds in various complex noise conditions for patients with EAS in comparison with patients using bilateral cochlear implants (CI). Speech perception in noise was measured for bilateral CI and EAS patient groups. A total of 22 listeners with normal hearing served as a control group. Speech reception thresholds (SRT) were measured using a closed-set sentence matrix test. Speech was presented from a single source in frontal position; noise was presented in frontal position or in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources. Modulated speech-simulating noise and pseudocontinuous noise, which differ in their temporal characteristics, served as interference signals. The average SRTs in the EAS group were significantly better in all test conditions than those of the group with bilateral CI. Both user groups showed significant improvement in the MSNF condition compared with the frontal noise condition as a result of bilateral interaction. The normal-hearing control group was able to use short temporal gaps in modulated noise to improve speech perception in noise (gap listening). This effect was absent in both implanted user groups. Patients with combined EAS in one ear and a hearing aid in the contralateral ear show significantly improved speech perception in complex noise conditions compared with bilateral CI recipients.
Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A
2013-01-01
The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. The present study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners were also assessed on measures of interaural time difference thresholds for a 250-Hz signal. Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. There was no reliable relationship between audiometric thresholds in the implanted ear, or threshold elevation after surgery, and improvement in speech understanding in reverberation. There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.
Comparison of Fluoroplastic Causse Loop Piston and Titanium Soft-Clip in Stapedotomy
Faramarzi, Mohammad; Gilanifar, Nafiseh; Roosta, Sareh
2017-01-01
Introduction: Different types of prosthesis are available for stapes replacement. Because there has been no published report on the efficacy of the titanium soft-clip vs the fluoroplastic Causse loop Teflon piston, we compared short-term hearing results of both types of prosthesis in patients who underwent stapedotomy due to otosclerosis. Materials and Methods: A total of 57 ears were included in the soft-clip group and 63 ears were included in the Teflon-piston group. Pre-operative and post-operative air conduction, bone conduction, air-bone gaps, speech discrimination score, and speech reception thresholds were analyzed. Results: Post-operative speech reception threshold gains did not differ significantly between the two groups (P=0.919). However, better post-operative air-bone gap improvement at low frequencies was observed in the Teflon-piston group over the short-term follow-up (at frequencies of 0.25 and 0.50 kHz; P=0.007 and P=0.001, respectively). Conclusion: Similar post-operative hearing results were observed in the two groups in the short-term.
How linguistic closure and verbal working memory relate to speech recognition in noise--a review.
Besser, Jana; Koelewijn, Thomas; Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2013-06-01
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
A Spanish matrix sentence test for assessing speech reception thresholds in noise.
Hochmuth, Sabine; Brand, Thomas; Zokoll, Melanie A; Castro, Franz Zenker; Wardenga, Nina; Kollmeier, Birger
2012-07-01
To develop, optimize, and evaluate a new Spanish sentence test in noise. The test comprises a basic matrix of ten names, verbs, numerals, nouns, and adjectives. From this matrix, test lists of ten sentences with an equal syntactical structure can be formed at random, with each list containing the whole speech material. The speech material represents the phoneme distribution of the Spanish language. The test was optimized for measuring speech reception thresholds (SRTs) in noise by adjusting the presentation levels of the individual words. Subsequently, the test was evaluated by independent measurements investigating the training effects, the comparability of test lists, open-set vs. closed-set test format, and performance of listeners of different Spanish varieties. In total, 68 normal-hearing native Spanish-speaking listeners participated. SRTs measured using an adaptive procedure were −6.2 ± 0.8 dB SNR for the open-set and −7.2 ± 0.7 dB SNR for the closed-set test format. The residual training effect was less than 1 dB after using two double-lists before data collection. No significant differences were found for listeners of different Spanish varieties, indicating that the test is applicable to Spanish as well as Latin American listeners. Test lists can be used interchangeably.
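Matrix-test SRTs like these are typically tracked adaptively: the SNR of each sentence is adjusted according to the listener's score on the previous one, so that the track converges on the SNR giving 50% word intelligibility. A minimal sketch of such a procedure (the step rule is simplified; the published tests use more refined, slope-aware level adjustments):

```python
import numpy as np

def adaptive_srt(score_sentence, n_sentences=20, start_snr=0.0, step_db=2.0):
    """Track the SNR yielding 50% word intelligibility.

    `score_sentence(snr)` must return the fraction of words repeated
    correctly (0..1) for one five-word matrix sentence at that SNR.
    """
    snr = start_snr
    track = []
    for _ in range(n_sentences):
        frac_correct = score_sentence(snr)
        track.append(snr)
        # More than half the words correct -> lower the SNR (harder),
        # fewer than half -> raise it (easier)
        snr += step_db * (0.5 - frac_correct) * 2
    return np.mean(track[-10:])  # SRT estimate: mean SNR of late sentences

# Toy listener with a logistic psychometric function centered at -7 dB SNR
rng = np.random.default_rng(1)
def toy_listener(snr, srt=-7.0, slope=0.15):
    p = 1 / (1 + np.exp(-slope * 4 * (snr - srt)))
    return rng.binomial(5, p) / 5  # five words per matrix sentence

print(adaptive_srt(toy_listener))  # converges near -7 dB SNR
```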
Carroll, Rebecca; Meis, Markus; Schulte, Michael; Vormann, Matthias; Kießling, Jürgen; Meister, Hartmut
2015-02-01
To report the development of a standardized German version of a reading span test (RST) with a dual-task design. Special attention was paid to psycholinguistic control of the test items and to time-sensitive scoring. We aim to establish our RST version for determining an individual's working memory capacity in German-language hearing research. RST stimuli were controlled and pretested for psycholinguistic factors. The RST task was to read sentences, quickly determine their plausibility, and later recall certain words, in order to determine a listener's individual reading span. RST results were correlated with outcomes of additional sentence-in-noise tests measured in an aided and an unaided listening condition, each at two speech reception thresholds. Item plausibility was pre-determined by 28 native German participants. An additional 62 listeners (45-86 years, M = 69.8) with mild-to-moderate hearing loss were tested for speech intelligibility and reading span in a multicenter study. The reading span test correlated significantly with speech intelligibility at both speech reception thresholds in the aided listening condition. Our German RST is standardized with respect to psycholinguistic construction principles of the stimuli and is a cognitive correlate of intelligibility in a German matrix speech-in-noise test.
Binaural sluggishness in the perception of tone sequences and speech in noise.
Culling, J F; Colburn, H S
2000-01-01
The binaural system is well-known for its sluggish response to changes in the interaural parameters to which it is sensitive. Theories of binaural unmasking have suggested that detection of signals in noise is mediated by detection of differences in interaural correlation. If these theories are correct, improvements in the intelligibility of speech in favorable binaural conditions are most likely mediated by spectro-temporal variations in interaural correlation of the stimulus which mirror the spectro-temporal amplitude modulations of the speech. However, binaural sluggishness should limit the temporal resolution of the representation of speech recovered by this means. The present study tested this prediction in two ways. First, listeners' masked discrimination thresholds for ascending vs descending pure-tone arpeggios were measured as a function of rate of frequency change in the NoSo and NoSpi binaural configurations. Three-tone arpeggios were presented repeatedly and continuously for 1.6 s, masked by a 1.6-s burst of noise. In a two-interval task, listeners determined the interval in which the arpeggios were ascending. The results showed a binaural advantage of 12-14 dB for NoSpi at 3.3 arpeggios per s (arp/s), which reduced to 3-5 dB at 10.4 arp/s. This outcome confirmed that the discrimination of spectro-temporal patterns in noise is susceptible to the effects of binaural sluggishness. Second, listeners' masked speech-reception thresholds were measured in speech-shaped noise using speech at 1, 1.5, and 2 times the original articulation rate. The articulation rate was increased using a phase-vocoder technique which increased all the modulation frequencies in the speech without altering its pitch. Speech-reception thresholds were, on average, 5.2 dB lower for the NoSpi than for the NoSo configuration, at the original articulation rate. This binaural masking release was reduced to 2.8 dB when the articulation rate was doubled, but the most notable effect was a 6-8 dB increase in thresholds with articulation rate for both configurations. These results suggest that higher modulation frequencies in masked signals cannot be temporally resolved by the binaural system, but that the useful modulation frequencies in speech are sufficiently low (<5 Hz) that they are invulnerable to the effects of binaural sluggishness, even at elevated articulation rates.
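Increasing articulation rate without altering pitch, as done here with a phase vocoder, can be approximated with off-the-shelf tools. A sketch using librosa's phase-vocoder-based time stretching (not the authors' implementation; `sentence.wav` is a hypothetical input file):

```python
import librosa

y, sr = librosa.load("sentence.wav", sr=None)  # hypothetical speech recording

# rate > 1 shortens the signal: all modulation frequencies scale up by the
# same factor while the pitch is left unchanged
y_fast15 = librosa.effects.time_stretch(y, rate=1.5)
y_fast20 = librosa.effects.time_stretch(y, rate=2.0)
```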
Hearing status in patients with rheumatoid arthritis.
Ahmadzadeh, A; Daraei, M; Jalessi, M; Peyvandi, A A; Amini, E; Ranjbar, L A; Daneshi, A
2017-10-01
Rheumatoid arthritis is thought to induce conductive hearing loss and/or sensorineural hearing loss. This study evaluated the function of the middle ear and cochlea in rheumatoid arthritis patients, and the related factors. Pure tone audiometry, speech reception thresholds, speech discrimination scores, tympanometry, acoustic reflexes, and distortion product otoacoustic emissions were assessed in rheumatoid arthritis patients and healthy volunteers. Pure tone audiometry results revealed a higher bone conduction threshold in the rheumatoid arthritis group, but there was no significant difference when evaluated according to the sensorineural hearing loss definition. Distortion product otoacoustic emissions, the prevalence of conductive or mixed hearing loss, tympanometry values, acoustic reflexes, and speech discrimination scores were not significantly different between the two groups. Sensorineural hearing loss was significantly more prevalent in patients who used azathioprine, cyclosporine, or etanercept. Higher bone conduction thresholds at some frequencies were detected in rheumatoid arthritis patients but were not clinically significant. Sensorineural hearing loss is significantly more prevalent in refractory rheumatoid arthritis patients.
Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users.
Zekveld, Adriana A; Pronk, Marieke; Danielsson, Henrik; Rönnberg, Jerker
2018-03-15
The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
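Both TRT and SRT estimates rest on adaptive tracking toward a criterion point on a psychometric function (percentage of unmasked text for the TRT, signal-to-noise ratio for the SRT). A generic one-up/one-down sketch of that shared logic follows; the step size, run-in, and stopping rule are assumptions, not the exact published procedures:

```python
def adaptive_track(respond, start_level, step, n_trials=20):
    """Generic 1-up/1-down staircase converging on the 50% point.
    `respond(level)` returns True when the listener reproduces the
    sentence correctly; the level falls after a correct response and
    rises after an error. The threshold estimate is the mean of the
    levels visited after a short run-in."""
    level, levels = start_level, []
    for _ in range(n_trials):
        levels.append(level)
        level += -step if respond(level) else step
    return sum(levels[4:]) / len(levels[4:])  # discard the run-in

# Toy listener: correct whenever SNR exceeds a true threshold of -5 dB.
print(adaptive_track(lambda snr: snr > -5.0, start_level=0.0, step=2.0))
# prints -5.0: the track oscillates around the true threshold
```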
Ragab, A; Shreef, E; Behiry, E; Zalat, S; Noaman, M
2009-01-01
To investigate the safety and efficacy of ozone therapy in adult patients with sudden sensorineural hearing loss. Prospective, randomised, double-blinded, placebo-controlled, parallel-group clinical trial. Forty-five adult patients presenting with sudden sensorineural hearing loss were randomly allocated to receive either placebo (15 patients) or ozone therapy (auto-haemotherapy; 30 patients). For the latter treatment, 100 ml of the patient's blood was treated immediately with a 1:1 volume gaseous mixture of oxygen and ozone (from an ozone generator) and re-injected into the patient by intravenous infusion. Treatments were administered twice weekly for 10 sessions. The following data were recorded: pre- and post-treatment mean hearing gains; air and bone pure tone averages; speech reception thresholds; speech discrimination scores; and subjective recovery rates. Significant recovery was observed in 23 patients (77 per cent) receiving ozone treatment, compared with six patients (40 per cent) receiving placebo (p < 0.05). Mean hearing gains, pure tone averages, speech reception thresholds and subjective recovery rates were significantly better in ozone-treated patients compared with placebo-treated patients (p < 0.05). Ozone therapy is an effective modality for the treatment of sudden sensorineural hearing loss; no complications were observed.
Mostafapour, S P; Lahargoue, K; Gates, G A
1998-12-01
No consensus exists regarding the magnitude of the risk of noise-induced hearing loss (NIHL) associated with leisure noise, in particular, personal listening devices in young adults. To examine the magnitude of hearing loss associated with personal listening devices and other sources of leisure noise in causing NIHL in young adults. Prospective auditory testing of college student volunteers with retrospective history of exposure to home stereos, personal listening devices, firearms, and other sources of recreational noise. Subjects underwent audiologic examination consisting of estimation of pure-tone thresholds, speech reception thresholds, and word recognition at 45 dB HL. Fifty subjects aged 18 to 30 years were tested. All hearing thresholds of all subjects (save one: a unilateral 30 dB HL threshold at 6 kHz) were normal (i.e., 25 dB HL or better). A 10 dB threshold elevation (notch) in either ear at 3 to 6 kHz as compared with neighboring frequencies was noted in 11 subjects (22%), and an unequivocal notch (15 dB or greater) in either ear was noted in 14 subjects (28%). The presence or absence of any notch (small or large) did not correlate with any single or cumulative source of noise exposure. No difference in pure-tone threshold, speech reception threshold, or speech discrimination was found among subjects when segregated by noise exposure level. The majority of young users of personal listening devices are at low risk for substantive NIHL. Interpretation of the significance of these findings in relation to noise exposure must be made with caution. NIHL is an additive process and even subtle deficits may contribute to unequivocal hearing loss with continued exposure. The low prevalence of measurable deficits in this study group may not exclude more substantive deficits in other populations with greater exposures. Continued education of young people about the risk to hearing from recreational noise exposure is warranted.
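One plausible formalization of the notch criteria described above (treating 2 and 8 kHz as the "neighboring frequencies" is an assumption on our part):

```python
def classify_notch(audiogram, notch_freqs=(3000, 4000, 6000),
                   neighbors=(2000, 8000)):
    """Return 'unequivocal' (>=15 dB), 'small' (>=10 dB), or None.
    audiogram maps frequency in Hz to threshold in dB HL. A notch is a
    threshold elevated relative to BOTH neighboring frequencies; the
    neighbor choice (2 and 8 kHz) is an illustrative assumption."""
    lo, hi = neighbors
    depth = max(audiogram[f] - max(audiogram[lo], audiogram[hi])
                for f in notch_freqs)
    if depth >= 15:
        return "unequivocal"
    if depth >= 10:
        return "small"
    return None

ear = {2000: 5, 3000: 10, 4000: 20, 6000: 15, 8000: 5}
print(classify_notch(ear))  # 'unequivocal' (20 - max(5, 5) = 15 dB)
```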
Ng, Elaine H N; Classon, Elisabet; Larsby, Birgitta; Arlinger, Stig; Lunner, Thomas; Rudner, Mary; Rönnberg, Jerker
2014-11-23
The present study aimed to investigate the changing relationship between aided speech recognition and cognitive function during the first 6 months of hearing aid use. Twenty-seven first-time hearing aid users with symmetrical mild to moderate sensorineural hearing loss were recruited. Aided speech recognition thresholds in noise were obtained in the hearing aid fitting session as well as at 3 and 6 months postfitting. Cognitive abilities were assessed using a reading span test, which is a measure of working memory capacity, and a cognitive test battery. Results showed a significant correlation between reading span and speech reception threshold during the hearing aid fitting session. This relation weakened significantly over the first 6 months of hearing aid use. Multiple regression analysis showed that reading span was the main predictor of speech recognition thresholds in noise when hearing aids were first fitted, but that the pure-tone average hearing threshold was the main predictor 6 months later. One way of explaining the results is that working memory capacity plays a more important role in speech recognition in noise initially than after 6 months of use. We propose that new hearing aid users engage working memory capacity to recognize unfamiliar processed speech signals because the phonological form of these signals cannot be automatically matched to phonological representations in long-term memory. As familiarization proceeds, the mismatch effect is alleviated, and the engagement of working memory capacity is reduced.
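The kind of regression comparison reported above can be sketched with synthetic data. All variable names and effect sizes below are invented for illustration; the sketch only shows how two predictors can trade places across time points:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 27
reading_span = rng.normal(0, 1, n)   # standardized working memory score
pta = rng.normal(0, 1, n)            # standardized pure-tone average

# Hypothetical generating pattern: reading span dominates the SRT at
# fitting, the pure-tone average dominates after six months of use.
srt_fitting = -0.6 * reading_span + 0.2 * pta + rng.normal(0, 0.5, n)
srt_6months = -0.1 * reading_span + 0.6 * pta + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), reading_span, pta])
for label, y in [("fitting", srt_fitting), ("6 months", srt_6months)]:
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # The fitted coefficients roughly recover the generating pattern.
    print(f"{label}: reading span={beta[1]:+.2f}, PTA={beta[2]:+.2f}")
```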
Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Wilson, Blake S
2016-01-01
In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
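The contralateral coupling described above can be caricatured in a few lines. This is a cartoon of the idea with invented constants, not the published implementation:

```python
import numpy as np

def moc_like_pair(env_left, env_right, c_min=0.2, c_max=0.8, smooth=0.9):
    """Cartoon of MOC-inspired coupling: each channel's back-end
    compression exponent relaxes toward linear (less compression, hence
    lower output for sub-unity envelopes) as output energy rises in the
    same channel of the contralateral processor. All constants are
    illustrative. env_*: (channels, frames) envelopes in (0, 1]."""
    ch, frames = env_left.shape
    out_l, out_r = np.zeros_like(env_left), np.zeros_like(env_right)
    e_l, e_r = np.zeros(ch), np.zeros(ch)  # smoothed control energies
    for t in range(frames):
        # exponent grows (compression decreases) with contralateral energy
        c_for_l = c_min + (c_max - c_min) * np.clip(e_r, 0, 1)
        c_for_r = c_min + (c_max - c_min) * np.clip(e_l, 0, 1)
        out_l[:, t] = env_left[:, t] ** c_for_l
        out_r[:, t] = env_right[:, t] ** c_for_r
        e_l = smooth * e_l + (1 - smooth) * out_l[:, t]
        e_r = smooth * e_r + (1 - smooth) * out_r[:, t]
    return out_l, out_r

# Toy demo: a loud right-ear channel drives the left ear's output down,
# mimicking unmasking of a spatially separated target.
env_l, env_r = np.full((1, 200), 0.3), np.full((1, 200), 0.9)
out_l, _ = moc_like_pair(env_l, env_r)
print(out_l[0, 0], out_l[0, -1])  # left output falls as right energy builds
```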
Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language
ERIC Educational Resources Information Center
Reel, Leigh Ann; Hicks, Candace Bourland
2012-01-01
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
Plomp, R; Duquesnoy, A J
1980-12-01
This article deals with the combined effects of noise and reverberation on the speech-reception threshold for sentences. It is based on a series of current investigations on: (1) the modulation-transfer function as a measure of speech intelligibility in rooms, (2) the applicability of this concept to hearing-impaired persons, and (3) hearing loss for speech in quiet and in noise as a function of age. It is shown that, generally, in auditoria, classrooms, etc., the reverberation time T, acceptable for normal-hearing listeners, has to be reduced to (0.75)^D T in order to be acceptable for elderly subjects with a hearing loss of D dB for speech in noise; for listening conditions as in lounges, restaurants, etc., the corresponding value is (0.82)^D T.
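As a worked example of the formula (read with the exponent restored): for an elderly listener with D = 2 dB of hearing loss for speech in noise, a classroom reverberation time acceptable to normal-hearing listeners must be shortened to (0.75)^2 T ≈ 0.56 T, whereas in a lounge-like setting (0.82)^2 T ≈ 0.67 T suffices; each additional decibel of loss multiplies the acceptable reverberation time by a further factor of 0.75 (or 0.82).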
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Govender, Cyril
2007-01-01
The speech reception threshold (SRT) is best measured in an individual's first language. The present study focused on the development of a Zulu SRT word list, according to adapted criteria for SRT in Zulu. The aim of this paper is to present the process involved in the development of the Zulu word list. To acquire the data needed to realize this aim, 131 common bisyllabic Zulu words were identified by two Zulu-speaking language interpreters and two tertiary-level educators. Eighty-two percent of these words were described as bisyllabic verbs. Thereafter, using a three-point Likert scale, five linguistic experts rated 58 bisyllabic verbs as familiar, phonetically dissimilar, and low-tone. According to Kendall's coefficient of concordance, at the 95% level of confidence, agreement among the raters was good for each criterion. The results highlighted the importance of adapting the criteria for SRT to suit the structure of the language. An important research implication emerging from the study is the set of theoretical guidelines proposed for the development of SRT material in other African languages. Furthermore, the importance of using speech material appropriate to the language has also been highlighted. The developed SRT word list in Zulu is applicable to adult Zulu first-language speakers in KwaZulu-Natal.
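Kendall's coefficient of concordance, used above to quantify inter-rater agreement, follows a standard textbook formula; a sketch with invented ratings (the simple form below omits the tie correction):

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's W for a (raters x items) matrix of scores.
    W = 12 S / (m^2 (n^3 - n)), where S is the sum of squared
    deviations of the items' rank sums from their mean. Ties are
    averaged; the tie correction term is omitted for brevity."""
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within rater
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Five raters scoring four items on a 3-point scale (invented data).
ratings = np.array([[3, 2, 1, 3],
                    [3, 1, 1, 2],
                    [3, 2, 1, 3],
                    [2, 2, 1, 3],
                    [3, 1, 2, 3]])
print(f"W = {kendalls_w(ratings):.2f}")  # ~0.75; W = 1 is perfect agreement
```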
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Govender, Cyril
2009-01-01
The aim of this investigation was to determine which of the 58 preselected Zulu words developed by Panday et al. (2007) could be used for Speech Reception Threshold (SRT) testing. To realize this aim, the homogeneity of audibility of the 58 bisyllabic Zulu low-tone verbs was measured, followed by an analysis of the prosodic features of the selected words. The words were digitally recorded by a Zulu first-language male speaker and presented at six intensity levels to 30 Zulu first-language speakers (18-25 years, mean age 21.5 years) with normal hearing. Homogeneity of audibility was determined by employing logistic regression analysis. Twenty-eight words met the criterion of homogeneity of audibility, evidenced by a mean slope at the 50% point of 5.98%/dB. The prosodic features of the twenty-eight words were further analyzed using a computerized speech laboratory system. The findings confirmed that the pitch contours of the words followed the prosodic pattern apparent within Zulu linguistic structure. Eighty-nine percent of the Zulu verbs were found to have a difference in pitch pattern between the two syllables, i.e., the first syllable was low in pitch, while the second syllable was high in pitch. It emerged that the twenty-eight words could be used for establishing SRT within a normal-hearing Zulu-speaking population. Further research within clinical populations is recommended.
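The homogeneity-of-audibility analysis rests on fitting a logistic psychometric function per word and reading off its slope at the 50% point; a sketch with synthetic data (the levels and proportions below are invented, and the 5.98%/dB figure above corresponds to the slope of such a function at its midpoint):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x50, s):
    """Proportion of words heard correctly at presentation level x (dB);
    x50 is the 50% point and s the spread parameter."""
    return 1.0 / (1.0 + np.exp(-(x - x50) / s))

# Synthetic intelligibility data at six presentation levels.
levels = np.array([-10, -6, -2, 2, 6, 10], dtype=float)
p_correct = np.array([0.04, 0.15, 0.40, 0.68, 0.90, 0.97])

(x50, s), _ = curve_fit(logistic, levels, p_correct, p0=(0.0, 2.0))
slope_pct_per_db = 100.0 / (4.0 * s)   # logistic slope at the 50% point
print(f"50% point: {x50:.2f} dB, slope: {slope_pct_per_db:.2f} %/dB")
```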
Pedersen, K; Rosenhall, U
1991-01-01
The relationship between self-assessed hearing handicap and audiometric measures using pure-tone and speech audiometry was studied in a group of elderly persons representative of an urban Swedish population. The study population consisted of two cohorts, one of which was followed longitudinally. Significant correlations between measured and self-assessed hearing were found. Speech discrimination scores showed lower correlations with the self-estimated hearing than pure-tone averages and speech reception threshold. Questions concerning conversation with one person and concerning difficulty in hearing the doorbell showed lower correlations with measured hearing than the other questions. The discrimination score test is an inadequate tool for measuring hearing handicap.
Aided and Unaided Speech Perception by Older Hearing Impaired Listeners
Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William
2015-01-01
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds, and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners.
Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Gorospe, José M; Ruiz, Santiago Santa Cruz; Benito, Fernando; Wilson, Blake S
2017-05-01
We have recently proposed a binaural cochlear implant (CI) sound processing strategy inspired by the contralateral medial olivocochlear reflex (the MOC strategy) and shown that it improves intelligibility in steady-state noise (Lopez-Poveda et al., 2016, Ear Hear 37:e138-e148). The aim here was to evaluate possible speech-reception benefits of the MOC strategy for speech maskers, a more natural type of interferer. Speech reception thresholds (SRTs) were measured in six bilateral and two single-sided deaf CI users with the MOC strategy and with a standard (STD) strategy. SRTs were measured in unilateral and bilateral listening conditions, and for target and masker stimuli located at azimuthal angles of (0°, 0°), (-15°, +15°), and (-90°, +90°). Mean SRTs were 2-5 dB better with the MOC than with the STD strategy for spatially separated target and masker sources. For bilateral CI users, the MOC strategy (1) facilitated the intelligibility of speech in competition with spatially separated speech maskers in both unilateral and bilateral listening conditions; and (2) led to an overall improvement in spatial release from masking in the two listening conditions. Insofar as speech is a more natural type of interferer than steady-state noise, the present results suggest that the MOC strategy holds promise for CI users.
Smits, Cas; Merkus, Paul; Festen, Joost M.; Goverts, S. Theo
2017-01-01
Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of the variance in speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability, compared with other cognitive factors, on speech recognition in 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and, to a lesser extent, poor lexical-access ability limit speech-recognition ability in listeners with a CI.
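The SRTdiff construct above is simply a difference of two thresholds; a minimal sketch with invented values:

```python
# SRTdiff removes shared auditory variance: the digit-triplet test (DIN)
# taxes mostly auditory factors, so subtracting it from the sentence
# test (SIN) isolates linguistic/cognitive demands. Values are invented.
srt_sin = 4.5    # dB SNR for 50% correct sentences
srt_din = -1.5   # dB SNR for 50% correct digit triplets
srt_diff = srt_sin - srt_din
print(f"SRTdiff = {srt_diff:.1f} dB")  # 6.0 dB
```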
Musicians and non-musicians are equally adept at perceiving masked speech
Boebinger, Dana; Evans, Samuel; Scott, Sophie K.; Rosen, Stuart; Lima, César F.; Manly, Tom
2015-01-01
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that nonverbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise.
Cortical activation patterns correlate with speech understanding after cochlear implantation
Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S.; Oghalai, John S.
2015-01-01
Objectives: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, we used functional near-infrared spectroscopy (fNIRS) to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design: We studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. We used fNIRS to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). We also used environmental sounds as a control stimulus. Behavioral measures consisted of the Speech Reception Threshold, CNC words, and AzBio Sentence tests measured in quiet. Results: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the CNC Words and AzBio Sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced cortical activations in all implanted participants. Conclusions: Together, these data indicate that the responses we measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.
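The channelized and scrambled stimuli described above can be approximated with a noise vocoder; the sketch below makes several assumptions (noise carriers, log-spaced band edges, fourth-order filters, Hilbert envelopes) that the original study may not share:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, n_bands=20, scramble=False, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode `signal` into n_bands log-spaced bands; if `scramble`,
    reassign the band envelopes in random order so the result stays
    spectro-temporally speech-like but unintelligible. Assumes fs is
    high enough that f_hi is below the Nyquist frequency."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    envelopes, carriers = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        envelopes.append(np.abs(hilbert(sosfiltfilt(sos, signal))))
        carriers.append(sosfiltfilt(sos, noise))  # band-limited noise
    order = rng.permutation(n_bands) if scramble else np.arange(n_bands)
    out = sum(envelopes[order[i]] * carriers[i] for i in range(n_bands))
    return out / np.max(np.abs(out))
```

With a speech array sampled at 16 kHz, `vocode(speech, 16000)` would give a channelized analogue and `vocode(speech, 16000, scramble=True)` a scrambled control, in the spirit of the stimuli above.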
Kabadi, S J; Ruhl, D S; Mukherjee, S; Kesser, B W
2018-02-01
Middle ear space is one of the most important components of the Jahrsdoerfer grading system (J-score), which is used to determine surgical candidacy for congenital aural atresia. The purpose of this study was to introduce a semiautomated method for measuring middle ear volume and determine whether middle ear volume, either alone or in combination with the J-score, can be used to predict early postoperative audiometric outcomes. A retrospective analysis was conducted of 18 patients who underwent an operation for unilateral congenital aural atresia at our institution. Using the Livewire Segmentation tool in the Carestream Vue PACS, we segmented middle ear volumes using a semiautomated method for all atretic and contralateral normal ears on preoperative high-resolution CT imaging. Postsurgical audiometric outcome data were then analyzed in the context of these middle ear volumes. Atretic middle ear volumes were significantly smaller than those in contralateral normal ears (P < .001). Patients with atretic middle ear volumes of >305 mm³ had significantly better postoperative pure tone average and speech reception thresholds than those with atretic ears below this threshold volume (P = .01 and P = .006, respectively). Atretic middle ear volume incorporated into the J-score offered the best association with normal postoperative hearing (speech reception threshold ≤ 30 dB; OR = 37.8, P = .01). Middle ear volume, calculated in a semiautomated fashion, is predictive of postsurgical audiometric outcomes, both independently and in combination with the conventional J-score.
Razza, Sergio; Zaccone, Monica; Meli, Annalisa; Cristofari, Eliana
2017-12-01
Children affected by hearing loss can experience difficulties in challenging and noisy environments even when deafness is corrected by cochlear implant (CI) devices. These patients have a selective attention deficit in multiple listening conditions. At present, the most effective ways to improve speech recognition in noise consist of providing CI processors with noise reduction algorithms and providing patients with bilateral CIs. The aim of this study was to compare speech performance in noise, across increasing noise levels, in CI recipients using two kinds of wireless remote-microphone radio systems that use digital radio frequency transmission: the Roger Inspiro accessory and the Cochlear Wireless Mini Microphone accessory. Eleven young users of the Nucleus Cochlear CP910 CI were studied. The signal-to-noise ratio at a speech reception threshold (SRT) value of 50% was measured in different conditions for each patient: with the CI only, with the Roger accessory, or with the Mini Mic accessory. The effect of applying the SNR-noise reduction algorithm in each of these conditions was also assessed. The tests were performed with the subject positioned in front of the main speaker, at a distance of 2.5 m; another two speakers were positioned at 3.50 m. The main speaker issued disyllabic words at 65 dB, and babble noise was delivered through the other speakers at variable intensity. Both wireless remote microphones improved SRT results, with a larger gain for the Mini Mic system (SRT = -4.76) than for the Roger system (SRT = -3.01). Adding the NR algorithm did not yield a statistically significant further improvement. There is significant improvement in speech recognition with both wireless digital remote microphone accessories, in particular with the Mini Mic system when used with the CP910 processor; the benefit of a remote microphone accessory surpasses that of the NR algorithm.
Selective spatial attention modulates bottom-up informational masking of speech.
Carlile, Simon; Corkhill, Caitlin
2015-03-02
To hear out a conversation against other talkers, listeners overcome energetic and informational masking. Largely attributed to top-down processes, informational masking has also been demonstrated using unintelligible speech and amplitude-modulated maskers, suggesting bottom-up processes. We examined the role of speech-like amplitude modulations in informational masking using a spatial masking release paradigm. Separating a target talker from two masker talkers produced a 20 dB improvement in speech reception threshold, 40% of which was attributed to a release from informational masking. When across-frequency temporal modulations in the masker talkers are decorrelated, the speech is unintelligible, although the within-frequency modulation characteristics remain identical. Used as a masker as above, this signal produced informational masking that accounted for 37% of the spatial unmasking. This unintelligible and highly differentiable masker is unlikely to involve top-down processes. These data provide strong evidence of bottom-up masking involving speech-like, within-frequency modulations and show that this presumably low-level process can be modulated by selective spatial attention.
Objective measures of listening effort: effects of background noise and noise reduction.
Sarampalis, Anastasios; Kalluri, Sridhar; Edwards, Brent; Hafter, Ervin
2009-10-01
This work is aimed at addressing a seeming contradiction related to the use of noise-reduction (NR) algorithms in hearing aids. The problem is that although some listeners claim a subjective improvement from NR, it has not been shown to improve speech intelligibility, often even making it worse. To address this, the hypothesis tested here is that the positive effects of NR might be to reduce cognitive effort directed toward speech reception, making it available for other tasks. Normal-hearing individuals participated in 2 dual-task experiments, in which 1 task was to report sentences or words in noise set to various signal-to-noise ratios. Secondary tasks involved either holding words in short-term memory or responding in a complex visual reaction-time task. At low values of signal-to-noise ratio, although NR had no positive effect on speech reception thresholds, it led to better performance on the word-memory task and quicker responses in visual reaction times. Results from both dual tasks support the hypothesis that NR reduces listening effort and frees up cognitive resources for other tasks. Future hearing aid research should incorporate objective measurements of cognitive benefits.
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with selective mutism may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Lexical and phonological variability in preschool children with speech sound disorder.
Macrae, Toby; Tyler, Ann A; Lewis, Kerry E
2014-02-01
The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.
Normal Aspects of Speech, Hearing, and Language.
ERIC Educational Resources Information Center
Minifie, Fred. D., Ed.; And Others
This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…
Assessment of a directional microphone array for hearing-impaired listeners.
Soede, W; Bilsen, F A; Berkhout, A J
1993-08-01
Hearing-impaired listeners often have great difficulty understanding speech in surroundings with background noise or reverberation. Based on array techniques, two microphone prototypes (broadside and endfire) have been developed with strongly directional characteristics [Soede et al., "Development of a new directional hearing instrument based on array technology," J. Acoust. Soc. Am. 94, 785-798 (1993)]. Physical measurements show that the arrays attenuate reverberant sound by 6 dB (free-field) and can improve the signal-to-noise ratio by 7 dB in a diffuse noise field (measured with a KEMAR manikin). For the clinical assessment of these microphones an experimental setup was made in a sound-insulated listening room with one loudspeaker in front of the listener simulating the partner in a discussion and eight loudspeakers placed on the edges of a cube producing a diffuse background noise. The hearing-impaired subject wearing his own (familiar) hearing aid is placed in the center of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omnidirectional microphone and with one of the microphone arrays. The results of monaural listening tests with hearing impaired subjects show that in comparison with an omnidirectional hearing-aid microphone the broadside and endfire microphone array gives a mean improvement of the speech reception threshold in noise of 7.0 dB (26 subjects) and 6.8 dB (27 subjects), respectively. Binaural listening with two endfire microphone arrays gives a binaural improvement which is comparable to the binaural improvement obtained by listening with two normal ears or two conventional hearing aids.
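The directional behavior of such arrays can be illustrated with a toy delay-and-sum calculation; the geometry, frequency, and steering below are invented for illustration and are much simpler than the published prototypes:

```python
import numpy as np

def array_response_db(theta_deg, steer_deg, n_mics=5, spacing=0.025,
                      f=2000.0, c=343.0):
    """Delay-and-sum response (dB re. a single microphone) of a line
    array for a plane wave arriving at theta_deg from the array axis
    (0 deg = endfire, 90 deg = broadside), steered to steer_deg."""
    k = 2 * np.pi * f / c
    x = np.arange(n_mics) * spacing          # microphone positions (m)
    dphi = k * x * (np.cos(np.radians(theta_deg))
                    - np.cos(np.radians(steer_deg)))
    return 20 * np.log10(np.abs(np.exp(1j * dphi).sum()) / n_mics)

# Endfire steering: sounds from the look direction pass at 0 dB while
# off-axis sounds are progressively attenuated.
for theta in (0, 45, 90, 135, 180):
    print(theta, round(array_response_db(theta, steer_deg=0), 1))
```

Averaged over frequency and arrival angle, this kind of attenuation of off-axis energy is what yields the several-dB improvement in signal-to-noise ratio in a diffuse noise field reported above.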
de Kleijn, Jasper L; van Kalmthout, Ludwike W M; van der Vossen, Martijn J B; Vonck, Bernard M D; Topsakal, Vedat; Bruijnzeel, Hanneke
2018-05-24
Although current guidelines recommend cochlear implantation only for children with profound hearing impairment (HI) (>90 decibel [dB] hearing level [HL]), studies show that children with severe hearing impairment (>70-90 dB HL) could also benefit from cochlear implantation. To perform a systematic review to identify audiologic thresholds (in dB HL) that could serve as an audiologic candidacy criterion for pediatric cochlear implantation using 4 domains of speech and language development as independent outcome measures (speech production, speech perception, receptive language, and auditory performance). PubMed and Embase databases were searched up to June 28, 2017, to identify studies comparing speech and language development between children who were profoundly deaf using cochlear implants and children with severe hearing loss using hearing aids, because no studies are available directly comparing children with severe HI in both groups. If cochlear implant users with profound HI score better on speech and language tests than those with severe HI who use hearing aids, this outcome could support adjusting cochlear implantation candidacy criteria to lower audiologic thresholds. Literature search, screening, and article selection were performed using a predefined strategy. Article screening was executed independently by 4 authors in 2 pairs; consensus on article inclusion was reached by discussion between these 4 authors. This study is reported according to the Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) statement. Title and abstract screening of 2822 articles resulted in selection of 130 articles for full-text review. Twenty-one studies were selected for critical appraisal, resulting in selection of 10 articles for data extraction. Two studies formulated audiologic thresholds (in dB HLs) at which children could qualify for cochlear implantation: (1) at 4-frequency pure-tone average (PTA) thresholds of 80 dB HL or greater based on speech perception and auditory performance subtests and (2) at PTA thresholds of 88 and 96 dB HL based on a speech perception subtest. In 8 of the 18 outcome measures, children with profound HI using cochlear implants performed similarly to children with severe HI using hearing aids. Better performance of cochlear implant users was shown with a picture-naming test and a speech perception in noise test. Owing to large heterogeneity in study population and selected tests, it was not possible to conduct a meta-analysis. Studies indicate that lower audiologic thresholds (≥80 dB HL) than are advised in current national and manufacturer guidelines would be appropriate as audiologic candidacy criteria for pediatric cochlear implantation.
Frequency-specific hearing outcomes in pediatric type I tympanoplasty.
Kent, David T; Kitsko, Dennis J; Wine, Todd; Chi, David H
2014-02-01
Middle ear disease is the primary cause of hearing loss in children and has a significant impact on language development and academic performance. Multiple prognostic factors have previously been examined, but there is little published data regarding frequency-specific hearing outcomes. To examine the relationship between type I tympanoplasty in a pediatric population and frequency-specific hearing changes, as well as the relationship between several prognostic factors and graft retention. Retrospective medical chart review (February 2006 to October 2011) of 492 consecutive pediatric otolaryngology patients undergoing type I tympanoplasty for tympanic membrane (TM) perforation of any etiology at a tertiary-care pediatric otolaryngology practice. Type I tympanoplasty. Preoperative and postoperative audiometric data were collected for patients undergoing successful TM repair. It was hypothesized before data collection that conductive hearing would improve at all frequencies with no significant change in sensorineural hearing. Data collected included air conduction at 250 to 8000 Hz, speech reception thresholds, bone conduction at 500 to 4000 Hz, and air-bone gap at 500 to 4000 Hz. Demographic data obtained included sex, age, size, mechanism, location of perforation, and operative repair technique. Of 492 patients, 320 were excluded; results were thus examined for 172 patients. Surgery was successful for 73.8% of patients. Perforation size was significantly associated with repair success (mean [SD] size 38.6% [15.3%] in successful repairs vs 31.4% [15.0%] in failures; P < .01); however, mean (SD) age (9.02 [3.89] years [surgical success] vs 8.52 [3.43] years [surgical failure]; P > .05) and repair technique (medial [73.08%] vs lateral [76.47%] graft success; P > .99) were not. Air conduction significantly improved from 250 to 2000 Hz (P < .001), did not significantly improve at 4000 Hz (P = .08), and showed a nonsignificant decline at 8000 Hz (P = .12). Speech reception threshold significantly improved (20 vs 15 dB; P < .001). This large review found an association of TM perforation size with surgical success, improvements in speech reception threshold, air conduction at 250 to 2000 Hz, and air-bone gap at 500 to 2000 Hz, and a worsening of bone conduction at 4000 Hz. Patients with high-frequency hearing loss due to TM perforation should not anticipate significant recovery from type I tympanoplasty. Hearing loss at higher frequencies may require postoperative hearing rehabilitation.
The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise.
Shen, Jing; Souza, Pamela E
2017-09-18
This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss.
[Improving speech comprehension using a new cochlear implant speech processor].
Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A
2009-06-01
The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significantly improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with the speech signal at 0 degrees and noise lateral to the CI at 90 degrees. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.
Davidson, Lisa S; Geers, Ann E; Nicholas, Johanna G
2014-07-01
A novel word learning (NWL) paradigm was used to explore underlying phonological and cognitive mechanisms responsible for delayed vocabulary level in children with cochlear implants (CIs). One hundred and one children using CIs, 6-12 years old, were tested along with 47 children with normal hearing (NH). Tests of NWL, receptive vocabulary, and speech perception at 2 loudness levels were administered to children with CIs. Those with NH completed the NWL task and a receptive vocabulary test. CI participants with good audibility (GA) versus poor audibility (PA) were compared on all measures. Analysis of variance was used to compare performance across the children with NH and the two groups of children with CIs. Multiple regression analysis was employed to identify independent predictors of vocabulary outcomes. Children with CIs in the GA group scored higher in receptive vocabulary and NWL than children in the PA group, although they did not reach NH levels. CI-aided pure tone threshold and performance on the NWL task predicted independent variance in vocabulary after accounting for other known predictors. Acquiring spoken vocabulary is facilitated by GA with a CI and phonological learning and memory skills. Children with CIs did not learn novel words at the same rate or achieve the same receptive vocabulary levels as their NH peers. Maximizing audibility for the perception of speech and direct instruction of new vocabulary may be necessary for children with CIs to reach levels seen in peers with NH.
Clinical Validation of a Sound Processor Upgrade in Direct Acoustic Cochlear Implant Subjects
Kludt, Eugen; D’hondt, Christiane; Lenarz, Thomas; Maier, Hannes
2017-01-01
Objective: The objectives of the investigation were to evaluate the effect of a sound processor upgrade on the speech reception threshold in noise and to collect long-term safety and efficacy data after 2½ to 5 years of device use by direct acoustic cochlear implant (DACI) recipients. Study Design: The study was designed as a mono-centric, prospective clinical trial. Setting: Tertiary referral center. Patients: Fifteen patients implanted with a direct acoustic cochlear implant. Intervention: Upgrade with a newer generation of sound processor. Main Outcome Measures: Speech recognition test in quiet and in noise, pure tone thresholds, subject-reported outcome measures. Results: Speech recognition in quiet and in noise was superior after the sound processor upgrade and stable after long-term use of the direct acoustic cochlear implant. The bone conduction thresholds did not decrease significantly after long-term high level stimulation. Conclusions: The new sound processor for the DACI system provides significant benefits for DACI users for speech recognition in both quiet and noise. Especially the noise program with the use of directional microphones (Zoom) allows DACI patients to have much less difficulty when having conversations in noisy environments. Furthermore, the study confirms that the benefits of the sound processor upgrade are available to DACI recipients even after several years of experience with a legacy sound processor. Finally, our study demonstrates that the DACI system is a safe and effective long-term therapy.
Dincer D'Alessandro, Hilal; Ballantyne, Deborah; Boyle, Patrick J; De Seta, Elio; DeVincentiis, Marco; Mancini, Patrizia
2017-11-30
The aim of the study was to investigate the link between temporal fine structure (TFS) processing, pitch, and speech perception performance in adult cochlear implant (CI) recipients, including bimodal listeners who may benefit from better low-frequency (LF) temporal coding in the contralateral ear. The study participants were 43 adult CI recipients (23 unilateral, 6 bilateral, and 14 bimodal listeners). Two new LF pitch perception tests, harmonic intonation (HI) and disharmonic intonation (DI), were used to evaluate TFS sensitivity. HI and DI were designed to estimate a difference limen for discrimination of tone changes based on harmonic or inharmonic pitch glides. Speech perception was assessed using the newly developed Italian Sentence Test with Adaptive Randomized Roving level (STARR) test, in which sentences relevant to everyday contexts were presented at low, medium, and high levels in a fluctuating background noise to estimate a speech reception threshold (SRT). Although TFS and STARR performances in the majority of CI recipients were much poorer than those of hearing people reported in the literature, considerable intersubject variability was observed. For CI listeners, median just-noticeable differences were 27.0 and 147.0 Hz for HI and DI, respectively. HI outcomes were significantly better than those for DI. Median STARR score was 14.8 dB. Better performers, with speech reception thresholds less than 20 dB, had a median score of 8.6 dB. A significant effect of age was observed for both HI/DI tests, suggesting that TFS sensitivity tended to worsen with increasing age. CI pure-tone thresholds and duration of profound deafness were significantly correlated with STARR performance. Bimodal users showed significantly better TFS and STARR performance for bimodal listening than for their CI-only condition. Median bimodal gains were 33.0 Hz for the HI test and 95.0 Hz for the DI test. DI outcomes in bimodal users revealed a significant correlation with unaided hearing thresholds for octave frequencies lower than 1000 Hz. Median STARR scores were 17.3 versus 8.1 dB for CI-only and bimodal listening, respectively. STARR performance was significantly correlated with HI findings for CI listeners and with those of DI for bimodal listeners. LF pitch perception was found to be abnormal in the majority of adult CI recipients, confirming poor TFS processing with CIs. Similarly, the STARR findings reflected a common performance deterioration with the HI/DI tests, suggesting the cause probably lies in a lack of access to TFS information. Contralateral hearing aid users obtained a remarkable bimodal benefit for all tests. These results highlight the importance of TFS cues for challenging speech perception and the relevance to everyday listening conditions. HI/DI and STARR tests show promise for gaining insights into how TFS and speech perception are being limited, and may guide the customization of CI program parameters and support the fine-tuning of bimodal listening.
Pronk, Marieke; Deeg, Dorly J H; Kramer, Sophia E
2018-04-17
The purpose of this study is to determine which demographic, health-related, mood, personality, or social factors predict discrepancies between older adults' functional speech-in-noise test result and their self-reported hearing problems. Data of 1,061 respondents from the Longitudinal Aging Study Amsterdam were used (ages ranged from 57 to 95 years). Functional hearing problems were measured using a digit triplet speech-in-noise test. Five questions were used to assess self-reported hearing problems. Scores of both hearing measures were dichotomized. Two discrepancy outcomes were created: (a) being unaware: those with functional but without self-reported problems (reference is aware: those with functional and self-reported problems); (b) reporting false complaints: those without functional but with self-reported problems (reference is well: those without functional and self-reported hearing problems). Two multivariable prediction models (logistic regression) were built with 19 candidate predictors. The speech reception threshold in noise was kept (forced) as a predictor in both models. Persons with higher self-efficacy (to initiate behavior) and higher self-esteem had higher odds of being unaware than persons with lower self-efficacy scores (odds ratio [OR] = 1.13 and 1.11, respectively). Women had higher odds than men (OR = 1.47). Persons with more chronic diseases and persons with worse (i.e., higher) speech reception thresholds in noise had lower odds of being unaware (OR = 0.85 and 0.91, respectively) than persons with fewer diseases and better thresholds, respectively. Higher odds of reporting false complaints were predicted by more depressive symptoms (OR = 1.06), more chronic diseases (OR = 1.21), and a larger social network (OR = 1.02). Persons with higher self-efficacy (to complete behavior) had lower odds (OR = 0.86), whereas persons with higher self-esteem had higher odds of reporting false complaints (OR = 1.21). The explained variance of both prediction models was small (Nagelkerke R2 = .11 for the unaware model, and .10 for the false complaints model). The findings suggest that a small proportion of the discrepancies between older individuals' results on a speech-in-noise screening test and their self-reports of hearing problems can be explained by the unique context of these individuals. The likelihood of discrepancies partly depends on a person's health (chronic diseases), demographics (gender), personality (self-efficacy to initiate behavior and to persist in adversity, self-esteem), mood (depressive symptoms), and social situation (social network size). Implications are discussed.
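The prediction models described above lend themselves to a compact illustration. Below is a minimal Python sketch of a forced-entry logistic regression with odds ratios and a Nagelkerke pseudo-R2, assuming a hypothetical DataFrame whose column names stand in for the study's variables; it is not the authors' analysis code.

```python
# Sketch of the discrepancy analysis described above: a logistic regression
# predicting "unaware" status, with the speech-in-noise SRT forced into the
# model. Column names are hypothetical placeholders, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_discrepancy_model(df: pd.DataFrame, outcome: str, predictors: list[str]):
    X = sm.add_constant(df[predictors])          # SRT is kept among the predictors
    model = sm.Logit(df[outcome], X).fit(disp=0)
    odds_ratios = np.exp(model.params)           # e.g., the study reports OR = 1.47 for women

    # Nagelkerke pseudo-R^2 from the fitted and null log-likelihoods
    n = len(df)
    cox_snell = 1 - np.exp((2 / n) * (model.llnull - model.llf))
    nagelkerke = cox_snell / (1 - np.exp((2 / n) * model.llnull))
    return odds_ratios, nagelkerke

# Usage (hypothetical data):
# ors, r2 = fit_discrepancy_model(df, "unaware",
#                                 ["srt_noise", "self_efficacy", "self_esteem",
#                                  "female", "chronic_diseases"])
```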
Ching, Teresa Yc; Zhang, Vicky W; Flynn, Christopher; Burns, Lauren; Button, Laura; Hou, Sanna; McGhie, Karen; Van Buynder, Patricia
2017-07-07
We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions: speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.
Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.
2012-01-01
Objective The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. Design The current study used a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250 and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of 6 English-speaking listeners was also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal. Results Small, but significant, improvements in performance (1.7–2.1 dB and 6–10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. There was no reliable relationship between audiometric thresholds in the implanted ear, or threshold elevation following surgery, and improvement in speech understanding in reverberation. There was a significant correlation between ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients; (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing; and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225
Mehraei, Golbarg; Gallun, Frederick J; Leek, Marjorie R; Bernstein, Joshua G W
2014-07-01
Poor speech understanding in noise by hearing-impaired (HI) listeners is only partly explained by elevated audiometric thresholds. Suprathreshold-processing impairments such as reduced temporal or spectral resolution or temporal fine-structure (TFS) processing ability might also contribute. Although speech contains dynamic combinations of temporal and spectral modulation and TFS content, these capabilities are often treated separately. Modulation-depth detection thresholds for spectrotemporal modulation (STM) applied to octave-band noise were measured for normal-hearing and HI listeners as a function of temporal modulation rate (4-32 Hz), spectral ripple density [0.5-4 cycles/octave (c/o)] and carrier center frequency (500-4000 Hz). STM sensitivity was worse than normal for HI listeners only for a low-frequency carrier (1000 Hz) at low temporal modulation rates (4-12 Hz) and a spectral ripple density of 2 c/o, and for a high-frequency carrier (4000 Hz) at a high spectral ripple density (4 c/o). STM sensitivity for the 4-Hz, 4-c/o condition for a 4000-Hz carrier and for the 4-Hz, 2-c/o condition for a 1000-Hz carrier were correlated with speech-recognition performance in noise after partialling out the audiogram-based speech-intelligibility index. Poor speech-reception and STM-detection performance for HI listeners may be related to a combination of reduced frequency selectivity and a TFS-processing deficit limiting the ability to track spectral-peak movements.
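For readers unfamiliar with spectrotemporal modulation (STM) stimuli, the sketch below generates a ripple of the general kind described: an octave-band tone complex whose component amplitudes drift sinusoidally across time and log-frequency. The linear modulation depth and component count are simplifying assumptions, not the study's exact synthesis recipe.

```python
# A sketch of an STM stimulus: octave-band noise (a dense sum of random-phase
# tones) whose component amplitudes are modulated sinusoidally over time and
# log-frequency. Parameter names and the linear (rather than dB-scaled)
# modulation are assumptions for illustration.
import numpy as np

def stm_noise(fs=44100, dur=1.0, f_low=2000.0, rate_hz=4.0,
              density_c_per_oct=2.0, depth=0.5, n_comp=200, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    x = rng.uniform(0.0, 1.0, n_comp)            # component positions in octaves
    freqs = f_low * 2.0 ** x                     # one octave above f_low
    phases = rng.uniform(0, 2 * np.pi, n_comp)   # random carrier phases
    ripple_phase = rng.uniform(0, 2 * np.pi)
    sig = np.zeros_like(t)
    for fi, xi, ph in zip(freqs, x, phases):
        # sinusoidal ripple drifting at rate_hz with density_c_per_oct
        env = 1.0 + depth * np.sin(
            2 * np.pi * (rate_hz * t + density_c_per_oct * xi) + ripple_phase)
        sig += env * np.sin(2 * np.pi * fi * t + ph)
    return sig / np.max(np.abs(sig))             # normalize to +/-1
```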
Panday, Seema; Kathard, Harsha; Pillay, Mershen; Wilson, Wayne
2018-03-29
The purpose of this study was to consider the value of adding first-language speaker ratings to the process of validating word recordings for use in a new speech reception threshold (SRT) test in audiology. Previous studies had identified 28 word recordings as being suitable for use in a new SRT test. These word recordings had been shown to satisfy the linguistic criteria of familiarity, phonetic dissimilarity and tone, and the psychometric criterion of homogeneity of audibility. Method: A single-observation, cross-sectional design was used to collect and analyse quantitative data. Eleven first-language isiZulu speakers, purposively selected, were asked to rate each of the word recordings for pitch, clarity, naturalness, speech rate and quality on a 5-point Likert scale. The percent agreement and Friedman test were used for analysis. Results: More than 20% of the 11 participants rated three of the word recordings below 'strongly agree' in the category of pitch or tone, and one of the word recordings below 'strongly agree' in the categories of pitch or tone, clarity or articulation, and naturalness or dialect. Conclusion: The first-language speaker ratings proved to be a valuable addition to the process of selecting word recordings for use in a new SRT test. In particular, these ratings identified potentially problematic word recordings in the new SRT test that had been missed by the previously and more commonly used linguistic and psychometric selection criteria.
Danielsson, Henrik; Hällgren, Mathias; Stenfelt, Stefan; Rönnberg, Jerker; Lunner, Thomas
2016-01-01
The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function, spectrotemporal modulation (STM) sensitivity, and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz low-pass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2–6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise. PMID:27815546
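The hierarchical prediction reported above (HFA alone, then HFA plus STM sensitivity) reduces to comparing R2 between two nested least-squares fits. A minimal sketch, with hypothetical arrays standing in for the per-listener measures:

```python
# Minimal sketch of the hierarchical regression reported above: R^2 for the
# high-frequency average (HFA) alone, then the increment when STM sensitivity
# is added. Arrays are hypothetical stand-ins for the study's measures.
import numpy as np

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])    # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# srt, hfa, stm = ... (one value per listener)
# r2_hfa  = r_squared(hfa[:, None], srt)                 # ~ .31 in the study
# r2_both = r_squared(np.column_stack([hfa, stm]), srt)  # ~ .44
# delta_r2 = r2_both - r2_hfa                            # ~ .13
```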
Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise
Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther
2016-01-01
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access. PMID:27458400
Development of the Russian matrix sentence test.
Warzybok, Anna; Zokoll, Melanie; Wardenga, Nina; Ozimek, Edward; Boboshko, Maria; Kollmeier, Birger
2015-01-01
To develop the Russian matrix sentence test for speech intelligibility measurements in noise. Test development included recordings, optimization of speech material, and evaluation to investigate the equivalency of the test lists and training. For each of the 500 test items, the speech intelligibility function, speech reception threshold (SRT: signal-to-noise ratio, SNR, that provides 50% speech intelligibility), and slope were obtained. The speech material was homogenized by applying level corrections. In evaluation measurements, speech intelligibility was measured at two fixed SNRs to compare list-specific intelligibility functions. To investigate the training effect and establish reference data, speech intelligibility was measured adaptively. Participants were 77 normal-hearing native Russian listeners. The optimization procedure decreased the spread in SRTs across words from 2.8 to 0.6 dB. Evaluation measurements confirmed that the 16 test lists were equivalent, with a mean SRT of -9.5 ± 0.2 dB and a slope of 13.8 ± 1.6%/dB. The reference SRT, -8.8 ± 0.8 dB for the open-set and -9.4 ± 0.8 dB for the closed-set format, increased slightly for noise levels above 75 dB SPL. The Russian matrix sentence test is suitable for accurate and reliable speech intelligibility measurements in noise.
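The adaptive measurement that this and several other entries rely on can be sketched compactly. The following simplified 1-dB up-down track converges on the SNR giving 50% word intelligibility; the actual matrix-test procedure uses a more refined step rule, so treat this as an illustration only.

```python
# A sketch of an adaptive matrix-test run: move the SNR down after good trials
# and up after poor ones so the track converges on the SNR giving 50% word
# intelligibility (the SRT). The fixed 1-dB step is a simplifying assumption.
import numpy as np

def adaptive_srt(present_trial, n_trials=20, start_snr=0.0, step_db=1.0):
    """present_trial(snr) -> proportion of the five matrix words repeated correctly."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        p_correct = present_trial(snr)
        track.append(snr)
        # step up (easier) if below 50% correct, down (harder) if above
        snr += step_db if p_correct < 0.5 else -step_db
    return np.mean(track[len(track) // 2:])      # average the converged half
```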
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations, this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Buss, Emily; Leibold, Lori J.; Porter, Heather L.; Grose, John H.
2017-01-01
Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5–16 years) and adults in a one- or two-talker masker. Speech maskers were 60 dB sound pressure level (SPL), and they were either presented alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that development of speech-in-speech recognition is likely impacted by development of both perceptual masking and the ability to recognize speech based on sparse cues. PMID:28464682
ERIC Educational Resources Information Center
Wald, Mike
2006-01-01
The potential use of Automatic Speech Recognition to assist receptive communication is explored. The opportunities and challenges that this technology presents students and staff to provide captioning of speech online or in classrooms for deaf or hard of hearing students and assist blind, visually impaired or dyslexic learners to read and search…
Geravanchizadeh, Masoud; Fallah, Ali
2015-12-01
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model, is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal-hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r²) and the mean absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
Zokoll, Melanie A; Wagener, Kirsten C; Brand, Thomas; Buschermöhle, Michael; Kollmeier, Birger
2012-09-01
A review is given of internationally comparable speech-in-noise tests for hearing screening purposes that were part of the European HearCom project. This report describes the development, optimization, and evaluation of such tests for headphone and telephone presentation, using the example of the German digit triplet test. In order to achieve the highest possible comparability, language- and speaker-dependent factors in speech intelligibility should be compensated for. The tests comprise spoken numbers in background noise and estimate the speech reception threshold (SRT), i.e. the signal-to-noise ratio (SNR) yielding 50% speech intelligibility. The reference speech intelligibility functions for headphone and telephone presentation of the German version (based on 15 and 10 normal-hearing listeners) are described by an SRT of -9.3 ± 0.2 and -6.5 ± 0.4 dB SNR, and slopes of 19.6 and 17.9%/dB, respectively. Reference speech intelligibility functions of all digit triplet tests optimized within the HearCom project allow for investigation of the comparability due to language specificities. The optimization criteria established here should be used for similar screening tests in other languages.
Development of a test battery for evaluating speech perception in complex listening environments.
Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R
2014-08-01
In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
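A fixed-step descending-SNR test of this kind is commonly scored with a Spearman-Kärber-style calculation. The sketch below shows that general formula under assumed parameters (a 25 dB SNR starting point, 5-dB steps, five key words per sentence); consult the published QuickSIN scoring rule for the definitive constants.

```python
# Spearman-Kaerber-style SRT50 estimate from a descending fixed-SNR list, as a
# generic illustration. Six sentences, five key words each, and 5-dB steps are
# assumed here; the published test's exact scoring table may differ.
def srt50_spearman_karber(words_correct, top_snr=25.0, step_db=5.0, words_per_snr=5):
    total = sum(words_correct)                   # e.g., [5, 5, 4, 3, 1, 0]
    # midpoint of the top step, minus one step per words_per_snr correct words
    return top_snr + step_db / 2 - step_db * total / words_per_snr

# srt50_spearman_karber([5, 5, 4, 3, 1, 0]) -> 9.5 dB SNR
```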
Echolalia and comprehension in autistic children.
Roberts, J M
1989-06-01
The research reported in this paper investigates the phenomenon of echolalia in the speech of autistic children by examining the relationship between the frequency of echolalia and receptive language ability. The receptive language skills of 10 autistic children were assessed, and spontaneous speech samples were recorded. Analysis of these data showed that those children with poor receptive language skills produced significantly more echolalic utterances than those children whose receptive skills were more age-appropriate. Children who produced fewer echolalic utterances, and had more advanced receptive language ability, evidenced a higher proportion of mitigated echolalia. The most common type of mitigation was echo plus affirmation or denial.
Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training
Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William
2015-01-01
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
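The d′ measures used to drive the training levels above are, in the simplest yes/no case, the difference of z-transformed hit and false-alarm rates. A textbook sketch, with a log-linear correction so perfect scores remain finite (the study's exact d′ computation for multi-alternative identification may differ):

```python
# Textbook yes/no d' computation: difference of z-transformed hit and
# false-alarm rates, with a small correction keeping rates away from 0 and 1.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    h = (hits + 0.5) / (hits + misses + 1)                       # corrected hit rate
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(f)

# d_prime(45, 5, 10, 40) -> about 2.1
```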
Determining the energetic and informational components of speech-on-speech masking
Kidd, Gerald; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Clayton, Kameron K.; Best, Virginia
2016-01-01
Identification of target speech was studied under masked conditions consisting of two or four independent speech maskers. In the reference conditions, the maskers were colocated with the target, the masker talkers were the same sex as the target, and the masker speech was intelligible. The comparison conditions, intended to provide release from masking, included different-sex target and masker talkers, time-reversal of the masker speech, and spatial separation of the maskers from the target. Significant release from masking was found for all comparison conditions. To determine whether these reductions in masking could be attributed to differences in energetic masking, ideal time-frequency segregation (ITFS) processing was applied so that the time-frequency units where the masker energy dominated the target energy were removed. The remaining target-dominated “glimpses” were reassembled as the stimulus. Speech reception thresholds measured using these resynthesized ITFS-processed stimuli were the same for the reference and comparison conditions supporting the conclusion that the amount of energetic masking across conditions was the same. These results indicated that the large release from masking found under all comparison conditions was due primarily to a reduction in informational masking. Furthermore, the large individual differences observed generally were correlated across the three masking release conditions. PMID:27475139
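Ideal time-frequency segregation, as used above to equate energetic masking, can be sketched in a few lines: STFT the target and masker separately, keep the mixture's time-frequency units where the target dominates, and resynthesize. The 0-dB local criterion and STFT settings are assumptions for illustration.

```python
# A sketch of ideal time-frequency segregation (ITFS): retain only the
# time-frequency units where the target dominates the masker, then
# resynthesize the retained "glimpses" from the mixture.
import numpy as np
from scipy.signal import stft, istft

def itfs_glimpses(target, masker, fs, criterion_db=0.0, nperseg=512):
    # target and masker must be equal-length, time-aligned signals
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, M = stft(masker, fs, nperseg=nperseg)
    mix = T + M
    tmr_db = 20 * np.log10(np.abs(T) / (np.abs(M) + 1e-12))
    mask = tmr_db > criterion_db                 # binary ideal mask
    _, glimpsed = istft(mix * mask, fs, nperseg=nperseg)
    return glimpsed
```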
Development and preliminary evaluation of a pediatric Spanish-English speech perception task.
Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J
2014-06-01
The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
Results using the OPAL strategy in Mandarin speaking cochlear implant recipients.
Vandali, Andrew E; Dawson, Pam W; Arora, Komal
2017-01-01
To evaluate the effectiveness of an experimental pitch-coding strategy for improving recognition of Mandarin lexical tone in cochlear implant (CI) recipients. Adult CI recipients were tested on recognition of Mandarin tones in quiet and in speech-shaped noise at a signal-to-noise ratio of +10 dB; Mandarin sentence speech-reception threshold (SRT) in speech-shaped noise; and pitch discrimination of synthetic complex-harmonic tones in quiet. Two versions of the experimental strategy were examined: OPAL, with linear (1:1) mapping of fundamental frequency (F0) to the coded modulation rate; and OPAL+, with transposed mapping of high F0s to a lower coded rate. Outcomes were compared to results using the clinical ACE™ strategy. Participants were five Mandarin-speaking users of Nucleus® cochlear implants. A small but significant benefit in recognition of lexical tones was observed using OPAL compared to ACE in noise, but not in quiet, and not for OPAL+ compared to ACE or OPAL in quiet or noise. Sentence SRTs were significantly better using OPAL+ and comparable using OPAL to those using ACE. No differences in pitch discrimination thresholds were observed across strategies. OPAL can provide benefits to Mandarin lexical tone recognition in moderately noisy conditions and preserve perception of Mandarin sentences in challenging noise conditions.
Grammar without Speech Production: The Case of Labrador Inuttitut Heritage Receptive Bilinguals
ERIC Educational Resources Information Center
Sherkina-Lieber, Marina; Perez-Leroux, Ana T.; Johns, Alana
2011-01-01
We examine morphosyntactic knowledge of Labrador Inuttitut by Inuit receptive bilinguals (RBs)--heritage speakers who are capable of comprehension, but produce little or no speech. A grammaticality judgment study suggests that RBs possess sensitivity to morphosyntactic violations, though to a lesser degree than fluent bilinguals. Low-proficiency…
Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.
2017-01-01
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now, no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping on verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important to significantly predict vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test and task switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329
Fractionated Stereotactic Radiotherapy of Vestibular Schwannomas Accelerates Hearing Loss
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmussen, Rune; Claesson, Magnus; Stangerup, Sven-Eric
2012-08-01
Objective: To evaluate long-term tumor control and hearing preservation rates in patients with vestibular schwannoma treated with fractionated stereotactic radiotherapy (FSRT), comparing hearing preservation rates to an untreated control group. The relationship between radiation dose to the cochlea and hearing preservation was also investigated. Methods and Materials: Forty-two patients receiving FSRT between 1997 and 2008 with a minimum follow-up of 2 years were included. All patients received 54 Gy in 27-30 fractions during 5.5-6.0 weeks. Clinical and audiometry data were collected prospectively. From a 'wait-and-scan' group, 409 patients were selected as control subjects, matched by initial audiometric parameters. Radiation dose to the cochlea was measured using the original treatment plan and then related to changes in acoustic parameters. Results: Actuarial 2-, 4-, and 10-year tumor control rates were 100%, 91.5%, and 85.0%, respectively. Twenty-one patients had serviceable hearing before FSRT, 8 of whom (38%) retained serviceable hearing at 2 years after FSRT. No patients retained serviceable hearing after 10 years. At 2 years, hearing preservation rates in the control group were 1.8 times higher compared with the group receiving FSRT (P=.007). Radiation dose to the cochlea was significantly correlated to deterioration of the speech reception threshold (P=.03) but not to discrimination loss. Conclusion: FSRT accelerates the naturally occurring hearing loss in patients with vestibular schwannoma. Our findings, using fractionation of radiotherapy, parallel results using single-dose radiation. The radiation dose to the cochlea is correlated to hearing loss measured as the speech reception threshold.
Searchfield, Grant D; Linford, Tania; Kobayashi, Kei; Crowhen, David; Latzel, Matthias
2018-03-01
To compare preferences for, and performance with, manually selected programmes versus an automatic sound classifier, the Phonak AutoSense OS. A single-blind repeated-measures study. Participants were fitted with Phonak Virto V90 ITE aids; preferences for different listening programmes were compared across four different sound scenarios (speech in quiet, in noise, in loud noise, and in a car). Following a 4-week trial, preferences were reassessed and each user's preferred programme was compared to the automatic classifier for sound quality and hearing in noise (HINT test) using a 12-loudspeaker array. Twenty-five participants with symmetrical moderate-severe sensorineural hearing loss took part. Participant preferences of manual programme for scenarios varied considerably between and within sessions. A HINT Speech Reception Threshold (SRT) advantage was observed for the automatic classifier over participants' manual selections for speech in quiet, loud noise and car noise. Sound quality ratings were similar for both manual and automatic selections. The use of a sound classifier is a viable alternative to manual programme selection.
Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2011-01-01
The aim of the present study was to evaluate the influence of age, hearing loss, and cognitive ability on the cognitive processing load during listening to speech presented in noise. Cognitive load was assessed by means of pupillometry (i.e., examination of pupil dilation), supplemented with subjective ratings. Two groups of subjects participated: 38 middle-aged participants (mean age = 55 yrs) with normal hearing and 36 middle-aged participants (mean age = 61 yrs) with hearing loss. Using three Speech Reception Threshold (SRT) in stationary noise tests, we estimated the speech-to-noise ratios (SNRs) required for the correct repetition of 50%, 71%, or 84% of the sentences (SRT50%, SRT71%, and SRT84%, respectively). We examined the pupil response during listening: the peak amplitude, the peak latency, the mean dilation, and the pupil response duration. For each condition, participants rated the experienced listening effort and estimated their performance level. Participants also performed the Text Reception Threshold (TRT) test, a test of processing speed, and a word vocabulary test. Data were compared with previously published data from young participants with normal hearing. Hearing loss was related to relatively poor SRTs, and higher speech intelligibility was associated with lower effort and higher performance ratings. For listeners with normal hearing, increasing age was associated with poorer TRTs and slower processing speed but with larger word vocabulary. A multivariate repeated-measures analysis of variance indicated main effects of group and SNR and an interaction effect between these factors on the pupil response. The peak latency was relatively short and the mean dilation was relatively small at low intelligibility levels for the middle-aged groups, whereas the reverse was observed for high intelligibility levels. The decrease in the pupil response as a function of increasing SNR was relatively small for the listeners with hearing loss. Spearman correlation coefficients indicated that the cognitive load was larger in listeners with better TRT performances as reflected by a longer peak latency (normal-hearing participants, SRT50% condition) and a larger peak amplitude and longer response duration (hearing-impaired participants, SRT50% and SRT84% conditions). Also, a larger word vocabulary was related to longer response duration in the SRT84% condition for the participants with normal hearing. The pupil response systematically increased with decreasing speech intelligibility. Ageing and hearing loss were related to less release from effort when increasing the intelligibility of speech in noise. In difficult listening conditions, these factors may induce cognitive overload relatively early or they may be associated with relatively shallow speech processing. More research is needed to elucidate the underlying mechanisms explaining these results. Better TRTs and larger word vocabulary were related to higher mental processing load across speech intelligibility levels. This indicates that utilizing linguistic ability to improve speech perception is associated with increased listening load.
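The four pupil-response measures analyzed above can be computed from a baseline-corrected trace in a few lines. A sketch, assuming a fixed pre-stimulus baseline window and a half-height criterion for response duration (the study's exact definitions may differ):

```python
# Sketch of the four pupil-response measures named above, computed from a
# pupil-diameter trace sampled at fs Hz. The 1-s baseline window and the
# half-height duration criterion are assumptions for illustration.
import numpy as np

def pupil_metrics(trace, fs, baseline_s=1.0):
    n0 = int(baseline_s * fs)
    baseline = trace[:n0].mean()
    d = trace[n0:] - baseline                    # dilation relative to baseline
    peak_amplitude = d.max()
    peak_latency_s = d.argmax() / fs
    mean_dilation = d.mean()
    duration_s = (d > 0.5 * peak_amplitude).sum() / fs   # time above half height
    return peak_amplitude, peak_latency_s, mean_dilation, duration_s
```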
Speech Perception With Combined Electric-Acoustic Stimulation: A Simulation and Model Comparison.
Rader, Tobias; Adel, Youssef; Fastl, Hugo; Baumann, Uwe
2015-01-01
The aim of this study is to simulate speech perception with combined electric-acoustic stimulation (EAS), verify the advantage of combined stimulation in normal-hearing (NH) subjects, and then compare it with cochlear implant (CI) and EAS user results from the authors' previous study. Furthermore, an automatic speech recognition (ASR) system was built to examine the impact of low-frequency information and is proposed as an applied model to study different hypotheses of the combined-stimulation advantage. Signal-detection-theory (SDT) models were applied to assess predictions of subject performance without the need to assume any synergistic effects. Speech perception was tested using a closed-set matrix test (Oldenburg sentence test), and its speech material was processed to simulate CI and EAS hearing. A total of 43 NH subjects and a customized ASR system were tested. CI hearing was simulated by an aurally adequate signal spectrum analysis and representation, the part-tone-time-pattern, which was vocoded at 12 center frequencies according to the MED-EL DUET speech processor. Residual acoustic hearing was simulated by low-pass (LP)-filtered speech with cutoff frequencies 200 and 500 Hz for NH subjects and in the range from 100 to 500 Hz for the ASR system. Speech reception thresholds were determined in amplitude-modulated noise and in pseudocontinuous noise. Previously proposed SDT models were lastly applied to predict NH subject performance with EAS simulations. NH subjects tested with EAS simulations demonstrated the combined-stimulation advantage. Increasing the LP cutoff frequency from 200 to 500 Hz significantly improved speech reception thresholds in both noise conditions. In continuous noise, CI and EAS users showed generally better performance than NH subjects tested with simulations. In modulated noise, performance was comparable except for the EAS at cutoff frequency 500 Hz where NH subject performance was superior. The ASR system showed similar behavior to NH subjects despite a positive signal-to-noise ratio shift for both noise conditions, while demonstrating the synergistic effect for cutoff frequencies ≥300 Hz. One SDT model largely predicted the combined-stimulation results in continuous noise, while falling short of predicting performance observed in modulated noise. The presented simulation was able to demonstrate the combined-stimulation advantage for NH subjects as observed in EAS users. Only NH subjects tested with EAS simulations were able to take advantage of the gap listening effect, while CI and EAS user performance was consistently degraded in modulated noise compared with performance in continuous noise. The application of ASR systems seems feasible to assess the impact of different signal processing strategies on speech perception with CI and EAS simulations. In continuous noise, SDT models were largely able to predict the performance gain without assuming any synergistic effects, but model amendments are required to explain the gap listening effect in modulated noise.
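A rough EAS simulation in the spirit of this study can be built from a noise vocoder summed with low-pass-filtered speech. The sketch below is such a simplification; the study itself used a part-tone-time-pattern analysis matched to the MED-EL DUET processor, and the band layout and filter orders here are assumptions.

```python
# A simplified EAS simulation: a 12-channel noise vocoder (envelope-modulated
# band noise) summed with low-pass-filtered speech standing in for residual
# acoustic hearing. Envelope smoothing is omitted for brevity.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def eas_simulation(speech, fs, lp_cutoff=500.0, n_ch=12,
                   lo=300.0, hi=7000.0, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.geomspace(lo, hi, n_ch + 1)       # log-spaced analysis bands
    vocoded = np.zeros_like(speech)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        env = np.abs(hilbert(band))              # band envelope
        carrier = sosfilt(sos, rng.standard_normal(len(speech)))
        vocoded += env * carrier
    sos_lp = butter(4, lp_cutoff, btype="lowpass", fs=fs, output="sos")
    acoustic = sosfilt(sos_lp, speech)           # simulated residual hearing
    return vocoded + acoustic
```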
The influence of informational masking in reverberant, multi-talker environments.
Westermann, Adam; Buchholz, Jörg M
2015-08-01
The relevance of informational masking (IM) in real-world listening is not well understood. In the literature, IM effects of up to 10 dB in measured speech reception thresholds (SRTs) are reported. However, these experiments typically employed simplified spatial configurations and speech corpora that magnified confusions. In this study, SRTs were measured with normal-hearing subjects in a simulated cafeteria environment. The environment was reproduced by a 41-channel 3D loudspeaker array. The target talker was 2 m in front of the listener, and masking talkers were either spread throughout the room or colocated with the target. Three types of maskers were realized: one with the same talker as the target (maximum IM), one with talkers different from the target, and one with unintelligible, noise-vocoded talkers (minimal IM). Overall, SRTs improved for the spatially distributed conditions compared to the colocated conditions. Within the spatially distributed conditions, there was no significant difference between thresholds with the different- and vocoded-talker maskers. Conditions with the same-talker masker were the only conditions with substantially higher thresholds, especially in the colocated conditions. These results suggest that IM related to target-masker confusions, at least for normal-hearing listeners, is of low relevance in real-life listening.
Improving Your Child's Listening and Language Skills: A Parent's Guide to Language Development.
ERIC Educational Resources Information Center
Johnson, Ruth; And Others
The parent's guide reviews normal speech and language development and discusses ways in which parents of young children with language problems facilitate that development. Terms such as speech, communication, and receptive and expressive language are defined, and stages in receptive/expressive language development are charted. Implications for…
The Relationship between Socio-Economic Status and Lexical Development
ERIC Educational Resources Information Center
Black, Esther; Peppe, Sue; Gibbon, Fiona
2008-01-01
The British Picture Vocabulary Scale, second edition (BPVS-II), a measure of receptive vocabulary, is widely used by speech and language therapists and researchers into speech and language disorders, as an indicator of language delay, but it has frequently been suggested that receptive vocabulary may be more associated with socio-economic status.…
An Experimental Study of Interference between Receptive and Productive Processes Involving Speech
ERIC Educational Resources Information Center
Goldman-Eisler, Frieda; Cohen, Michele
1975-01-01
Reports an experiment designed to throw light on the interference between the reception and production of speech by controlling the level of interference between decoding and encoding, using hesitancy as an indicator of interference. This proved effective in spotting the levels at which interference takes place. (Author/RM)
Role of working memory and lexical knowledge in perceptual restoration of interrupted speech.
Nagaraj, Naveen K; Magimairaj, Beula M
2017-12-01
The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for speech noise filled condition than other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. It was only receptive vocabulary that uniquely predicted PR for fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.
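The interruption paradigm above is easy to reproduce: gate the speech with a 1.5-Hz square wave and optionally fill the gaps. A sketch assuming a 50% duty cycle and a caller-supplied filler signal:

```python
# Sketch of the interrupted-speech paradigm: speech gated on and off at
# 1.5 Hz (50% duty cycle assumed), with the silent gaps either left silent
# or filled with a supplied signal, e.g. low-frequency speech-shaped noise.
import numpy as np

def interrupt_speech(speech, fs, rate_hz=1.5, filler=None):
    t = np.arange(len(speech)) / fs
    gate = (np.sin(2 * np.pi * rate_hz * t) >= 0).astype(float)
    out = speech * gate
    if filler is not None:                       # fill the gaps
        out += filler[: len(speech)] * (1 - gate)
    return out
```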
Characterizing Speech Intelligibility in Noise After Wide Dynamic Range Compression.
Rhebergen, Koenraad S; Maalderink, Thijs H; Dreschler, Wouter A
The effects of nonlinear signal processing on speech intelligibility in noise are difficult to evaluate. Often, the effects are examined by comparing speech intelligibility scores with and without processing measured at fixed signal-to-noise ratios (SNRs) or by comparing adaptively measured speech reception thresholds corresponding to 50% intelligibility (SRT50) with and without processing. These outcome measures might not be optimal. Measuring at fixed SNRs can be affected by ceiling or floor effects, because the range of relevant SNRs is not known in advance. Measuring the SRT50 is less time consuming and has a fixed performance level (i.e., 50% correct), but it could give a limited view, because we hypothesize that the effect of most nonlinear signal processing algorithms at the SRT50 cannot be generalized to other points of the psychometric function. In this article, we tested the value of estimating the entire psychometric function. We studied the effect of wide dynamic range compression (WDRC) on speech intelligibility in stationary and interrupted speech-shaped noise in normal-hearing subjects, using a fast method based on a local linear fitting approach and two adaptive procedures. The measured performance differences between conditions with and without WDRC for the psychometric functions in stationary noise and interrupted speech-shaped noise show that the effects of WDRC on speech intelligibility are SNR dependent. We conclude that favorable and unfavorable effects of WDRC on speech intelligibility can be missed if the results are presented in terms of SRT50 values only.
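To make the distinction concrete, the sketch below fits a logistic psychometric function to percent-correct scores measured at fixed SNRs and then reads off both the 50% point (SRT50) and the 80% point. All data and parameter values are invented for illustration, and the logistic fit merely stands in for the local linear fitting approach the study actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt50, slope):
    """Percent correct vs. SNR; 'slope' is the midpoint slope in %/dB."""
    return 100.0 / (1.0 + np.exp(-4.0 * slope / 100.0 * (snr - srt50)))

# Invented percent-correct scores at fixed SNRs for one processing condition
snrs   = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
scores = np.array([  8.0, 22.0, 48.0, 71.0, 90.0, 97.0])

(srt50, slope), _ = curve_fit(logistic, snrs, scores, p0=(-6.0, 10.0))

# Inverting the fit gives any other point, e.g. the 80%-correct SNR --
# exactly the information an SRT50-only comparison throws away.
srt80 = srt50 - np.log(100.0 / 80.0 - 1.0) * 100.0 / (4.0 * slope)
print(f"SRT50 = {srt50:.1f} dB, slope = {slope:.1f} %/dB, SRT80 = {srt80:.1f} dB")
```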
ERIC Educational Resources Information Center
Law, J.; Campbell, C.; Roulstone, S.; Adams, C.; Boyle, J.
2008-01-01
Background: Receptive language impairment (RLI) is one of the most significant indicators of negative sequelae for children with speech and language disorders. Despite this, relatively little is known about the most effective treatments for these children in the primary school period. Aims: To explore the relationship between the reported practice…
Barton-Hulsey, Andrea; Sevcik, Rose A; Romski, MaryAnn
2018-05-03
A number of intrinsic factors, including expressive speech skills, have been suggested to place children with developmental disabilities at risk for limited development of reading skills. This study examines the relationship between these factors, speech ability, and children's phonological awareness skills. A nonexperimental study design was used to examine the relationship between intrinsic skills of speech, language, print, and letter-sound knowledge to phonological awareness in 42 children with developmental disabilities between the ages of 48 and 69 months. Hierarchical multiple regression was done to determine if speech ability accounted for a unique amount of variance in phonological awareness skill beyond what would be expected by developmental skills inclusive of receptive language and print and letter-sound knowledge. A range of skill in all areas of direct assessment was found. Children with limited speech were found to have emerging skills in print knowledge, letter-sound knowledge, and phonological awareness. Speech ability did not predict a significant amount of variance in phonological awareness beyond what would be expected by developmental skills of receptive language and print and letter-sound knowledge. Children with limited speech ability were found to have receptive language and letter-sound knowledge that supported the development of phonological awareness skills. This study provides implications for practitioners and researchers concerning the factors related to early reading development in children with limited speech ability and developmental disabilities.
Segmental and Suprasegmental Perception in Children Using Hearing Aids.
Wenrich, Kaitlyn A; Davidson, Lisa S; Uchanski, Rosalie M
Suprasegmental perception (perception of stress, intonation, "how something is said" and "who says it") and segmental speech perception (perception of individual phonemes, or perception of "what is said") are perceptual abilities that provide the foundation for the development of spoken language and effective communication. While there are numerous studies examining segmental perception in children with hearing aids (HAs), there are far fewer studies examining suprasegmental perception, especially for children with greater degrees of residual hearing. Examining the relation between acoustic hearing thresholds and both segmental and suprasegmental perception for children with HAs may ultimately enable better device recommendations (bilateral HAs, bimodal devices [one cochlear implant (CI) and one HA in opposite ears], bilateral CIs) for a particular degree of residual hearing. Examining both types of speech perception is important because segmental and suprasegmental cues are affected differentially by the type of hearing device(s) used (i.e., CI and/or HA). Additionally, suprathreshold measures, such as frequency resolution ability, may partially predict benefit from amplification and may assist audiologists in making hearing device recommendations. The purpose of this study is to explore the relationship between audibility (via hearing thresholds and speech intelligibility indices) and segmental and suprasegmental speech perception for children with HAs. A secondary goal is to explore the relationships among frequency resolution ability (via spectral modulation detection [SMD] measures), segmental and suprasegmental speech perception, and receptive language in these same children. A prospective cross-sectional design was used. Twenty-three children, ages 4 yr 11 mo to 11 yr 11 mo, participated in the study. Participants were recruited from pediatric clinic populations, oral schools for the deaf, and mainstream schools. Audiological history and hearing device information were collected from participants and their families. Segmental and suprasegmental speech perception, SMD, and receptive vocabulary skills were assessed. Correlations were calculated to examine the significance (p < 0.05) of relations between audibility and outcome measures. Measures of audibility and segmental speech perception are not significantly correlated, while low-frequency pure-tone average (unaided) is significantly correlated with suprasegmental speech perception. SMD is significantly correlated with all measures (measures of audibility, segmental and suprasegmental perception, and vocabulary). Lastly, although age is not significantly correlated with measures of audibility, it is significantly correlated with all other outcome measures. The absence of a significant correlation between audibility and segmental speech perception might be attributed to overall audibility being maximized through well-fit HAs. The significant correlation between low-frequency unaided audibility and suprasegmental measures is likely due to the strong, predominantly low-frequency nature of suprasegmental acoustic properties. Frequency resolution ability, via SMD performance, is significantly correlated with all outcomes and requires further investigation; its significant correlation with vocabulary suggests that linguistic ability may be partially related to frequency resolution ability. Last, all of the outcome measures are significantly correlated with age, suggestive of developmental effects. American Academy of Audiology
Oral motor deficits in speech-impaired children with autism
Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha
2013-01-01
Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480
ERIC Educational Resources Information Center
Roberts, Joanne; Price, Johanna; Barnes, Elizabeth; Nelson, Lauren; Burchinal, Margaret; Hennon, Elizabeth A.; Moskowitz, Lauren; Edwards, Anne; Malkin, Cheryl; Anderson, Kathleen; Misenheimer, Jan; Hooper, Stephen R.
2007-01-01
Boys with fragile X syndrome with (n = 49) and without (n = 33) characteristics of autism spectrum disorder, boys with Down syndrome (n = 39), and typically developing boys (n = 41) were compared on standardized measures of receptive vocabulary, expressive vocabulary, and speech administered annually over 4 years. Three major findings emerged. Boys…
The spatial unmasking of speech: evidence for within-channel processing of interaural time delay.
Edmonds, Barrie A; Culling, John F
2005-05-01
Across-frequency processing by common interaural time delay (ITD) in spatial unmasking was investigated by measuring speech reception thresholds (SRTs) for high- and low-frequency bands of target speech presented against concurrent speech or a noise masker. Experiment 1 indicated that presenting one of these target bands with an ITD of +500 μs and the other with zero ITD (like the masker) provided some release from masking, but full binaural advantage was only measured when both target bands were given an ITD of +500 μs. Experiment 2 showed that full binaural advantage could also be achieved when the high- and low-frequency bands were presented with ITDs of equal but opposite magnitude (±500 μs). In experiment 3, the masker was also split into high- and low-frequency bands with ITDs of equal but opposite magnitude (±500 μs). The ITD of the low-frequency target band matched that of the high-frequency masking band and vice versa. SRTs indicated that, as long as the target and masker differed in ITD within each frequency band, full binaural advantage could be achieved. These results suggest that the mechanism underlying spatial unmasking exploits differences in ITD independently within each frequency channel.
Goverts, S Theo; Huysmans, Elke; Kramer, Sophia E; de Groot, Annette M B; Houtgast, Tammo
2011-12-01
Researchers have used the distortion-sensitivity approach in the psychoacoustical domain to investigate the role of auditory processing abilities in speech perception in noise (van Schijndel, Houtgast, & Festen, 2001; Goverts & Houtgast, 2010). In this study, the authors examined the potential applicability of the distortion-sensitivity approach for investigating the role of linguistic abilities in speech understanding in noise. The authors applied the distortion-sensitivity approach by measuring the processing of visually presented masked text in a condition with manipulated syntactic, lexical, and semantic cues and while using the Text Reception Threshold (George et al., 2007; Kramer, Zekveld, & Houtgast, 2009; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) method. Two groups that differed in linguistic abilities were studied: 13 native and 10 non-native speakers of Dutch, all typically hearing university students. As expected, the non-native subjects showed substantially reduced performance. The results of the distortion-sensitivity approach yielded differentiated results on the use of specific linguistic cues in the 2 groups. The results show the potential value of the distortion-sensitivity approach in studying the role of linguistic abilities in speech understanding in noise of individuals with hearing impairment.
Phase effects in masking by harmonic complexes: speech recognition.
Deroche, Mickael L D; Culling, John F; Chatterjee, Monita
2013-12-01
Harmonic complexes that generate highly modulated temporal envelopes on the basilar membrane (BM) mask a tone less effectively than complexes that generate relatively flat temporal envelopes, because the non-linear active gain of the BM selectively amplifies a low-level tone in the dips of a modulated masker envelope. The present study examines a similar effect in speech recognition. Speech reception thresholds (SRTs) were measured for a voice masked by harmonic complexes with partials in sine phase (SP) or in random phase (RP). The masker's fundamental frequency (F0) was 50, 100 or 200 Hz. SRTs were considerably lower for SP than for RP maskers at 50-Hz F0, but the two converged at 100-Hz F0, while at 200-Hz F0, SRTs were a little higher for SP than RP maskers. The results were similar whether the target voice was male or female and whether the masker's spectral profile was flat or speech-shaped. Although listening in the masker dips has been shown to play a large role for artificial stimuli such as Schroeder-phase complexes at high levels, it contributes weakly to speech recognition in the presence of harmonic maskers with different crest factors at more moderate sound levels (65 dB SPL). Copyright © 2013 Elsevier B.V. All rights reserved.
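The sine-phase versus random-phase contrast above is straightforward to reproduce. The sketch below, with arbitrarily chosen F0, bandwidth, and duration, synthesizes both masker types and compares their envelope modulation via the crest factor (peak-to-RMS ratio); higher crest factors correspond to more modulated temporal envelopes.

```python
import numpy as np

fs, dur, f0 = 16000, 0.5, 50.0        # sample rate (Hz), duration (s), F0 (Hz)
t = np.arange(int(fs * dur)) / fs
partials = np.arange(f0, 5000.0, f0)  # equal-amplitude harmonics up to 5 kHz

rng = np.random.default_rng(0)
sp = sum(np.sin(2 * np.pi * f * t) for f in partials)  # sine phase
rp = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
         for f in partials)                            # random phase

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; higher = more modulated temporal envelope."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

print(f"sine phase:   {crest_factor_db(sp):.1f} dB")
print(f"random phase: {crest_factor_db(rp):.1f} dB")
```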
Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A
2017-10-01
The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3, the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing impaired military personnel. The results display good test-retest reliability (95% confidence interval (CI) < 2.1 dB) and concurrent validity when compared to the triple-digit test (r ≤ 0.65), and the CRM is sensitive to hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
From innervation density to tactile acuity: 1. Spatial representation.
Brown, Paul B; Koerber, H Richard; Millecchia, Ronald
2004-06-11
We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.
Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme; Olsen, Steen Østergaard
2014-02-01
Outcome measures can be used to improve the quality of rehabilitation by identifying and understanding which variables influence the outcome. This information can be used to improve outcomes for clients. In clinical practice, pure-tone audiometry, speech reception thresholds (SRTs), and speech discrimination scores (SDSs) in quiet or in noise are common assessments made prior to hearing aid (HA) fittings. It is not known whether SRT and SDS in quiet relate to HA outcome measured with the International Outcome Inventory for Hearing Aids (IOI-HA). The aim of the present study was to investigate the relationship between pure-tone average (PTA), SRT, and SDS in quiet and IOI-HA in both first-time and experienced HA users. SRT and SDS were measured in a sample of HA users who also responded to the IOI-HA. Fifty-eight Danish-speaking adult HA users participated. The psychometric properties were evaluated and compared to previous studies using the IOI-HA. The associations and differences between the outcome scores and a number of descriptive variables (age, gender, fitted monaurally/binaurally with HA, first-time/experienced HA users, years of HA use, time since last HA fitting, best ear PTA, best ear SRT, or best ear SDS) were examined. A multiple forward stepwise regression analysis was conducted using scores on the separate IOI-HA items, the global score, and scores on the introspection and interaction subscales as dependent variables to examine whether the descriptive variables could predict these outcome measures. Scores on single IOI-HA items, the global score, and scores on the introspection (items 1, 2, 4, and 7) and interaction (items 3, 5, and 6) subscales closely resemble those previously reported. Multiple regression analysis showed that the best ear SDS predicts about 18-19% of the outcome on items 3 and 5 separately, and about 16% on the interaction subscale (sum of items 3, 5, and 6). The best ear SDS thus explains some of the variance displayed in the IOI-HA global score and the interaction subscale. The relation between SDS and IOI-HA suggests that a poor unaided SDS might in itself be a limiting factor for HA rehabilitation efficacy and hence the IOI-HA outcome. The clinician could use this information to align the user's HA expectations with what is within possible reach. American Academy of Audiology.
Rajan, R; Cainer, K E
2008-06-23
In most everyday settings, speech is heard in the presence of competing sounds and understanding speech requires skills in auditory streaming and segregation, followed by identification and recognition, of the attended signals. Ageing leads to difficulties in understanding speech in noisy backgrounds. In addition to age-related changes in hearing-related factors, cognitive factors also play a role but it is unclear to what extent these are generalized or modality-specific cognitive factors. We examined how ageing in normal-hearing decade age cohorts from 20 to 69 years affected discrimination of open-set speech in background noise. We used two types of sentences of similar structural and linguistic characteristics but different masking levels (i.e. differences in signal-to-noise ratios required for detection of sentences in a standard masker) so as to vary sentence demand, and two background maskers (one causing purely energetic masking effects and the other causing energetic and informational masking) to vary load conditions. There was a decline in performance (measured as speech reception thresholds for perception of sentences in noise) in the oldest cohort for both types of sentences, but only in the presence of the more demanding informational masker. We interpret these results to indicate a modality-specific decline in cognitive processing, likely a decrease in the ability to use acoustic and phonetic cues efficiently to segregate speech from background noise, in subjects aged >60.
The RetroX auditory implant for high-frequency hearing loss.
Garin, P; Genard, F; Galle, C; Jamart, J
2004-07-01
The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensorineural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.
Gage, Nicole M; Eliashiv, Dawn S; Isenberg, Anna L; Fillmore, Paul T; Kurelowech, Lacey; Quint, Patti J; Chung, Jeffrey M; Otis, Shirley M
2011-06-01
Neuroimaging studies have shed light on cortical language organization, with findings implicating the left and right temporal lobes in speech processing converging to a left-dominant pattern. Findings highlight the fact that the state of theoretical language knowledge is ahead of current clinical language mapping methods, motivating a rethinking of these approaches. The authors used magnetoencephalography and multiple tasks in seven candidates for resective epilepsy surgery to investigate language organization. The authors scanned 12 control subjects to investigate the time course of bilateral receptive speech processes. Laterality indices were calculated for left and right hemisphere late fields ∼150 to 400 milliseconds. The authors report that (1) in healthy adults, speech processes activated superior temporal regions bilaterally converging to a left-dominant pattern, (2) in four of six patients, this was reversed, with bilateral processing converging to a right-dominant pattern, and (3) in three of four of these patients, receptive and expressive language processes were laterally discordant. Results provide evidence that receptive and expressive language may have divergent hemispheric dominance. Right-sided receptive language dominance in epilepsy patients emphasizes the need to assess both receptive and expressive language. Findings indicate that it is critical to use multiple tasks tapping separable aspects of language function to provide sensitive and specific estimates of language localization in surgical patients.
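The abstract does not give the formula behind its laterality indices. A common convention, assumed here purely for illustration, normalizes the left-right amplitude difference by the sum:

```python
# A common laterality-index convention (assumed here, not taken from the
# study): LI = (L - R) / (L + R), so LI near +1 means left-dominant and
# LI near -1 means right-dominant processing.
def laterality_index(left_amp, right_amp):
    return (left_amp - right_amp) / (left_amp + right_amp)

# Hypothetical late-field (~150-400 ms) response amplitudes, arbitrary units
print(laterality_index(left_amp=4.2, right_amp=2.8))   # ~0.2 -> left-dominant
```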
Koelewijn, Thomas; Versfeld, Niek J; Kramer, Sophia E
2017-10-01
For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Gorgias on Madison Avenue: Sophistry and the Rhetoric of Advertising.
ERIC Educational Resources Information Center
Matcuk, Matt
Using sophistic theory and focusing on intersections in the practice and reception of sophistry and advertising, a study analyzed a contemporary advertising campaign. A number of extrinsic similarities between sophistic and advertising rhetoric exist: their commercial basis, their popular reception as dishonest speech, and the reception of both as…
Baumgärtel, Regina M; Hu, Hongmei; Krawczyk-Becker, Martin; Marquardt, Daniel; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Bomke, Katrin; Plotz, Karsten; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-12-30
Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users. © The Author(s) 2015.
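As a rough sketch of the core computation behind an MVDR beamformer, the snippet below derives the classic weight vector w = R⁻¹d / (dᴴR⁻¹d) for one frequency bin, where R is the noise covariance matrix across microphones and d is the steering vector toward the target. The binaural variants evaluated in the study add cue-preservation constraints that are omitted here, and the scenario values are invented.

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """w = R^-1 d / (d^H R^-1 d): minimize noise power subject to w^H d = 1."""
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)

# Hypothetical 4-microphone scenario at a single frequency bin
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
noise_cov = a @ a.conj().T + 1e-3 * np.eye(4)        # Hermitian positive definite
steering = np.exp(-2j * np.pi * np.arange(4) * 0.1)  # inter-mic phase delays

w = mvdr_weights(noise_cov, steering)
print(abs(w.conj() @ steering))                      # distortionless constraint: 1.0
```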
Evaluating a smartphone digits-in-noise test as part of the audiometric test battery.
Potgieter, Jenni-Mari; Swanepoel, De Wet; Smits, Cas
2018-05-21
Speech-in-noise tests have become a valuable part of the audiometric test battery, providing an indication of a listener's ability to function in background noise. A simple digits-in-noise (DIN) test could be valuable to support diagnostic hearing assessments, hearing aid fittings and counselling for both paediatric and adult populations. Objective: To evaluate the South African English smartphone DIN test's performance as part of the audiometric test battery. Design: This descriptive study evaluated 109 adult subjects (43 male and 66 female) with and without sensorineural hearing loss by comparing pure-tone air conduction thresholds, speech recognition monaural performance scores (SRS dB) and the DIN speech reception threshold (SRT). An additional nine adult hearing aid users (four male and five female) were included in a subset to determine aided and unaided DIN SRTs. Results: The DIN SRT is strongly associated with the best ear four-frequency pure-tone average (4FPTA) (rs = 0.81) and maximum SRS dB (r = 0.72). The DIN test had high sensitivity and specificity to identify abnormal pure-tone (0.88 and 0.88, respectively) and SRS dB (0.76 and 0.88, respectively) results. There was a mean signal-to-noise ratio (SNR) improvement in the aided condition that demonstrated an overall benefit of 0.84 SNR dB. Conclusion: The DIN SRT was significantly correlated with the best ear 4FPTA and maximum SRS dB. The DIN SRT provides a useful measure of speech recognition in noise that can evaluate hearing aid fittings and support counselling and the management of hearing expectations.
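For reference, the sensitivity and specificity figures quoted above follow from a standard 2×2 screening table; the sketch below uses invented counts chosen only to land near the reported values.

```python
# Minimal sketch: sensitivity and specificity of a screening measure (here,
# an abnormal DIN SRT flagging an abnormal pure-tone or speech-score result).
# The counts are invented, chosen only to land near the reported 0.88/0.88.
def sens_spec(tp, fn, tn, fp):
    """tp/fn: abnormal cases caught/missed; tn/fp: normal cases passed/flagged."""
    sensitivity = tp / (tp + fn)   # proportion of true abnormals detected
    specificity = tn / (tn + fp)   # proportion of true normals cleared
    return sensitivity, specificity

print(sens_spec(tp=44, fn=6, tn=52, fp=7))  # -> (0.88, ~0.88)
```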
Role of masker predictability in the cocktail party problem
Jones, Gary L.; Litovsky, Ruth Y.
2008-01-01
In studies of the cocktail party problem, the number and locations of maskers are typically fixed throughout a block of trials, which leaves out uncertainty that exists in real-world environments. The current experiments examined whether there is (1) improved speech intelligibility and (2) increased spatial release from masking (SRM), as predictability of the number/locations of speech maskers is increased. In the first experiment, subjects identified a target word presented at a fixed level in the presence of 0, 1, or 2 maskers as predictability of the masker configuration ranged from 10% to 80%. The second experiment examined speech reception thresholds and SRM as (a) predictability of the masker configuration is increased from 20% to 80% and/or (b) the complexity of the listening environment is decreased. In the third experiment, predictability of the masker configuration was increased from 20% up to 100% while minimizing the onset delay between maskers and the target. All experiments showed no effect of predictability of the masker configuration on speech intelligibility or SRM. These results suggest that knowing the number and location(s) of maskers may not necessarily contribute significantly to solving the cocktail party problem, at least not when the location of the target is known. PMID:19206808
Selective auditory attention in adults: effects of rhythmic structure of the competing language.
Reel, Leigh Ann; Hicks, Candace Bourland
2012-02-01
The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.
Speech recognition by bilateral cochlear implant users in a cocktail-party setting
Loizou, Philipos C.; Hu, Yi; Litovsky, Ruth; Yu, Gongqiang; Peters, Robert; Lake, Jennifer; Roland, Peter
2009-01-01
Unlike prior studies with bilateral cochlear implant users which considered only one interferer, the present study considered realistic listening situations wherein multiple interferers were present and in some cases originating from both hemifields. Speech reception thresholds were measured in bilateral users unilaterally and bilaterally in four different spatial configurations, with one and three interferers consisting of modulated noise or competing talkers. The data were analyzed in terms of binaural benefits including monaural advantage (better-ear listening) and binaural interaction. The total advantage (overall spatial release) received was 2–5 dB and was maintained with multiple interferers present. This advantage was dominated by the monaural advantage, which ranged from 1 to 6 dB and was largest when the interferers were mostly energetic. No binaural-interaction benefit was found in the present study with either type of interferer (speech or noise). While the total and monaural advantage obtained for noise interferers was comparable to that attained by normal-hearing listeners, it was considerably lower for speech interferers. This suggests that bilateral users are less capable of taking advantage of binaural cues, in particular, under conditions of informational masking. Furthermore, the use of noise interferers does not adequately reflect the difficulties experienced by bilateral users in real-life situations. PMID:19173424
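The benefit decomposition used above reduces to simple SRT arithmetic: overall spatial release is the SRT change from the colocated to the separated configuration, the monaural advantage is the part available to the better ear alone, and binaural interaction is the residual. The reference conditions vary across studies, and the SRT values below are invented.

```python
# Minimal sketch of the benefit decomposition: overall spatial release is
# split into a monaural (better-ear) advantage plus a binaural-interaction
# residual. All SRT values (dB) are invented.
def decompose(srt_colocated, srt_separated_binaural, srt_separated_better_ear):
    total = srt_colocated - srt_separated_binaural       # overall spatial release
    monaural = srt_colocated - srt_separated_better_ear  # better-ear listening alone
    return {"total": total,
            "monaural": monaural,
            "binaural_interaction": total - monaural}

print(decompose(srt_colocated=2.0,
                srt_separated_binaural=-2.0,
                srt_separated_better_ear=-1.5))
# -> {'total': 4.0, 'monaural': 3.5, 'binaural_interaction': 0.5}
```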
Potgieter, Jenni-Marí; Swanepoel, De Wet; Myburgh, Hermanus Carel; Hopper, Thomas Christopher; Smits, Cas
2015-07-01
The objective of this study was to develop and validate a smartphone-based digits-in-noise hearing test for South African English. Single digits (0-9) were recorded, spoken by a first-language English female speaker. Level corrections were applied to create a set of homogeneous digits with steep speech recognition functions. A smartphone application was created to utilize 120 digit-triplets in noise as test material. An adaptive test procedure determined the speech reception threshold (SRT). Experiments were performed to determine headphone effects on the SRT and to establish normative data. Participants consisted of 40 normal-hearing subjects with thresholds ≤15 dB across the frequency spectrum (250-8000 Hz) and 186 subjects with normal hearing in both ears, or normal hearing in the better ear. The results show steep speech recognition functions with a slope of 20%/dB for digit-triplets presented in noise using the smartphone application. The results for five headphone types indicate that the smartphone-based hearing test is reliable and can be conducted using standard Android smartphone headphones or clinical headphones. A digits-in-noise hearing test was developed and validated for South Africa. The mean SRT and speech recognition functions correspond to previously developed telephone-based digits-in-noise tests.
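Digits-in-noise SRTs of this kind are typically obtained with a simple one-up/one-down adaptive track, which converges on the 50%-correct SNR. The sketch below is a generic illustration with an invented step size, trial count, and simulated listener, not the specific procedure of the published test.

```python
import numpy as np

def run_din_track(respond_correct, n_trials=24, start_snr=0.0, step=2.0):
    """One-up/one-down track: SNR falls after a correct triplet, rises after
    an error, converging on the 50%-correct point. The SRT estimate is the
    mean SNR over the adaptive trials (first few discarded as approach)."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond_correct(snr) else step
    return float(np.mean(track[4:]))

# Simulated listener with a true SRT near -10 dB SNR (plus occasional guesses)
rng = np.random.default_rng(2)
listener = lambda snr: snr > -10.0 or rng.random() < 0.1
print(f"estimated SRT: {run_din_track(listener):.1f} dB SNR")
```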
A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments
Mi, Jing; Colburn, H. Steven
2016-01-01
Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. PMID:27698261
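The core of the proposed mask estimation can be sketched compactly: for each time-frequency unit, apply an EC operation steered at the target and compare input and output energy; units where cancellation removes most of the energy are target-dominated. The snippet below assumes a zero-delay frontal target, so EC reduces to a left-minus-right subtraction, and the 6 dB threshold is an invented placeholder.

```python
import numpy as np

def ec_binary_mask(left_tf, right_tf, threshold_db=6.0):
    """left_tf, right_tf: complex spectrograms (freq x time) of the ear signals.
    Cancel the (zero-delay, frontal) target by subtracting the ears; a large
    energy drop means the unit was dominated by the target -> mask value 1."""
    in_energy = (np.abs(left_tf) ** 2 + np.abs(right_tf) ** 2) / 2.0
    out_energy = np.abs(left_tf - right_tf) ** 2          # EC residual
    drop_db = 10 * np.log10(in_energy / np.maximum(out_energy, 1e-12))
    return (drop_db > threshold_db).astype(float)

# Toy check: nearly identical ear signals (pure frontal target) -> all ones
rng = np.random.default_rng(3)
left = rng.standard_normal((64, 100)) + 1j * rng.standard_normal((64, 100))
print(ec_binary_mask(left, 0.9 * left).mean())            # 1.0
```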
Neher, Tobias; Wagener, Kirsten C; Latzel, Matthias
2017-09-01
Hearing aid (HA) users can differ markedly in their benefit from directional processing (or beamforming) algorithms. The current study therefore investigated candidacy for different bilateral directional processing schemes. Groups of elderly listeners with symmetric (N = 20) or asymmetric (N = 19) hearing thresholds for frequencies below 2 kHz, a large spread in the binaural intelligibility level difference (BILD), and no difference in age, overall degree of hearing loss, or performance on a measure of selective attention took part. Aided speech reception was measured using virtual acoustics together with a simulation of a linked pair of completely occluding behind-the-ear HAs. Five processing schemes and three acoustic scenarios were used. The processing schemes differed in the tradeoff between signal-to-noise ratio (SNR) improvement and binaural cue preservation. The acoustic scenarios consisted of a frontal target talker presented against two speech maskers from ±60° azimuth or spatially diffuse cafeteria noise. For both groups, a significant interaction between BILD, processing scheme, and acoustic scenario was found. This interaction implied that, in situations with lateral speech maskers, HA users with BILDs larger than about 2 dB profited more from preserved low-frequency binaural cues than from greater SNR improvement, whereas for smaller BILDs the opposite was true. Audiometric asymmetry reduced the influence of binaural hearing. In spatially diffuse noise, the maximal SNR improvement was generally beneficial. N0Sπ detection performance at 500 Hz predicted the benefit from low-frequency binaural cues. Together, these findings provide a basis for adapting bilateral directional processing to individual and situational influences. Further research is needed to investigate their generalizability to more realistic HA conditions (e.g., with low-frequency vent-transmitted sound). Copyright © 2017 Elsevier B.V. All rights reserved.
Mesnildrey, Quentin; Hilkhuysen, Gaston; Macherey, Olivier
2016-02-01
Noise- and sine-carrier vocoders are often used to acoustically simulate the information transmitted by a cochlear implant (CI). However, sine-waves fail to mimic the broad spread of excitation produced by a CI and noise-bands contain intrinsic modulations that are absent in CIs. The present study proposes pulse-spreading harmonic complexes (PSHCs) as an alternative acoustic carrier in vocoders. Sentence-in-noise recognition was measured in 12 normal-hearing subjects for noise-, sine-, and PSHC-vocoders. Consistent with the amount of intrinsic modulations present in each vocoder condition, the average speech reception threshold obtained with the PSHC-vocoder was higher than with sine-vocoding but lower than with noise-vocoding.
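A generic channel vocoder of the kind compared here band-pass filters the input, extracts each band's envelope, and re-imposes the envelopes on a carrier. The sketch below uses a noise carrier; sine or PSHC vocoding would substitute a different carrier per band. Band edges, filter orders, and the envelope method are illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def vocode(speech, fs, band_edges=(100, 400, 1000, 2400, 6000)):
    """Analyze speech into bands, extract envelopes, modulate noise carriers."""
    rng = np.random.default_rng(4)
    out = np.zeros_like(speech)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        envelope = np.abs(hilbert(band))                          # band envelope
        carrier = sosfilt(sos, rng.standard_normal(len(speech)))  # noise carrier
        out += envelope * carrier
    return out

fs = 16000
speech = np.random.default_rng(5).standard_normal(fs)  # stand-in for a speech file
print(vocode(speech, fs).shape)                        # (16000,)
```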
Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim
2017-02-01
This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit of receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of the auditory system, and (3) the upper frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper limit for preserved acoustic hearing is less influential on the speech intelligibility of EA-listeners in stationary noise. The model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela
2015-01-01
Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the retelling ability of children with CP and speech impairment. Fifteen speaking children with speech impairment (seven girls, eight boys) (mean age = 11 years, SD = 1;4 years), and different types of CP and different levels of gross motor and cognitive function participated in the present study. Story retelling skills were tested and analysed with the Bus Story Test (BST) and the Narrative Assessment Profile (NAP). Receptive language ability was tested with the Test for Reception of Grammar-2 (TROG-2) and the Peabody Picture Vocabulary Test - IV (PPVT-IV). Non-verbal cognitive level was tested with the Raven's coloured progressive matrices (RCPM), memory functions assessed with the Corsi block-tapping task (CB) and the Digit Span from the Wechsler Intelligence Scale for Children-III. ToM was assessed with the false belief items of the two story tests "Kiki and the Cat" and "Birthday Puppy". The children had severe problems with retelling ability corresponding to an age-equivalent of 5;2-6;9 years. Receptive and expressive language, visuo-spatial and auditory memory, non-verbal cognitive level and ToM varied widely within and among the children. Both expressive and receptive language correlated significantly with narrative ability in terms of NAP total scores, so did auditory memory. The results suggest that retelling ability in the children with CP in the present study is dependent on language comprehension and production, and memory functions. Consequently, it is important to examine retelling ability together with language and cognitive abilities in these children in order to provide appropriate support. © 2015 Royal College of Speech and Language Therapists.
Speech and language development in 2-year-old children with cerebral palsy.
Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine
2014-06-01
We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.
NASA Technical Reports Server (NTRS)
Creecy, R.
1974-01-01
A speech-modulated white noise device is reported that conveys the rhythmic characteristics of a speech signal for intelligible reception by deaf persons. The signal is composed of random amplitudes and frequencies modulated by the speech envelope characteristics of rhythm and stress. Time-intensity parameters of speech are conveyed through vibrotactile stimulation.
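The device's principle as described reduces to multiplying broadband noise by a smoothed speech envelope, preserving rhythm and stress while discarding spectral detail. A minimal sketch, with an invented envelope cutoff and a stand-in input signal, follows.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def speech_modulated_noise(speech, fs, env_cutoff_hz=16.0):
    """White noise carrying only the speech envelope (rhythm/stress cues)."""
    sos = butter(2, env_cutoff_hz, btype="low", fs=fs, output="sos")
    envelope = np.clip(sosfiltfilt(sos, np.abs(speech)), 0.0, None)  # rectify+smooth
    noise = np.random.default_rng(6).standard_normal(len(speech))
    return envelope * noise

fs = 16000
speech = np.random.default_rng(7).standard_normal(2 * fs)  # stand-in for speech
print(speech_modulated_noise(speech, fs).shape)            # (32000,)
```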
The multilingual matrix test: Principles, applications, and comparison across languages: A review.
Kollmeier, Birger; Warzybok, Anna; Hochmuth, Sabine; Zokoll, Melanie A; Uslar, Verena; Brand, Thomas; Wagener, Kirsten C
2015-01-01
A review of the development, evaluation, and application of the so-called 'matrix sentence test' for speech intelligibility testing in a multilingual society is provided. The format allows for repeated use with the same patient in her or his native language even if the experimenter does not understand the language. Using a closed-set format, the syntactically fixed, semantically unpredictable sentences (e.g. 'Peter bought eight white ships') provide a vocabulary of 50 words (10 alternatives for each position in the sentence). The principles (i.e. construction, optimization, evaluation, and validation) for 14 different languages are reviewed. Studies of the influence of talker, language, noise, the training effect, open vs. closed conduct of the test, and the subjects' language proficiency are reported and application examples are discussed. The optimization principles result in a steep intelligibility function and a high homogeneity of the speech materials presented and test lists employed, yielding a high efficiency and excellent comparability across languages. The characteristics of speakers generally dominate the differences across languages. The matrix test format with the principles outlined here is recommended for producing efficient, reliable, and comparable speech reception thresholds across different languages.
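The matrix format described above is easy to illustrate: five word positions with ten fixed alternatives each, drawn independently per trial, yield 10^5 syntactically fixed but semantically unpredictable sentences. The word lists below are invented stand-ins, not any actual test's optimized vocabulary.

```python
import random

# Illustrative 10-alternative word lists for each of the five positions;
# real matrix tests use vocabularies optimized for equal intelligibility.
MATRIX = {
    "name":      ["Peter", "Kathy", "Lucas", "Alan", "Nina",
                  "Oscar", "Rachel", "Simon", "Tina", "Victor"],
    "verb":      ["bought", "sees", "got", "wins", "keeps",
                  "gives", "sold", "likes", "has", "wants"],
    "number":    ["two", "three", "four", "five", "six",
                  "seven", "eight", "nine", "ten", "twelve"],
    "adjective": ["white", "big", "old", "dark", "cheap",
                  "green", "heavy", "new", "red", "small"],
    "noun":      ["ships", "chairs", "rings", "spoons", "toys",
                  "cards", "flowers", "houses", "stones", "windows"],
}

def matrix_sentence(rng):
    """Draw one word per position, e.g. 'Peter bought eight white ships'."""
    return " ".join(rng.choice(words) for words in MATRIX.values())

rng = random.Random(0)
for _ in range(3):
    print(matrix_sentence(rng))
```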
Hearing loss in former prisoners of war of the Japanese.
Grossman, T W; Kerr, H D; Byrd, J C
1996-09-01
To describe the prevalence, degree, and types of hearing loss present in a group of older American veterans who had been prisoners of war of the Japanese. A descriptive study at a Veterans Affairs university hospital. Seventy-five male veterans, mean age 68 (±3.6) years, participated; hearing aids were prescribed for eight veterans. Subjects were examined, and pure tone air and bone conduction, speech reception threshold, and speech discrimination were determined. Results were compared with age- and sex-matched controls from the largest recent American population study of hearing loss. 95% of subjects had been imprisoned longer than 33 months. Starvation conditions (100%), head trauma (85%), and trauma-related loss of consciousness (23%) were commonly reported. A total of 73% complained of hearing loss, and 29% (22/75) dated its onset to captivity. Most of those with the worst losses in hearing and speech discrimination were found in this subgroup. When the entire group was compared with published age- and sex-matched controls from the Framingham Study, no significant differences were found. We advocate screening examinations and long-term follow-up of populations with similar histories of starvation, head trauma, and torture.
Audiologic and subjective evaluation of Baha® Attract device.
Pérez-Carbonell, Tomàs; Pla-Gil, Ignacio; Redondo-Martínez, Jaume; Morant-Ventura, Antonio; García-Callejo, Francisco Javier; Marco-Algarra, Jaime
We included 9 patients implanted with the Baha® Attract. All patients were evaluated by free-field tonal audiometry, free-field verbal audiometry, and free-field verbal audiometry with background noise; all tests were performed with and without the device. To evaluate the subjective component of the implantation, we used the Glasgow Benefit Inventory (GBI) and the Abbreviated Profile of Hearing Aid Benefit (APHAB). The auditory assessment with the device showed average auditory thresholds of 35.8 dB, an improvement of 25.8 dB over the previous situation. Speech reception thresholds were 37 dB with the Baha® Attract, an improvement of 23 dB. Maximum discrimination thresholds showed an average gain of 60 dB with the device. The Baha® Attract achieves auditory improvements in patients for whom it is correctly indicated, with a consequent positive subjective evaluation. This study shows the attenuation effect of transcutaneous transmission, which prevents the device from achieving greater improvements. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. All rights reserved.
Segal, Nili; Shkolnik, Mark; Kochba, Anat; Segal, Avichai; Kraus, Mordechai
2007-01-01
We evaluated the correlation of asymmetric hearing loss, in a random population of patients with mild to moderate sensorineural hearing loss, with several clinical factors such as age, sex, handedness, and noise exposure. We randomly selected, from 8 hearing institutes in Israel, 429 patients with sensorineural hearing loss of at least 30 dB at one frequency and a speech reception threshold not exceeding 30 dB. Patients with middle ear disease or retrocochlear disorders were excluded. The results of audiometric examinations were compared binaurally and in relation to the selected factors. The left ear's hearing threshold level was significantly higher than that of the right ear at all frequencies except 1.0 kHz (p < .05). One hundred fifty patients (35%) had asymmetric hearing loss (more than 10 dB difference between ears). In most of the patients (85%), the binaural difference in hearing threshold level at any frequency was less than 20 dB. Age, handedness, and sex were not found to be correlated with asymmetric hearing loss. Noise exposure was found to be correlated with asymmetric hearing loss.
Naylor, Graham
2016-07-01
Adaptive Speech Reception Threshold in noise (SRTn) measurements are often used to make comparisons between alternative hearing aid (HA) systems. Such measurements usually do not constrain the signal-to-noise ratio (SNR) at which testing takes place. Meanwhile, HA systems increasingly include nonlinear features that operate differently in different SNRs, and listeners differ in their inherent SNR requirements. To show that SRTn measurements, as commonly used in comparisons of alternative HA systems, suffer from threats to their validity; to illustrate these threats with examples of potentially invalid conclusions in the research literature; and to propose ways to tackle these threats. An examination of the nature of SRTn measurements in the context of test theory, modern nonlinear HAs, and listener diversity. Examples from the audiological research literature were used to estimate typical interparticipant variation in SRTn and to illustrate cases where validity may have been compromised. There can be no doubt that SRTn measurements used to compare nonlinear HA systems suffer, in principle, from threats to their internal and external/ecological validity. Interactions between HA nonlinearities and SNR, and interparticipant differences in inherent SNR requirements, can act to generate misleading results. In addition, SRTn may lie at an SNR outside the range for which the HA system is designed or expected to operate. Although the extent of invalid conclusions in the literature is difficult to evaluate, studies were nevertheless identified where the risk of each form of invalidity is significant. Reliable data on ecological SNRs are becoming available, so that ecological validity can be assessed. Methodological developments that can reduce the risk of invalid conclusions include variations on the SRTn measurement procedure itself, manipulations of stimulus or scoring conditions to place SRTn in an ecologically relevant range, and design and analysis approaches that take account of interparticipant differences. American Academy of Audiology.
Baker, Shaun; Centric, Aaron; Chennupati, Sri Kiran
2015-10-01
Bone-anchored hearing devices are an accepted treatment option for hearing restoration in various types of hearing loss. Traditional devices have a percutaneous abutment for attachment of the sound processor that contributes to a high complication rate. Previously, our institution reported on the Sophono (Boulder, CO, USA) abutment-free system, which produced audiologic results similar to devices with abutments. Recently, Cochlear Americas (Centennial, CO, USA) released an abutment-free bone-anchored hearing device, the BAHA Attract. In contrast to the Sophono implant, the BAHA Attract utilizes an osseointegrated implant. This study aims to demonstrate patient benefit from abutment-free devices, compare the results of the two abutment-free devices, and examine complication rates. A retrospective chart review was conducted for the first eleven patients implanted with the Sophono and the first six patients implanted with the BAHA Attract at our institution. We then analyzed patient demographics, audiometric data, clinical course, and outcomes. Average improvement for the BAHA Attract in pure-tone average (PTA) and speech reception threshold (SRT) was 41 dB hearing level (dB HL) and 56 dB HL, respectively. Considering all frequencies, the BAHA Attract mean improvement was 39 dB HL (range 32-45 dB HL). The Sophono average improvement in PTA and SRT was 38 dB HL and 39 dB HL, respectively. The mean improvement with Sophono for all frequencies was 34 dB HL (range 24-43 dB HL). Significant improvements in both pure-tone averages and speech reception thresholds were achieved with both devices. In a direct comparison of the two devices using the chi-square test, the PTA and SRT data did not show a statistically significant difference (p-values 0.68 and 0.56, respectively). The complication rate for these abutment-free devices is lower than that of devices with a percutaneous abutment, although more studies are needed to further assess this potential advantage. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2013-01-01
Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…
Tchoungui Oyono, Lilly; Pascoe, Michelle; Singh, Shajila
2018-05-17
The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon. A total of 460 participants aged 3-5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist. Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%. Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and a single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit provided by the algorithms. The individual SRTs measured without pre-processing, as well as the individual benefits, were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in the single-competing-talker condition). Predictions with the binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no-pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922
Reception of distorted speech.
DOT National Transportation Integrated Search
1973-12-01
Noise, either in the form of masking or in the form of distortion products, interferes with speech intelligibility. When the signal-to-noise ratio is bad enough, articulation can drop to unacceptably--even dangerously--low levels. However, listeners ...
Hauth, Christopher F; Brand, Thomas
2018-01-01
In studies investigating binaural processing in human listeners, relatively long and task-dependent time constants of a binaural window ranging from 10 ms to 250 ms have been observed. Such time constants are often thought to reflect "binaural sluggishness." In this study, the effect of binaural sluggishness on binaural unmasking of speech in stationary speech-shaped noise is investigated in 10 listeners with normal hearing. In order to design a masking signal with temporally varying binaural cues, the interaural phase difference of the noise was modulated sinusoidally with frequencies ranging from 0.25 Hz to 64 Hz. The lowest, that is the best, speech reception thresholds (SRTs) were observed for the lowest modulation frequency. SRTs increased with increasing modulation frequency up to 4 Hz. For higher modulation frequencies, SRTs remained constant in the range of 1 dB to 1.5 dB below the SRT determined in the diotic situation. The outcome of the experiment was simulated using a short-term binaural speech intelligibility model, which combines an equalization-cancellation (EC) model with the speech intelligibility index. This model segments the incoming signal into 23.2-ms time frames in order to predict release from masking in modulated noises. In order to predict the results from this study, the model required a further time constant applied to the EC mechanism representing binaural sluggishness. The best agreement with perceptual data was achieved using a temporal window of 200 ms in the EC mechanism.
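A toy illustration of the masker design described above: broadband noise whose interaural phase difference (IPD) is modulated sinusoidally over time. The sample rate, modulation depth, and use of unshaped Gaussian noise are simplifying assumptions; the study used speech-shaped noise and modulation frequencies from 0.25 Hz to 64 Hz.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                      # sample rate (assumption)
t = np.arange(int(fs * 2.0)) / fs
f_mod = 4.0                     # IPD modulation frequency, within the 0.25-64 Hz range
phi_max = np.pi / 2             # peak interaural phase difference (illustrative)

noise = np.random.default_rng(0).standard_normal(t.size)
ipd = phi_max * np.sin(2 * np.pi * f_mod * t)

# Impose a time-varying carrier phase shift on the right channel only,
# so the interaural phase difference follows ipd(t).
right = np.real(hilbert(noise) * np.exp(1j * ipd))
stereo = np.stack([noise, right], axis=1)  # left ear unmodified, right ear shifted
```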
Fan, Yue; Zhang, Ying; Wang, Pu; Wang, Zhen; Zhu, Xiaoli; Yang, Hua; Chen, Xiaowei
2014-04-01
The bone-anchored hearing device (BAHD) was not introduced in China until 2010. To our knowledge, this is the first study to assess the efficacy of the BAHD in Chinese Mandarin-speaking patients with bilateral aural atresia. The objectives were to evaluate the speech recognition of Chinese Mandarin-speaking patients with BAHDs as well as patients' satisfaction using 2 questionnaires. A retrospective case review of 16 patients with bilateral aural atresia was conducted at a tertiary referral center. A BAHD was implanted during auricle reconstruction surgery or after the auricle was rebuilt. A surgical method combining BAHD implantation with the second stage of ear reconstruction was introduced. Speech audiometry and mean pure-tone threshold results were compared between the unaided condition and the BAHD-aided condition. Scores from the BAHD user questionnaire and the Glasgow Children's Benefit Inventory (GCBI) were used to measure patients' satisfaction and subjective health benefit. The mean (SD) speech discrimination scores measured in a sound field at a presentation level of 45 dB HL (hearing level) were 6.7% (7.4%) unaided and 86.5% (4.4%) with a BAHD. Scores at a presentation level of 65 dB HL were 56.5% (7.4%) unaided and 90.1% (3.4%) with a BAHD. The speech reception threshold was 60.6 (7.5) dB HL unaided and 24.7 (5.0) dB HL with a BAHD. The mean (SD) pure-tone threshold was 61.6 (7.8) dB HL unaided and 23.8 (5.9) dB HL with a BAHD. The BAHD application questionnaire demonstrated excellent patient satisfaction. The mean (SD) benefit score on the GCBI was 45.6 (14.4). For aural atresia, the BAHD has been one of the most reliable methods of auditory rehabilitation. It can improve the patient's word recognition performance and quality of life. The technique of BAHD implantation combined with auricular reconstruction in a 2-stages-in-1 surgery, together with the modified incision for patients with a reconstructed auricle, proved to be safe and effective.
Speech and Language Development in 2 Year Old Children with Cerebral Palsy
Hustad, Katherine C.; Allison, Kristen; McFadd, Emily; Riehle, Katherine
2013-01-01
Objective We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Methods Speech and language assessments were completed on 27 children with CP who were between 24 and 30 months of age (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Results A 2-step cluster analysis was used to identify homogeneous groups of children based on their performance on the 7 dependent variables characterizing speech and language performance. The three groups of children identified were those not yet talking (44% of the sample), those whose talking abilities appeared to be emerging (41% of the sample), and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. Conclusion 85% of the 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment to identify and treat those with delays at or before 2 years of age. PMID:23627373
Hochmuth, Sabine; Jürgens, Tim; Brand, Thomas; Kollmeier, Birger
2015-01-01
Investigate talker- and language-specific aspects of speech intelligibility in noise and reverberation using highly comparable matrix sentence tests across languages. Matrix sentences spoken by German/Russian and German/Spanish bilingual talkers were recorded. These sentences were used to measure speech reception thresholds (SRTs) with native listeners in the respective languages in different listening conditions (stationary and fluctuating noise, multi-talker babble, reverberated speech-in-noise condition). Four German/Russian and four German/Spanish bilingual talkers; 20 native German-speaking, 10 native Russian-speaking, and 10 native Spanish-speaking listeners. Across-talker SRT differences of up to 6 dB were found for both groups of bilinguals. SRTs of German/Russian bilingual talkers were the same in both languages. SRTs of German/Spanish bilingual talkers were higher when they talked in Spanish than when they talked in German. The benefit from listening in the gaps was similar across all languages. The detrimental effect of reverberation was larger for Spanish than for German and Russian. Within the limitations set by the number and slight accentedness of talkers and other possible confounding factors, talker- and test-condition-dependent differences were isolated from the language effect: Russian and German exhibited similar intelligibility in noise and reverberation, whereas Spanish was more impaired in these situations.
Comparing Binaural Pre-processing Strategies II
Hu, Hongmei; Krawczyk-Becker, Martin; Marquardt, Daniel; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Bomke, Katrin; Plotz, Karsten; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias
2015-01-01
Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users. PMID:26721921
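For orientation, the core of the MVDR beamformers named above is the classic per-frequency-bin weight formula w = R⁻¹d / (dᴴR⁻¹d), which minimizes noise power while passing the target direction undistorted. The two-microphone toy below is a sketch under assumed values, not the specific algorithms evaluated in the study.

```python
import numpy as np

def mvdr_weights(R_noise, d):
    """MVDR weights for one frequency bin: minimize output noise power subject
    to a distortionless response toward the steering vector d (w^H d = 1)."""
    Rinv_d = np.linalg.solve(R_noise, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy 2-microphone bin: estimate a noise covariance from random snapshots.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000))
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
d = np.array([1.0 + 0j, 1.0 + 0j])   # frontal target: equal phase at both mics
w = mvdr_weights(R, d)
print(abs(w.conj() @ d))             # 1.0 -> the distortionless constraint holds
```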
Stiles, Derek J; Bentler, Ruth A; McGregor, Karla K
2012-06-01
To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). School-age CHA and age-matched children with normal hearing (CNH) repeated words and nonwords, learned novel words, and completed a standardized receptive vocabulary test. Analyses of covariance allowed comparison of the 2 groups. For CHA, regression analyses determined whether SII held predictive value over and beyond PTA. CHA demonstrated poorer performance than CNH on tests of word and nonword repetition and receptive vocabulary. Groups did not differ on word learning. Aided SII was a stronger predictor of word and nonword repetition and receptive vocabulary than PTA. After accounting for PTA, aided SII remained a significant predictor of nonword repetition and receptive vocabulary. Despite wearing hearing aids, CHA performed more poorly on 3 of 4 lexical measures. Individual differences among CHA were predicted by aided SII. Unlike PTA, aided SII incorporates hearing aid amplification characteristics and speech-frequency weightings and may provide a more valid estimate of the child's access to and ability to learn from auditory input in real-world environments.
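The aided SII discussed above is, at its core, an importance-weighted sum of band audibilities. A crude sketch follows; the band importance weights, the 15 dB speech-peak and 30 dB dynamic-range constants, and all level values are simplified stand-ins, not the full ANSI S3.5 procedure.

```python
import numpy as np

def simplified_sii(speech_spl, noise_spl, thresholds, importance):
    """Toy SII-style index: per-band audibility (0..1) weighted by band importance.
    Audibility ~ (speech peaks above the effective masker) / 30 dB dynamic range."""
    masker = np.maximum(noise_spl, thresholds)      # noise or hearing loss limits audibility
    audibility = np.clip((speech_spl + 15 - masker) / 30, 0, 1)
    return float(np.sum(importance * audibility))

bands = [250, 500, 1000, 2000, 4000, 8000]          # Hz, octave bands (assumption)
importance = np.full(6, 1 / 6)                      # placeholder equal weights
speech = np.array([55.0, 58, 55, 50, 45, 38])       # aided speech levels, dB SPL
noise = np.array([40.0, 38, 35, 30, 28, 25])
hearing = np.array([30.0, 35, 40, 45, 50, 55])      # hypothetical thresholds
print(f"SII ~ {simplified_sii(speech, noise, hearing, importance):.2f}")
```

Because the aided speech levels enter band by band, hearing aid gain and frequency shaping change the index directly, which is why it can outperform a frequency-blind PTA as a predictor.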
Developmental language and speech disability.
Spiel, G; Brunner, E; Allmayer, B; Pletz, A
2001-09-01
Speech disabilities (articulation deficits) and language disorders--both expressive (vocabulary) and receptive (language comprehension)--are not uncommon in children. An overview of these, along with a global description of the impairment of communication and the clinical characteristics of developmental language disorders, is presented in this article. The diagnostic schemes applied in the European and Anglo-American areas, ICD-10 and DSM-IV, are explained and compared. Because of their strengths and weaknesses, an alternative classification of language and speech developmental disorders is proposed, which allows a differentiation between expressive and receptive language capabilities with regard to the semantic and the morphological/syntactic domains. Prevalence and comorbidity rates, psychosocial influences, biological factors, and the biological-social interaction are discussed. The necessity of using standardized examinations is emphasised. General logopaedic treatment paradigms, specific therapy concepts, and an overview of prognosis are described.
NASA Astrophysics Data System (ADS)
Samardzic, Nikolina
The effectiveness of in-vehicle speech communication can be a good indicator of the perception of overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including the wide range of scores produced by each metric for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown, and the metrics have not been compared in the literature against the subjective perception of speech intelligibility using the same speech material. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry, utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified within the framework of the newly developed speech intelligibility evaluation method. Lastly, illustrating the significance of evaluating speech intelligibility in an applicable listening environment, jury test participants required, on average, an approximately 3 dB increase in the sound pressure level of the speech material while driving and listening, compared to just listening, for equivalent speech intelligibility performance on the same listening task.
ERIC Educational Resources Information Center
Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen
2017-01-01
Purpose: The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables--words of three or more syllables--are important to consider because unlike…
ERIC Educational Resources Information Center
Overby, Megan S.; Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose: This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the…
ERIC Educational Resources Information Center
Peter, Beate
2012-01-01
This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age, categorized into three groups: typical speech, consistent speech disorder (CSD), and inconsistent speech disorder (ISD). The phonetic gap detection threshold test used in this study is a validated test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds only at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds under time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
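A toy version of the gap-paradigm stimulus described above: two brief bursts separated by an inter-stimulus interval in the 20-300 ms range, with the listener judging "one sound or two". Hann-windowed tone bursts stand in for the recorded syllables of the actual test.

```python
import numpy as np

fs = 44100  # sample rate (assumption)

def burst(dur_s=0.08, freq_hz=1000.0):
    """A short Hann-windowed tone burst standing in for a syllable."""
    t = np.arange(int(fs * dur_s)) / fs
    return np.hanning(t.size) * np.sin(2 * np.pi * freq_hz * t)

def gap_stimulus(isi_ms):
    """Two bursts separated by isi_ms of silence (test range: 20-300 ms)."""
    gap = np.zeros(int(fs * isi_ms / 1000))
    return np.concatenate([burst(), gap, burst()])

# One stimulus per candidate interval; the threshold is the shortest
# interval at which the listener reliably reports hearing two sounds.
stimuli = {isi: gap_stimulus(isi) for isi in (20, 50, 100, 200, 300)}
```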
Chamber of Commerce reception for Dr. Lucas
NASA Technical Reports Server (NTRS)
1986-01-01
Dr. William R. Lucas, Marshall's fourth Center Director (1974-1986), delivers a speech in front of a picture of the lunar landscape with Earth looming in the background while attending a Huntsville Chamber of Commerce reception honoring his achievements as Director of Marshall Space Flight Center (MSFC).
Comparison of Audiological Findings in Patients with Vestibular Migraine and Migraine
Kırkım, Günay; Mutlu, Başak; Tanriverdizade, Tural; Keskinoğlu, Pembe; Güneri, Enis Alpin; Akdal, Gülden
2017-01-01
Objective The aim of this study was to investigate and compare auditory findings in vestibular migraine (VM) and migraine patients without a history of vertigo. Methods This study was conducted on 44 patients diagnosed with definite VM and 31 patients diagnosed with migraine who were followed and treated between January 2011 and February 2015. In addition, 52 healthy subjects were included as a control group. All participants underwent a detailed otorhinolaryngological examination followed by audiological evaluation, including pure-tone audiometry, speech reception threshold, speech recognition score, and acoustic immittance measurements. Results In the VM group, there were 16 patients (36.4%) with tinnitus, while in the other groups we did not observe any patients with tinnitus. The rate of tinnitus in the VM group was significantly higher in comparison to the other groups (p<0.05). None of the groups had any patients with permanent or fluctuating sensorineural hearing loss. Conclusion We conclude that patients with VM should be closely and longitudinally followed up for the early detection of other otological symptoms and the possible occurrence of sensorineural hearing loss in the long term. PMID:29515927
ERIC Educational Resources Information Center
Ingvalson, Erin M.; Lansford, Kaitlin L.; Fedorova, Valeriya; Fernandez, Gabriel
2017-01-01
Purpose: Previous research has demonstrated equivocal findings related to the effect of listener age on intelligibility ratings of dysarthric speech. The aim of the present study was to investigate the mechanisms that support younger and older adults' perception of speech by talkers with dysarthria. Method: Younger and older adults identified…
ERIC Educational Resources Information Center
Erdener, Dogu; Burnham, Denis
2018-01-01
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Visual-motor integration performance in children with severe specific language impairment.
Nicola, K; Watter, P
2016-09-01
This study investigated (1) the visual-motor integration (VMI) performance of children with severe specific language impairment (SLI), and any effect of age, gender, socio-economic status, and concomitant speech impairment; and (2) the relationship between language and VMI performance. It was hypothesized that children with severe SLI would present with VMI problems irrespective of gender and socio-economic status, but that VMI deficits would be more pronounced in younger children and in those with concomitant speech impairment. Furthermore, it was hypothesized that there would be a relationship between VMI and language performance, particularly receptive scores. Children enrolled between 2000 and 2008 in a school dedicated to children with severe speech-language impairments were included if they met the criteria for severe SLI, with or without concomitant speech impairment, as verified by a government organization. Results from all initial standardized language and VMI assessments found during a retrospective review of chart files were included. The final study group included 100 children (males = 76), from 4 to 14 years of age, with mean language scores at least 2 SD below the mean. For VMI performance, 52% of the children scored below -1 SD, with 25% of the total group scoring more than 1.5 SD below the mean. Age, gender, and the addition of a speech impairment did not affect VMI performance; however, children living in disadvantaged suburbs scored significantly better than children residing in advantaged suburbs. The receptive language score of the Clinical Evaluation of Language Fundamentals was the only score associated with, and able to predict, VMI performance. A small subgroup of children with severe SLI will also have poor VMI skills. The best predictor of poor VMI is the receptive language score on the Clinical Evaluation of Language Fundamentals. Children with poor receptive language performance may benefit from VMI assessment and multidisciplinary management. © 2016 John Wiley & Sons Ltd.
Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.
2016-01-01
Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750
Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.
Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn
2017-01-01
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while pupil dilation only was also significantly related to their true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
Examining Second Language Receptive Knowledge of Collocation and Factors That Affect Learning
ERIC Educational Resources Information Center
Nguyen, Thi My Hang; Webb, Stuart
2017-01-01
This study investigated Vietnamese EFL learners' knowledge of verb-noun and adjective-noun collocations at the first three 1,000 word frequency levels, and the extent to which five factors (node word frequency, collocation frequency, mutual information score, congruency, and part of speech) predicted receptive knowledge of collocation. Knowledge…
ERIC Educational Resources Information Center
Ebbels, Susan H.; Maric, Nataša; Murphy, Aoife; Turner, Gail
2014-01-01
Background: Little evidence exists for the effectiveness of therapy for children with receptive language difficulties, particularly those whose difficulties are severe and persistent. Aims: To establish the effectiveness of explicit speech and language therapy with visual support for secondary school-aged children with language impairments…
Rader, T; Fastl, H; Baumann, U
2017-03-01
After implantation of cochlear implants with hearing preservation for combined electric-acoustic stimulation (EAS), the residual acoustic hearing relays fundamental-frequency information of speech in the low-frequency range. With the help of an acoustic simulation of EAS hearing perception, the impact of the frequency and level fine structure of speech signals can be systematically examined. The aim of this study was to measure the speech reception threshold (SRT) under various noise conditions with an acoustic EAS simulation, varying the frequency and level information of the fundamental frequency f0 of speech, in order to determine to what extent the SRT is impaired by modification of the f0 fine structure. Using partial tone time pattern analysis, an acoustic EAS simulation of the speech material from the Oldenburg sentence test (OLSA) was generated, and the f0 contour of the speech material was determined. Subsequently, either the frequency or the level of f0 was fixed in order to remove one of the two fine-structure contours of the speech signal. The processed OLSA sentences were used to determine the SRT in background noise under various test conditions: the conditions "f0 fixed frequency" and "f0 fixed level" were each tested in amplitude-modulated background noise and in continuous background noise. A total of 24 subjects with normal hearing participated in the study. The SRT for the condition "f0 fixed frequency" was more favorable, at 2.7 dB in continuous noise and 0.8 dB in modulated noise, than for the condition "f0 fixed level", at 3.7 dB and 2.9 dB, respectively. In the simulation of speech perception with cochlear implants and acoustic components, the level information of the fundamental frequency had a stronger impact on speech intelligibility than the frequency information. This method of simulating cochlear implant transmission allows investigation of how various parameters influence speech intelligibility in subjects with normal hearing.
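To make the two manipulations above concrete, here is a toy analogue: a carrier whose instantaneous frequency follows an f0 contour and whose amplitude follows a level contour. Fixing either contour at its mean removes that component of the fine structure. The contours below are invented; the study derived them from OLSA speech via partial tone time pattern analysis.

```python
import numpy as np

fs = 16000
t = np.arange(int(fs * 1.0)) / fs

# Invented prosodic contours standing in for a measured f0 track.
f0 = 120 + 30 * np.sin(2 * np.pi * 2.5 * t)              # Hz: frequency fine structure
level = 0.5 + 0.4 * np.abs(np.sin(2 * np.pi * 3.0 * t))  # amplitude fine structure

def synth(freq_hz, amp):
    """Sinusoid following the given instantaneous frequency and amplitude."""
    phase = 2 * np.pi * np.cumsum(freq_hz) / fs
    return amp * np.sin(phase)

original = synth(f0, level)
f0_fixed_frequency = synth(np.full_like(f0, f0.mean()), level)  # frequency contour removed
f0_fixed_level = synth(f0, np.full_like(level, level.mean()))   # level contour removed
```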
Speech and Oral Motor Profile after Childhood Hemispherectomy
ERIC Educational Resources Information Center
Liegeois, Frederique; Morgan, Angela T.; Stewart, Lorna H.; Cross, J. Helen; Vogel, Adam P.; Vargha-Khadem, Faraneh
2010-01-01
Hemispherectomy (disconnection or removal of an entire cerebral hemisphere) is a rare surgical procedure used for the relief of drug-resistant epilepsy in children. After hemispherectomy, contralateral hemiplegia persists whereas gross expressive and receptive language functions can be remarkably spared. Motor speech deficits have rarely been…
A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.
Carlin, Michael A; Elhilali, Mounya
2015-12-01
One of the hallmarks of sound processing in the brain is the ability of the nervous system to adapt to changing behavioral demands and surrounding soundscapes. It can dynamically shift sensory and cognitive resources to focus on relevant sounds. Neurophysiological studies indicate that this ability is supported by adaptively retuning the shapes of cortical spectro-temporal receptive fields (STRFs) to enhance features of target sounds while suppressing those of task-irrelevant distractors. Because an important component of human communication is the ability of a listener to dynamically track speech in noisy environments, the solution obtained by auditory neurophysiology implies a useful adaptation strategy for speech activity detection (SAD). SAD is an important first step in a number of automated speech processing systems, and performance is often reduced in highly noisy environments. In this paper, we describe how task-driven adaptation is induced in an ensemble of neurophysiological STRFs, and show how speech-adapted STRFs reorient themselves to enhance spectro-temporal modulations of speech while suppressing those associated with a variety of nonspeech sounds. We then show how an adapted ensemble of STRFs can better detect speech in unseen noisy environments compared to an unadapted ensemble and a noise-robust baseline. Finally, we use a stimulus reconstruction task to demonstrate how the adapted STRF ensemble better captures the spectrotemporal modulations of attended speech in clean and noisy conditions. Our results suggest that a biologically plausible adaptation framework can be applied to speech processing systems to dynamically adapt feature representations for improving noise robustness.
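As a rough illustration of STRF-based speech activity scoring in the spirit of the paper above: filter a spectrogram with a Gabor-shaped STRF tuned to speech-like spectro-temporal modulations and pool the rectified response per frame. The kernel shape, the random stand-in "spectrogram", and the 90th-percentile threshold are all illustrative assumptions; the paper's adapted STRF ensembles are far richer.

```python
import numpy as np
from scipy.signal import correlate2d

def gabor_strf(n_freq=16, n_time=20, spec_mod=0.1, temp_mod=4.0, dt=0.01):
    """Toy Gabor STRF tuned to one spectral (cyc/channel) and temporal (Hz) modulation."""
    f = np.arange(n_freq)[:, None]
    t = np.arange(n_time)[None, :] * dt
    envelope = np.exp(-(f - n_freq / 2) ** 2 / (2 * (n_freq / 4) ** 2)
                      - (t - t.mean()) ** 2 / (2 * 0.05 ** 2))
    return envelope * np.cos(2 * np.pi * spec_mod * f) * np.cos(2 * np.pi * temp_mod * t)

def frame_scores(spectrogram, strf):
    """Per-frame speech evidence: rectified STRF response pooled across frequency."""
    return np.maximum(correlate2d(spectrogram, strf, mode="same"), 0).mean(axis=0)

spec = np.random.default_rng(0).standard_normal((16, 200))  # stand-in for log-mel features
scores = frame_scores(spec, gabor_strf())
speech_frames = scores > np.percentile(scores, 90)          # arbitrary decision threshold
```

Adapting the ensemble, in this simplified picture, amounts to reweighting or reshaping the kernels so that modulations typical of speech score high while those of the current distractors score low.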
NASA Technical Reports Server (NTRS)
Leibfritz, Gilbert H.; Larson, Howard K.
1987-01-01
Compact speech synthesizer is useful traveling companion for the speech-handicapped. User simply enters statement on keyboard, and synthesizer converts statement into spoken words. Battery-powered and housed in briefcase, it is easily carried on trips. Unit used on telephones and in face-to-face communication. Synthesizer consists of microcomputer with memory-expansion module, speech-synthesizer circuit, batteries, recharger, dc-to-dc converter, and telephone amplifier. Components, commercially available, fit neatly in 17- by 13- by 5-in. briefcase. Weighs about 20 lb (9 kg) and operates and recharges from ac receptacle.
Study of operator's information in the course of complicated and combined activity
NASA Astrophysics Data System (ADS)
Krylova, N. V.; Bokovikov, A. K.
1982-08-01
Speech characteristics of operators performing control and observation operations, information reception, transmission and processing, and decision making when exposed to the real stress of parachute jumps were investigated. Form and content speech characteristics whose variations reflect the level of operators' adaptation to stressful activities are reported.
Some Generalization and Follow-Up Measures on Autistic Children in Behavior Therapy.
ERIC Educational Resources Information Center
Lovaas, O. Ivar; And Others
Reported was a behavior therapy program emphasizing language training for 20 autistic children who variously exhibited apparent sensory deficit, severe affect isolation, self-stimulatory behavior, mutism, echolalic speech, absence of receptive speech and social and self-help behaviors, and self-destructive tendencies. The treatment emphasized…
Gifford, René H.; Grantham, D. Wesley; Sheffield, Sterling W.; Davis, Timothy J.; Dwyer, Robert; Dorman, Michael F.
2014-01-01
The purpose of this study was to investigate horizontal plane localization and interaural time difference (ITD) thresholds for 14 adult cochlear implant recipients with hearing preservation in the implanted ear. Localization to broadband noise was assessed in an anechoic chamber with a 33-loudspeaker array extending from −90 to +90°. Three listening conditions were tested including bilateral hearing aids, bimodal (implant + contralateral hearing aid) and best aided (implant + bilateral hearing aids). ITD thresholds were assessed, under headphones, for low-frequency stimuli including a 250-Hz tone and bandpass noise (100–900 Hz). Localization, in overall rms error, was significantly poorer in the bimodal condition (mean: 60.2°) as compared to both bilateral hearing aids (mean: 46.1°) and the best-aided condition (mean: 43.4°). ITD thresholds were assessed for the same 14 adult implant recipients as well as 5 normal-hearing adults. ITD thresholds were highly variable across the implant recipients ranging from the range of normal to ITDs not present in real-world listening environments (range: 43 to over 1600 μs). ITD thresholds were significantly correlated with localization, the degree of interaural asymmetry in low-frequency hearing, and the degree of hearing preservation related benefit in the speech reception threshold (SRT). These data suggest that implant recipients with hearing preservation in the implanted ear have access to binaural cues and that the sensitivity to ITDs is significantly correlated with localization and degree of preserved hearing in the implanted ear. PMID:24607490
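For reference, the "overall rms error" localization metric reported above reduces to the root-mean-square difference between response and source azimuths. The trial values below are hypothetical.

```python
import numpy as np

def rms_localization_error(responses_deg, targets_deg):
    """Overall rms error (degrees) between response and source azimuths."""
    r = np.asarray(responses_deg, float)
    s = np.asarray(targets_deg, float)
    return float(np.sqrt(np.mean((r - s) ** 2)))

targets = [-60, -30, 0, 30, 60]      # loudspeaker azimuths on a -90..+90 degree arc
responses = [-45, -40, 10, 20, 75]   # hypothetical listener responses
print(f"rms error: {rms_localization_error(responses, targets):.1f} deg")
```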
A study of the high-frequency hearing thresholds of dentistry professionals
Lopes, Andréa Cintra; de Melo, Ana Dolores Passarelli; Santos, Cibele Carmelo
2012-01-01
Introduction: In dental practice, dentists are exposed to harmful effects caused by several factors, such as the noise produced by their work instruments. In 1959, the American Dental Association recommended periodical hearing assessments and the use of ear protectors. Acquiring more information regarding the hearing abilities of dentists, dental nurses, and prosthodontists is necessary to propose prevention measures and early treatment strategies. Objective: To investigate the auditory thresholds of dentists, dental nurses, and prosthodontists. Method: In this clinical and experimental study, 44 dentists (Group I; GI), 36 dental nurses (Group II; GII), and 28 prosthodontists (Group III; GIII) were included, for a total of 108 professionals. The procedures performed included a specific interview, ear canal inspection, conventional and high-frequency threshold audiometry, a speech reception threshold test, and an acoustic impedance test. Results: In the 3 groups tested, comparison of the mean hearing thresholds showed poorer hearing with increasing frequency. For the three-frequency means at 500 to 2,000 Hz and 3,000 to 6,000 Hz, GIII presented the worst thresholds. For the mean of the high frequencies (9,000 and 16,000 Hz), GII presented the worst thresholds. Conclusion: The conventional hearing threshold evaluation did not demonstrate alterations in the 3 groups tested; however, complementary tests such as high-frequency audiometry were more effective for the early detection of hearing problems, since this population's hearing loss affected frequencies that are not covered by conventional tests. We therefore emphasize the need to include high-frequency threshold audiometry in the routine hearing assessment, in combination with other audiological tests. PMID:25991940
Harrison, Linda J; McLeod, Sharynne
2010-04-01
To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern, (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.
Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms.
Gover, Bradford N; Bradley, John S
2004-12-01
Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a superior indicator for security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. Listening tests had 19 subjects rate the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions, and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise, and simple A-weighted level differences are both shown to be well correlated with these latter two thresholds (cadence and audibility), but not well correlated with intelligibility.
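A minimal sketch of the class of measure described above: a weighted sum of clipped one-third-octave-band signal-to-noise ratios. The clipping limits and the uniform weights are placeholders; the paper explores various weightings and clipping levels rather than one fixed choice, and the band levels below are hypothetical.

```python
import numpy as np

def clipped_snr_measure(speech_db, noise_db, weights=None, clip_lo=-32.0, clip_hi=0.0):
    """Weighted sum of per-band SNRs clipped to [clip_lo, clip_hi] dB.
    speech_db, noise_db: one-third-octave-band levels at the listener's position."""
    snr = np.asarray(speech_db, float) - np.asarray(noise_db, float)
    snr = np.clip(snr, clip_lo, clip_hi)
    w = np.ones_like(snr) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * snr) / np.sum(w))

# Hypothetical transmitted-speech and background-noise band levels (dB).
speech = [30, 28, 25, 20, 15, 10]
noise = [35, 34, 33, 32, 31, 30]
print(f"security index: {clipped_snr_measure(speech, noise):.1f} dB")
```

Clipping keeps a single very quiet or very loud band from dominating the index, which is part of why a measure of this form can track the audibility and intelligibility thresholds better than an unclipped level difference.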
Pfiffner, Flurin; Kompis, Martin; Stieger, Christof
2009-10-01
To investigate correlations between preoperative hearing thresholds and postoperative aided thresholds and speech understanding of users of Bone-anchored Hearing Aids (BAHA). Such correlations may be useful to estimate the postoperative outcome with BAHA from preoperative data. Retrospective case review at a tertiary referral center. Ninety-two adult unilaterally implanted BAHA users in 3 groups: (A) 24 subjects with a unilateral conductive hearing loss, (B) 38 subjects with a bilateral conductive hearing loss, and (C) 30 subjects with single-sided deafness. Preoperative air-conduction and bone-conduction thresholds and 3-month postoperative aided and unaided sound-field thresholds, as well as speech understanding using German 2-digit numbers and monosyllabic words, were measured and analyzed. Outcome measures were the correlation between preoperative air-conduction and bone-conduction thresholds of the better and of the poorer ear and postoperative aided thresholds, as well as correlations between gain in sound-field threshold and gain in speech understanding. Aided postoperative sound-field thresholds correlate best with the BC threshold of the better ear (correlation coefficients r2 = 0.237 to 0.419, p = 0.0006 to 0.0064, depending on the group of subjects). Improvements in sound-field threshold correspond to improvements in speech understanding. When estimating expected postoperative aided sound-field thresholds of BAHA users from preoperative hearing thresholds, the BC threshold of the better ear should be used. For the patient groups considered, speech understanding in quiet can be estimated from the improvement in sound-field thresholds.
Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.
Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S
2017-03-01
The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.
Language Sampling for Preschoolers With Severe Speech Impairments.
Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee
2016-11-01
The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and the mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.
Rapid tuning shifts in human auditory cortex enhance speech intelligibility
Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.
2016-01-01
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed an STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed 'perceptual enhancement' in understanding speech. PMID:27996965
Finke, Mareike; Strauß-Schier, Angelika; Kludt, Eugen; Büchner, Andreas; Illg, Angelika
2017-05-01
Treatment with cochlear implants (CIs) in single-sided deaf individuals started less than a decade ago. CIs can successfully reduce incapacitating tinnitus on the deaf ear and allow, to some extent, the restoration of binaural hearing. Until now, systematic evaluations of subjective CI benefit in post-lingually single-sided deaf individuals and analyses of speech intelligibility outcome for the CI in isolation have been lacking. For the prospective part of this study, the Bern Benefit in Single-Sided Deafness Questionnaire (BBSS) was administered to 48 single-sided deaf CI users to evaluate the subjectively perceived CI benefit across different listening situations. In the retrospective part, speech intelligibility outcome with the CI up to 12 months post-activation was compared between 100 single-sided deaf CI users and 125 bilaterally implanted CI users (2nd implant). The positive median ratings in the BBSS differed significantly from zero for all items, suggesting that most individuals with single-sided deafness rate their CI as beneficial across listening situations. The speech perception scores in quiet and noise improved significantly over time in both groups of CI users. Speech intelligibility with the CI in isolation was significantly better in bilaterally implanted CI users (2nd implant) than in single-sided deaf CI users. Our results indicate that CI users with single-sided deafness can reach open-set speech understanding with their CI in isolation, encouraging the extension of the CI indication to individuals with normal hearing on the contralateral ear. Compared with the performance reached with bilateral CI users' second implant, speech perception with the CI in isolation is poorer, indicating an aural preference for and dominance of the normal-hearing ear. The results from the BBSS suggest good satisfaction with the CI across several listening situations. Copyright © 2017 Elsevier B.V. All rights reserved.
van Hoesel, Richard J M
2015-04-01
One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both audio-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial conducted in spatially distributed noise, first, an auditory-only cueing phrase that was spoken by one of four talkers was selected and presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only and was spoken by the same talker and from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better-performing ear alone was 9 dB for the audio-visual mode and 3 dB for the audio-only mode. Comparison of bilateral performance for the audio-visual and audio-only modes showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and a better cost-benefit ratio than estimated to date.
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
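FADE's core output is the lowest achievable SRT derived from simulated recognition experiments. A minimal sketch of that final step, assuming recognition results have already been obtained at several SNRs (the numbers below are hypothetical stand-ins, not FADE's actual ASR back end):

```python
import numpy as np

def srt_from_recognition(snrs_db, fraction_correct, target=0.5):
    """Estimate the SRT as the SNR at which a (simulated) recognizer
    reaches the target fraction correct, by linear interpolation of the
    measured psychometric points."""
    snrs = np.asarray(snrs_db, dtype=float)
    pc = np.asarray(fraction_correct, dtype=float)
    order = np.argsort(snrs)                     # sort points by SNR
    return float(np.interp(target, pc[order], snrs[order]))

# Hypothetical recognition results of an ASR back end at five SNRs
print(srt_from_recognition([-12, -9, -6, -3, 0],
                           [0.12, 0.28, 0.55, 0.81, 0.95]))
```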
Influence of contralateral acoustic hearing on adult bimodal outcomes after cochlear implantation.
Plant, Kerrie; van Hoesel, Richard; McDermott, Hugh; Dawson, Pamela; Cowan, Robert
2016-08-01
To examine post-implantation benefit and time taken to acclimate to the cochlear implant for adult candidates with more hearing in the contralateral non-implanted ear than has previously been considered within local candidacy guidelines. Prospective, within-subject experimental design. Forty postlingually hearing-impaired adult subjects with a contralateral-ear word score in quiet ranging from 27% to 100% (median 67%). Post-implantation improvements of 2.4 dB and 4.0 dB were observed on a sentence-in-coincident-babble test at presentation levels of 65 and 55 dB SPL, respectively, and a 2.1 dB spatial release from masking (SRM) advantage was observed when the noise location favoured the implanted side. A significant post-operative group mean change of between 2.1 and 3.0 points was observed on the sub-scales of the Speech, Spatial, and Qualities of Hearing (SSQ) questionnaire. The degree of post-implantation speech reception threshold (SRT) benefit on the coincident babble test and on perception of soft speech and sounds in the environment was greater for subjects with less contralateral hearing. The degree of contralateral acoustic hearing did not affect time taken to acclimate to the device. The findings from this study support cochlear implantation for candidates with substantial acoustic hearing in the contralateral ear, and provide guidance regarding post-implantation expectations.
Cued Speech Transliteration: Effects of Accuracy and Lag Time on Message Intelligibility
ERIC Educational Resources Information Center
Krause, Jean C.; Lopez, Katherine A.
2017-01-01
This paper is the second in a series concerned with the level of access afforded to students who use educational interpreters. The first paper (Krause & Tessler, 2016) focused on factors affecting accuracy of messages produced by Cued Speech (CS) transliterators (expression). In this study, factors affecting intelligibility (reception by deaf…
ERIC Educational Resources Information Center
Waring, Rebecca; Eadie, Patricia; Liow, Susan Rickard; Dodd, Barbara
2017-01-01
While little is known about why children make speech errors, it has been hypothesized that cognitive-linguistic factors may underlie phonological speech sound disorders. This study compared the phonological short-term and phonological working memory abilities (using immediate memory tasks) and receptive vocabulary size of 14 monolingual preschool…
ERIC Educational Resources Information Center
Darley, Frederic L., Ed.
The proceedings of a conference of scientists specializing in language processes and neurophysiological mechanisms are reported, with the aim of stimulating a cross-over of interest and research in the central brain phenomena (reception, understanding, retention, integration, formulation, and expression) as they relate to speech and language. Eighteen research reports…
Stam, Mariska; Smits, Cas; Twisk, Jos W R; Lemke, Ulrike; Festen, Joost M; Kramer, Sophia E
2015-01-01
The first aim of the present study was to determine the change in speech recognition in noise over a period of 5 years in participants ages 18 to 70 years at baseline. The second aim was to investigate whether age, gender, educational level, the level of initial speech recognition in noise, and reported chronic conditions were associated with a change in speech recognition in noise. The baseline and 5-year follow-up data of 427 participants with and without hearing impairment participating in the National Longitudinal Study on Hearing (NL-SH) were analyzed. The ability to recognize speech in noise was measured twice with the online National Hearing Test, a digit-triplet speech-in-noise test. Speech-reception-threshold in noise (SRTn) scores were calculated, corresponding to 50% speech intelligibility. Unaided SRTn scores obtained with the same transducer (headphones or loudspeakers) at both test moments were included. Changes in SRTn were calculated as a raw shift (T1 - T0) and as a shift adjusted for regression towards the mean. Paired t tests and multivariable linear regression analyses were applied. The mean increase (i.e., deterioration) in SRTn was 0.38 dB signal-to-noise ratio (SNR) over 5 years (p < 0.001). Results of the multivariable regression analyses showed that the age group of 50 to 59 years had a significantly larger deterioration in SRTn than the age group of 18 to 39 years (raw shift: beta = 0.64 dB SNR, 95% confidence interval 0.07-1.22, p = 0.028; shift adjusted for initial speech recognition level: beta = 0.82 dB SNR, 95% confidence interval 0.27-1.34, p = 0.004). Gender, educational level, and the number of chronic conditions were not associated with a change in SRTn over time. No significant differences in the increase of SRTn were found between the initial levels of speech recognition (i.e., good, insufficient, or poor) when taking into account the phenomenon of regression towards the mean. The study results indicate that deterioration of speech recognition in noise over 5 years can also be detected in adults ages 18 to 70 years. This rather small numeric change might represent a relevant impact on an individual's ability to understand speech in everyday life.
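The National Hearing Test referenced here is a digit-triplet test that tracks the SNR corresponding to 50% intelligibility. A minimal sketch of such an adaptive track, with illustrative step size and trial count rather than the test's exact parameters:

```python
import random

def digit_triplet_srt(respond, n_trials=24, start_snr=0.0, step=2.0):
    """Simple 1-up/1-down adaptive track: a correct response makes the next
    trial 'step' dB harder, an incorrect one makes it easier, so the track
    converges near 50% intelligibility. The SRT is taken as the mean SNR
    over the final trials. respond(snr) must return True when the listener
    repeats the digit triplet correctly at that SNR."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if respond(snr) else step
    return sum(track[-10:]) / 10.0

# Demo with a hypothetical listener whose true SRT is -7 dB SNR
listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr + 7.0) / 2.0))
print(digit_triplet_srt(listener))
```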
Comprehension: an overlooked component in augmented language development.
Sevcik, Rose A
2006-02-15
Despite the importance of children's receptive skills as a foundation for later productive word use, the role of receptive language has traditionally received very limited attention, since the focus in linguistic development has centered on language production. For children with significant developmental disabilities and communication impairments, augmented language systems have been devised as a tool for both language input and output. The role of both speech and symbol comprehension skills is emphasized in this paper. Data collected from two longitudinal studies of children and youth with severe disabilities and limited speech serve as illustrations. The acquisition and use of the System for Augmenting Language (SAL) was studied in home and school settings. Communication behaviors of the children and youth and their communication partners were observed, and language assessment measures were collected. Two patterns of symbol learning and achievement--beginning and advanced--were observed. Extant speech comprehension skills brought to the augmented language learning task impacted the participants' patterns of symbol learning and use. Though often overlooked, the importance of speech and symbol comprehension skills was underscored in the studies described. Future areas for research are identified.
Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen
2017-07-12
The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables (words of three or more syllables) are important to consider because, unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different on 2 measures of phonological awareness and on measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or on letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group they were at risk of literacy difficulties, but that there was a cluster of participants at greater risk: those with both low polysyllable accuracy and poor phonological processing.
Assessment of directionality performances: comparison between Freedom and CP810 sound processors.
Razza, Sergio; Albanese, Greta; Ermoli, Lucilla; Zaccone, Monica; Cristofari, Eliana
2013-10-01
To compare speech recognition in noise for the Nucleus Freedom and CP810 sound processors using different directional settings among those available in the SmartSound portfolio. Single-subject, repeated measures study. Tertiary care referral center. Thirty-one monaurally and binaurally implanted subjects (24 children and 7 adults) were enrolled. All were experienced Nucleus Freedom sound processor users and achieved a 100% open-set word recognition score in quiet listening conditions. Each patient was fitted with the Freedom and the CP810 processor. The program setting incorporated Adaptive Dynamic Range Optimization (ADRO) and adopted the directional algorithm BEAM (both devices) and ZOOM (only on the CP810). Speech reception threshold (SRT) was assessed in a free-field layout, with disyllabic word lists and interfering multilevel babble noise, in the 3 different pre-processing configurations. On average, the CP810 significantly improved patients' SRTs compared with the Freedom after 1 hour of use. In contrast, no significant difference was observed in patients' SRTs between the BEAM and ZOOM algorithms fitted in the CP810 processor. The results suggest that hardware developments in the design of the CP810 provide an immediate and relevant directional advantage over the previous-generation Freedom device.
Lee, Youngmee; Jeong, Sung-Wook; Kim, Lee-Suk
2013-12-01
The aim of this study was to examine the efficacy of a new habilitation approach, augmentative and alternative communication (AAC) intervention using a voice output communication aid (VOCA), in improving speech perception, speech production, receptive vocabulary skills, and communicative behaviors in children with cochlear implants (CIs) who had multiple disabilities. Five children with mental retardation and/or cerebral palsy who had used CIs for over two years were included in this study. Five children in the control group were matched to the children who received the AAC intervention on the basis of the type/severity of their additional disabilities and chronological age. They had limited oral communication skills after cochlear implantation because of their limited cognition and oromotor function. The children attended the AAC intervention with parents once a week for 6 months. We evaluated their performance using formal tests, including monosyllabic word tests, an articulation test, and a receptive vocabulary test. We also assessed parent-child interactions. We analyzed the data using a one-group pretest-posttest design. The mean scores of the formal tests improved from 26% to 48% on the phoneme scores of the monosyllabic word tests, from 17% to 35% on the articulation test, and from 11 to 18.4 on the receptive vocabulary test after AAC intervention (all p < .05). Some children in the control group showed improvement in the speech perception, speech production, and receptive vocabulary tests over the 6 months, but the differences did not reach statistical significance (all p > .05). The frequency of spontaneous communicative behaviors (i.e., vocalizations, gestures, and words) and imitative words significantly increased after AAC intervention (p < .05). AAC intervention using a VOCA was very useful and effective in improving communicative skills in children with multiple disabilities who had very limited oral communication skills after cochlear implantation. Copyright © 2013. Published by Elsevier Ireland Ltd.
ERIC Educational Resources Information Center
Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela
2015-01-01
Background: Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. Aims: To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the…
Liu, Chang; Jin, Su-Hyun
2015-11-01
This study investigated whether native listeners processed speech differently from non-native listeners in a speech detection task. Detection thresholds of Mandarin Chinese and Korean vowels and non-speech sounds in noise, frequency selectivity, and the nativeness of Mandarin Chinese and Korean vowels were measured for Mandarin Chinese- and Korean-native listeners. The two groups of listeners exhibited similar non-speech sound detection and frequency selectivity; however, the Korean listeners had better detection thresholds of Korean vowels than Chinese listeners, while the Chinese listeners performed no better at Chinese vowel detection than the Korean listeners. Moreover, thresholds predicted from an auditory model highly correlated with behavioral thresholds of the two groups of listeners, suggesting that detection of speech sounds not only depended on listeners' frequency selectivity, but also might be affected by their native language experience. Listeners evaluated their native vowels with higher nativeness scores than non-native listeners. Native listeners may have advantages over non-native listeners when processing speech sounds in noise, even without the required phonetic processing; however, such native speech advantages might be offset by Chinese listeners' lower sensitivity to vowel sounds, a characteristic possibly resulting from their sparse vowel system and their greater cognitive and attentional demands for vowel processing.
Impact of a Moving Noise Masker on Speech Perception in Cochlear Implant Users
Weissgerber, Tobias; Rader, Tobias; Baumann, Uwe
2015-01-01
Objectives Previous studies investigating speech perception in noise have typically been conducted with static masker positions. The aim of this study was to investigate the effect of spatial separation of source and masker (spatial release from masking, SRM) in a moving masker setup and to evaluate the impact of adaptive beamforming in comparison with fixed directional microphones in cochlear implant (CI) users. Design Speech reception thresholds (SRT) were measured in S0N0 and in a moving masker setup (S0Nmove) in 12 normal hearing participants and 14 CI users (7 subjects bilateral, 7 bimodal with a hearing aid in the contralateral ear). Speech processor settings were a moderately directional microphone, a fixed beamformer, or an adaptive beamformer. The moving noise source was generated by means of wave field synthesis and was smoothly moved in a shape of a half-circle from one ear to the contralateral ear. Noise was presented in either of two conditions: continuous or modulated. Results SRTs in the S0Nmove setup were significantly improved compared to the S0N0 setup for both the normal hearing control group and the bilateral group in continuous noise, and for the control group in modulated noise. There was no effect of subject group. A significant effect of directional sensitivity was found in the S0Nmove setup. In the bilateral group, the adaptive beamformer achieved lower SRTs than the fixed beamformer setting. Adaptive beamforming improved SRT in both CI user groups substantially by about 3 dB (bimodal group) and 8 dB (bilateral group) depending on masker type. Conclusions CI users showed SRM that was comparable to normal hearing subjects. In listening situations of everyday life with spatial separation of source and masker, directional microphones significantly improved speech perception with individual improvements of up to 15 dB SNR. Users of bilateral speech processors with both directional microphones obtained the highest benefit. PMID:25970594
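Spatial release from masking, as used in this and several surrounding abstracts, is simply the SRT difference between the colocated and the spatially separated (here, moving) masker conditions. A one-line worked example with hypothetical values:

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM: the improvement in SRT when the masker is spatially separated
    (or moving) relative to the colocated S0N0 condition. Positive values
    mean benefit."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs: 2 dB SNR colocated, -4 dB SNR with the moving masker
print(spatial_release_from_masking(2.0, -4.0))  # -> 6.0 dB of SRM
```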
Auditory Temporal Acuity Probed With Cochlear Implant Stimulation and Cortical Recording
Kirby, Alana E.
2010-01-01
Cochlear implants stimulate the auditory nerve with amplitude-modulated (AM) electric pulse trains. Pulse rates >2,000 pulses per second (pps) have been hypothesized to enhance transmission of temporal information. Recent studies, however, have shown that higher pulse rates impair phase locking to sinusoidal AM in the auditory cortex and impair perceptual modulation detection. Here, we investigated the effects of high pulse rates on the temporal acuity of transmission of pulse trains to the auditory cortex. In anesthetized guinea pigs, signal-detection analysis was used to measure the thresholds for detection of gaps in pulse trains at rates of 254, 1,017, and 4,069 pps and in acoustic noise. Gap-detection thresholds decreased by an order of magnitude with increases in pulse rate from 254 to 4,069 pps. Such a pulse-rate dependence would likely influence speech reception through clinical speech processors. To elucidate the neural mechanisms of gap detection, we measured recovery from forward masking after a 196.6-ms pulse train. Recovery from masking was faster at higher carrier pulse rates and masking increased linearly with current level. We fit the data with a dual-exponential recovery function, consistent with a peripheral and a more central process. High-rate pulse trains evoked less central masking, possibly due to adaptation of the response in the auditory nerve. Neither gap detection nor forward masking varied with cortical depth, indicating that these processes are likely subcortical. These results indicate that gap detection and modulation detection are mediated by two separate neural mechanisms. PMID:19923242
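The dual-exponential recovery function mentioned in the abstract can be fit with standard nonlinear least squares. A sketch with hypothetical masking data; the exact parameterization is an assumption, since the paper's formula is not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_exp_recovery(t, a1, tau1, a2, tau2):
    """Dual-exponential forward-masking recovery: masking decays as the sum
    of a fast and a slow exponential component (consistent with a peripheral
    and a more central process). t is the masker-probe delay in ms."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Hypothetical masking amounts (dB) at several masker-probe delays (ms)
t = np.array([2, 5, 10, 20, 50, 100, 200], dtype=float)
masking = np.array([18.0, 14.5, 10.8, 7.2, 3.9, 2.1, 0.8])

params, _ = curve_fit(dual_exp_recovery, t, masking,
                      p0=(12.0, 5.0, 6.0, 80.0), maxfev=10000)
print(dict(zip(("a1", "tau1", "a2", "tau2"), params)))
```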
Speech and language development in cognitively delayed children with cochlear implants.
Holt, Rachael Frush; Kirk, Karen Iler
2005-04-01
The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. However, this benefit may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.
Speech Recognition Thresholds for Multilingual Populations.
ERIC Educational Resources Information Center
Ramkissoon, Ishara
2001-01-01
This article traces the development of speech audiometry in the United States and reports on the current status, focusing on the needs of a multilingual population in terms of measuring speech recognition threshold (SRT). It also discusses sociolinguistic considerations, alternative SRT stimuli for second language learners, and research on using…
ERIC Educational Resources Information Center
Lee, Andrew H.; Lyster, Roy
2016-01-01
This study investigated the effects of different types of corrective feedback (CF) provided during second language (L2) speech perception training. One hundred Korean learners of L2 English, randomly assigned to five groups (n = 20 per group), participated in eight computer-assisted perception training sessions targeting two minimal pairs of…
ERIC Educational Resources Information Center
Charlesworth, Dacia
2010-01-01
Invention deals with the content of a speech, arrangement involves placing the content in an order that is most strategic, style focuses on selecting linguistic devices, such as metaphor, to make the message more appealing, memory assists the speaker in delivering the message correctly, and delivery ideally enables great reception of the message.…
Thresholds of information leakage for speech security outside meeting rooms.
Robinson, Matthew; Hopkins, Carl; Worrall, Ken; Jackson, Tim
2014-09-01
This paper describes an approach to provide speech security outside meeting rooms where a covert listener might attempt to extract confidential information. Decision-based experiments are used to establish a relationship between an objective measurement of the Speech Transmission Index (STI) and a subjective assessment relating to the threshold of information leakage. This threshold is defined for a specific percentage of English words that are identifiable with a maximum safe vocal effort (e.g., "normal" speech) used by the meeting participants. The results demonstrate that it is possible to quantify an offset that links STI with a specific threshold of information leakage which describes the percentage of words identified. The offsets for male talkers are shown to be approximately 10 dB larger than for female talkers. Hence for speech security it is possible to determine offsets for the threshold of information leakage using male talkers as the "worst case scenario." To define a suitable threshold of information leakage, the results show that a robust definition can be based upon 1%, 2%, or 5% of words identified. For these percentages, results are presented for offset values corresponding to different STI values in a range from 0.1 to 0.3.
Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam
2018-01-01
The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on suppressing the noise or non-speech components of the target air-conduction signal. However, air-conducted speech is hard to differentiate from babble or white noise. To overcome this problem, the proposed algorithm uses bone-conduction speech signals together with soft thresholding based on the Shannon entropy principle and the cross-correlation of the air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed that uses Shannon entropy and cross-correlation with the bone-conduction speech signal to threshold the wavelet packet coefficients of the noisy speech. Thresholds are generated by the entropy and cross-correlation approaches in each band of the wavelet packet decomposition. The method's performance was evaluated in MATLAB simulations using objective quality measures (PESQ, RMSE, correlation, and SNR), and to verify its feasibility, the air- and bone-conduction speech signals and their spectra were compared. The results confirm the high performance of the proposed method, making it a promising candidate for future applications in communication devices, noisy environments, construction, and military operations.
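The abstract names the ingredients (wavelet packet decomposition, Shannon entropy, air/bone cross-correlation, soft thresholding) but not the exact rule combining them. The following PyWavelets sketch is one plausible reading; the threshold heuristic is an assumption, not the authors' formula:

```python
import numpy as np
import pywt  # PyWavelets

def entropy_correlation_denoise(air, bone, wavelet="db4", level=4):
    """Soft-threshold the wavelet packet bands of the noisy air-conduction
    signal, thresholding more aggressively where a band (a) looks
    noise-like by normalized Shannon entropy and (b) correlates weakly
    with the matching bone-conduction band. A sketch, not the paper's
    exact rule."""
    wp_air = pywt.WaveletPacket(air, wavelet, maxlevel=level)
    wp_bone = pywt.WaveletPacket(bone, wavelet, maxlevel=level)
    out = pywt.WaveletPacket(None, wavelet, maxlevel=level)
    for node_a, node_b in zip(wp_air.get_level(level, "natural"),
                              wp_bone.get_level(level, "natural")):
        c = node_a.data
        p = c ** 2 / (np.sum(c ** 2) + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(c))  # 0..1
        n = min(len(c), len(node_b.data))
        rho = abs(np.corrcoef(c[:n], node_b.data[:n])[0, 1])
        thr = np.std(c) * entropy * (1.0 - rho)  # heuristic, an assumption
        out[node_a.path] = pywt.threshold(c, thr, mode="soft")
    return out.reconstruct(update=False)[: len(air)]

# Demo with synthetic signals standing in for real recordings
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 220 * t)
air = clean + 0.4 * np.random.randn(fs)    # noisy air-conduction channel
bone = clean + 0.05 * np.random.randn(fs)  # cleaner bone-conduction channel
enhanced = entropy_correlation_denoise(air, bone)
```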
Law, J; Campbell, C; Roulstone, S; Adams, C; Boyle, J
2008-01-01
Receptive language impairment (RLI) is one of the most significant indicators of negative sequelae for children with speech and language disorders. Despite this, relatively little is known about the most effective treatments for these children in the primary school period. To explore the relationship between the reported practice of speech and language practitioners and the underlying rationales for the therapy that they provide, a phenomenological approach was adopted, drawing on the experiences of speech and language practitioners. Practitioners completed a questionnaire relating to their practice for a single child with receptive language impairment within the 5-11 age range, providing details and rationales for three recent therapy activities. The responses of 56 participants were coded. All the children described experienced marked receptive language impairments, in the main associated with expressive language difficulties and/or social communication problems. The relative homogeneity of the presenting symptoms in terms of test performance was not reflected in the highly differentiated descriptions of intervention. One of the key determinants of how therapists described their practice was the child's age. As the child develops, therapists appeared to shift from a 'skills acquisition' orientation to a 'meta-cognitive' orientation; that is, they move away from teaching specific linguistic behaviours towards teaching children strategies for thinking and using their language. A third of rationales referred to explicit theories, but only half of these referred to the work of specific authors. Many of these were theories of practice rather than theories of deficit, and of those that did cite specific theories, no fewer than 29 different authors were cited, many of whom might best be described as translators of existing theories rather than generators of novel theories. While theories of the deficit dominate the literature, they appear to play a relatively small part in the eclectic practice of speech and language therapists. Theories of therapy may develop relatively independently of theories of deficit. While this may not present a problem for the practitioner, whose principal focus is remediation, it may present a problem for the researcher developing intervention efficacy studies, where the theory of the deficit will need to be well-defined in order to describe both the subgroup of children under investigation and the parameters of the deficit to be targeted in intervention.
Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend; Dørup, Jens; Lous, Jørgen
2016-01-01
We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore if the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care center. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents and daycare teachers completed questionnaires on the children's ability to hear and understand speech. As most of the variables were not assessed using interval scales, non-parametric statistics (Goodman-Kruskal's gamma) were used for analyzing associations with the Galker test score. For comparisons, analysis of variance (ANOVA) was used. Interrelations were adjusted for using a non-parametric graphic model. In unadjusted analyses, the Galker test was associated with gender, age group, language development (Reynell revised scale), audiometry, and tympanometry. The Galker score was also associated with the parents' and day care teachers' reports on the children's vocabulary, sentence construction, and pronunciation. Type B tympanograms were associated with a mean hearing 5-6dB below that of than type A, C1, or C2. In the graphic analysis, Galker scores were closely and significantly related to Reynell test scores (Gamma (G)=0.35), the children's age group (G=0.33), and the day care teachers' assessment of the children's vocabulary (G=0.26). The Galker test of speech reception in noise appears promising as an easy and quick tool for evaluating preschool children's understanding of spoken words in noise, and it correlated well with the day care teachers' reports and less with the parents' reports. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
2009-04-01
The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
Arrabito, G R; McFadden, S M; Crabtree, R B
2001-07-01
Auditory speech thresholds were measured in this study. Subjects were required to discriminate a female voice recording of three-digit numbers in the presence of diotic speech babble. The voice stimulus was spatialized at 11 static azimuth positions on the horizontal plane using three different head-related transfer functions (HRTFs) measured on individuals who did not participate in this study. The diotic presentation of the voice stimulus served as the control condition. The results showed that two of the HRTFs performed similarly and had significantly lower auditory speech thresholds than the third HRTF. All three HRTFs yielded significantly lower auditory speech thresholds compared with the diotic presentation of the voice stimulus, with the largest difference at 60 degrees azimuth. The practical implications of these results suggest that lower headphone levels of the communication system in military aircraft can be achieved without sacrificing intelligibility, thereby lessening the risk of hearing loss.
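Spatializing a mono stimulus with a measured HRTF amounts to convolving it with the left- and right-ear head-related impulse responses for the desired azimuth. A minimal sketch, where the HRIR arrays stand in for measured data:

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono voice stimulus at a static azimuth by convolving it
    with the HRIRs measured for that position (placeholder arrays here)."""
    left = fftconvolve(mono, hrir_left)[: len(mono)]
    right = fftconvolve(mono, hrir_right)[: len(mono)]
    return np.stack([left, right], axis=-1)  # (samples, 2) binaural signal

def diotic(mono):
    """Diotic control condition: identical signals at both ears,
    i.e., no spatial cues."""
    return np.stack([mono, mono], axis=-1)
```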
Word production inconsistency of Singaporean-English-speaking adolescents with Down Syndrome.
Wong, Betty; Brebner, Chris; McCormack, Paul; Butcher, Andy
2015-01-01
The nature of speech disorders in individuals with Down Syndrome (DS) remains controversial despite various explanations put forth in the literature to account for the observed speech profiles. A high level of word production inconsistency in children with DS has led researchers to query whether the inconsistency continues into adolescence, and whether it stems from inconsistent phonological disorder (IPD) or childhood apraxia of speech (CAS). Of the studies that have been published, most suggest that the speech profile of individuals with DS is delayed, while a few recent studies suggest a combination of delayed and disordered patterns. However, no studies have explored the nature of word production inconsistency in this population, or the relationship between word production inconsistency, receptive vocabulary, and severity of speech disorder. The aims of this pilot study were to investigate the extent of word production inconsistency in adolescents with DS and to examine the correlations between word production inconsistency, measures of receptive vocabulary, severity of speech disorder, and oromotor skills in adolescents with DS. The participants were 32 adolescent native speakers of Singaporean English: 16 with DS and 16 typically developing (TD). The participants completed a battery of standardized speech and language assessments, including the Diagnostic Evaluation of Articulation and Phonology (DEAP). Results from each test were correlated to determine relationships. Qualitative analyses were also carried out on all the data collected. In this study, seven out of 16 participants with DS scored above 40% on word production inconsistency, a diagnostic criterion for IPD. In addition, all participants with DS performed poorly on the oromotor assessment of the DEAP. The overall speech profile observed did not correspond exactly to the cluster of symptoms observed in children with IPD or CAS. Word production inconsistency is a noticeable feature of the speech of individuals with DS. In addition, the speech profiles of individuals with DS contain atypical and unusual errors alongside developmental errors. Significant correlations were found between the measures investigated, suggesting that speech disorder in DS is multifactorial. The results from this study will help to improve differential diagnosis of speech disorders and individualized treatment plans in the population with DS. © 2015 Royal College of Speech and Language Therapists.
Polat, Zahra; Bulut, Erdoğan; Ataş, Ahmet
2016-09-01
Spoken word recognition and speech perception tests in quiet are routinely used to assess the benefit that child and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high levels of speech perception in quiet situations. Although these test materials provide valuable information regarding cochlear implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in adverse listening conditions, which is the case in the everyday environment. The aim of this study was to assess the speech intelligibility performance of post-lingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Cross-sectional study. Thirty post-lingual adult implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. The results of the study show a correlation between the subjects' Pure Tone Average (PTA) values and Matrix Test Speech Reception Threshold (SRT) values in quiet. Hence, it is also possible to estimate the PTA values of CI users using the Matrix Test. However, no correlations were found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions. The Matrix Test can be used to assess the benefit CI users gain from their systems in everyday life, since it allows intelligibility testing with material that CI users encounter in their everyday lives and can assess the difficulty they have to cope with in speech discrimination in noisy conditions.
Auinger, Alice Barbara; Riss, Dominik; Liepins, Rudolfs; Rader, Tobias; Keck, Tilman; Keintzel, Thomas; Kaider, Alexandra; Baumgartner, Wolf-Dieter; Gstoettner, Wolfgang; Arnoldner, Christoph
2017-07-01
It has been shown that patients with electric acoustic stimulation (EAS) perform better in noisy environments than patients with a cochlear implant (CI). One reason for this could be the preserved access to acoustic low-frequency cues, including the fundamental frequency (F0). Therefore, our primary aim was to investigate whether users of EAS experience a release from masking with increasing F0 difference between the target talker and a masking talker. The study comprised 29 subjects in three groups: EAS users, CI users, and normal-hearing listeners (NH). All CI and EAS users were implanted with a MED-EL cochlear implant and had at least 12 months of experience with the implant. Speech perception was assessed with the Oldenburg sentence test (OlSa) using one sentence from the test corpus as a speech masker. The F0 in this masking sentence was shifted upwards by 4, 8, or 12 semitones. For each of these masker conditions, the speech reception threshold (SRT) was assessed by adaptively varying the masker level while presenting the target sentences at a fixed level. A statistically significant improvement in speech perception was found for increasing difference in F0 between target sentence and masker sentence in EAS users (p = 0.038) and in NH listeners (p = 0.003). In CI users (classic CI or EAS users with electrical stimulation only), speech perception was independent of differences in F0 between target and masker. A release from masking with increasing difference in F0 between target and masking speech was only observed in listeners and configurations in which the low-frequency region was presented acoustically. Thus, the speech information contained in the low frequencies seems to be crucial for allowing listeners to separate multiple sources. By combining acoustic and electric information, EAS users even manage tasks as complicated as segregating the audio streams from multiple talkers. Preserving the natural code, such as fine-structure cues in the low-frequency region, seems to be crucial to provide CI users with the best benefit. Copyright © 2017 Elsevier B.V. All rights reserved.
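For reference, an upward F0 shift of n semitones corresponds to a frequency ratio of 2^(n/12), so the 4-, 8-, and 12-semitone maskers have F0s roughly 26%, 59%, and 100% above the target talker's:

```python
def semitone_ratio(n):
    """Frequency ratio for a shift of n semitones: 2 ** (n / 12)."""
    return 2.0 ** (n / 12.0)

f0 = 110.0  # hypothetical target-talker F0 in Hz
for n in (4, 8, 12):
    # Shifted masker F0s: about 138.6, 174.6, and 220.0 Hz
    print(n, round(f0 * semitone_ratio(n), 1))
```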
Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai
2017-08-28
This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented: Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing loss being mild or moderate, quiet versus noisy environments, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects, nine with mild and eight with moderate hearing loss. The results for the ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system did not. The benchmark results showed that the ADRO scheme is beneficial to people with mild or moderate hearing loss in noisy environments. The satisfaction ratings regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and that they were more satisfied with ADRO than with the linear scheme in noisy environments.
Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan
2009-06-01
The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, the children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (age, 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant-Nucleus-Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test (BKB-SIN), and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices (SIIs) were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and the aided thresholds. Group mean CNC scores at 50 dB SPL with the 40 dB IIDR were significantly higher (p < 0.001) than with the 30 dB IIDR. Group mean CNC scores at 60 dB SPL, loudness ratings, and the signal-to-noise ratios for 50% correct (SNR-50) on the BKB-SIN were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz as well as higher SIIs afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising comfort for higher-level speech sounds or sentence recognition in noise.
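The SII computation referred to here weights per-band audibility by band-importance values. A heavily simplified sketch of that idea (not the full ANSI S3.5 procedure, which adds masking and level-distortion terms; all numbers are hypothetical):

```python
import numpy as np

def simple_sii(ltass_db, thresholds_db, importance):
    """Very simplified Speech Intelligibility Index: band audibility is the
    fraction of an assumed 30 dB speech dynamic range (peaks 15 dB above
    the LTASS) that lies above threshold, weighted by band-importance
    values that sum to 1."""
    ltass = np.asarray(ltass_db, float)
    thr = np.asarray(thresholds_db, float)
    audibility = np.clip((ltass + 15.0 - thr) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance, float) * audibility))

# Hypothetical band LTASS for 50 dB SPL speech, aided thresholds,
# and band-importance weights
print(simple_sii([40, 38, 34, 30, 25], [25, 28, 30, 35, 38],
                 [0.2, 0.25, 0.25, 0.2, 0.1]))
```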
Spriet, Ann; Van Deun, Lieselot; Eftaxiadis, Kyriaky; Laneau, Johan; Moonen, Marc; van Dijk, Bas; van Wieringen, Astrid; Wouters, Jan
2007-02-01
This paper evaluates the benefit of the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant (CI) system for speech understanding in background noise by CI users. A double-blind evaluation of the two-microphone adaptive beamformer BEAM and a hardware directional microphone was carried out with five adult Nucleus CI users. The test procedure consisted of a pre- and post-test in the lab and a 2-wk trial period at home. In the pre- and post-test, the speech reception threshold (SRT) with sentences and the percentage correct phoneme scores for CVC words were measured in quiet and background noise at different signal-to-noise ratios. Performance was assessed for two different noise configurations (with a single noise source and with three noise sources) and two different noise materials (stationary speech-weighted noise and multitalker babble). During the 2-wk trial period at home, the CI users evaluated the noise reduction performance in different listening conditions by means of the SSQ questionnaire. In addition to the perceptual evaluation, the noise reduction performance of the beamformer was measured physically as a function of the direction of the noise source. Significant improvements of both the SRT in noise (average improvement of 5-16 dB) and the percentage correct phoneme scores (average improvement of 10-41%) were observed with BEAM compared to the standard hardware directional microphone. In addition, the SSQ questionnaire and subjective evaluation in controlled and real-life scenarios suggested a possible preference for the beamformer in noisy environments. The evaluation demonstrates that the adaptive noise reduction algorithm BEAM in the Nucleus Freedom CI-system may significantly increase the speech perception by cochlear implantees in noisy listening conditions. This is the first monolateral (adaptive) noise reduction strategy actually implemented in a mainstream commercial CI.
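BEAM itself is a proprietary two-microphone adaptive beamformer; the sketch below only illustrates the general principle, a generalized sidelobe canceller with an NLMS noise-cancelling filter, which is an assumption about the family of algorithm rather than Cochlear's implementation:

```python
import numpy as np

def gsc_two_mic(front, rear, mu=0.1, n_taps=16, eps=1e-8):
    """Minimal generalized-sidelobe-canceller sketch: the sum channel
    passes the frontal target, the difference channel serves as a noise
    reference, and an NLMS filter subtracts whatever in the sum channel
    correlates with that reference."""
    fixed = 0.5 * (front + rear)   # fixed beamformer (target estimate)
    blocked = front - rear         # blocking-matrix output (noise reference)
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    out = np.zeros_like(fixed)
    for i in range(len(fixed)):
        buf = np.roll(buf, 1)
        buf[0] = blocked[i]
        y = w @ buf                # estimated noise leaking into 'fixed'
        e = fixed[i] - y           # enhanced output sample
        w += mu * e * buf / (buf @ buf + eps)  # NLMS weight update
        out[i] = e
    return out
```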
Results on speech perception after conversion from Spectra® to Freedom®.
Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2012-04-01
New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. The aim was to identify the contribution of this technology for Nucleus 22® users on speech perception tests in silence and in noise, and on audiometric thresholds. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® applied to the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds. The Freedom® technology improved the performance of speech perception and the audiometric thresholds of patients with the Nucleus 22®.
A Study of the Combined Use of a Hearing Aid and Tactual Aid in an Adult with Profound Hearing Loss
ERIC Educational Resources Information Center
Reed, Charlotte M.; Delhorne, Lorraine A.
2006-01-01
This study examined the benefits of the combined use of a hearing aid and tactual aid to supplement lip-reading in the reception of speech and for the recognition of environmental sounds in an adult with profound hearing loss. Speech conditions included lip-reading alone (L), lip-reading + tactual aid (L+TA), lip-reading + hearing aid (L+HA) and…
Glennen, Sharon
2014-07-01
The author followed 56 internationally adopted children during the first 3 years after adoption to determine how and when they reached age-expected language proficiency in Standard American English. The influence of age of adoption was measured, along with the relationship between early and later language and speech outcomes. Children adopted from Eastern Europe at ages 12 months to 4 years, 11 months, were assessed 5 times across 3 years. Norm-referenced measures of receptive and expressive language and articulation were compared over time. In addition, mean length of utterance (MLU) was measured. Across all children, receptive language reached age-expected levels more quickly than expressive language. Children adopted at ages 1 and 2 "caught up" more quickly than children adopted at ages 3 and 4. Three years after adoption, there was no difference in test scores across age of adoption groups, and the percentage of children with language or speech delays matched population estimates. MLU was within the average range 3 years after adoption but significantly lower than other language test scores. Three years after adoption, age of adoption did not influence language or speech outcomes, and most children reached age-expected language levels. Expressive syntax as measured by MLU was an area of relative weakness.
What Is an Arteriovenous Malformation (AVM)?
... sensory information, such as interpretation of pain and temperature, light touch, vibration and more. The temporal lobe functions to process things related to hearing, memory, learning and receptive speech. The occipital lobe functions to ...
Individual differences in children’s private speech: The role of imaginary companions
Davis, Paige E.; Meins, Elizabeth; Fernyhough, Charles
2013-01-01
Relations between children’s imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N = 148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech. PMID:23978382
Perceptual learning for speech in noise after application of binary time-frequency masks
Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.
2013-01-01
Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56 and 0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners, with the greatest improvement observed for the same materials used in training. PMID:23464038
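The ideal binary TF mask described here is straightforward to sketch: compare the local SNR in each time-frequency unit of the separately known target and noise against a threshold criterion, and zero out the units that fall below it. A hedged Python sketch using SciPy (the variable names and local-SNR formulation are assumptions, not the study's exact processing):

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, noise, fs, lc_db=0.0, nperseg=512):
    """Build an ideal binary TF mask from separately known target and noise,
    then apply it to the mixture. lc_db is the local SNR threshold criterion:
    raising it discards more TF units, lowering the intelligibility of the output."""
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10(np.abs(T) + 1e-12) - 20 * np.log10(np.abs(N) + 1e-12)
    mask = (local_snr_db > lc_db).astype(float)
    _, _, M = stft(target + noise, fs, nperseg=nperseg)
    _, out = istft(M * mask, fs, nperseg=nperseg)
    return out, mask

# e.g. out, mask = ideal_binary_mask(speech, babble, fs=16000, lc_db=-6.0)
```

Varying `lc_db` is the knob that produced the different starting intelligibility levels used for training.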
Hostile reception greets Bottomley Congress speech.
1990-04-04
Health Minister Virginia Bottomley's attempts to persuade delegates that there are 'exciting' opportunities for nurses in the Government's plans for the health service failed, as she faced growing hostility from the audience at RCN Congress last week.
Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.
Dillier, Norbert; Lai, Wai Kong
2015-06-11
The Nucleus® 5 System Sound Processor (CP810, Cochlear™, Macquarie University, NSW, Australia) contains two omnidirectional microphones. They can be configured as a fixed directional microphone combination (called Zoom) or as an adaptive beamformer (called Beam), which adjusts the directivity continuously to maximally reduce the interfering noise. Initial evaluation studies with the CP810 had compared performance and usability of the new processor in comparison with the Freedom™ Sound Processor (Cochlear™) for speech in quiet and noise for a subset of the processing options. This study compares the two processing options suggested for use in noisy environments, Zoom and Beam, for various sound-field conditions using a standardized speech-in-noise matrix test (Oldenburg sentence test). Nine German-speaking subjects who previously had been using the Freedom speech processor and subsequently were upgraded to the CP810 device participated in this series of additional evaluation tests. The speech reception threshold (SRT for 50% speech intelligibility in noise) was determined using sentences presented via loudspeaker at 65 dB SPL in front of the listener and noise presented either via the same loudspeaker (S0N0) or at 90 degrees at either the ear with the sound processor (S0NCI+) or the opposite unaided ear (S0NCI-). The fourth noise condition consisted of three uncorrelated noise sources placed at 90, 180 and 270 degrees. The noise level was adjusted through an adaptive procedure to yield a signal-to-noise ratio where 50% of the words in the sentences were correctly understood. In spatially separated speech and noise conditions, both Zoom and Beam improved the SRT significantly. For single noise sources, either ipsilateral or contralateral to the cochlear implant sound processor, average improvements with Beam of 12.9 and 7.9 dB in SRT were found. The average SRT of -8 dB for Beam in the diffuse noise condition (uncorrelated noise from both sides and the back) is remarkable and comparable to the performance of normal-hearing listeners in the same test environment. The static directivity (Zoom) option in the diffuse noise condition still provides a significant benefit of 5.9 dB in comparison with the standard omnidirectional microphone setting. These results indicate that CI recipients may significantly improve their speech recognition in noisy environments using these directional microphone-processing options.
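The adaptive procedure mentioned here, in which the noise level is adjusted trial by trial to converge on 50% intelligibility, can be sketched as a simple one-up/one-down staircase. A toy Python illustration (the step size, trial count, and averaging convention are assumptions; matrix tests differ in detail):

```python
import random

def adaptive_srt(trial_fn, start_snr_db=10.0, step_db=2.0, n_trials=20):
    """One-up/one-down adaptive track converging on the SNR for 50% sentence
    intelligibility. trial_fn(snr_db) -> True if the listener repeated at
    least half of the words at that SNR. The SRT estimate is the mean SNR
    over the final trials (a common convention; tests vary in detail)."""
    snr = start_snr_db
    history = []
    for _ in range(n_trials):
        correct = trial_fn(snr)
        history.append(snr)
        snr += -step_db if correct else step_db  # harder after success, easier after failure
    return sum(history[-10:]) / 10.0

def fake_listener(snr_db, true_srt=-8.0):
    """Toy listener with a logistic psychometric function, for demonstration only."""
    p = 1.0 / (1.0 + 10 ** (-(snr_db - true_srt) / 4.0))
    return random.random() < p

print(f"estimated SRT = {adaptive_srt(fake_listener):.1f} dB SNR")
```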
Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk
2010-12-01
Presently, there are only a few studies examining the benefits of fine structure information in coding strategies. Against this background, this study aims to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested using the Adaptive Auditory Speech Test, which measures speech reception threshold (SRT) in quiet and noise, at each test interval. The children were also asked to rate on a Visual Analogue Scale their satisfaction and coding strategy preference when listening to speech and a pop song. However, since not all tests could be performed in one single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant difference in SRT of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP proved a better strategy compared with CIS+, showing lower SRT results by 5.2 dB. Speech in noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Both satisfaction and coding strategy preference ratings also revealed that the FSP and HDCIS strategies were rated better than the CIS+ strategy when listening to speech and music. FSP was rated better than HDCIS when listening to speech. This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance, and even improving their performance with FSP after a short time of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Using Speech Recall in Hearing Aid Fitting and Outcome Evaluation Under Ecological Test Conditions.
Lunner, Thomas; Rudner, Mary; Rosenbom, Tove; Ågren, Jessica; Ng, Elaine Hoi Ning
2016-01-01
In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
Role of the middle ear muscle apparatus in mechanisms of speech signal discrimination
NASA Technical Reports Server (NTRS)
Moroz, B. S.; Bazarov, V. G.; Sachenko, S. V.
1980-01-01
A method of impedance reflexometry was used to examine 101 students with hearing impairment in order to clarify the interrelation between speech discrimination and the state of the middle ear muscles. The ability to discriminate speech signals depends to some extent on the functional state of the intra-aural muscles. Speech discrimination was greatly impaired in the absence of the stapedial muscle acoustic reflex, in the presence of low stimulation thresholds, and when the increase in reflex amplitude was very small. Discrimination was not impaired when the acoustic reflex was present, relative thresholds were high, and reflex amplitude increased normally in response to speech signals of increasing intensity.
Davidson, Lisa S; Geers, Ann E; Brenner, Christine
2010-10-01
Updated cochlear implant technology and optimized fitting can have a substantial impact on speech perception. The effects of upgrades in processor technology and aided thresholds on word recognition at soft input levels and sentence recognition in noise were examined. We hypothesized that updated speech processors and lower aided thresholds would allow improved recognition of soft speech without compromising performance in noise. 109 teenagers who had used a Nucleus 22 cochlear implant since preschool were tested with their current speech processor(s) (101 unilateral and 8 bilateral): 13 used the Spectra, 22 the ESPrit 22, 61 the ESPrit 3G, and 13 the Freedom. The Lexical Neighborhood Test (LNT) was administered at 70 and 50 dB SPL, and the Bamford-Kowal-Bench (BKB) sentences were administered in quiet and in noise. Aided thresholds were obtained for frequency-modulated tones from 250 to 4,000 Hz. Results were analyzed using repeated-measures analysis of variance. Aided thresholds for the Freedom/3G group were significantly lower (better) than for the Spectra/Sprint group. LNT scores at 50 dB SPL were significantly higher for the Freedom/3G group. No significant differences between the 2 groups were found for the LNT at 70 dB SPL or for sentences in quiet or noise. Adolescents using updated processors that allowed aided detection thresholds of 30 dB HL or better performed best at soft levels. The BKB-in-noise results suggest that greater access to soft speech does not compromise listening in noise.
Marschik, Peter B.; Vollmann, Ralf; Bartl-Pokorny, Katrin D.; Green, Vanessa A.; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa
2018-01-01
Objective We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant (PSV) of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. Methods For this study we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples, and picture stories to elicit narrative competences. Results Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Conclusion Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note. PMID:23870013
Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas
2018-04-24
Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome from deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
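The reported gain in adjusted R² when the AMRD threshold is added to the regression can be reproduced mechanically: fit ordinary least squares with and without the extra predictor and compare adjusted R². A minimal numpy sketch (the arrays are hypothetical placeholders, not the study's data):

```python
import numpy as np

def adjusted_r2(X, y):
    """Fit OLS via least squares and return the adjusted R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = X1.shape                              # n samples, p fitted parameters
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

# Hypothetical per-patient arrays (not the study's data):
# deaf_dur, speech_t0, amrd_t0, speech_t6 = ...
# base = adjusted_r2(np.column_stack([deaf_dur, speech_t0]), speech_t6)
# full = adjusted_r2(np.column_stack([deaf_dur, speech_t0, amrd_t0]), speech_t6)
# print(base, full)   # the study reports a rise from ~0.30 to ~0.44
```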
Code of Federal Regulations, 2012 CFR
2012-01-01
... and any equipment or interconnected system or subsystem of equipment that is used in the creation..., display, switching, interchange, transmission, or reception of data or information. For example, HVAC... following body systems: Neurological; musculoskeletal; special sense organs; respiratory, including speech...
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noticed. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in pre-frontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developing children and that the difference in speech recognition threshold disappeared when exposed to noise at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
Using the structure of natural scenes and sounds to predict neural response properties in the brain
NASA Astrophysics Data System (ADS)
Deweese, Michael
2014-03-01
The natural scenes and sounds we encounter in the world are highly structured. The fact that animals and humans are so efficient at processing these sensory signals compared with the latest algorithms running on the fastest modern computers suggests that our brains can exploit this structure. We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus (MGBv) and primary auditory cortex (A1), and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. We have also developed a biologically inspired neural network model of primary visual cortex (V1) that can learn a sparse representation of natural scenes using spiking neurons and strictly local plasticity rules. The representation learned by our model is in good agreement with measured receptive fields in V1, demonstrating that sparse sensory coding can be achieved in a realistic biological setting.
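Sparse coding of the kind described, minimizing the number of active model neurons per input, can be sketched with an off-the-shelf dictionary learner. A hedged Python illustration using scikit-learn on placeholder data (real use would substitute vectorized spectrogram patches of speech; this is not the authors' model):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# X: rows are vectorized spectrogram patches of speech (placeholder noise here).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))

# L1-penalized sparse coding: few active "model neurons" (atoms) per patch.
dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=50,
                          transform_algorithm="lasso_lars", random_state=0)
codes = dico.fit_transform(X)
print("mean active atoms per patch:", np.mean(np.sum(codes != 0, axis=1)))
# With real speech patches, the learned atoms come to resemble harmonic
# stacks, onsets/terminations, and FM sweeps, as the abstract describes.
```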
Caversaccio, Marco
2014-01-01
Objective. To compare hearing and speech understanding between a new, non-skin-penetrating Baha system (Baha Attract) and the current Baha system using a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound-field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz, increasing to 20–25 dB above 6000 Hz. However, aided sound-field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314
Wardenga, Nina; Batsoulis, Cornelia; Wagener, Kirsten C; Brand, Thomas; Lenarz, Thomas; Maier, Hannes
2015-01-01
The aim of this study was to determine the relationship between hearing loss and speech reception threshold (SRT) in a fixed noise condition using the German Oldenburg sentence test (OLSA). After training with two easily audible lists of the OLSA, SRTs were determined monaurally with headphones at a fixed noise level of 65 dB SPL using a standard adaptive procedure, converging to 50% speech intelligibility. Data were obtained from 315 ears of 177 subjects with hearing losses ranging from -5 to 90 dB HL pure-tone average (PTA, 0.5, 1, 2, 3 kHz). Two domains were identified with a linear dependence of SRT on PTA. The SRT increased with a slope of 0.094 ± 0.006 dB SNR/dB HL (standard deviation (SD) of residuals = 1.17 dB) for PTAs < 47 dB HL and with a slope of 0.811 ± 0.049 dB SNR/dB HL (SD of residuals = 5.54 dB) for higher PTAs. The OLSA can be applied to subjects with a wide range of hearing losses. With a fixed noise presentation level of 65 dB SPL, the SRT is determined by listening in noise for PTAs < ∼47 dB HL; above that, it is determined by listening in quiet.
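The two listening domains can be recovered from data of this kind by fitting separate regression lines below and above the ~47 dB HL breakpoint. A Python sketch on synthetic data generated with the reported slopes (the intercepts, noise level, and fixed breakpoint are assumptions):

```python
import numpy as np

def piecewise_srt_fit(pta, srt, breakpoint=47.0):
    """Fit separate regression lines of SRT on PTA below/above an assumed
    breakpoint, mirroring the two listening domains reported for the OLSA."""
    out = {}
    for name, sel in [("noise-dominated", pta < breakpoint),
                      ("quiet-dominated", pta >= breakpoint)]:
        slope, intercept = np.polyfit(pta[sel], srt[sel], 1)
        out[name] = (round(slope, 3), round(intercept, 1))
    return out

# Synthetic illustration using the reported slopes (0.094 and 0.811);
# intercepts chosen so the two lines roughly meet near 47 dB HL.
rng = np.random.default_rng(1)
pta = rng.uniform(-5, 90, 315)
srt = np.where(pta < 47, -7 + 0.094 * pta, -41 + 0.811 * pta) + rng.normal(0, 1.5, 315)
print(piecewise_srt_fit(pta, srt))
```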
Engaged listeners: shared neural processing of powerful political speeches
Häcker, Frank E. K.; Honey, Christopher J.; Hasson, Uri
2015-01-01
Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners’ brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages. PMID:25653012
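Inter-subject correlation analysis of the kind used here reduces to averaging the pairwise Pearson correlations of the listeners' response time courses. A toy numpy sketch (the shared-signal construction is illustrative only, not the fMRI pipeline):

```python
import numpy as np

def inter_subject_correlation(timecourses):
    """Mean pairwise Pearson correlation across listeners for one region's
    response time course. timecourses: (n_subjects, n_timepoints)."""
    r = np.corrcoef(timecourses)           # subjects x subjects matrix
    iu = np.triu_indices_from(r, k=1)      # unique subject pairs only
    return r[iu].mean()

# Toy check: a shared stimulus-driven signal plus listener-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
strong = shared + 0.5 * rng.standard_normal((20, 300))  # tightly coupled audience
weak = shared + 2.0 * rng.standard_normal((20, 300))    # loosely coupled audience
print(inter_subject_correlation(strong), ">", inter_subject_correlation(weak))
```

Higher values indicate that the speech is "taking control" of the audience's responses more uniformly, the pattern reported for rhetorically powerful speeches.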
1986-07-08
Dr. William R. Lucas, Marshall's fourth Center Director (1974-1986), delivers a speech in front of a picture of the lunar landscape with Earth looming in the background while attending a Huntsville Chamber of Commerce reception honoring his achievements as Director of Marshall Space Flight Center (MSFC).
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating the spatial selectivity of neural excitation with cochlear implants and (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically decreased as the neural response to the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation) and spike-rate adaptation. The detection threshold for the low-rate stimulus correlated most strongly with the spread of the forward-masking patterns, and the correlation decreased for long and high-rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated the five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with the high-threshold sites deactivated was better than performance with the subjects' clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map, but the difference was not significant. These results suggest that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W
2015-01-01
As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.
Hearing gain with a BAHA test-band in patients with single-sided deafness.
Kim, Do-Youn; Kim, Tae Su; Shim, Byoung Soo; Jin, In Suk; Ahn, Joong Ho; Chung, Jong Woo; Yoon, Tae Hyun; Park, Hong Ju
2014-01-01
It is assumed that preoperative use of a bone-anchored hearing aid (BAHA) test-band will give a patient lower gain compared to real post-operative gain because of the reduction of energy through the scalp when using a test-band. Hearing gains using a BAHA test-band were analyzed in patients with unilateral hearing loss. Nineteen patients with unilateral sensorineural hearing loss were enrolled. A test-band, which was connected to a BAHA Intenso with full-on gain, was put on the mastoid. Conventional air-conduction (AC) pure-tone averages (PTAs) and sound-field PTAs and speech reception thresholds (SRTs) were obtained in conditions A (the better ear naked), B (the better ear plugged), and C (the better ear plugged with a test-band on the poorer mastoid). Air-conduction PTAs of the poorer and better ears were 91 ± 19 and 18 ± 8 dB HL. Sound-field PTAs in condition B were higher than those in condition A (54 vs. 26 dB HL), which means that the earplugs could block sound up to roughly 54 dB HL in the better ears. The aided PTAs (24 ± 6 dB HL) in condition C were similar to those of the better ears in condition A (26 ± 9 dB HL), though condition C showed higher thresholds at 500 Hz and lower thresholds at 1 and 2 kHz when compared to condition A. The hearing thresholds using a test-band were similar to the published results of BAHA users with the volume at the most comfortable level (MCL). Our findings showed that a BAHA test-band on the poorer ear could transmit sound to the cochlea as effectively as the better ears can hear. The increased functional gain at 1 and 2 kHz reflects the technical characteristics of the BAHA processor. The reduction of energy through the scalp when using a test-band seems to be offset by the difference in output obtained by setting the volume to full-on gain and using a high-powered speech processor. Preoperative hearing gains using a test-band with full-on gain seem to be similar to the post-operative gains of BAHA users with the volume at MCL. © 2013.
Boyd, Paul J
2006-12-01
The principal task in the programming of a cochlear implant (CI) speech processor is the setting of the electrical dynamic range (output) for each electrode, to ensure that a comfortable loudness percept is obtained for a range of input levels. This typically involves separate psychophysical measurement of electrical threshold (θe) and upper tolerance levels using short current bursts generated by the fitting software. Anecdotal clinical experience and some experimental studies suggest that the measurement of θe is relatively unimportant and that the setting of upper tolerance limits is more critical for processor programming. The present study aims to test this hypothesis and examines in detail how acoustic thresholds and speech recognition are affected by setting of the lower limit of the output ("programming threshold" or "PT") to understand better the influence of this parameter and how it interacts with certain other programming parameters. Test programs (maps) were generated with PT set to artificially high and low values and tested on users of the MED-EL COMBI 40+ CI system. Acoustic thresholds and speech recognition scores (sentence tests) were measured for each of the test maps. Acoustic thresholds were also measured using maps with a range of output compression functions ("maplaws"). In addition, subjective reports were recorded regarding the presence of "background threshold stimulation", which is occasionally reported by CI users if PT is set to relatively high values when using the CIS strategy. Manipulation of PT was found to have very little effect. Setting PT to minimum produced a mean 5 dB (S.D. = 6.25) increase in acoustic thresholds, relative to thresholds with PT set normally, and had no statistically significant effect on speech recognition scores on a sentence test. On the other hand, maplaw setting was found to have a significant effect on acoustic thresholds (raised as maplaw is made more linear), which provides some theoretical explanation as to why PT has little effect when using the default maplaw of c = 500. Subjective reports of background threshold stimulation showed that most users could perceive a relatively loud auditory percept, in the absence of microphone input, when PT was set to double the behaviorally measured electrical thresholds (θe), but that this produced little intrusion when microphone input was present. The results of these investigations have direct clinical relevance, showing that setting of PT is indeed relatively unimportant in terms of speech discrimination, but that it is worth ensuring that PT is not set excessively high, as this can produce distracting background stimulation. Indeed, it may even be set to minimum values without deleterious effect.
The development and validation of the Closed-set Mandarin Sentence (CMS) test.
Tao, Duo-Duo; Fu, Qian-Jie; Galvin, John J; Yu, Ya-Feng
2017-09-01
Matrix-styled sentence tests offer a closed-set paradigm that may be useful when evaluating speech intelligibility. Ideally, sentence test materials should reflect the distribution of phonemes within the target language. We developed and validated the Closed-set Mandarin Sentence (CMS) test to assess Mandarin speech intelligibility in noise. CMS test materials were selected to be familiar words and to represent the natural distribution of vowels, consonants, and lexical tones found in Mandarin Chinese. Ten key words in each of five categories (Name, Verb, Number, Color, and Fruit) were produced by a native Mandarin talker, resulting in a total of 50 words that could be combined to produce 100,000 unique sentences. Normative data were collected in 10 normal-hearing, adult Mandarin-speaking Chinese listeners using a closed-set test paradigm. Two test runs were conducted for each subject, and 20 sentences per run were randomly generated while ensuring that each word was presented only twice in each run. First, the levels of the words in each category were adjusted to produce equal intelligibility in noise. Test-retest reliability for word-in-sentence recognition was excellent according to Cronbach's alpha (0.952). After the category-level adjustments, speech reception thresholds (SRTs) for sentences in noise, defined as the signal-to-noise ratio (SNR) that produced 50% correct whole-sentence recognition, were adaptively measured by adjusting the SNR according to the correctness of the response. The mean SRT was -7.9 (SE = 0.41) and -8.1 (SE = 0.34) dB for runs 1 and 2, respectively. The mean standard deviation across runs was 0.93 dB, and paired t-tests showed no significant difference between runs 1 and 2 (p = 0.74) despite random sentences being generated for each run and each subject. These results suggest that the CMS provides a large stimulus set with which to repeatedly and reliably measure Mandarin-speaking listeners' speech understanding in noise using a closed-set paradigm.
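The run-generation constraint, 20 random sentences in which each of the 50 words appears exactly twice, has a simple construction: stack two shuffled copies of each 10-word category. A Python sketch with placeholder words standing in for the recorded Mandarin materials:

```python
import random

CATEGORIES = {
    "Name":   [f"name{i}" for i in range(10)],   # placeholders for the 50
    "Verb":   [f"verb{i}" for i in range(10)],   # recorded Mandarin words
    "Number": [f"num{i}" for i in range(10)],
    "Color":  [f"color{i}" for i in range(10)],
    "Fruit":  [f"fruit{i}" for i in range(10)],
}

def generate_run(seed=None):
    """Generate 20 random sentences in which every one of the 50 words
    occurs exactly twice: concatenate two shuffled copies of each
    10-word category to fill the 20 sentence slots."""
    rng = random.Random(seed)
    columns = []
    for words in CATEGORIES.values():
        col = words * 2          # each word twice -> 20 slots per category
        rng.shuffle(col)
        columns.append(col)
    return [" ".join(sent) for sent in zip(*columns)]

for sentence in generate_run(seed=42)[:3]:
    print(sentence)
```

Because every word occurs equally often, run-to-run differences reflect the listener and the SNR track rather than the sampled vocabulary.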
Dai, Chuanfu; Zhao, Zeqi; Zhang, Duo; Lei, Guanxiong
2018-01-01
Background The aim of this study was to explore the value of the spectral ripple discrimination test in speech recognition evaluation among a deaf (post-lingual) Mandarin-speaking population in China following cochlear implantation. Material/Methods The study included 23 Mandarin-speaking adult subjects with normal hearing (normal-hearing group) and 17 deaf adults who were former Mandarin speakers, with cochlear implants (cochlear implantation group). The normal-hearing subjects were divided into men (n=10) and women (n=13). The spectral ripple discrimination thresholds between the groups were compared. The correlation between spectral ripple discrimination thresholds and Mandarin speech recognition rates in the cochlear implantation group was studied. Results Spectral ripple discrimination thresholds did not correlate with age (r=−0.19; p=0.22), and there was no significant difference in spectral ripple discrimination thresholds between the male and female groups (p=0.654). Spectral ripple discrimination thresholds of deaf adults with cochlear implants were significantly correlated with monosyllabic recognition rates (r=0.84; p=0.000). Conclusions In a Mandarin-speaking population, spectral ripple discrimination thresholds of normal-hearing individuals were unaffected by both gender and age. Spectral ripple discrimination thresholds were correlated with Mandarin monosyllabic recognition rates in post-lingually deaf adults with cochlear implants. The spectral ripple discrimination test is a promising method for speech recognition evaluation in adults following cochlear implantation in China. PMID:29806954
Speech processing in children with functional articulation disorders.
Gósy, Mária; Horváth, Viktória
2015-03-01
This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.
De Ceulaer, Geert; Bestel, Julie; Mülder, Hans E; Goldbeck, Felix; de Varebeke, Sebastien Pierre Janssens; Govaerts, Paul J
2016-05-01
Roger is a digital adaptive multi-channel remote microphone technology that wirelessly transmits a speaker's voice directly to a hearing instrument or cochlear implant sound processor. Frequency hopping between channels, in combination with repeated broadcast, avoids interference issues that have limited earlier generation FM systems. This study evaluated the benefit of the Roger Pen transmitter microphone in a multiple talker network (MTN) for cochlear implant users in a simulated noisy conversation setting. Twelve post-lingually deafened adult Advanced Bionics CII/HiRes 90K recipients were recruited. Subjects used a Naida CI Q70 processor with integrated Roger 17 receiver. The test environment simulated four people having a meal in a noisy restaurant, one the CI user (listener), and three companions (talkers) talking non-simultaneously in a diffuse field of multi-talker babble. Speech reception thresholds (SRTs) were determined without the Roger Pen, with one Roger Pen, and with three Roger Pens in an MTN. Using three Roger Pens in an MTN improved the SRT by 14.8 dB over using no Roger Pen, and by 13.1 dB over using a single Roger Pen (p < 0.0001). The Roger Pen in an MTN provided statistically and clinically significant improvement in speech perception in noise for Advanced Bionics cochlear implant recipients. The integrated Roger 17 receiver made it easy for users of the Naida CI Q70 processor to take advantage of the Roger system. The listening advantage and ease of use should encourage more clinicians to recommend and fit Roger in adult cochlear implant patients.
Mukari, Siti Zamratol-Mai Sarah; Umat, Cila; Razak, Ummu Athiyah Abdul
2011-07-01
The aim of the present study was to compare the benefit of monaural versus binaural ear-level frequency-modulated (FM) fitting on speech perception in noise in children with normal hearing. The reception threshold for sentences (RTS) was measured in no-FM, monaural-FM, and binaural-FM conditions in 22 normally developing children with bilateral normal hearing, aged 8 to 9 years. Data were gathered using the Pediatric Malay Hearing in Noise Test (P-MyHINT), with speech presented from the front and multi-talker babble presented from 90°, 180°, and 270° azimuths in a sound-treated booth. The results revealed that the use of either monaural or binaural ear-level FM receivers provided significantly better mean RTSs than the no-FM condition (P<0.001). However, binaural FM did not produce a significantly greater benefit in mean RTS than monaural fitting. The benefit of binaural over monaural FM varies across individuals; while binaural fitting provided better RTSs in about 50% of study subjects, there were those in whom binaural fitting resulted in either deterioration or no additional improvement compared to monaural FM fitting. The present study suggests that the use of monaural ear-level FM receivers in children with normal hearing might provide a benefit similar to binaural use. Individual variation in binaural FM benefit over monaural FM suggests that the decision to employ monaural or binaural fitting should be individualized. It should be noted, however, that the current study recruited typically developing children with normal hearing. Future studies involving normal-hearing children at high risk of difficulty listening in noise are indicated to determine whether similar findings are obtained.
Simultaneous Communication Supports Learning in Noise by Cochlear Implant Users
Blom, Helen C.; Marschark, Marc; Machmer, Elizabeth
2017-01-01
Objectives This study sought to evaluate the potential of using spoken language and signing together (simultaneous communication, SimCom, sign-supported speech) as a means of improving speech recognition, comprehension, and learning by cochlear implant users in noisy contexts. Methods Forty-eight college students who were active cochlear implant users watched videos of three short presentations, the text versions of which were standardized at the 8th-grade reading level. One passage was presented in spoken language only, one was presented in spoken language with multi-talker babble background noise, and one was presented via simultaneous communication with the same background noise. Following each passage, participants responded to 10 (standardized) open-ended questions designed to assess comprehension. Indicators of participants' spoken language and sign language skills were obtained via self-reports and objective assessments. Results When spoken materials were accompanied by signs, scores were significantly higher than when materials were spoken in noise without signs. Participants' receptive spoken language skills significantly predicted scores in all three conditions; neither their receptive sign skills nor age of implantation predicted performance. Discussion Students who are cochlear implant users typically rely solely on spoken language in the classroom. The present results, however, suggest that there are potential benefits of simultaneous communication for such learners in noisy settings. For those cochlear implant users who know sign language, the redundancy of speech and signs can potentially offset the reduced fidelity of spoken language in noise. Conclusion Accompanying spoken language with signs can benefit learners who are cochlear implant users in noisy situations such as classroom settings. Factors associated with such benefits, such as receptive skills in signed and spoken modalities, classroom acoustics, and material difficulty, need to be empirically examined. PMID:28010675
Wang, Yang; Naylor, Graham; Kramer, Sophia E; Zekveld, Adriana A; Wendt, Dorothea; Ohlenforst, Barbara; Lunner, Thomas
People with hearing impairment are likely to experience higher levels of fatigue because of effortful listening in daily communication. This hearing-related fatigue might not only constrain their work performance but also result in withdrawal from major social roles. Therefore, it is important to understand the relationships between fatigue, listening effort, and hearing impairment by examining the evidence from both subjective and objective measurements. The aim of the present study was to investigate these relationships by assessing subjectively measured daily-life fatigue (self-report questionnaires) and objectively measured listening effort (pupillometry) in both normally hearing and hearing-impaired participants. Twenty-seven normally hearing and 19 age-matched participants with hearing impairment were included in this study. Two self-report fatigue questionnaires, the Need For Recovery and the Checklist Individual Strength, were given to the participants before the test session to evaluate subjectively measured daily fatigue. Participants were asked to perform a speech reception threshold test with a single-talker masker targeting a 50% correct response criterion. The pupil diameter was recorded during speech processing, and we used peak pupil dilation (PPD) as the main outcome measure of the pupillometry. No correlation was found between subjectively measured fatigue and hearing acuity, nor was a group difference found between the normally hearing and the hearing-impaired participants on the fatigue scores. A significant negative correlation was found between self-reported fatigue and PPD. A similar correlation was also found between the Speech Intelligibility Index required for 50% correct performance and PPD. Multiple regression analysis showed that factors representing "hearing acuity" and "self-reported fatigue" had equal and independent associations with the PPD during the speech-in-noise test. Less fatigue and better hearing acuity were associated with a larger pupil dilation. To the best of our knowledge, this is the first study to investigate the relationship between a subjective measure of daily-life fatigue and an objective measure of pupil dilation as an indicator of listening effort. These findings help to provide an empirical link between pupil responses, as observed in the laboratory, and daily-life fatigue.
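Peak pupil dilation, the study's main pupillometry outcome, is typically computed as the maximum baseline-corrected diameter within a trial. A minimal Python sketch (the baseline window and sampling rate are assumptions; windowing conventions vary between labs):

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation for one trial.
    trace: pupil diameter samples; the first baseline_s seconds are
    assumed to precede sentence onset (an assumption, not the study's
    exact preprocessing). NaNs from blinks are ignored."""
    n_base = int(baseline_s * fs)
    baseline = np.nanmean(trace[:n_base])
    return float(np.nanmax(trace[n_base:]) - baseline)

# e.g. ppd = peak_pupil_dilation(trial_trace, fs=60)   # 60 Hz eye tracker
```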
Wireless and acoustic hearing with bone-anchored hearing devices.
Bosman, Arjan J; Mylanus, Emmanuel A M; Hol, Myrthe K S; Snik, Ad F M
2015-07-01
The efficacy of wireless connectivity in bone-anchored hearing was studied by comparing the wireless and acoustic performance of the Ponto Plus sound processor from Oticon Medical with the acoustic performance of its predecessor, the Ponto Pro. Nineteen subjects with more than two years' experience with a bone-anchored hearing device were included. Thirteen subjects were fitted unilaterally and six bilaterally. Subjects served as their own controls. First, subjects were tested with the Ponto Pro processor. After a four-week acclimatization period, performance with the Ponto Plus processor was measured. In the laboratory, wireless and acoustic input levels were made equal. In daily life, equal settings of wireless and acoustic input were used when watching TV; however, when using the telephone the acoustic input was reduced by 9 dB relative to the wireless input. Speech scores for the microphone with the Ponto Pro and for both input modes of the Ponto Plus processor were essentially equal when equal input levels of wireless and microphone inputs were used. Only the TV condition showed a statistically significant (p < 5%) lower speech reception threshold for wireless relative to microphone input. In real life, evaluation of speech quality, speech intelligibility in quiet and noise, and annoyance by ambient noise when using a landline phone, a mobile telephone, and watching TV showed a clear preference (p < 1%) for the Ponto Plus system with streamer over the microphone input. Due to the small number of respondents with a landline phone (N = 7), the result for noise annoyance was only significant at the 5% level. Equal input levels for acoustic and wireless inputs result in equal speech scores, showing a (near) equivalence of acoustic and wireless sound transmission with the Ponto Pro and Ponto Plus. The default 9-dB difference between microphone and wireless input when using the telephone results in a substantial wireless benefit on the telephone. The preference for wirelessly transmitted audio when watching TV can be attributed to the relatively poor sound quality of backward-facing loudspeakers in flat-screen TVs. The ratio of wireless and acoustic input can easily be set to the user's preference with the streamer's volume control.
Sequencing Stories in Spanish and English.
ERIC Educational Resources Information Center
Steckbeck, Pamela Meza
The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…
A cascaded neuro-computational model for spoken word recognition
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2010-03-01
In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
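A toy sketch of the cascaded architecture described above (not the authors' implementation; the layer sizes, labels, lexicon, and matching rule are invented for illustration): radial-basis pre-lexical units respond to acoustic feature frames, and lexical units compare the sequence of winning unit labels against an internal template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pre-lexical layer: radial basis functions over 12-dim acoustic feature frames
centers = rng.normal(size=(8, 12))   # 8 hypothetical RBF units
width = 1.0

def prelexical_winners(frames):
    """Label each frame with the index of the winning RBF unit."""
    labels = []
    for x in frames:
        acts = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))
        labels.append(int(np.argmax(acts)))          # winner-take-all
    return labels

# Lexical layer: each word unit stores a template sequence of winner labels
lexicon = {"cat": [3, 1, 4], "cap": [3, 1, 7]}       # hypothetical templates

def recognize(frames):
    """Score every word template against the winner sequence; all candidates
    remain accessible, mimicking parallel access in lexical decision."""
    seq = prelexical_winners(frames)
    scores = {w: sum(a == b for a, b in zip(seq, t)) / len(t)
              for w, t in lexicon.items()}
    return max(scores, key=scores.get), scores

best, scores = recognize(rng.normal(size=(3, 12)))
```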
Paatsch, Louise E; Blamey, Peter J; Sarant, Julia Z; Bow, Catherine P
2006-01-01
A group of 21 hard-of-hearing and deaf children attending primary school were trained by their teachers on the production of selected consonants and on the meanings of selected words. Speech production, vocabulary knowledge, reading aloud, and speech perception measures were obtained before and after each type of training. The speech production training produced a small but significant improvement in the percentage of consonants correctly produced in words. The vocabulary training substantially improved knowledge of word meanings. Performance on speech perception and reading aloud was significantly improved by both types of training. These results were in accord with the predictions of a mathematical model put forward to describe the relationships between speech perception, speech production, and language measures in children (Paatsch, Blamey, Sarant, Martin, & Bow, 2004). These training data demonstrate that the relationships between the measures are causal. In other words, improvements in speech production and vocabulary performance produced by training will carry over into predictable improvements in speech perception and reading scores. Furthermore, the model will help educators identify the most effective methods of improving receptive and expressive spoken language for individual children who are deaf or hard of hearing.
Di Berardino, F; Tognola, G; Paglialonga, A; Alpini, D; Grandori, F; Cesarani, A
2010-08-01
To assess whether different compact disk recording protocols, used to prepare speech test material, affect the reliability and comparability of speech audiometry testing. We conducted acoustic analysis of compact disks used in clinical practice, to determine whether speech material had been recorded using similar procedures. To assess the impact of different recording procedures on speech test outcomes, normal hearing subjects were tested using differently prepared compact disks, and their psychometric curves compared. Acoustic analysis revealed that speech material had been recorded using different protocols. The major difference was the gain between the levels at which the speech material and the calibration signal had been recorded. Although correct calibration of the audiometer was performed for each compact disk before testing, speech recognition thresholds and maximum intelligibility thresholds differed significantly between compact disks (p < 0.05), and were influenced by the gain between the recording level of the speech material and the calibration signal. To ensure the reliability and comparability of speech test outcomes obtained using different compact disks, it is recommended to check for possible differences in the recording gains used to prepare the compact disks, and then to compensate for any differences before testing.
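A small sketch of the kind of check the authors recommend (illustrative; the 1 kHz calibration tone and the signal layout are assumptions): measure the level difference between the speech material and the calibration signal on each disk, then compensate before testing.

```python
import numpy as np

def rms_db(x):
    """RMS level of a signal in dB re full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def recording_gain_db(speech, calibration):
    """Gain between the speech material and the calibration signal."""
    return rms_db(speech) - rms_db(calibration)

# Synthetic example: a 1 kHz calibration tone and quieter "speech" material
fs = 44100
t = np.arange(fs) / fs
cal_tone = np.sin(2 * np.pi * 1000 * t)
speech = 0.3 * np.random.default_rng(1).standard_normal(fs)

gain_a = recording_gain_db(speech, cal_tone)   # disk A's recording gain
# If disk B was prepared with a different gain, align it with disk A:
# speech_b_aligned = speech_b * 10 ** ((gain_a - gain_b) / 20)
```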
Engaged listeners: shared neural processing of powerful political speeches.
Schmälzle, Ralf; Häcker, Frank E K; Honey, Christopher J; Hasson, Uri
2015-08-01
Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.
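A minimal illustration of the inter-subject correlation measure referred to above (a generic sketch, not the authors' pipeline; the data shapes and noise level are assumptions): each listener's regional time course is correlated with the average time course of all other listeners.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out ISC for one brain region.

    data : array of shape (n_subjects, n_timepoints), one response
           time course per listener
    """
    n_subjects = data.shape[0]
    iscs = []
    for s in range(n_subjects):
        others = np.delete(data, s, axis=0).mean(axis=0)  # average of the rest
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return float(np.mean(iscs))

rng = np.random.default_rng(2)
shared = rng.standard_normal(500)                  # stimulus-driven component
listeners = shared + 0.8 * rng.standard_normal((20, 500))
print(intersubject_correlation(listeners))         # higher = more coupled audience
```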
Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-02-01
To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (age range = 45-78 years) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to audiometric testing, the authors administered the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that including brainstem variables in a model with the QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. These results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
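A generic sketch of this kind of hierarchical regression comparison (illustrative; the predictors, effect sizes, and data are invented): fit the model with and without the brainstem variables and compare the variance explained.

```python
import numpy as np

def r_squared(X, y):
    """R-squared of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 111
quicksin, pta, age = rng.standard_normal((3, n))
cabr = rng.standard_normal((n, 3))           # hypothetical brainstem measures
ssq_speech = 0.4 * quicksin + 0.3 * cabr[:, 0] + rng.standard_normal(n)

base = r_squared(np.column_stack([quicksin, pta, age]), ssq_speech)
full = r_squared(np.column_stack([quicksin, pta, age, cabr]), ssq_speech)
print(base, full)    # the gap is the extra variance carried by the brainstem terms
```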
Pitch perception and production in congenital amusia: Evidence from Cantonese speakers.
Liu, Fang; Chan, Alice H D; Ciocca, Valter; Roquet, Catherine; Peretz, Isabelle; Wong, Patrick C M
2016-07-01
This study investigated pitch perception and production in speech and music in individuals with congenital amusia (a disorder of musical pitch processing) who are native speakers of Cantonese, a tone language with a highly complex tonal system. Sixteen Cantonese-speaking congenital amusics and 16 controls performed a set of lexical tone perception, production, singing, and psychophysical pitch threshold tasks. Their tone production accuracy and singing proficiency were subsequently judged by independent listeners and subjected to acoustic analyses. Relative to controls, amusics showed impaired discrimination of lexical tones in both speech and non-speech conditions. They also received lower ratings for singing proficiency, producing larger pitch interval deviations and making more pitch interval errors than controls. Although amusics demonstrated higher pitch direction identification thresholds than controls for both speech syllables and piano tones, they nevertheless produced native lexical tones with pitch trajectories and intelligibility comparable to those of controls. Significant correlations were found between pitch threshold and lexical tone perception, music perception and production, but not between lexical tone perception and production for amusics. These findings provide further evidence that congenital amusia is a domain-general, language-independent pitch-processing deficit that is associated with severely impaired music perception and production, mildly impaired speech perception, and largely intact speech production.
Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
2016-01-01
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The children with APD were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in spatialized noise test, which assesses sentence perception in various configurations of masking speech and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents. All outcome measures significantly improved immediately postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcomes. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills, as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
NASA Astrophysics Data System (ADS)
Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don
2003-10-01
While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, older subjects have more difficulty than younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics used to measure speech intelligibility, including the Speech Intelligibility Index (SII), consider only an absolute threshold when accounting for age-related hearing loss. These metrics therefore tend to overestimate performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
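A heavily simplified sketch of the band-importance form of the SII (after the general shape of ANSI S3.5-1997; the band weights and SNRs below are made-up values): audibility in each band is the band SNR mapped linearly from [-15, +15] dB onto [0, 1], then weighted by band importance and summed.

```python
import numpy as np

def sii(band_snr_db, importance):
    """Simplified Speech Intelligibility Index.

    band_snr_db : per-band signal-to-noise ratios in dB
    importance  : band-importance weights that sum to 1
    """
    audibility = np.clip((np.asarray(band_snr_db, float) + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance, float) * audibility))

# Hypothetical 6-band example (weights and SNRs are illustrative only)
weights = [0.10, 0.15, 0.20, 0.25, 0.20, 0.10]
snrs_in_car_noise = [12, 8, 3, -2, -6, -12]   # dB SNR per band
print(sii(snrs_in_car_noise, weights))        # 0..1; higher predicts better intelligibility
```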
On the Etiology of Listening Difficulties in Noise Despite Clinically Normal Audiograms
2017-01-01
Many people with difficulties following conversations in noisy settings have “clinically normal” audiograms, that is, tone thresholds better than 20 dB HL from 0.1 to 8 kHz. This review summarizes the possible causes of such difficulties, and examines established as well as promising new psychoacoustic and electrophysiologic approaches to differentiate between them. Deficits at the level of the auditory periphery are possible even if thresholds remain around 0 dB HL, and become probable when they reach 10 to 20 dB HL. Extending the audiogram beyond 8 kHz can identify early signs of noise-induced trauma to the vulnerable basal turn of the cochlea, and might point to “hidden” losses at lower frequencies that could compromise speech reception in noise. Listening difficulties can also be a consequence of impaired central auditory processing, resulting from lesions affecting the auditory brainstem or cortex, or from abnormal patterns of sound input during developmental sensitive periods and even in adulthood. Such auditory processing disorders should be distinguished from (cognitive) linguistic deficits, and from problems with attention or working memory that may not be specific to the auditory modality. Improved diagnosis of the causes of listening difficulties in noise should lead to better treatment outcomes, by optimizing auditory training procedures to the specific deficits of individual patients, for example. PMID:28002080
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Bone conduction reception: head sensitivity mapping.
McBride, Maranda; Letowski, Tomasz; Tran, Phuong
2008-05-01
This study sought to identify skull locations that are highly sensitive to bone conduction (BC) auditory signal reception and could be used in the design of military radio communication headsets. In Experiment 1, pure tone signals were transmitted via BC to 11 skull locations of 14 volunteers seated in a quiet environment. In Experiment 2, the same signals were transmitted via BC to nine skull locations of 12 volunteers seated in an environment with 60 decibels of white background noise. Hearing threshold levels for each signal per location were measured. In the quiet condition, the condyle had the lowest mean threshold for all signals followed by the jaw angle, mastoid and vertex. In the white noise condition, the condyle also had the lowest mean threshold followed by the mastoid, vertex and temple. Overall results of both experiments were very similar and implicated the condyle as the most effective location.
The Role of the Arcuate Fasciculus in Conduction Aphasia
ERIC Educational Resources Information Center
Bernal, Byron; Ardila, Alfredo
2009-01-01
In aphasia literature, it has been considered that a speech repetition defect represents the main constituent of conduction aphasia. Conduction aphasia has frequently been interpreted as a language impairment due to lesions of the arcuate fasciculus (AF) that disconnect receptive language areas from expressive ones. Modern neuroradiological…
Narrative Skills in Children with Selective Mutism: An Exploratory Study
ERIC Educational Resources Information Center
McInnes, Alison; Fung, Daniel; Manassis, Katharina; Fiksenbaum, Lisa; Tannock, Rosemary
2004-01-01
Selective mutism (SM) is a rare and complex disorder associated with anxiety symptoms and speech-language deficits; however, the nature of these language deficits has not been studied systematically. A novel cross-disciplinary assessment protocol was used to assess anxiety and nonverbal cognitive, receptive language, and expressive narrative…
Effects of Amplification, Speechreading, and Classroom Environments on Reception of Speech
ERIC Educational Resources Information Center
Blair, James C.
1977-01-01
Two sources of amplification (binaural ear-level hearing aids and RF auditory training units with environmental microphones on) were compared for 18 hard-of-hearing students (7 to 14 years old) in "ideal" and "typical" classroom noise levels, with and without visual speechreading cues provided. (Author/IM)
Hearing in middle age: a population snapshot of 40–69 year olds in the UK
Dawes, Piers; Fortnum, Heather; Moore, David R.; Emsley, Richard; Norman, Paul; Cruickshanks, Karen; Davis, Adrian; Edmondson-Jones, Mark; McCormack, Abby; Lutman, Mark; Munro, Kevin
2014-01-01
Objective To report population-based prevalence of hearing impairment based on speech recognition in noise testing in a large and inclusive sample of UK adults aged 40 to 69 years. The present study is the first to report such data. Prevalence of tinnitus and use of hearing aids is also reported. Design The research was conducted using the UK Biobank resource. The better-ear unaided speech reception threshold was measured adaptively using the Digit Triplet Test (n = 164,770). Self-report data on tinnitus, hearing aid use, noise exposure as well as demographic variables were collected. Results Overall, 10.7% of adults (95%CI 10.5–10.9%) had significant hearing impairment. Prevalence of tinnitus was 16.9% (95%CI 16.6–17.1%) and hearing aid use was 2.0% (95%CI 1.9–2.1%). Odds of hearing impairment increased with age, with a history of work- and music-related noise exposure, for lower socioeconomic background and for ethnic minority backgrounds. Males were at no higher risk of hearing impairment than females. Conclusion Around 1 in 10 adults aged 40 to 69 years have substantial hearing impairment. The reasons for excess risk of hearing impairment particularly for those from low socioeconomic and ethnic minority backgrounds require identification, as this represents a serious health inequality. The underutilization of hearing aids has altered little since the 1980s, and is a major cause for concern. PMID:24518430
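A bare-bones sketch of the adaptive track used by tests of this kind (a generic 1-up/1-down staircase converging on the 50%-correct SNR; the step size, trial count, and simulated listener are assumptions, not UK Biobank parameters):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_listener(snr_db, srt_true=-8.0, slope_db=1.5):
    """Chance of repeating a digit triplet correctly at this SNR (logistic model)."""
    p_correct = 1.0 / (1.0 + np.exp(-(snr_db - srt_true) / slope_db))
    return rng.random() < p_correct

def digit_triplet_srt(n_trials=40, start_snr=0.0, step_db=2.0):
    """1-up/1-down staircase: harder after a correct response, easier after
    an error, so the track hovers around the 50%-correct SNR."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step_db if simulated_listener(snr) else step_db
    return float(np.mean(track[-20:]))   # SRT = mean SNR over the last trials

print(digit_triplet_srt())               # lands near the simulated SRT of -8 dB
```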
Lifetime leisure music exposure associated with increased frequency of tinnitus.
Moore, David R; Zobay, Oliver; Mackinnon, Robert C; Whitmer, William M; Akeroyd, Michael A
2017-04-01
Tinnitus has been linked to noise exposure, a common form of which is listening to music as a leisure activity. The relationship between tinnitus and the type and duration of music exposure is not well understood. We conducted an internet-based population study that asked participants questions about lifetime music exposure and hearing, and included a hearing test involving speech intelligibility in noise, the High Frequency Digit Triplets Test. A total of 4950 people aged 17-75 years completed all questions and the hearing test. Results were analyzed using multinomial regression models. High exposure to leisure music, hearing difficulty, increasing age, and workplace noise exposure were independently associated with increased tinnitus. Three forms of music exposure (pubs/clubs, concerts, personal music players) did not differ in their relationship to tinnitus. More males than females reported tinnitus. The objective measure of speech reception threshold had only a minimal relationship with tinnitus. Self-reported hearing difficulty was more strongly associated with tinnitus, but 76% of people reporting usual or constant tinnitus also reported little or no hearing difficulty. Overall, around 40% of participants of all ages reported never experiencing tinnitus, while 29% reported sometimes, usually, or constantly experiencing tinnitus that lasted more than 5 min. Together, the results suggest that tinnitus is much more common than hearing loss, but that there is little association between the two, especially among the younger adults disproportionately sampled in this study.
Association of Age Related Macular Degeneration and Age Related Hearing Impairment
Ghasemi, Hassan; Pourakbari, Malihe Shahidi; Entezari, Morteza; Yarmohammadi, Mohammad Ebrahim
2016-01-01
Purpose: To evaluate the association between age-related macular degeneration (ARMD) and sensory neural hearing impairment (SHI). Methods: In this case-control study, the hearing status of 46 consecutive patients with ARMD was compared with that of 46 age-matched cases without clinical ARMD as a control group. In all patients, retinal involvements were confirmed by clinical examination, fluorescein angiography (FA) and optical coherence tomography (OCT). All participants were examined with an otoscope and underwent audiological tests including pure tone audiometry (PTA), speech reception threshold (SRT), speech discrimination score (SDS), tympanometry, reflex tests and auditory brainstem response (ABR). Results: A significant (P = 0.009) association was present between ARMD, especially with exudative and choroidal neovascularization (CNV) components, and age-related hearing impairment primarily involving high frequencies. Patients had higher SRTs and lower SDSs than control subjects, beyond what would be anticipated from presbycusis. Similar results were detected in exudative, CNV and scar patterns, supporting an association of late ARMD with SRT and SDS abnormalities. ABR showed significantly prolonged wave I and IV latency times in ARMD (P = 0.034 and 0.022, respectively). Average latency periods for wave I in geographic atrophy (GA) and CNV, and that for wave IV in drusen patterns of ARMD, were significantly higher than those of controls (P = 0.030, 0.007 and 0.050, respectively). Conclusion: The association between ARMD and age-related SHI may be attributed to common anatomical components such as melanin in these two sensory organs. PMID:27195086
The role of the supplementary motor area for speech and language processing.
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann
2016-09-01
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work, suggesting that the SMA has various superordinate control functions during speech communication and language reception, which is particularly relevant in case of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions, among others, comprise attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context-tracking.
Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.
2014-01-01
Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210
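A rough sketch of a spectrotemporally modulated ("moving ripple") noise of the sort used in STM detection tasks (a common construction, not necessarily the authors' exact stimulus; the carrier band, component count, and phases are assumptions):

```python
import numpy as np

def ripple_noise(dur=1.0, fs=16000, rate_hz=12.0, density_cpo=2.0,
                 depth=0.5, f_lo=250.0, n_oct=4.0, n_comp=200):
    """Spectrotemporal modulation imposed on a multi-tone noise carrier.

    A component at frequency f (x = log2(f / f_lo) octaves above the band edge)
    is amplitude-modulated as 1 + depth * sin(2*pi*(rate_hz*t + density_cpo*x)).
    """
    rng = np.random.default_rng(5)
    t = np.arange(int(dur * fs)) / fs
    x = rng.uniform(0.0, n_oct, n_comp)          # component positions in octaves
    freqs = f_lo * 2.0 ** x
    phases = rng.uniform(0.0, 2 * np.pi, n_comp) # random carrier phases
    sig = np.zeros_like(t)
    for f, xi, ph in zip(freqs, x, phases):
        env = 1.0 + depth * np.sin(2 * np.pi * (rate_hz * t + density_cpo * xi))
        sig += env * np.sin(2 * np.pi * f * t + ph)
    return sig / np.max(np.abs(sig))             # normalize to +/-1

stim = ripple_noise(rate_hz=4.0, density_cpo=4.0, depth=0.3)  # one test condition
```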
Temporal plasticity in auditory cortex improves neural discrimination of speech sounds
Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.
2017-01-01
Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520
Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.
Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa
2016-07-01
The objective of the evaluation of auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. To investigate the differences in the process of auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, cross-sectional, descriptive cohort study. We selected ten cochlear implant users, who were characterized by hearing thresholds, speech perception tests, and the Hearing Handicap Inventory for Adults. There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant with the shift in speech coding strategy. There was no relationship between the lack of handicap perception and improvement in speech perception with either speech coding strategy. There was no significant difference between the strategies evaluated, and no relation was observed between them and the variables studied.
Nuesse, Theresa; Steenken, Rike; Neher, Tobias; Holube, Inga
2018-01-01
Elderly listeners are known to differ considerably in their ability to understand speech in noise. Several studies have addressed the underlying factors that contribute to these differences. These factors include audibility and age-related changes in supra-threshold auditory processing abilities, and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age-appropriate hearing or hearing impairment. To that end, speech recognition threshold (SRT) measurements were performed under several masking conditions that varied along the perceptual dimensions of dip listening, spatial separation, and informational masking. In addition, a neuropsychological test battery was administered, which included measures of verbal working and short-term memory, executive functioning, selective and divided attention, and lexical and semantic abilities. Age-matched groups of older adults with either age-appropriate hearing (ENH, n = 20) or aided hearing impairment (EHI, n = 21) participated. In repeated linear regression analyses, composite scores of cognitive test outcomes (derived using PCA) were included to predict SRTs. These associations were different for the two groups. When hearing thresholds were controlled for, the composite cognitive factors were significantly associated with the SRTs of the ENH listeners. Whereas better lexical and semantic abilities were associated with lower (better) SRTs in this group, there was a negative association between attentional abilities and speech recognition in the presence of spatially separated, speech-like maskers. For the EHI group, the pure-tone thresholds (averaged across 0.5, 1, 2, and 4 kHz) were significantly associated with the SRTs, despite the fact that all signals were amplified and therefore in principle audible. PMID:29867654
Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey
2017-09-01
Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions, and medial olivocochlear responses. We also assessed temporal and spectral processing by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception and conducted tests of attention, memory, and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high-frequency hearing thresholds, and medial olivocochlear suppression strength are important factors related to the ability to process speech in noise.
Lalonde, Kaylah; Holt, Rachael Frush
2014-02-01
This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds’ speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Twenty typically developing 2-year-olds completed the newly modified toddler change/no-change procedure. Behavioral tests and parent report questionnaires were used to measure several cognitive and linguistic constructs. Stepwise linear regression was used to relate discrimination sensitivity to the cognitive and linguistic measures. In addition, discrimination results from the current experiment were compared with those from 2-year-old children tested in a previous experiment. Receptive vocabulary and working memory explained 56.6% of the variance in discrimination performance. Performance on the modified toddler change/no-change procedure used in the current experiment did not differ from performance in the previous investigation, which used the original version of the procedure. The relationship between speech discrimination and receptive vocabulary and working memory provides further evidence that the procedure is sensitive to the strength of perceptual representations. The role for working memory might also suggest that there are specific subject-related, nonsensory factors limiting the applicability of the procedure to children who have not reached the necessary levels of cognitive and linguistic development.
ERIC Educational Resources Information Center
Wong, Simpson W. L.; Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Waye, Mary M. Y.; Bishop, Dorothy V. M.
2014-01-01
This twin study examined the relative contributions of genes and environment on 2nd language reading acquisition of Chinese-speaking children learning English. We examined whether specific skills-visual word recognition, receptive vocabulary, phonological awareness, phonological memory, and speech discrimination-in the 1st and 2nd languages have…
Dual Diathesis-Stressor Model of Emotional and Linguistic Contributions to Developmental Stuttering
ERIC Educational Resources Information Center
Walden, Tedra A.; Frankel, Carl B.; Buhr, Anthony P.; Johnson, Kia N.; Conture, Edward G.; Karrass, Jan M.
2012-01-01
This study assessed emotional and speech-language contributions to childhood stuttering. A dual diathesis-stressor framework guided this study, in which both linguistic requirements and skills, and emotion and its regulation, are hypothesized to contribute to stuttering. The language diathesis consists of expressive and receptive language skills.…
Sign Language Echolalia in Deaf Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Shield, Aaron; Cooley, Frances; Meier, Richard P.
2017-01-01
Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…
Bone Conduction Systems for Full-Face Respirators: Speech Intelligibility Analysis
2014-04-01
Acoustical considerations for secondary uses of government facilities
NASA Astrophysics Data System (ADS)
Evans, Jack B.
2003-10-01
Government buildings are, by their nature, public and multi-functional. Whether in meetings, presentations, documentation processing, work instructions, or dispatch, speech communications are critical. Full-time occupancy facilities may require sleep or rest areas adjacent to active spaces. Rooms designed for some other primary use may be used for public assembly, receptions, or meetings. In addition, environmental noise impacts to the building or from the building should be considered, especially adjacent to hospitals, hotels, apartments, or other noise-sensitive urban land uses. Acoustical criteria and design parameters for reverberation, background noise, and sound isolation should enhance speech intelligibility and privacy. This presentation looks at unusual spaces and unexpected uses of spaces with regard to room acoustics and noise control. Examples of various spaces will be discussed, including an atrium used for reception and assembly, a multi-jurisdictional (911) emergency control center, frequent or long-duration use of emergency generators, renovations of historically significant buildings, and the juxtaposition of acoustically incompatible functions. Brief case histories of acoustical requirements, constraints, and design solutions will be presented, including acoustical measurements, plan illustrations, and photographs. Acoustical criteria for secondary functional uses of spaces will be proposed.
Prosody perception and musical pitch discrimination in adults using cochlear implants.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2015-07-01
This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI) and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (profiling elements of prosody in speech-communication), the adult paralanguage subtest of the DANVA 2 (diagnostic analysis of nonverbal accuracy 2), and the contour and interval subtests of the MBEA (Montreal battery of evaluation of amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but did not differ from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.
Dunn, Camille C; Walker, Elizabeth A; Oleson, Jacob; Kenworthy, Maura; Van Voorst, Tanya; Tomblin, J. Bruce; Ji, Haihong; Kirk, Karen I; McMurray, Bob; Hanson, Marlan; Gantz, Bruce J
2013-01-01
Objectives Few studies have examined the long-term effect of age at implantation on outcomes using multiple data points in children with cochlear implants. The goal of this study was to determine if age at implantation has a significant, lasting impact on speech perception, language, and reading performance for children with prelingual hearing loss. Design A linear mixed model framework was utilized to determine the effect of age at implantation on speech perception, language, and reading abilities in 83 children with prelingual hearing loss who received cochlear implants by age 4. The children were divided into two groups based on their age at implantation: 1) under 2 years of age and 2) between 2 and 3.9 years of age. Differences in model-specified mean scores between groups were compared at annual intervals from 5 to 13 years of age for speech perception, and 7 to 11 years of age for language and reading. Results After controlling for communication mode, device configuration, and pre-operative pure-tone average, there was no significant effect of age at implantation for receptive language by 8 years of age, expressive language by 10 years of age, or reading by 7 years of age. In terms of speech perception outcomes, significance varied between 7 and 13 years of age, with no significant difference in speech perception scores between groups at ages 7, 11, and 13 years. Children who utilized oral communication (OC) demonstrated significantly higher speech perception scores than children who used total communication (TC). OC users tended to have higher expressive language scores than TC users, although this did not reach significance. There was no significant difference between OC and TC users for receptive language or reading scores. Conclusions Speech perception, language, and reading performance continue to improve over time for children implanted before 4 years of age. The current results indicate that the effect of age at implantation diminishes with time, particularly for higher-order skills such as language and reading. Some children who receive CIs after the age of 2 years have the capacity to approximate the language and reading skills of their earlier-implanted peers, suggesting that additional factors may moderate the influence of age at implantation on outcomes over time. PMID:24231628
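A generic sketch of a linear mixed model with this shape (illustrative only; the column names, simulated data, and exact specification are assumptions, not the authors'): repeated annual scores per child, a random intercept for each child, and fixed effects for test age and age-at-implantation group.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
rows = []
for child in range(83):
    group = int(rng.integers(0, 2))       # 0: implanted <2 yr, 1: 2-3.9 yr
    base = rng.normal(70, 8)              # child-specific intercept
    for age in range(7, 12):              # annual language scores, ages 7-11
        score = base + 2.5 * age - 3.0 * group + rng.normal(0, 4)
        rows.append(dict(child=child, group=group, age=age, score=score))
df = pd.DataFrame(rows)

# Random intercept per child; fixed effects for age, group, and their interaction
model = smf.mixedlm("score ~ age * group", df, groups=df["child"])
print(model.fit().summary())
```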
Magalhães, Ana Tereza de Matos; Goffi-Gomez, M Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2013-09-01
To identify the technological contributions of the newer version of the speech processor to the first generation of multichannel cochlear implant and the satisfaction of users of the new technology. Among the new features available, we focused on the effect of the frequency allocation table, the T-SPL and C-SPL, and the preprocessing gain adjustments (adaptive dynamic range optimization). Prospective exploratory study. Cochlear implant center at a hospital. Cochlear implant users of the Spectra processor with speech recognition in closed set. Seventeen patients between the ages of 15 and 82, implanted for more than 8 years, were selected. The intervention was the technology update of the speech processor for the Nucleus 22. To determine Freedom's contribution, thresholds and speech perception tests were performed with the last map used with the Spectra and the maps created for Freedom. To identify the effect of the frequency allocation table, both upgraded and converted maps were programmed. One map was programmed with 25 dB T-SPL and 65 dB C-SPL, and the other map with adaptive dynamic range optimization. To assess satisfaction, the SADL and APHAB were used. All speech perception tests and all sound field thresholds were statistically better with the new speech processor; 64.7% of patients preferred maintaining the same frequency table that was suggested for the older processor. The sound field threshold difference was statistically significant at 500, 1,000, 1,500, and 2,000 Hz with 25 dB T-SPL/65 dB C-SPL. Regarding patients' satisfaction, there was a statistically significant improvement only in the subscales of speech-in-noise abilities and phone use. The new technology improved the performance of patients with the first generation of multichannel cochlear implant.
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates
Malone, Brian J.
2017-01-01
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
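A compact sketch of the two-step correction described above (a generic illustration, not the authors' code; the STA shape, gain units, and threshold values are assumptions): pixels must first survive a per-pixel gain threshold, and contiguous surviving clusters must then pass a cluster-mass criterion.

```python
import numpy as np
from scipy import ndimage

def cluster_corrected_strf(sta, gain_thresh, mass_thresh):
    """Two-step thresholding of a spike-triggered average.

    sta         : STRF estimate, shape (n_freq, n_lags), gain relative to chance
    gain_thresh : per-pixel gain threshold (e.g., derived from shuffled spikes)
    mass_thresh : minimum summed |gain| for a cluster to survive
    """
    mask = np.abs(sta) > gain_thresh                   # step 1: gain threshold
    out = np.zeros_like(sta)
    # Treat excitatory and inhibitory subfields separately
    for sign_mask in (mask & (sta > 0), mask & (sta < 0)):
        labels, n_clusters = ndimage.label(sign_mask)  # contiguous pixel clusters
        for k in range(1, n_clusters + 1):
            cluster = labels == k
            if np.abs(sta[cluster]).sum() >= mass_thresh:  # step 2: cluster mass
                out[cluster] = sta[cluster]
    return out

rng = np.random.default_rng(7)
sta = rng.normal(0.0, 0.1, (32, 40))                   # noisy background
sta[10:14, 5:12] += 0.5                                # an "excitatory subfield"
strf = cluster_corrected_strf(sta, gain_thresh=0.25, mass_thresh=3.0)
```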
Results of FM-TV threshold reduction investigation for the ATS F trust experiment
NASA Technical Reports Server (NTRS)
Brown, J. P.
1972-01-01
An investigation of threshold effects in FM TV was initiated to determine whether any simple, low-cost techniques were available that could reduce the subjective video threshold, applicable to low-cost community TV reception via satellite. Two methods of eliminating these effects were examined: the use of standard video pre-emphasis, and the use of an additional circuit to blank the picture tube during the retrace period.
Schramm, Bianka; Bohnert, Andrea; Keilmann, Annerose
2010-07-01
This study had two aims: (1) to document the auditory and lexical development of children who are deaf and received the first cochlear implant (CI) by the age of 16 months and the second CI by the age of 31 months and (2) to compare these children's results with those of children with normal hearing (NH). This longitudinal study included five children with NH and five with sensorineural deafness. All children in the second group were observed for 36 months after the first fitting of the cochlear implant. The auditory development of the CI group was documented every 3 months up to the age of two years, in both hearing age and chronological age, and for the NH group in chronological age. The language development of each NH child was assessed at 12, 18, 24, and 36 months of chronological age. Children with CIs were examined at the same intervals at both chronological and hearing age. In both groups, children showed individual patterns of auditory and language development. The children with CIs differed from the NH control group in the amount of receptive and expressive vocabulary. Three children in the CI group needed almost 6 months to make gains in speech development consistent with what would be expected for their chronological age. Overall, the receptive and expressive development of all children in the implanted group increased with their hearing age. These results indicate that early identification and early implantation are advisable to give children with sensorineural hearing loss a realistic chance to develop satisfactory expressive and receptive vocabulary and stable phonological, morphological, and syntactical skills for school life. On the basis of these longitudinal data, we will be able to develop new diagnostic tools that enable clinicians to assess a child's progress in hearing and speech development.
Fluent aphasia in children: definition and natural history.
Klein, S K; Masur, D; Farber, K; Shinnar, S; Rapin, I
1992-01-01
We compared the course of a preschool child we followed for 4 years with published reports of 24 children with fluent aphasia. Our patient spoke fluently within 3 weeks of the injury. She was severely anomic and made many semantic paraphasic errors. Unlike other children with fluent aphasia, her prosody of speech was impaired initially, and her spontaneous language was dominated by stock phrases. Residual deficits include chronic impairment of auditory comprehension, repetition, and word retrieval. She has more disfluencies in spontaneous speech 4 years after her head injury than acutely. School achievement in reading and mathematics remains below age level. Attention to the timing of recovery of fluent speech and to the characteristics of receptive and expressive language over time will permit more accurate description of fluent aphasia in childhood.
Improving speech perception in noise for children with cochlear implants.
Gifford, René H; Olund, Amy P; Dejong, Melissa
2011-10-01
Current cochlear implant recipients are achieving increasingly higher levels of speech recognition; however, the presence of background noise continues to significantly degrade speech understanding for even the best performers. Newer generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is neither fully understood nor widely accepted. The aim of this study was to assess speech perception for pediatric cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Single-subject, repeated-measures design. Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the experimental subjects' everyday program incorporating Adaptive Dynamic Range Optimization (ADRO) as well as with the addition of Autosensitivity Control (ASC). Adaptive SRTs with the Hearing In Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance, in percent correct, was assessed at a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the SmartSound setting on the SRT in noise. The primary findings mirrored those reported previously with adult cochlear implant recipients in that the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean improvement in the SRT with the addition of ASC to ADRO was 3.5 dB, for a mean SRT of 10.9 dB SNR. Thus, despite the fact that these children have acquired auditory/oral speech and language through the use of their cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given that the mean SRT for the experimental group was 10.9 dB SNR, despite the improvements in performance observed with the addition of ASC, cochlear implants still do not completely overcome the speech perception deficit encountered in noisy environments that accompanies a diagnosis of severe-to-profound hearing loss. SmartSound strategies currently available in the latest generation of Nucleus cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support the addition of ASC to ADRO for everyday listening environments to improve speech perception in a child's typical everyday program. American Academy of Audiology.
Sound frequency affects speech emotion perception: results from congenital amusia
Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche
2015-01-01
Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
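The filtering manipulation at the core of both experiments is straightforward to reproduce. The snippet below shows one plausible way to create the low-pass and high-pass speech conditions using a zero-phase Butterworth filter; the cutoff frequency and filter order are not specified in the abstract and are therefore free parameters here, not the study's actual settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_speech(waveform, fs, cutoff_hz, kind="lowpass", order=4):
    """Zero-phase Butterworth filtering of a speech waveform: 'lowpass' preserves
    low-frequency pitch cues while removing high-frequency detail; 'highpass'
    does the reverse, isolating non-pitch cues. Cutoff and order are illustrative."""
    sos = butter(order, cutoff_hz, btype=kind, fs=fs, output="sos")
    return sosfiltfilt(sos, waveform)
```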
Overall intelligibility, articulation, resonance, voice and language in a child with Nager syndrome.
Van Lierde, Kristiane M; Luyten, Anke; Mortier, Geert; Tijskens, Anouk; Bettens, Kim; Vermeersch, Hubert
2011-02-01
The purpose of this study was to provide a description of the language and speech (intelligibility, voice, resonance, articulation) of a 7-year-old Dutch-speaking boy with Nager syndrome. To characterize these features, a comparison was made with an age- and gender-matched child with a similar palatal or hearing problem. Language was tested with an age-appropriate language test, namely the Dutch version of the Clinical Evaluation of Language Fundamentals. Regarding articulation, a phonetic inventory, a phonetic analysis, and a phonological process analysis were performed. A nominal scale with four categories was used to judge overall speech intelligibility. The voice and resonance assessment included videolaryngostroboscopy, a perceptual evaluation, acoustic analysis, and nasometry. The most striking communication problems in this child were expressive and receptive language delay, moderately impaired speech intelligibility, the presence of phonetic and phonological disorders, resonance disorders, and a high-pitched voice. The explanation for this pattern of communication is not completely straightforward. The language and phonological impairments, present only in the child with Nager syndrome, are not part of a more general developmental delay. The resonance disorders can be related to the cleft palate, but they were not present in the child with the isolated cleft palate. One might assume that the cul-de-sac resonance, the much decreased mandibular movement, and the restricted tongue lifting are caused by the restricted jaw mobility and micrognathia. To what extent the suggested mandibular distraction osteogenesis in early childhood allows increased mandibular movement and better speech outcomes with increased oral resonance is a subject for further research. According to the results of this study, speech and language management must focus on receptive and expressive language skills and linguistic conceptualization, correct phonetic placement, and the modification of hypernasality and nasal emission. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Free Speech for Public Employees: The Supreme Court Strikes a New Balance.
ERIC Educational Resources Information Center
Bernheim, Emily
1986-01-01
In "Connick vs. Myers" the Supreme Court applied a threshold requirement to an employee's First Amendment protection of free speech: speech must be related to public concerns as determined by the content, form, and context of a given statement. Discusses applications of this decision to lower court cases. (MLF)
The NTID speech recognition test: NSRT(®).
Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A
2015-07-01
The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests, including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and in background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (SD = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.
ERIC Educational Resources Information Center
Ebbels, Susan H.; Wright, Lisa; Brockbank, Sally; Godfrey, Caroline; Harris, Catherine; Leniston, Hannah; Neary, Kate; Nicoll, Hilary; Nicoll, Lucy; Scott, Jackie; Maric, Nataša
2017-01-01
Background: Evidence of the effectiveness of therapy for older children with (developmental) language disorder (DLD), and particularly those with receptive language impairments, is very limited. The few existing studies have focused on particular target areas, but none has looked at a whole area of a service. Aims: To establish whether for…
An Investigation of Cultural Differences in Perceived Speaker Effectiveness.
ERIC Educational Resources Information Center
Masterson, John T.; Watson, Norman H.
Culture is a powerful force that may affect the reception and acceptance of communication. To determine whether culture has an effect on perceived speaker effectiveness, students in one introductory speech communication class at a Florida university and in a variety of courses at a university in the Bahamas were given an 80-item questionnaire that was…
Does Hearing Several Speakers Reduce Foreign Word Learning?
ERIC Educational Resources Information Center
Ludington, Jason Darryl
2016-01-01
Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…
Communicative Development in Twins with Discordant Histories of Recurrent Otitis Media.
ERIC Educational Resources Information Center
Hemmer, Virginia Hoey; Ratner, Nan Bernstein
1994-01-01
The communicative abilities of six sets of same-sex, preschool dizygotic twins were examined. In each dyad, one sibling had a strong history of recurrent otitis media (ROM) but the other twin did not. History of ROM was associated with lowered receptive vocabulary, with no consistent effects detected in expressive speech and language tasks.…
ERIC Educational Resources Information Center
Stamou, Anastasia G.; Maroniti, Katerina; Griva, Eleni
2015-01-01
Considering the role of popular cultural texts in shaping sociolinguistic reality, it makes sense to explore how children actually receive those texts and what conceptualisations of sociolinguistic diversity they form through those texts. Therefore, the aim of the present study was to examine Greek young children's views on sociolinguistic…
ECoG Gamma Activity during a Language Task: Differentiating Expressive and Receptive Speech Areas
ERIC Educational Resources Information Center
Towle, Vernon L.; Yoon, Hyun-Ah; Castelle, Michael; Edgar, J. Christopher; Biassou, Nadia M.; Frim, David M.; Spire, Jean-Paul; Kohrman, Michael H.
2008-01-01
Electrocorticographic (ECoG) spectral patterns obtained during language tasks from 12 epilepsy patients (age: 12-44 years) were analyzed in order to identify and characterize cortical language areas. ECoG from 63 subdural electrodes (500 Hz/channel) chronically implanted over frontal, parietal and temporal lobes were examined. Two language tasks…
ERIC Educational Resources Information Center
Yu, Jennifer W.; Wei, Xin; Wagner, Mary
2014-01-01
This study used propensity score techniques on data from the National Longitudinal Transition Study-2 to assess the causal relationship between speech and behavior-based support services and rates of social communication among high school students with Autism Spectrum Disorder (ASD). Findings indicate that receptive language problems were…
Assessing the Relationship between Prosody and Reading Outcomes in Children Using the PEPS-C
ERIC Educational Resources Information Center
Lochrin, Margaret; Arciuli, Joanne; Sharma, Mridula
2015-01-01
This study investigated the relationship between both receptive and expressive prosody and each of three reading outcomes: accuracy of reading aloud words, accuracy of reading aloud nonwords, and comprehension. Participants were 63 children aged 7 to 12 years. To assess prosody, we used the Profiling Elements of Prosody in Speech Communication…
Saunders, Gabrielle H; Forsline, Anna
2006-06-01
Results of objective clinical tests (e.g., measures of speech understanding in noise) often conflict with subjective reports of hearing aid benefit and satisfaction. The Performance-Perceptual Test (PPT) is an outcome measure in which objective and subjective evaluations are made by using the same test materials, testing format, and unit of measurement (signal-to-noise ratio, S/N), permitting a direct comparison between measured and perceived ability to hear. Two variables are measured: a Performance Speech Reception Threshold in Noise (SRTN) for 50% correct performance and a Perceptual SRTN, which is the S/N at which listeners perceive that they can understand the speech material. A third variable is computed: the Performance-Perceptual Discrepancy (PPDIS); it is the difference between the Performance and Perceptual SRTNs and measures the extent to which listeners "misjudge" their hearing ability. Saunders et al. (2004) examined the relation between PPT scores and unaided hearing handicap. In this publication, the relations between the PPT, residual aided handicap, and hearing aid satisfaction are described. Ninety-four individuals between the ages of 47 and 86 yr participated. All had symmetrical sensorineural hearing loss and had worn binaural hearing aids for at least 6 wk before participating. All subjects underwent a routine audiological examination and completed the PPT, the Hearing Handicap Inventory for the Elderly/Adults (HHIE/A), and the Satisfaction with Amplification in Daily Life questionnaire. Sixty-five subjects attended one research visit for participation in this study, and 29 attended a second visit to complete the PPT a second time. Performance and Perceptual SRTN and PPDIS scores were normally distributed and showed excellent test-retest reliability. Aided SRTNs were significantly better than unaided SRTNs; aided and unaided PPDIS values did not differ. Stepwise multiple linear regression showed that the PPDIS, the Performance SRTN, and age were significant predictors of scores on the HHIE/A, such that greater reported handicap is associated with underestimating hearing ability, poorer aided ability to understand speech in noise, and being younger. Scores on the Satisfaction with Amplification in Daily Life were not well explained by the PPT, age, or audiometric thresholds. When individuals were grouped by their HHIE/A scores, it was seen that individuals who report more handicap than expected based on their audiometric thresholds have a more negative PPDIS, i.e., underestimate their hearing ability, relative to individuals who report expected handicap, who in turn have a more negative PPDIS than individuals who report less handicap than expected. No such pattern was apparent for the Performance SRTN. The study showed the PPT to be a reliable outcome measure that can provide more information than a performance measure and/or a questionnaire measure alone, in that the PPDIS can provide the clinician with an explanation for discrepant objective and subjective reports of hearing difficulties. The finding that self-reported handicap is affected independently by both actual ability to hear and the (mis)perception of ability to hear underscores the difficulty clinicians encounter when trying to interpret outcome questionnaires. We suggest that this variable should be measured and taken into account when interpreting questionnaires and counseling patients.
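Because the PPDIS is defined arithmetically from the two SRTNs, it can be written down directly. The one-liner below follows the sign convention implied by the abstract, where a negative PPDIS indicates a listener who underestimates their own hearing ability; treat it as a sketch of that convention rather than the test's official scoring code.

```python
def ppdis(performance_srtn_db, perceptual_srtn_db):
    """Performance-Perceptual Discrepancy (dB S/N): measured minus perceived SRTN.
    Negative values: the listener reports needing a more favorable S/N than they
    measurably do, i.e., they underestimate their hearing ability."""
    return performance_srtn_db - perceptual_srtn_db
```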
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the unambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and with high school/college students with visual impairments. Parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak. This increases the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
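As a rough illustration of what rule-based pause insertion for synthetic speech can look like, the sketch below inserts SSML breaks after tokens assumed to close a spoken sub-expression. The token names, boundary set, and pause duration are hypothetical; the published algorithm was derived from recordings of math teachers and is not reproduced here.

```python
def insert_pauses(tokens, pause_ms=350):
    """Insert an SSML break after tokens assumed to end a spoken sub-expression.
    The token vocabulary and pause length are illustrative assumptions."""
    boundary_tokens = {"end-fraction", "end-root", "close-paren", "equals"}
    parts = []
    for tok in tokens:
        parts.append(tok)
        if tok in boundary_tokens:
            parts.append(f'<break time="{pause_ms}ms"/>')
    return "<speak> " + " ".join(parts) + " </speak>"

# Example: insert_pauses(["one", "over", "x", "end-fraction", "plus", "two"])
```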
Binaural Speech Understanding With Bilateral Cochlear Implants in Reverberation.
Kokkinakis, Kostas
2018-03-08
The purpose of this study was to investigate whether bilateral cochlear implant (CI) listeners who are fitted with clinical processors are able to benefit from binaural advantages under reverberant conditions. Another aim of this contribution was to determine whether the magnitude of each binaural advantage observed inside a highly reverberant environment differs significantly from the magnitude measured in a near-anechoic environment. Ten adults with postlingual deafness who are bilateral CI users fitted with either Nucleus 5 or Nucleus 6 clinical sound processors (Cochlear Corporation) participated in this study. Speech reception thresholds were measured in sound field and 2 different reverberation conditions (0.06 and 0.6 s) as a function of the listening condition (left, right, both) and the noise spatial location (left, front, right). The presence of the binaural effects of head-shadow, squelch, summation, and spatial release from masking in the 2 different reverberation conditions tested was determined using nonparametric statistical analysis. In the bilateral population tested, when the ambient reverberation time was equal to 0.6 s, results indicated strong positive effects of head-shadow and a weaker spatial release from masking advantage, whereas binaural squelch and summation contributed no statistically significant benefit to bilateral performance under this acoustic condition. These findings are consistent with those of previous studies, which have demonstrated that head-shadow yields the most pronounced advantage in noise. The finding that spatial release from masking produced little to almost no benefit in bilateral listeners is consistent with the hypothesis that additive reverberation degrades spatial cues and negatively affects binaural performance. The magnitude of 4 different binaural advantages was measured on the same group of bilateral CI subjects fitted with clinical processors in 2 different reverberation conditions. The results of this work demonstrate the impeding properties of reverberation on binaural speech understanding. In addition, results indicate that CI recipients who struggle in everyday listening environments are also more likely to benefit less in highly reverberant environments from their bilateral processors.
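Each of the four binaural advantages measured here is a difference between SRTs from two of the listening-condition by noise-location cells described above. The sketch below encodes one common set of pairings (positive values indicate a benefit in dB); the exact reference conditions used in the study may differ, so take these pairings as assumptions.

```python
def binaural_advantages(srt):
    """srt: dict mapping (ears, noise_azimuth) -> SRT in dB SNR, with ears in
    {'left', 'right', 'both'} and noise_azimuth in {'left', 'front', 'right'};
    target speech is at 0 degrees and lower SRTs are better."""
    return {
        # Head shadow: same ear, noise moved from the ipsilateral to the contralateral side.
        "head_shadow": srt[("right", "right")] - srt[("right", "left")],
        # Squelch: adding the ear on the noise side to the shadowed ear alone.
        "squelch": srt[("left", "right")] - srt[("both", "right")],
        # Summation: adding a second ear with speech and noise colocated in front.
        "summation": srt[("left", "front")] - srt[("both", "front")],
        # Spatial release from masking: noise moved from front to the side, both ears.
        "srm": srt[("both", "front")] - srt[("both", "right")],
    }
```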
Dynamics of infant cortical auditory evoked potentials (CAEPs) for tone and speech tokens.
Cone, Barbara; Whitaker, Richard
2013-07-01
Cortical auditory evoked potentials (CAEPs) to tones and speech sounds were obtained in infants to: (1) further knowledge of auditory development above the level of the brainstem during the first year of life; (2) establish CAEP input-output functions for tonal and speech stimuli as a function of stimulus level; and (3) elaborate the database establishing CAEPs in infants tested while awake using clinically relevant stimuli, thus providing methodology that would translate to pediatric audiological assessment. The hypotheses concerning CAEP development were that the latency and amplitude input-output functions would reflect immaturity in encoding stimulus level. In a second experiment, infants were tested with the same stimuli used to evoke the CAEPs. Thresholds for these stimuli were determined using observer-based psychophysical techniques. The hypothesis was that the behavioral thresholds would be correlated with CAEP input-output functions because of shared cortical response areas known to be active in sound detection. 36 infants, between the ages of 4 and 12 months (mean = 8 months, SD = 1.8 months), and 9 young adults (mean age 21 years) with normal hearing were tested. First, CAEP amplitude and latency input-output functions were obtained for 4 tone bursts and 7 speech tokens. The tone burst stimuli were 50 ms tokens of pure tones at 0.5, 1.0, 2.0 and 4.0 kHz. The speech sound tokens, /a/, /i/, /o/, /u/, /m/, /s/, and /∫/, were created from natural speech samples and were also 50 ms in duration. CAEPs were obtained for tone burst and speech token stimuli at 10 dB level decrements in descending order from 70 dB SPL. All CAEP tests were completed while the infants were awake and engaged in quiet play. For the second experiment, observer-based psychophysical methods were used to establish perceptual thresholds for the same speech sound and tone tokens. Infant CAEP component latencies were prolonged by 100-150 ms in comparison to adults. CAEP latency-intensity input-output functions were steeper in infants compared to adults. CAEP amplitude growth functions with respect to stimulus SPL were adult-like at this age, particularly for the earliest component, P1-N1. Infant perceptual thresholds were elevated with respect to those found in adults. Furthermore, perceptual thresholds were higher, on average, than the levels at which CAEPs could be obtained. When CAEP amplitudes were plotted with respect to perceptual threshold (dB SL), the infant CAEP amplitude growth slopes were steeper than in adults. Although CAEP latencies indicate immaturity in neural transmission at the level of the cortex, amplitude growth with respect to stimulus SPL is adult-like at this age, particularly for the earliest component, P1-N1. The latency and amplitude input-output functions may provide additional information as to how infants perceive stimulus level. The discrepancy between electrophysiologic and perceptual thresholds may be due to immaturity in perceptual temporal resolution abilities and the broad-band listening strategy employed by infants. The findings from the current study can be translated to the clinical setting. It is possible to use tonal or speech sound tokens to evoke CAEPs in an awake, passively alert infant, and thus determine whether these sounds activate the auditory cortex. This could be beneficial in the verification of hearing aid or cochlear implant benefit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Spatial Release From Masking in 2-Year-Olds With Normal Hearing and With Bilateral Cochlear Implants
Hess, Christi L.; Misurelli, Sara M.; Litovsky, Ruth Y.
2018-01-01
This study evaluated spatial release from masking (SRM) in 2- to 3-year-old children who are deaf and were implanted with bilateral cochlear implants (BiCIs), and in age-matched normal-hearing (NH) toddlers. Here, we examined whether early activation of bilateral hearing has the potential to promote SRM that is similar to that of age-matched NH children. Listeners were 13 NH toddlers and 13 toddlers with BiCIs, ages 27 to 36 months. Speech reception thresholds (SRTs) were measured for target speech in front (0°) and for competitors that were either Colocated in front (0°) or Separated toward the right (+90°). SRM was computed as the difference between SRTs in the Colocated condition and SRTs in the spatially Separated condition. Results show that SRTs were higher in the BiCI group than in the NH group in all conditions. Both groups had higher SRTs in the Colocated and Separated conditions compared with Quiet, indicating masking. SRM was significant only in the NH group. In the BiCI group, the group effect of SRM was not significant, likely limited by the small sample size; however, all but two children had SRM values within the NH range. This work shows that, to some extent, the ability to use spatial cues for source segregation develops by age 2 to 3 in NH children and is attainable by most of the children in the BiCI group. There is potential for the paradigm used here to be used in clinical settings to evaluate outcomes of bilateral hearing in very young children. PMID:29761735
What Makes a Caseload (Un)manageable? School-Based Speech-Language Pathologists Speak
ERIC Educational Resources Information Center
Katz, Lauren A.; Maag, Abby; Fallon, Karen A.; Blenkarn, Katie; Smith, Megan K.
2010-01-01
Purpose: Large caseload sizes and a shortage of speech-language pathologists (SLPs) are ongoing concerns in the field of speech and language. This study was conducted to identify current mean caseload size for school-based SLPs, a threshold at which caseload size begins to be perceived as unmanageable, and variables contributing to school-based…
Speech Perception with Music Maskers by Cochlear Implant Users and Normal-Hearing Listeners
ERIC Educational Resources Information Center
Eskridge, Elizabeth N.; Galvin, John J., III; Aronoff, Justin M.; Li, Tianhao; Fu, Qian-Jie
2012-01-01
Purpose: The goal of this study was to investigate how the spectral and temporal properties in background music may interfere with cochlear implant (CI) and normal-hearing listeners' (NH) speech understanding. Method: Speech-recognition thresholds (SRTs) were adaptively measured in 11 CI and 9 NH subjects. CI subjects were tested while using their…
Anderson, Elizabeth S; Nelson, David A; Kreft, Heather; Nelson, Peggy B; Oxenham, Andrew J
2011-07-01
Spectral ripple discrimination thresholds were measured in 15 cochlear-implant users with broadband (350-5600 Hz) and octave-band noise stimuli. The results were compared with spatial tuning curve (STC) bandwidths previously obtained from the same subjects. Spatial tuning curve bandwidths did not correlate significantly with broadband spectral ripple discrimination thresholds but did correlate significantly with ripple discrimination thresholds when the rippled noise was confined to an octave-wide passband, centered on the STC's probe electrode frequency allocation. Ripple discrimination thresholds were also measured for octave-band stimuli in four contiguous octaves, with center frequencies from 500 Hz to 4000 Hz. Substantial variations in thresholds with center frequency were found in individuals, but no general trends of increasing or decreasing resolution from apex to base were observed in the pooled data. Neither ripple nor STC measures correlated consistently with speech measures in noise and quiet in the sample of subjects in this study. Overall, the results suggest that spectral ripple discrimination measures provide a reasonable measure of spectral resolution that correlates well with more direct, but more time-consuming, measures of spectral resolution, but that such measures do not always provide a clear and robust predictor of performance in speech perception tasks. © 2011 Acoustical Society of America
Potts, Lisa G; Skinner, Margaret W; Litovsky, Ruth A; Strube, Michael J; Kuk, Francis
2009-06-01
The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing). This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. A repeated-measures correlational study was completed. Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140-degree loudspeaker array) was used. Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant-only and hearing aid-only conditions. Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, speech-recognition and localization performance was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1-3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect speech that is pronounced incorrectly, given that the text of the spoken speech is known to the recognizer. This provides better mechanisms for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was established. In continuous speech, the system provided above 80% correct acceptance of words while correctly rejecting over 80% of incorrectly pronounced words.
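The accept/reject decision described here reduces to comparing a normalized recognizer score against a tuned threshold. The fragment below sketches that decision rule under common assumptions (a forced-alignment score for the expected word, normalized by a background or filler-model score); it is not the system's actual scoring code.

```python
def accept_word(word_log_likelihood, background_log_likelihood, threshold):
    """Accept a word as correctly pronounced if its recognizer score, normalized
    by a background model, clears a threshold tuned on held-out data (assumed
    setup; the original system's normalization details are not published here)."""
    normalized_score = word_log_likelihood - background_log_likelihood
    return normalized_score >= threshold
```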
Hemispheric Lateralization of Motor Thresholds in Relation to Stuttering
Alm, Per A.; Karlsson, Ragnhild; Sundberg, Madeleine; Axelson, Hans W.
2013-01-01
Stuttering is a complex speech disorder. Previous studies indicate a tendency towards an elevated motor threshold for the left hemisphere, as measured using transcranial magnetic stimulation (TMS). This may reflect a monohemispheric motor system impairment. The purpose of the study was to investigate the relative side-to-side difference (asymmetry) and the absolute levels of the motor threshold for the hand area, using TMS in adults who stutter (n = 15) and in controls (n = 15). In accordance with the hypothesis, the groups differed significantly regarding the relative side-to-side difference of the finger motor threshold (p = 0.0026), with the stuttering group showing a higher motor threshold in the left hemisphere relative to the right. The absolute level of the finger motor threshold for the left hemisphere also differed between the groups (p = 0.049). The obtained results, together with previous investigations, provide support for the hypothesis that stuttering tends to be related to left-hemisphere motor impairment, and possibly to a dysfunctional state of bilateral speech motor control. PMID:24146930
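The "relative side-to-side difference" can be expressed as a simple laterality index. The formula below uses one common normalization (difference divided by the two-side mean); whether it matches the authors' exact computation is an assumption.

```python
def mt_asymmetry(left_mt, right_mt):
    """Relative side-to-side difference in motor threshold, as a percentage of
    the two-side mean. Positive values indicate a higher left-hemisphere motor
    threshold. This index form is a common convention, assumed here."""
    return 100.0 * (left_mt - right_mt) / ((left_mt + right_mt) / 2.0)
```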
Speech training alters tone frequency tuning in rat primary auditory cortex
Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.
2013-01-01
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364
[Receptive and expressive speech development in children with cochlear implant].
Streicher, B; Kral, K; Hahn, M; Lang-Roth, R
2015-04-01
This study's aim was to assess the language development of children with a cochlear implant (CI). It focuses on receptive and expressive language development as well as auditory memory skills. Grimm's language development test (SETK 3-5) evaluates receptive and expressive language development and auditory memory. Data from 49 children who received their implant within the first 3 years of life were compared to the norms of hearing children aged 3.0-3.5 years. According to age at implantation, the cohort was subdivided into 3 groups: cochlear implantation within the first 12 months of life (group 1), between the 13th and 24th month of life (group 2), and at 25 or more months of life (group 3). It was possible to collect complete data on all SETK 3-5 subtests for 63% of the participants. A homogeneous profile across all subtests indicates a balanced receptive and expressive language development, which reduces the gap between hearing/language age and chronological age. Receptive and expressive language and auditory memory milestones in children implanted within their first year of life are achieved earlier in comparison to later-implanted children. The Language Test for Children (SETK 3-5) is an appropriate test procedure for the language assessment of children who have received a CI. It can be used from age 3 on to administer data on receptive and expressive language development and auditory memory. © Georg Thieme Verlag KG Stuttgart · New York.
Determination of the Potential Benefit of Time-Frequency Gain Manipulation
Anzalone, Michael C.; Calandruccio, Lauren; Doherty, Karen A.; Carney, Laurel H.
2008-01-01
Objective: The purpose of this study was to determine the maximum benefit provided by a time-frequency gain-manipulation algorithm for noise reduction (NR) based on an ideal detector of speech energy. The amount of detected energy necessary to show benefit using this type of NR algorithm was examined, as well as the necessary speed and frequency resolution of the gain manipulation. Design: NR was performed using time-frequency gain manipulation, wherein the gains of individual frequency bands depended on the absence or presence of speech energy within each band. Three different experiments were performed: (1) NR using ideal detectors, (2) NR with nonideal detectors, and (3) NR with ideal detectors and different processing speeds and frequency resolutions. All experiments were performed using the Hearing-in-Noise Test (HINT). A total of 6 listeners with normal hearing and 14 listeners with hearing loss were tested. Results: HINT thresholds improved for all listeners with NR based on the ideal detectors used in Experiment I. The nonideal detectors of Experiment II required detection of at least 90% of the speech energy before an improvement was seen in HINT thresholds. The results of Experiment III demonstrated that relatively high temporal resolution (<100 msec) was required by the NR algorithm to improve HINT thresholds. Conclusions: The results indicated that a single-microphone NR system based on time-frequency gain manipulation improved the HINT thresholds of listeners. However, to obtain benefit in speech intelligibility, the detectors used in such a strategy were required to detect an unrealistically high percentage of the speech energy and to perform the gain manipulations on a fast temporal basis. PMID:16957499
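The core of the algorithm, attenuating time-frequency bins judged not to contain speech energy, can be sketched as an ideal-detector gain mask applied to the mixture's short-time spectrum. The snippet below is a generic version of that idea under stated assumptions (access to the clean speech, a fixed attenuation floor, an illustrative local-SNR criterion); it is not the authors' exact implementation.

```python
import numpy as np

def ideal_gain_nr(mix_stft, speech_stft, floor_db=-20.0, local_snr_thresh_db=-6.0):
    """Time-frequency gain manipulation with an ideal speech-energy detector:
    pass bins whose local speech-to-mixture level exceeds a threshold, attenuate
    the rest to a fixed floor. Inputs are complex STFTs of the noisy mixture and
    the clean speech; resynthesize with an inverse STFT afterwards. The threshold
    and floor values are illustrative assumptions."""
    eps = 1e-12
    local_snr_db = (20 * np.log10(np.abs(speech_stft) + eps)
                    - 20 * np.log10(np.abs(mix_stft) + eps))
    gain = np.where(local_snr_db > local_snr_thresh_db, 1.0, 10 ** (floor_db / 20.0))
    return gain * mix_stft
```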
Masked speech perception across the adult lifespan: Impact of age and hearing impairment.
Goossens, Tine; Vercammen, Charlotte; Wouters, Jan; van Wieringen, Astrid
2017-02-01
As people grow older, speech perception difficulties become highly prevalent, especially in noisy listening situations. Moreover, it is assumed that speech intelligibility is more affected in the event of background noises that induce a higher cognitive load, i.e., noises that result in informational versus energetic masking. There is ample evidence showing that speech perception problems in aging persons are partly due to hearing impairment and partly due to age-related declines in cognition and suprathreshold auditory processing. In order to develop effective rehabilitation strategies, it is indispensable to know how these different degrading factors act upon speech perception. This implies disentangling effects of hearing impairment versus age and examining the interplay between both factors in different background noises of everyday settings. To that end, we investigated open-set sentence identification in six participant groups: a young (20-30 years), middle-aged (50-60 years), and older cohort (70-80 years), each including persons who had normal audiometric thresholds up to at least 4 kHz, on the one hand, and persons who were diagnosed with elevated audiometric thresholds, on the other hand. All participants were screened for (mild) cognitive impairment. We applied stationary and amplitude modulated speech-weighted noise, which are two types of energetic maskers, and unintelligible speech, which causes informational masking in addition to energetic masking. By means of these different background noises, we could look into speech perception performance in listening situations with a low and high cognitive load, respectively. Our results indicate that, even when audiometric thresholds are within normal limits up to 4 kHz, irrespective of threshold elevations at higher frequencies, and there is no indication of even mild cognitive impairment, masked speech perception declines by middle age and decreases further on to older age. The impact of hearing impairment is as detrimental for young and middle-aged as it is for older adults. When the background noise becomes cognitively more demanding, there is a larger decline in speech perception, due to age or hearing impairment. Hearing impairment seems to be the main factor underlying speech perception problems in background noises that cause energetic masking. However, in the event of informational masking, which induces a higher cognitive load, age appears to explain a significant part of the communicative impairment as well. We suggest that the degrading effect of age is mediated by deficiencies in temporal processing and central executive functions. This study may contribute to the improvement of auditory rehabilitation programs aiming to prevent aging persons from missing out on conversations, which, in turn, will improve their quality of life. Copyright © 2016 Elsevier B.V. All rights reserved.
Eckert, Mark A; Matthews, Lois J; Dubno, Judy R
2017-01-01
Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.
Speech, Language, and Cognition in Preschool Children with Epilepsy
ERIC Educational Resources Information Center
Selassie, G. Rejno-Habte; Viggedal, G.; Olsson, I.; Jennische, M.
2008-01-01
We studied expressive and receptive language, oral motor ability, attention, memory, and intelligence in 20 6-year-old children with epilepsy (14 females, six males; mean age 6y 5mo, range 6y-6y 11mo) without learning disability, cerebral palsy (CP), and/or autism, and in 30 reference children without epilepsy (18 females, 12 males; mean age 6y…
ERIC Educational Resources Information Center
Binger, Cathy; Kent-Walsh, Jennifer; King, Marika; Mansfield, Lindsay
2017-01-01
Purpose: This study investigated the early rule-based sentence productions of 3- and 4-year-old children with severe speech disorders who used single-meaning graphic symbols to communicate. Method: Ten 3- and 4-year-olds requiring the use of augmentative and alternative communication, who had largely intact receptive language skills, received…
Two Studies of the Syntactic Knowledge of Young Children. A Preliminary Report.
ERIC Educational Resources Information Center
Smith, Carlota S.
This paper deals with two experiments whose purposes are to investigate the linguistic competence of young children and their receptivity to adult speech. In the free response experiment, imperative sentences were presented to 1 1/2- to 2 1/2-year-olds. The sentences were minimal (a single noun), telegraphic, or full adult sentences. The youngest…
Communication and Reception in Teaching: The Age of Image "versus" the "Weight" of Words
ERIC Educational Resources Information Center
Bradea, Adela
2015-01-01
Contemporary culture is mainly a culture of the image. We get our information by seeing. Examination of images is free, while reading is impelled by the necessity of browsing the whole text. The image seems more appropriate than the text when trying to communicate easily and quickly. Speech calls for articulated language, expressed through a symbolic…
Using Animated Language Software with Children Diagnosed with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Mulholland, Rita; Pete, Ann Marie; Popeson, Joanne
2008-01-01
We examined the impact of using an animated software program (Team Up With Timo) on the expressive and receptive language abilities of five children ages 5-9 in a self-contained Learning and Language Disabilities class. We chose to use Team Up With Timo (Animated Speech Corporation) because it allows the teacher to personalize the animation for…
The Effect of Remote Masking on the Reception of Speech by Young School-Age Children
ERIC Educational Resources Information Center
Youngdahl, Carla L.; Healy, Eric W.; Yoho, Sarah E.; Apoux, Frédéric; Holt, Rachael Frush
2018-01-01
Purpose: Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's…
Normal Language Skills and Normal Intelligence in a Child with de Lange Syndrome.
ERIC Educational Resources Information Center
Cameron, Thomas H.; Kelly, Desmond P.
1988-01-01
The subject of this case report is a two-year, seven-month-old girl with de Lange syndrome, normal intelligence, and age-appropriate language skills. She demonstrated initial delays in gross motor skills and in receptive and expressive language but responded well to intensive speech and language intervention, as well as to physical therapy.…
The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers
Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren
2012-01-01
Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374
Zekveld, Adriana A; Festen, Joost M; Kramer, Sophia E
2013-08-01
In this study, the authors assessed the influence of masking level (29% or 71% sentence perception) and test modality on the processing load during language perception, as reflected by the pupil response. In addition, the authors administered a delayed cued stimulus recall test to examine whether processing load affected the encoding of the stimuli in memory. Participants performed speech and text reception threshold tests, during which the pupil response was measured. In the cued recall test, the first half of each correctly perceived sentence was presented, and participants were asked to complete the sentence. Reading and listening span tests of working memory capacity were administered as well. Regardless of test modality, the pupil response indicated higher processing load in the 29% correct condition than in the 71% correct condition. Cued recall was better for the 29% condition. The consistent effect of masking level on the pupil response during listening and reading supports the validity of the pupil response as a measure of processing load during language perception. The absence of a relation between the pupil response and cued recall may suggest that cued recall is not directly related to processing load as reflected by the pupil response.
Characterization of hearing loss in aged type II diabetics
Frisina, Susan T.; Mapes, Frances; Kim, SungHee; Frisina, D. Robert; Frisina, Robert D.
2009-01-01
Presbycusis, or age-related hearing loss, is the number one communicative disorder and a significant chronic medical condition of the aged. Little is known about how type II diabetes, another prevalent age-related medical condition, and presbycusis interact. The present investigation aimed to comprehensively characterize the nature of hearing impairment in aged type II diabetics. Hearing tests measuring both peripheral (cochlea) and central (brainstem and cortex) auditory processing were utilized. The majority of differences between the hearing abilities of the aged diabetics and their age-matched controls were found in measures of inner ear function. For example, large differences were found in pure-tone audiograms, wideband noise and speech reception thresholds, and otoacoustic emissions. The greatest deficits tended to be at low frequencies. In addition, there was a strong tendency for diabetes to affect the right ear more than the left. One possible interpretation is that as one develops presbycusis, the right ear advantage is lost, and this decline is accelerated by diabetes. In contrast, auditory processing tests that measure both peripheral and central processing showed fewer declines between the elderly diabetics and the control group. Consequences of elevated blood sugar levels as possible underlying physiological mechanisms for the hearing loss are discussed. PMID:16309862
Ionizing Radiation and the Ear
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borsanyi, Steven J.
The effects of ionizing radiation on the ears of 100 patients were studied in the course of treatment of malignant head and neck tumors by teleradiation using Co 60. Early changes consisted of radiation otitis media and a transient vasculitis of the vessels of the inner ear, resulting in hearing loss, tinnitus, and temporary recruitment. While no permanent changes were detected microscopically shortly after the completion of radiation in the cochlea or labyrinth, late changes sometimes occurred in the temporal bone as a result of an obliterating endarteritis. The late changes were separate entities caused primarily by obliterating endarteritis and alterations in the collagen. Radiation affected the hearing of individuals selectively. When hearing threshold shift did occur, the shift was not great. The 4000 cps frequency showed a greater deficit in hearing capacity during the tests, while the area least affected appeared to be in the region of 2000 cps. The shift in speech reception was not significant and it was correlated with the over-all change in response to pure tones. Discrimination did not appear to be affected. Proper shielding of the ear with lead during radiation, when possible, eliminated most complications. (H.R.D.)
Speech and motor disturbances in Rett syndrome.
Bashina, V M; Simashkova, N V; Grachev, V V; Gorbachevskaya, N L
2002-01-01
Rett syndrome is a severe, genetically determined disease of early childhood which produces a defined clinical phenotype in girls. The main clinical manifestations include lesions affecting speech functions, involving both expressive and receptive speech, as well as motor functions, producing apraxia of the arms and profound abnormalities of gait in the form of ataxia-apraxia. Most investigators note that patients have variability in the severity of derangement to large motor acts and in the damage to fine hand movements and speech functions. The aims of the present work were to study disturbances of speech and motor functions over 2-5 years in 50 girls aged 12 months to 14 years with Rett syndrome and to analyze the correlations between these disturbances. The results of comparing clinical data and EEG traces supported the stepwise involvement of frontal and parietal-temporal cortical structures in the pathological process. The ability to organize speech and motor activity is affected first, with subsequent development of lesions to gnostic functions, which are in turn followed by derangement of subcortical structures and the cerebellum and later by damage to structures in the spinal cord. A clear correlation was found between the severity of lesions to motor and speech functions and neurophysiological data: the higher the level of preservation of elements of speech and motor functions, the smaller were the contributions of theta activity and the greater the contributions of alpha and beta activities to the EEG. The possible pathogenetic mechanisms underlying the motor and speech disturbances in Rett syndrome are discussed.
Musical duplex perception: perception of figurally good chords with subliminal distinguishing tones.
Hall, M D; Pastore, R E
1992-08-01
In a variant of duplex perception with speech, phoneme perception is maintained when distinguishing components are presented below intensities required for separate detection, forming the basis for the claim that a phonetic module takes precedence over nonspeech processing. This finding is replicated with music chords (C major and minor) created by mixing a piano fifth with a sinusoidal distinguishing tone (E or E flat). Individual threshold intensities for detecting E or E flat in the context of the fixed piano tones were established, and chord discrimination thresholds defined by distinguishing tone intensity were then determined. Experiment 2 verified masked detection thresholds and subliminal chord identification for experienced musicians. Accurate chord perception was maintained at distinguishing tone intensities nearly 20 dB below the threshold for separate detection. Speech and music findings are argued to demonstrate general perceptual principles.
NASA Astrophysics Data System (ADS)
Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui
2012-04-01
Automatic speech processing systems are widely used in everyday life such as mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain a speech signal that is intelligible and more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may open a new path in speech processing. In the proposed technique, discrete BWT is first applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach shows better performance than existing algorithms at lower input SNR due to modified soft level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique was also evaluated for a speaker recognition task under noisy environments. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.
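The abstract outlines the pipeline (wavelet decomposition, level-dependent soft thresholding, a time-varying adaptation factor) without giving its formulas. The following is a minimal sketch of that general idea using a standard discrete wavelet transform as a stand-in for the bionic wavelet transform, which has no common library implementation; the `adapt` schedule and all parameter values are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: level-dependent soft thresholding of DWT coefficients
# with a time-varying scale factor. A standard db8 DWT stands in for the
# bionic wavelet transform; `adapt` is a hypothetical adaptation factor.
# Requires: numpy, PyWavelets (pip install pywavelets).
import numpy as np
import pywt

def denoise_frame(frame, wavelet="db8", level=4, adapt=1.0):
    """Soft-threshold one frame; `adapt` rescales thresholds over time."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    out = [coeffs[0]]  # keep the approximation coefficients untouched
    for d in coeffs[1:]:
        sigma = np.median(np.abs(d)) / 0.6745                # robust noise estimate
        thr = adapt * sigma * np.sqrt(2.0 * np.log(len(d)))  # universal threshold
        out.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(frame)]

# Frame-by-frame processing with a slowly drifting adaptation factor.
rng = np.random.default_rng(0)
noisy = rng.standard_normal(16000)   # stand-in for 1 s of noisy speech at 16 kHz
frames = noisy.reshape(-1, 400)      # 25-ms frames
enhanced = np.concatenate(
    [denoise_frame(f, adapt=1.0 + 0.1 * i / len(frames))
     for i, f in enumerate(frames)])
```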
[The endpoint detection of cough signal in continuous speech].
Yang, Guoqing; Mo, Hongqiang; Li, Wen; Lian, Lianfang; Zheng, Zeguang
2010-06-01
The endpoint detection of cough signals in continuous speech was researched in order to improve the efficiency and accuracy of manual or computer-based automatic recognition. First, the short-time zero-crossing rate (ZCR) is used to identify suspicious coughs, and a threshold on the short-time energy is derived from the acoustic characteristics of cough. Then, the short-time energy is combined with the short-time ZCR to implement the endpoint detection of cough in continuous speech. To evaluate the method, the reference number of coughs in each recording was first identified by two experienced doctors using a graphical user interface (GUI). Second, the recordings were analyzed by the automatic endpoint detection program under Matlab 7.0. Finally, comparison of the two results showed that the error rate for undetected coughs was 2.18%, and 98.13% of noise, silence and speech was removed. The way of setting the short-time energy threshold is robust, and the endpoint detection program removes most speech and noise while maintaining a low error rate.
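As a concrete illustration of the energy-plus-ZCR approach described above, here is a minimal sketch; the frame sizes, threshold values, and the exact way the two features are combined are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch: endpoint detection from short-time energy and
# zero-crossing rate (ZCR). Thresholds and the combination rule are
# illustrative; the paper derives its energy threshold from cough acoustics.
import numpy as np

def short_time_features(x, frame_len=256, hop=128):
    """Per-frame energy and ZCR for a 1-D signal."""
    n = 1 + (len(x) - frame_len) // hop
    energy, zcr = np.empty(n), np.empty(n)
    for i in range(n):
        f = x[i * hop : i * hop + frame_len].astype(float)
        energy[i] = np.sum(f ** 2)
        zcr[i] = np.mean(np.abs(np.diff(np.signbit(f).astype(int))))
    return energy, zcr

def detect_endpoints(x, e_thr, z_thr, frame_len=256, hop=128):
    """Return (start, end) sample indices of segments whose energy exceeds
    e_thr; frames with very high ZCR (fricative-like noise) are excluded."""
    energy, zcr = short_time_features(x, frame_len, hop)
    active = (energy > e_thr) & (zcr < z_thr)
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * hop
        elif not a and start is not None:
            segments.append((start, i * hop + frame_len))
            start = None
    if start is not None:
        segments.append((start, len(x)))
    return segments
```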
Narrative skills in children with selective mutism: an exploratory study.
McInnes, Alison; Fung, Daniel; Manassis, Katharina; Fiksenbaum, Lisa; Tannock, Rosemary
2004-11-01
Selective mutism (SM) is a rare and complex disorder associated with anxiety symptoms and speech-language deficits; however, the nature of these language deficits has not been studied systematically. A novel cross-disciplinary assessment protocol was used to assess anxiety and nonverbal cognitive, receptive language, and expressive narrative abilities in 7 children with SM and a comparison group of 7 children with social phobia (SP). The children with SM produced significantly shorter narratives than children with SP, despite showing normal nonverbal cognitive and receptive language abilities. The findings suggest that SM may involve subtle expressive language deficits that may influence academic performance and raise additional questions for further research. The assessment procedure developed for this study may be potentially useful for language clinicians.
[Characteristics, advantages, and limits of matrix tests].
Brand, T; Wagener, K C
2017-03-01
Deterioration of communication abilities due to hearing problems is particularly relevant in listening situations with noise. Therefore, speech intelligibility tests in noise are required for audiological diagnostics and evaluation of hearing rehabilitation. This study analyzed the characteristics of matrix tests assessing the 50 % speech recognition threshold in noise, along with their advantages and limitations. Matrix tests are based on a matrix of 50 words (10 five-word sentences with the same grammatical structure). In the standard setting, 20 sentences are presented using an adaptive procedure estimating the individual 50 % speech recognition threshold in noise. At present, matrix tests in 17 different languages are available, and they are highly comparable internationally. The German-language matrix test (OLSA, male speaker) has a reference 50 % speech recognition threshold of -7.1 (± 1.1) dB SNR. Before using a matrix test for the first time, the test person has to become familiar with the basic speech material using two training lists. Thereafter, matrix tests produce constant results even if repeated many times. Matrix tests are suitable for users of hearing aids and cochlear implants, particularly for assessment of benefit during the fitting process. Matrix tests can be performed in closed form and consequently with non-native listeners, even if the experimenter does not speak the test person's native language. Short versions of matrix tests are available for listeners with a shorter memory span, e.g., children.
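The adaptive procedure can be illustrated with a small simulation. The sketch below uses a fixed-step one-up/one-down rule, which converges on the 50% point of the psychometric function; actual matrix tests use more refined, word-score-dependent step rules, and every parameter except the -7.1 dB OLSA reference value quoted above is an illustrative assumption.

```python
# Minimal simulation of an adaptive track converging on the 50% speech
# recognition threshold (SRT). Fixed-step 1-up/1-down is a simplification
# of real matrix-test procedures; slope and step size are assumptions.
import math
import random

def p_correct(snr_db, srt_db=-7.1, slope=0.8):
    """Logistic psychometric function for sentence-correct probability."""
    return 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_db)))

def run_track(n_sentences=20, start_snr=0.0, step=2.0):
    snr, track = start_snr, []
    for _ in range(n_sentences):
        track.append(snr)
        correct = random.random() < p_correct(snr)
        snr += -step if correct else step  # down after correct, up after error
    return sum(track[4:]) / len(track[4:])  # average SNR after a warm-up phase

random.seed(1)
print(f"estimated SRT: {run_track():.1f} dB SNR")  # lands near -7.1 dB
```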
Negative blood oxygen level dependent signals during speech comprehension.
Rodriguez Moreno, Diana; Schiff, Nicholas D; Hirsch, Joy
2015-05-01
Speech comprehension studies have generally focused on the isolation and function of regions with positive blood oxygen level dependent (BOLD) signals with respect to a resting baseline. Although regions with negative BOLD signals in comparison to a resting baseline have been reported in language-related tasks, their relationship to regions of positive signals is not fully appreciated. Based on the emerging notion that the negative signals may represent an active function in language tasks, the authors test the hypothesis that negative BOLD signals during receptive language are more associated with comprehension than content-free versions of the same stimuli. Regions associated with comprehension of speech were isolated by comparing responses to passive listening to natural speech to two incomprehensible versions of the same speech: one that was digitally time reversed and one that was muffled by removal of high frequencies. The signal polarity was determined by comparing the BOLD signal during each speech condition to the BOLD signal during a resting baseline. As expected, stimulation-induced positive signals relative to resting baseline were observed in the canonical language areas with varying signal amplitudes for each condition. Negative BOLD responses relative to resting baseline were observed primarily in frontoparietal regions and were specific to the natural speech condition. However, the BOLD signal remained indistinguishable from baseline for the unintelligible speech conditions. Variations in connectivity between brain regions with positive and negative signals were also specifically related to the comprehension of natural speech. These observations of anticorrelated signals related to speech comprehension are consistent with emerging models of cooperative roles represented by BOLD signals of opposite polarity.
Negative Blood Oxygen Level Dependent Signals During Speech Comprehension
Rodriguez Moreno, Diana; Schiff, Nicholas D.
2015-01-01
Abstract Speech comprehension studies have generally focused on the isolation and function of regions with positive blood oxygen level dependent (BOLD) signals with respect to a resting baseline. Although regions with negative BOLD signals in comparison to a resting baseline have been reported in language-related tasks, their relationship to regions of positive signals is not fully appreciated. Based on the emerging notion that the negative signals may represent an active function in language tasks, the authors test the hypothesis that negative BOLD signals during receptive language are more associated with comprehension than content-free versions of the same stimuli. Regions associated with comprehension of speech were isolated by comparing responses to passive listening to natural speech to two incomprehensible versions of the same speech: one that was digitally time reversed and one that was muffled by removal of high frequencies. The signal polarity was determined by comparing the BOLD signal during each speech condition to the BOLD signal during a resting baseline. As expected, stimulation-induced positive signals relative to resting baseline were observed in the canonical language areas with varying signal amplitudes for each condition. Negative BOLD responses relative to resting baseline were observed primarily in frontoparietal regions and were specific to the natural speech condition. However, the BOLD signal remained indistinguishable from baseline for the unintelligible speech conditions. Variations in connectivity between brain regions with positive and negative signals were also specifically related to the comprehension of natural speech. These observations of anticorrelated signals related to speech comprehension are consistent with emerging models of cooperative roles represented by BOLD signals of opposite polarity. PMID:25412406
Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut
Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures of audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which makes it feasible to lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR that used live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH listeners gain from AV over unimodal (auditory or visual) presentation of sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similarly broad age distribution), lipreading, AV gain, and integration were assessed with a German matrix sentence test. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH listeners benefited from AV over unimodal speech as indexed by calculations of the measures visual enhancement and auditory enhancement (each p < 0.001). Both groups efficiently integrated complementary auditory and visual speech features as indexed by calculations of the measure integration enhancement (each p < 0.005). Given the good agreement between results from the literature and the outcome of supplementing an existing validated auditory test with synthetic visual cues, the introduced method can be considered an interesting candidate for clinical and scientific applications to assess measures important for AV SR in a standardized manner. This could be beneficial for optimizing the diagnosis and treatment of individual listening and communication disorders, such as cochlear implantation.
Potts, Lisa G.; Skinner, Margaret W.; Litovsky, Ruth A.; Strube, Michael J; Kuk, Francis
2010-01-01
Background: The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing). Purpose: This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. Research Design: A repeated-measures correlational study was completed. Study Sample: Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. Intervention: The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Data Collection and Analysis: Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140 degree loudspeaker array) was used. Results: Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant-only and hearing aid-only conditions. Performance also differed between these conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, performance on the speech-recognition and localization tasks was equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer in the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1-3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. Conclusions: These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients, with measurable unaided hearing thresholds, be fit with a hearing aid. PMID:19594084
The Effect of Remote Masking on the Reception of Speech by Young School-Age Children.
Youngdahl, Carla L; Healy, Eric W; Yoho, Sarah E; Apoux, Frédéric; Holt, Rachael Frush
2018-02-15
Psychoacoustic data indicate that infants and children are less likely than adults to focus on a spectral region containing an anticipated signal and are more susceptible to remote masking of a signal. These detection tasks suggest that infants and children, unlike adults, do not listen selectively. However, less is known about children's ability to listen selectively during speech recognition. Accordingly, the current study examines remote masking during speech recognition in children and adults. Adults and 7- and 5-year-old children performed sentence recognition in the presence of various spectrally remote maskers. Intelligibility was determined for each remote-masker condition, and performance was compared across age groups. It was found that speech recognition for 5-year-olds was reduced in the presence of spectrally remote noise, whereas the maskers had no effect on the 7-year-olds or adults. Maskers of different bandwidth and remoteness had similar effects. In accord with psychoacoustic data, young children do not appear to focus on a spectral region of interest and ignore other regions during speech recognition. This tendency may help account for their typically poorer speech perception in noise. This study also appears to capture an important developmental stage, during which a substantial refinement in spectral listening occurs.
Masterson, Julie J.; Preston, Jonathan L.
2015-01-01
Purpose: This archival investigation examined the relationship between preliteracy speech sound production skill (SSPS) and spelling in Grade 3 using a dataset in which children's receptive vocabulary was generally within normal limits, speech therapy was not provided until Grade 2, and phonological awareness instruction was discouraged at the time data were collected. Method: Participants (N = 250), selected from the Templin Archive (Templin, 2004), varied on prekindergarten SSPS. Participants' real-word spellings in Grade 3 were evaluated using a metric of linguistic knowledge, the Computerized Spelling Sensitivity System (Masterson & Apel, 2013). Relationships between kindergarten speech error types and later spellings also were explored. Results: Prekindergarten children in the lowest SSPS subgroup (7th percentile) scored poorest among articulatory subgroups on both individual spelling elements (phonetic elements, junctures, and affixes) and acceptable spelling (using relatively more omissions and illegal spelling patterns). Within the 7th percentile subgroup, there were no statistical spelling differences between those with mostly atypical speech sound errors and those with mostly typical speech sound errors. Conclusions: Findings were consistent with predictions from dual-route models of spelling that SSPS is one of many variables associated with spelling skill and that children with impaired SSPS are at risk for spelling difficulty. PMID:26380965
NASA Astrophysics Data System (ADS)
Tallal, Paula; Miller, Steve L.; Bedi, Gail; Byma, Gary; Wang, Xiaoqin; Nagarajan, Srikantan S.; Schreiner, Christoph; Jenkins, William M.; Merzenich, Michael M.
1996-01-01
A speech processing algorithm was developed to create more salient versions of the rapidly changing elements in the acoustic waveform of speech that have been shown to be deficiently processed by language-learning impaired (LLI) children. LLI children received extensive daily training, over a 4-week period, with listening exercises in which all speech was translated into this synthetic form. They also received daily training with computer "games" designed to adaptively drive improvements in temporal processing thresholds. Significant improvements in speech discrimination and language comprehension abilities were demonstrated in two independent groups of LLI children.
Enhanced perception of pitch changes in speech and music in early blind adults.
Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie
2018-06-12
It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated whether pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and whether pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Whereas within the blind group the discrimination thresholds were smaller for musical stimuli than for speech stimuli, replicating previous findings in sighted participants, we did not find this effect in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds compared to speech sounds. This effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Emerson, Anne; Dearden, Jackie
2013-01-01
A 10-year-old boy with autism was part of an evaluation of an innovative intervention focused on improving communication skills. His school was using the minimal speech approach (Potter and Whittaker, 2001) with all children in accordance with government guidance. The pupil's receptive language had not been formally assessed due to his lack of…
Sahli, A Sanem; Belgin, Erol
2017-07-01
Speech and language assessment is very important in the early diagnosis of children with hearing and speech disorders. The aim of this study was to determine the validity and reliability of the Preschool Language Scale (5th edition; PLS-5) in its Turkish translation and adaptation. The study was conducted on 1320 children aged between 0 and 7 years 11 months. While 1044 of these children had normal hearing, language, and speech development, 276 of them had a receptive and/or expressive language disorder. After English-Turkish and Turkish-English translations of the PLS-5 were made by two experts with command of both languages, some of the test items were reorganized because of the grammatical features of Turkish and the cultural structure of the country. A pilot study was conducted with 378 children. The test, reorganized in light of the data obtained in the pilot application, was administered to children chosen randomly with a stratified sampling technique from different regions of Turkey; 15 days later, the test was administered again to 120 children. Of the 120 children in the second evaluation, 98 of the 103 with previously normal language development again showed normal development, and 8 of the 9 who had previously shown a language development disorder still did (kappa coefficient: 0.468, p < 0.001). Pearson correlation coefficients for the TPLS-5 standard scores were 0.937 (IA raw score), 0.908 (IED raw score), and 0.887 (TDP). Correlation coefficients for age equivalence were 0.871 (IA), 0.896 (IED), and 0.887 (TDP). The TPLS-5 is the first and only language test in our country that can evaluate the receptive and/or expressive language skills of children aged between 0 and 7 years 11 months. The results of the study show that the TPLS-5 is a valid and reliable language test for Turkish children. Copyright © 2017. Published by Elsevier B.V.
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
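The reported linear dependence above the critical noise level can be summarized in a simple threshold-shift model; the form below is a reading of the abstract, with the slope $k$ and critical level $N_c$ left as free parameters since the abstract gives no values:

$$\theta(N) = \theta_0 + k\,\max(0,\; N - N_c),$$

where $\theta_0$ is the tone-evoked response threshold in quiet, $N$ is the continuous background noise level, and $N_c$ is the critical/effective level below which the noise has no effect. A dB-for-dB shift corresponds to $k = 1$, which would make the noise equivalent to a pure reduction of stimulus intensity, consistent with the upward translation of the tonal receptive field described above.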
Le Prell, Colleen G; Brungart, Douglas S
2016-09-01
In humans, the accepted clinical standards for detecting hearing loss are the behavioral audiogram, based on the absolute detection threshold of pure-tones, and the threshold auditory brainstem response (ABR). The audiogram and the threshold ABR are reliable and sensitive measures of hearing thresholds in human listeners. However, recent results from noise-exposed animals demonstrate that noise exposure can cause substantial neurodegeneration in the peripheral auditory system without degrading pure-tone audiometric thresholds. It has been suggested that clinical measures of auditory performance conducted with stimuli presented above the detection threshold may be more sensitive than the behavioral audiogram in detecting early-stage noise-induced hearing loss in listeners with audiometric thresholds within normal limits. Supra-threshold speech-in-noise testing and supra-threshold ABR responses are reviewed here, given that they may be useful supplements to the behavioral audiogram for assessment of possible neurodegeneration in noise-exposed listeners. Supra-threshold tests may be useful for assessing the effects of noise on the human inner ear, and the effectiveness of interventions designed to prevent noise trauma. The current state of the science does not necessarily allow us to define a single set of best practice protocols. Nonetheless, we encourage investigators to incorporate these metrics into test batteries when feasible, with an effort to standardize procedures to the greatest extent possible as new reports emerge.
Brainstem transcription of speech is disrupted in children with autism spectrum disorders
Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina
2009-01-01
Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) to specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively-elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
2013-09-01
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following unilateral cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments which were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language significantly developed in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school when communicating with their parents and educators prior to and after implantation. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Watson, Rose Mary; Pennington, Lindsay
2015-01-01
Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. To describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were: dysphagia, augmentative and alternative communication (AAC)/interaction and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year children and young people with CP and communication disorders received from their service. The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. Lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. © 2015 The Authors International Journal of Language & Communication Disorders published by John Wiley & Sons Ltd on behalf of Royal College of Speech and Language Therapists.
Casto, Kristen L; Cho, Timothy H
2012-09-01
This case report describes the in-flight speech intelligibility evaluation of an aircraft crewmember with pure tone audiometric thresholds that exceed the U.S. Army's flight standards. Results of in-flight speech intelligibility testing highlight the inability to predict functional auditory abilities from pure tone audiometry and underscore the importance of conducting validated functional hearing evaluations to determine aviation fitness-for-duty.
Hearing loss in children with otitis media with effusion: a systematic review.
Cai, Ting; McPherson, Bradley
2017-02-01
Otitis media with effusion (OME) is the presence of non-purulent inflammation in the middle ear. Hearing impairment is frequently associated with OME. Pure tone audiometry and speech audiometry are two of the most commonly utilised auditory assessments and provide valuable behavioural and functional estimates of hearing loss. This paper was designed to review and analyse the effects of the presence of OME on children's listening abilities. A systematic and descriptive review. Twelve articles reporting frequency-specific pure tone thresholds and/or speech perception measures in children with OME were identified using the PubMed, Ovid, Web of Science, ProQuest and Google Scholar search platforms. The hearing loss related to OME averages 18-35 dB HL. The air conduction configuration is roughly flat with a slight elevation at 2000 Hz and a nadir at 8000 Hz. Both speech-in-quiet and speech-in-noise perception have been found to be impaired. OME imposes a series of disadvantages on hearing sensitivity and speech perception in children. Further studies investigating the full range of frequency-specific pure tone thresholds, and that adopt standardised speech test materials, are advocated to evaluate hearing-related disabilities with greater comprehensiveness, comparability and enhanced consideration of their real-life implications.
Neuroscience-inspired computational systems for speech recognition under noisy conditions
NASA Astrophysics Data System (ADS)
Schafer, Phillip B.
Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes advantage of the neural representation's invariance in noise. The scheme centers on a speech similarity measure based on the longest common subsequence between spike sequences. The combined encoding and decoding scheme outperforms a benchmark system in extremely noisy acoustic conditions. Finally, I consider methods for decoding spike representations of continuous speech. To help guide the alignment of templates to words, I design a syllable detection scheme that robustly marks the locations of syllabic nuclei. The scheme combines SVM-based training with a peak selection algorithm designed to improve noise tolerance. By incorporating syllable information into the ASR system, I obtain strong recognition results in noisy conditions, although the performance in noiseless conditions is below the state of the art. The work presented here constitutes a novel approach to the problem of ASR that can be applied in the many challenging acoustic environments in which we use computer technologies today. The proposed spike-based processing methods can potentially be exploited in efficient hardware implementations and could significantly reduce the computational costs of ASR. The work also provides a framework for understanding the advantages of spike-based acoustic coding in the human brain.
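The template matcher's similarity measure, the longest common subsequence (LCS) between spike sequences, is standard dynamic programming. Below is a minimal sketch; representing spikes as the ordered identities of the neurons that fired, and the toy templates, are illustrative assumptions rather than the dissertation's exact encoding.

```python
# Minimal sketch: longest-common-subsequence similarity between spike
# sequences, as used for template-based word recognition above. Spikes
# are reduced to the ordered identities of firing neurons (an assumption).
def lcs_length(a, b):
    """Classic O(len(a) * len(b)) dynamic program for LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(a, b):
    """Normalized LCS: 1.0 for identical sequences, 0.0 for disjoint ones."""
    return lcs_length(a, b) / max(len(a), len(b)) if a and b else 0.0

# Toy usage: pick the word template most similar to a noisy utterance.
templates = {"yes": [3, 1, 4, 1, 5], "no": [2, 7, 1, 8]}
utterance = [3, 1, 9, 4, 5]  # one spurious spike, one dropped spike
print(max(templates, key=lambda w: similarity(templates[w], utterance)))  # yes
```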
Evaluation of auditory functions for Royal Canadian Mounted Police officers.
Vaillancourt, Véronique; Laroche, Chantal; Giguère, Christian; Beaulieu, Marc-André; Legault, Jean-Pierre
2011-06-01
Auditory fitness for duty (AFFD) testing is an important element in an assessment of workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, its validity can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations like the Royal Canadian Mounted Police (RCMP) are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are to help RCMP in making more informed decisions regarding AFFD in officers wearing hearing aids. The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, the effects being most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and least considerable in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B errors in localization in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Combined with previous findings, results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and identify best practices in hearing aid fittings for optimal functional hearing abilities. American Academy of Audiology.
Emotions in freely varying and mono-pitched vowels, acoustic and EGG analyses.
Waaramaa, Teija; Palo, Pertti; Kankare, Elina
2015-12-01
Vocal emotions are expressed either by speech or singing. The difference is that in singing the pitch is predetermined while in speech it may vary freely. It was of interest to study whether there were voice quality differences between freely varying and mono-pitched vowels expressed by professional actors. Given their profession, actors have to be able to express emotions both by speech and singing. Electroglottogram and acoustic analyses of emotional utterances embedded in expressions of freely varying vowels [a:], [i:], [u:] (96 samples) and mono-pitched protracted vowels (96 samples) were studied. Contact quotient (CQEGG) was calculated using 35%, 55%, and 80% threshold levels. Three different threshold levels were used in order to evaluate their effects on emotions. Genders were studied separately. The results suggested significant gender differences for CQEGG 80% threshold level. SPL, CQEGG, and F4 were used to convey emotions, but to a lesser degree, when F0 was predetermined. Moreover, females showed fewer significant variations than males. Both genders used more hypofunctional phonation type in mono-pitched utterances than in the expressions with freely varying pitch. The present material warrants further study of the interplay between CQEGG threshold levels and formant frequencies, and listening tests to investigate the perceptual value of the mono-pitched vowels in the communication of emotions.
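The criterion-level contact quotient used above has a direct numerical form: within each glottal cycle, set a criterion between the EGG minimum and maximum at the chosen level (35%, 55%, or 80%) and take the fraction of the cycle spent above it. The sketch below assumes pre-segmented cycles and a synthetic pulse shape; both are illustrative.

```python
# Minimal sketch: contact quotient (CQEGG) at a criterion level, computed
# per pre-segmented glottal cycle. The cycle segmentation and the synthetic
# pulse are illustrative; real EGG analysis segments cycles first.
import numpy as np

def contact_quotient(egg_cycle, level=0.55):
    """Fraction of the cycle the EGG signal spends above a criterion set
    at `level` between the cycle's minimum and maximum amplitude."""
    lo, hi = float(egg_cycle.min()), float(egg_cycle.max())
    criterion = lo + level * (hi - lo)
    return float(np.mean(egg_cycle > criterion))

# Toy usage on a skewed synthetic pulse standing in for one EGG cycle.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
cycle = np.maximum(np.sin(2 * np.pi * t), 0.0) ** 0.5
for level in (0.35, 0.55, 0.80):
    print(f"CQ at {level:.0%} criterion: {contact_quotient(cycle, level):.2f}")
```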
Gifford, René H; Revit, Lawrence J
2010-01-01
Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested "Everyday," "Noise," and "Focus" preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments. American Academy of Audiology.
Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.
Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T
2013-09-01
Relatively little is known about the neurobiological basis of speech disorders, although genetic determinants are increasingly recognized. The first gene for primary speech disorder was FOXP2, identified in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases with a severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequence alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of the variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in a family in two of three family members with stuttering, and also in the mother with oral motor impairment. This variant was considered a benign polymorphism as it was predicted to be non-pathogenic with in silico tools and found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.
Rossi, N F; Giacheti, C M
2017-07-01
Williams syndrome (WS) phenotype is described as unique and intriguing. The aim of this study was to investigate the associations between speech-language abilities, general cognitive functioning and behavioural problems in individuals with WS, considering age effects and the speech-language characteristics of WS sub-groups. The study's participants were 26 individuals with WS and their parents. General cognitive functioning was assessed with the Wechsler Intelligence Scale. The Peabody Picture Vocabulary Test, the Token Test and the Cookie Theft Picture test were used as speech-language measures. Five speech-language characteristics were evaluated from a 30-min conversation (clichés, echolalia, perseverative speech, exaggerated prosody and monotone intonation). The Child Behaviour Checklist (CBCL 6-18) was used to assess behavioural problems. Higher single-word receptive vocabulary and narrative vocabulary were negatively associated with CBCL T-scores for Social Problems, Aggressive Behaviour and Total Problems. Speech rate was negatively associated with the CBCL Withdrawn/Depressed T-score. Monotone intonation was associated with shy behaviour, and exaggerated prosody with talkative behaviour. Individuals with WS who showed perseverative speech and exaggerated prosody presented higher scores on Thought Problems. Echolalia was significantly associated with lower Verbal IQ. No significant association was found between IQ and behaviour problems. Age-associated effects were observed only for the Aggressive Behaviour scale. The associations reported in the present study may represent an insightful background for future predictive studies of speech-language abilities, cognition and behaviour problems in WS. © 2017 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia.
Morgan, Angela T; Haaften, Leenke van; van Hulst, Karen; Edley, Carol; Mei, Cristina; Tan, Tiong Yang; Amor, David; Fisher, Simon E; Koolen, David A
2018-01-01
Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and the application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0-27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from the onset of first words (on average between 2;5 and 3;5 [years;months] of age). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign language) were critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding 'double hit' of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably, however, speech prognosis was positive; apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.
Change in Psychosocial Health Status Over 5 Years in Relation to Adults' Hearing Ability in Noise.
Stam, Mariska; Smit, Jan H; Twisk, Jos W R; Lemke, Ulrike; Smits, Cas; Festen, Joost M; Kramer, Sophia E
The aim of this study was to establish the longitudinal relationship between hearing ability in noise and psychosocial health outcomes (i.e., loneliness, anxiety, depression, distress, and somatization) in adults aged 18 to 70 years. An additional objective was to determine whether a change in hearing ability in noise over a period of 5 years was associated with a change in psychosocial functioning. Subgroup effects for a range of factors were investigated. Longitudinal data of the web-based Netherlands Longitudinal Study on Hearing (NL-SH) (N = 508) were analyzed. The ability to recognize speech in noise (i.e., the speech-reception-threshold [SRTn]) was measured with an online digit triplet test at baseline and at 5-year follow-up. Psychosocial health status was assessed by online questionnaires. Multiple linear regression analyses and longitudinal statistical analyses (i.e., generalized estimating equations) were performed. Poorer SRTn was associated longitudinally with more feelings of emotional and social loneliness. For participants with a high educational level, the longitudinal association between SRTn and social loneliness was significant. Changes in hearing ability and loneliness appeared significantly associated only for specific subgroups: those with a stable pattern of hearing aid nonuse (increased emotional and social loneliness), those who entered matrimony (increased social loneliness), and those with a low educational level (less emotional loneliness). No significant longitudinal associations were found between hearing ability and anxiety, depression, distress, or somatization. Hearing ability in noise was longitudinally associated with loneliness. Decline in hearing ability in noise was related to an increase in loneliness for specific subgroups of participants. One of these subgroups comprised participants whose hearing deteriorated over 5 years but who continued to report nonuse of hearing aids. This is an important and alarming finding that needs further investigation.
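For readers unfamiliar with the longitudinal method named above, the sketch below shows what a generalized-estimating-equations analysis of repeated SRTn and loneliness measurements could look like in Python (statsmodels). All variable names and values are invented for illustration; this is not the NL-SH analysis itself.

```python
# Hypothetical GEE sketch: loneliness score regressed on SRTn (dB SNR) and
# age, with two measurement occasions per subject. Data are fabricated.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "subject":    [1, 1, 2, 2, 3, 3],
    "srtn":       [-5.1, -4.2, -7.0, -6.8, -3.9, -2.5],   # dB SNR (higher = poorer)
    "loneliness": [2.0, 2.5, 1.0, 1.1, 3.0, 3.8],          # questionnaire score
    "age":        [45, 50, 30, 35, 60, 65],
})

# Exchangeable working correlation handles repeated measures within subjects.
model = sm.GEE.from_formula(
    "loneliness ~ srtn + age",
    groups="subject",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```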
Vision impairment and dual sensory problems in middle age
Dawes, Piers; Dickinson, Christine; Emsley, Richard; Bishop, Paul; Cruickshanks, Karen; Edmondson-Jones, Mark; McCormack, Abby; Fortnum, Heather; Moore, David R.; Norman, Paul; Munro, Kevin
2014-01-01
Purpose Vision and hearing impairments are known to increase in middle age. In this study we describe the prevalence of vision impairment and dual sensory impairment in UK adults aged 40 to 69 years in a very large and recently ascertained data set. The associations between vision impairment, age, sex, socioeconomic status, and ethnicity are reported. Methods This research was conducted using the UK Biobank Resource, with subsets of UK Biobank data analysed with respect to self-report of eye problems and glasses use. Better-eye visual acuity with habitually worn refractive correction was assessed with a logMAR chart (n = 116,682). Better-ear speech reception threshold was measured with an adaptive speech-in-noise test, the Digit Triplet Test (n = 164,770). Prevalence estimates were weighted with respect to UK 2001 Census data. Results Prevalence of mild visual impairment and low vision was estimated at 15.2% (95% CI 14.9–15.5%) and 0.9% (95% CI 0.8–1.0%), respectively. Use of glasses was 88.0% (95% CI 87.9–88.1%). The prevalence of dual sensory impairment was 3.1% (95% CI 3.0–3.2%), and there was a nine-fold increase in the prevalence of dual sensory problems between the youngest and oldest age groups. Older adults and those from low socioeconomic and ethnic minority backgrounds were most at risk for vision problems. Conclusions Mild vision impairment is common in middle-aged UK adults, despite widespread use of spectacles. Possible barriers to optometric care for those from low socioeconomic and ethnic minority backgrounds may require attention. A higher than expected prevalence of dual impairment suggests that hearing and vision problems share common causes. Optometrists should consider screening for hearing problems, particularly among older adults. PMID:24888710
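A census-weighted prevalence with a normal-approximation confidence interval, as reported above, can be computed along these lines; the weights and impairment flags below are invented for illustration.

```python
# Minimal sketch of weighted prevalence estimation with a 95% CI.
import numpy as np

impaired = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0])  # 1 = vision impairment
weights  = np.array([1.2, 0.8, 1.0, 1.1, 0.9, 1.3, 0.7, 1.0, 1.0, 1.0])

p = np.average(impaired, weights=weights)            # weighted prevalence
n_eff = weights.sum() ** 2 / (weights ** 2).sum()    # effective sample size
se = np.sqrt(p * (1 - p) / n_eff)                    # normal approximation
print(f"prevalence = {p:.1%} (95% CI {p - 1.96*se:.1%} to {p + 1.96*se:.1%})")
```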
Chang, Jiwon; Ryou, Namhyung; Jun, Hyung Jin; Hwang, Soon Young; Song, Jae-Jun; Chae, Sung Won
2016-01-01
Objectives In the present study, we aimed to determine the effect of both active and passive smoking on the prevalence of hearing impairment and on hearing thresholds in different age groups, through analysis of data collected from the Korea National Health and Nutrition Examination Survey (KNHANES). Study Design Cross-sectional epidemiological study. Methods The KNHANES is an ongoing population study that started in 1998. We included a total of 12,935 participants aged ≥19 years in the KNHANES, from 2010 to 2012, in the present study. Pure-tone audiometric (PTA) testing was conducted and the frequencies tested were 0.5, 1, 2, 3, 4, and 6 kHz. Smoking status was categorized into three groups: a current smoking group, a passive smoking group and a non-smoking group. Results In the current smoking group, the prevalence of speech-frequency bilateral hearing impairment was increased at ages 40–69, and the rate of high-frequency bilateral hearing impairment was elevated at ages 30–79. When we investigated the impact of smoking on hearing thresholds, we found that the current smoking group had significantly higher hearing thresholds than the passive smoking and non-smoking groups, across all ages, in both speech-relevant and high frequencies. The passive smoking group did not have an elevated prevalence of either speech-frequency or high-frequency bilateral hearing impairment, except in the 40s age group. However, the passive smoking group had higher hearing thresholds than the non-smoking group in the 30s and 40s age groups. Conclusion Current smoking was associated with hearing impairment in both speech-relevant and high frequencies across all ages. However, except in the 40s age group, passive smoking was not related to hearing impairment in either speech-relevant or high frequencies. PMID:26756932
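For readers reconstructing the outcome definitions above: KNHANES-style analyses commonly average pure-tone thresholds into a speech-frequency PTA (0.5/1/2/4 kHz) and a high-frequency PTA (3/4/6 kHz), with impairment defined as a PTA above 25 dB HL in both ears. The exact cutoffs used by this paper are an assumption here; the sketch is illustrative only.

```python
# Hedged sketch of pure-tone-average (PTA) classification; cutoff assumed.
def pta(thresholds_db, freqs_khz, use=(0.5, 1, 2, 4)):
    """Average the thresholds (dB HL) at the selected frequencies (kHz)."""
    picked = [t for t, f in zip(thresholds_db, freqs_khz) if f in use]
    return sum(picked) / len(picked)

freqs = (0.5, 1, 2, 3, 4, 6)          # frequencies tested in the survey
left  = (15, 20, 25, 30, 35, 40)      # invented thresholds, dB HL
right = (10, 15, 20, 30, 30, 35)

speech_pta = [pta(ear, freqs) for ear in (left, right)]
bilateral_impaired = all(p > 25 for p in speech_pta)   # assumed 25 dB cutoff
print(speech_pta, bilateral_impaired)
```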
ERIC Educational Resources Information Center
Haugh, Erin Kathleen
2017-01-01
The purpose of this study was to examine the role orthographic coding might play in distinguishing between membership in groups of language-based disability types. The sample consisted of 36 second- and third-grade subjects who were administered the PAL-II Receptive Coding and Word Choice Accuracy subtests as a measure of orthographic coding…
How Women in Politics View the Role Their Sex Plays in the Impact of Their Speeches on Audiences.
ERIC Educational Resources Information Center
Weisman, Martha
While investigating materials for a new course at City College of New York dealing with the rhetoric of women activists, women who were previously actively involved in the "political scene" were asked to respond to the question, "Does the fact that you are a woman affect the content, delivery, or reception of your ideas by the audiences you have…
Longitudinal predictors of aided speech audibility in infants and children
McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Bentler, Ruth; Holte, Lenore; Roush, Patricia; Oleson, Jacob; Van Buren, John; Moeller, Mary Pat
2015-01-01
Objectives Amplification is a core component of early intervention for children who are hard of hearing (CHH), but hearing aids (HAs) have unique effects that may be independent from other components of the early intervention process, such as caregiver training or speech and language intervention. The specific effects of amplification are rarely described in studies of developmental outcomes. The primary purpose of this manuscript is to quantify aided speech audibility during the early childhood years and examine the factors that influence audibility with amplification for children in the Outcomes of Children with Hearing Loss (OCHL) study. Design Participants were 288 children with permanent hearing loss who were followed as part of the OCHL study. All of the children in this analysis had bilateral hearing loss and wore air-conduction behind-the-ear HAs. At every study visit, hearing thresholds were measured using developmentally appropriate behavioral methods. Data were obtained for a total of 1043 audiometric evaluations across all subjects for the first four study visits. In addition, the aided audibility of speech through the HA was assessed using probe microphone measures. Hearing thresholds and aided audibility were analyzed. Repeated-measures analyses of variance were conducted to determine if patterns of thresholds and aided audibility were significantly different between ears (left vs. right) or across the first four study visits. Furthermore, a cluster analysis was performed based on the aided audibility at entry into the study, aided audibility at the child's final visit, and change in aided audibility between these two intervals to determine if there were different patterns of longitudinal aided audibility within the sample. Results Eighty-four percent of children in the study had stable audiometric thresholds during the study, defined as threshold changes <10 dB for any single study visit. There were no significant differences in hearing thresholds, aided audibility, or deviation of the HA fitting from prescriptive targets between ears or across test intervals for the first four visits. Approximately 35% of the children in the study had aided audibility below the average of the normative range for the Speech Intelligibility Index (SII) for their degree of hearing loss. The cluster analysis of longitudinal aided audibility revealed three distinct groups of children: a group with consistently high aided audibility throughout the study, a group with decreasing audibility during the study, and a group with consistently low aided audibility. Conclusions The current results indicated that approximately 65% of children in the study had adequate aided audibility of speech and stable hearing during the study period. Limited audibility was associated with greater degrees of hearing loss and larger deviations from prescriptive targets. Studies of developmental outcomes will help to determine how aided audibility affects developmental outcomes in CHH. PMID:26731156
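The longitudinal-audibility clustering described above might be reproduced along the following lines: each child is a point (aided SII at entry, SII at final visit, change), partitioned into three groups. The data are simulated, and the use of k-means is an assumption; the paper does not specify its clustering algorithm here.

```python
# Illustrative three-cluster analysis of simulated aided-audibility data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
entry = rng.uniform(0.3, 0.9, size=50)                 # aided SII at entry
final = np.clip(entry + rng.normal(0, 0.1, 50), 0, 1)  # SII at final visit
X = np.column_stack([entry, final, final - entry])     # (entry, final, change)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(k, X[labels == k].mean(axis=0))              # cluster centroids
```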
Characterising receptive language processing in schizophrenia using word and sentence tasks.
Tan, Eric J; Yelland, Gregory W; Rossell, Susan L
2016-01-01
Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.
Camilleri, Bernard; Botting, Nicola
2013-01-01
Children's low scores on vocabulary tests are often erroneously interpreted as reflecting poor cognitive and/or language skills. It may be necessary to incorporate the measurement of word-learning ability in estimating children's lexical abilities. To explore the reliability and validity of the Dynamic Assessment of Word Learning (DAWL), a new dynamic assessment of receptive vocabulary. A dynamic assessment (DA) of word learning ability was developed and adopted within a nursery school setting with 15 children aged between 3;07 and 4;03, ten of whom had been referred to speech and language therapy. A number of quantitative measures were derived from the DA procedure, including measures of children's ability to identify the targeted items and to generalize to a second exemplar, as well as measures of children's ability to retain the targeted items. Internal, inter-rater and test-retest reliability of the DAWL was established as well as correlational measures of concurrent and predictive validity. The DAWL was found to provide both quantitative and qualitative information which could be used to improve the accuracy of differential diagnosis and the understanding of processes underlying the child's performance. The latter can be used for the purpose of designing more individualized interventions. © 2013 Royal College of Speech and Language Therapists.
Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.
2012-01-01
Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460
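For context on the adaptive SNR tracking that motivates this study, a minimal one-up/one-down staircase, which converges near 50% correct, can be sketched as follows; the simulated listener and step size are invented for illustration.

```python
# Sketch of adaptive SNR tracking: step down after a correct response,
# up after an error, and estimate the SRT from the late-track average.
import random

def simulated_response(snr_db, srt_db=-6.0, slope=0.15):
    """Logistic listener: P(correct) rises with SNR around the true SRT."""
    p = 1 / (1 + 10 ** (-slope * (snr_db - srt_db)))
    return random.random() < p

snr, step, track = 0.0, 2.0, []
for trial in range(40):
    correct = simulated_response(snr)
    snr += -step if correct else step   # one-down/one-up rule
    track.append(snr)

print("estimated SRT ~", sum(track[-20:]) / 20, "dB SNR")
```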
A Psychophysical Evaluation of Spectral Enhancement
ERIC Educational Resources Information Center
DiGiovanni, Jeffrey J.; Nelson, Peggy B.; Schlauch, Robert S.
2005-01-01
Listeners with sensorineural hearing loss have well-documented elevated hearing thresholds; reduced auditory dynamic ranges; and reduced spectral (or frequency) resolution that may reduce speech intelligibility, especially in the presence of competing sounds. Amplification and amplitude compression partially compensate for elevated thresholds and…
Vietnamese children and language-based processing tasks.
Hwa-Froelich, Deborah A; Matsuo, Hisako
2005-07-01
Vietnamese children's performance on language-based processing tasks of fast-mapping (FM) word-learning and dynamic assessment (DA) word- and rule-learning tasks were investigated. Twenty-one first- and second-generation Vietnamese preschool children participated in this study. All children were enrolled in 2 Head Start programs in a large city in the Midwest. All children had passed a developmental assessment and routine speech, language, and hearing screenings. All participants were taught 4 invented monosyllabic words in an FM word task, an invented monosyllabic suffix rule (-po) meaning "a part of" in a DA rule task, and 4 invented bisyllabic words in a DA word task. Potential relationships among task performances were investigated. Receptive task performances, expressive task performances, and task totals were added to create receptive total, expressive total, and accumulated performance total (APT) scores. Relationships among receptive total, expressive total, and APT scores were also investigated. Significant correlations were found between FM word, DA rule, and the receptive total. The expressive total correlated with all task total scores, APT, age, and modifiability scores. Modifiability scores correlated with the two DA tasks, expressive total, and the APT. Findings indicate that FM word and the expressive total were positively correlated with most of the other tasks, composite totals, and age. Performance on language-based processing tasks may provide valuable information for separating typically developing Vietnamese preschool children from their peers with language disorders. Practitioners should consider linguistic characteristics of target stimuli. Comparisons should include task, receptive, expressive, and APT.
Dickson, Kirstin; Marshall, Marjorie; Boyle, James; McCartney, Elspeth; O'Hare, Anne; Forbes, John
2009-01-01
The study is the first within-trial cost analysis of direct versus indirect and individual versus group modes of speech-and-language therapy for children with primary language impairment. The aim was to compare the short-run resource consequences of the four interventions alongside the effects achieved, measured by standardized scores on a test of expressive and receptive language. The study design was a cost analysis integrated within a randomized controlled trial using a 2×2 factorial design (direct/indirect versus individual/group therapy) together with a control group that received usual levels of community-based speech-and-language therapy. Research interventions were delivered in school settings in Scotland, UK. Participants were children aged between 6 and 11 years, attending a mainstream school, with standard scores on the Clinical Evaluation of Language Fundamentals (CELF-III(UK)) below −1.25 SD (receptive and/or expressive), non-verbal IQ on the Wechsler Abbreviated Scale of Intelligence (WASI) above 75, no reported hearing loss, no moderate/severe articulation/phonology/dysfluency problems, and no other need for individual work with a speech-and-language therapist. The intervention involved speech-and-language therapists and speech-and-language therapy assistants working with individual children or small groups of children. A therapy manual was constructed to assist the choice of procedures and activities for intervention. The cost analysis focused on the salary and travel costs associated with each mode of intervention. The cumulative distribution of total costs arising from the time of randomization to post-intervention assessment was estimated. Arithmetic mean costs were compared and reported with their 95% confidence intervals. The results of the intention-to-treat analysis revealed that there were no significant post-intervention differences between direct and indirect modes of therapy, or between individual and group modes, on any of the primary language outcome measures. The cost analysis identified indirect therapy, particularly indirect group therapy, as the least costly of the intervention modes, with direct individual therapy the most costly option. The programme cost of providing therapy in practice over 30 weeks could represent between 30% and 75% of the total gross revenue spend per primary-school pupil, depending on the choice of assistant-led group therapy or therapist-led individual therapy. This study suggests that speech-and-language therapy assistants can act as effective surrogates for speech-and-language therapists in delivering cost-effective services to children with primary language impairment. The resource gains from adopting a group-based approach may ensure that effective therapy is provided to more children in a more efficient way.
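The arithmetic-mean cost comparison with 95% confidence intervals described above can be illustrated with a simple bootstrap; the cost figures below are invented, not trial data.

```python
# Sketch: mean per-child cost with a bootstrap 95% CI, on fabricated costs.
import numpy as np

rng = np.random.default_rng(3)
direct_individual = rng.gamma(4.0, 200.0, size=50)   # per-child cost (GBP)

boot = [rng.choice(direct_individual, size=50, replace=True).mean()
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean {direct_individual.mean():.0f} (95% CI {lo:.0f}-{hi:.0f})")
```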
Effect of signal to noise ratio on the speech perception ability of older adults
Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh
2016-01-01
Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants with bilateral normal low–mid frequency hearing thresholds, at three SNRs in the presence of ipsilateral white noise. Participants were selected by convenience sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA and the Pearson correlation coefficient were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence or at the three SNRs in either ear (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal low–mid frequency hearing thresholds. These results support the critical role of SNR for speech perception ability in the elderly. Furthermore, our results revealed that normal-hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
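For reference, the SNR manipulation above amounts to fixing the noise level and shifting the speech level; a minimal statement of the relation, with invented presentation levels:

```python
# SNR from signal and noise powers, and from calibrated levels in dB SPL.
import math

def snr_db(p_signal, p_noise):
    """SNR in dB from signal and noise powers."""
    return 10 * math.log10(p_signal / p_noise)

speech_level, noise_level = 65.0, 60.0          # invented dB SPL levels
print(speech_level - noise_level, "dB SNR")     # the +5 dB condition
```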
Cochlear Implantation in the Very Young Child: Issues Unique to the Under-1 Population
Cosetti, Maura; Roland, J. Thomas
2010-01-01
Since the advent of cochlear implantation, candidacy criteria have slowly broadened to include increasingly younger patients. Spurred by evidence demonstrating both perioperative safety and significantly increased speech and language benefit with early auditory intervention, children younger than 12 months of age are now being successfully implanted at many centers. This review highlights the unique challenges involved in cochlear implantation in the very young child, specifically diagnosis and certainty of testing, anesthetic risk, surgical technique, intraoperative testing and postoperative programming, long-term safety, development of receptive and expressive language, and outcomes of speech perception. Overall, the current body of literature indicates that cochlear implantation prior to 1 year of age is both safe and efficacious. PMID:20483813
Accurate or assumed: visual learning in children with ASD.
Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl
2015-10-01
Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.
NASA Astrophysics Data System (ADS)
Ramirez Garzón, Y. T.; Pasaye, E. H.; Barrios, F. A.
2014-11-01
Using functional magnetic resonance imaging (fMRI) it is possible to study the functional anatomy of primary cortices. Cortical representations in the primary somatosensory cortex (SI) have shown discrepancies between activations related to the same body region in some studies; these differences have been more pronounced for lower-limb representations. The aim of this study was to observe the influence of tactile stimulus intensity on somatosensory cortical responses using fMRI. Based on the sensitivity and pain threshold of each subject, we used Von Frey filaments to stimulate 12 control subjects in three receptive fields on the right thigh: one filament near the sensitivity threshold (VFS), another close to the pain threshold (VFP), and an intermediate filament between the two thresholds (VFI). Tactile stimulation with VFS produced no activation in SI, whereas the contralateral SI was activated by stimulation with VFI in 5 subjects and by stimulation with VFP in all subjects. Second-level statistical analysis showed significant differences between SI activations related to stimulation with VFP and VFI; in the comparison between the different applied intensities, a small cluster of activation was observed in SI for the only possible contrast (VFP > VFI). The time course per trial for each subject was extracted and averaged to characterize the activation in the contralateral SI, and compared across stimulus modalities, stimulated receptive fields and intensities. The time course of tactile stimulus responses revealed a consistent single peak of activity per cycle (30 s), approximately 12 s after the onset of the stimulus, except for VFI stimulation, which showed the peak at 10 s. Thus, our results indicate that the cortical representation of a tactile stimulus measured with fMRI is modulated by the intensity of the applied stimulus.
Gray, Lincoln; Kesser, Bradley; Cole, Erika
2009-09-01
Unilateral hearing loss causes difficulty hearing in noise (the "cocktail party effect") due to the absence of redundancy, head-shadow, and binaural squelch. This study explores the emergence of the head-shadow and binaural squelch effects in children with unilateral congenital aural atresia undergoing surgery to correct their hearing deficit. Adding patients and data from a similar study previously published, we also evaluate covariates, such as the age of the patient, surgical outcome, and complexity of the task, that might predict the extent of binaural benefit, that is, patients' ability to "use" their new ear when understanding speech in noise. Patients with unilateral congenital aural atresia were tested for their ability to understand speech in noise before and again 1 month after surgery to repair their atresia. In a sound-attenuating booth, participants faced a speaker that produced speech signals with noise 90 degrees to the side of the normal (non-atretic) ear and again to the side of the atretic ear. The Hearing in Noise Test (HINT for adults or HINT-C for children) was used to estimate the patients' speech reception thresholds. The speech-in-noise test (SPIN) or the Pediatric Speech Intelligibility (PSI) test was used in the previous study. There was consistent improvement, averaging 5 dB regardless of age, in the ability to take advantage of head-shadow in understanding speech with noise to the side of the non-atretic (normal) ear. There was, in contrast, a strong negative linear effect of age (r² = .78, for patients over 8 years) in the emergence of binaural squelch to understand speech with noise to the side of the atretic ear. In patients over 8 years, this trend replicated over different studies and different tests. Children less than 8 years, however, showed less improvement on the HINT-C than on the PSI after surgery with noise toward their atretic ear (effect size = 3). No binaural result was correlated with the degree of hearing improvement after surgery. All patients are able to take advantage of a favorable signal-to-noise ratio in their newly opened ear, that is, with noise toward the side of the normal ear (but this physical, bilateral, head-shadow effect need not involve true central binaural processing). With noise toward the atretic ear, the emergence of binaural squelch replicates between two studies for all but the youngest patients. Approximately 2 dB of binaural gain is lost for each decade that surgery is delayed, and zero (or poorer) binaural benefit is predicted after 38 years of age. Older adults do more poorly, possibly secondary to their long period of auditory deprivation. At the youngest ages, however, binaural results differ between open- and closed-set speech tests; the more complex hearing tasks may involve a greater cognitive load. Other cognitive abilities (late evoked potentials, grey matter in auditory cortex, and multitasking) show similar effects of age, peaking at the same late-teen/young-adult period. Longer follow-up is likely critical for the understanding of these data. Getting a new ear may be, like multitasking, challenging for the youngest and oldest subjects.
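The reported trend, roughly 2 dB of binaural squelch lost per decade of surgical delay with zero benefit predicted near age 38, corresponds to a simple straight line. The sketch below merely restates that line for quick prediction; it is not the study's fitted regression.

```python
# Linear extrapolation implied by the abstract (r^2 = .78, ages > 8 years).
def predicted_squelch_db(age_at_surgery):
    """Predicted binaural squelch benefit (dB) at a given age at surgery."""
    return 0.2 * (38.0 - age_at_surgery)   # 0.2 dB/year = 2 dB/decade

for age in (10, 20, 30, 38, 45):
    print(age, round(predicted_squelch_db(age), 1), "dB")
```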
May-Mederake, Birgit; Shehata-Dieler, Wafaa
2013-01-01
Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit greater from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests, aided free-field, the Mainzer and Göttinger speech discrimination tests, Monosyllabic-Trochee-Polysyllabic (MTP), and Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German language speech development test (SETK), reception of grammar test (TROG-D) and active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. SETK showed that the speech development of these children was in general age appropriate. The data suggests that early hearing loss intervention benefits speech and language development and supports the trend towards early cochlear implantation. Furthermore, the data emphasizes the potential benefits associated with bilateral implantation. PMID:23509653
Venezia, Jonathan H.; Hickok, Gregory; Richards, Virginia M.
2016-01-01
Speech intelligibility depends on the integrity of spectrotemporal patterns in the signal. The current study is concerned with the speech modulation power spectrum (MPS), which is a two-dimensional representation of energy at different combinations of temporal and spectral (i.e., spectrotemporal) modulation rates. A psychophysical procedure was developed to identify the regions of the MPS that contribute to successful reception of auditory sentences. The procedure, based on the two-dimensional image classification technique known as “bubbles” (Gosselin and Schyns (2001). Vision Res. 41, 2261–2271), involves filtering (i.e., degrading) the speech signal by removing parts of the MPS at random, and relating filter patterns to observer performance (keywords identified) over a number of trials. The result is a classification image (CImg) or “perceptual map” that emphasizes regions of the MPS essential for speech intelligibility. This procedure was tested using normal-rate and 2×-time-compressed sentences. The results indicated: (a) CImgs could be reliably estimated in individual listeners in relatively few trials, (b) CImgs tracked changes in spectrotemporal modulation energy induced by time compression, though not completely, indicating that “perceptual maps” deviated from physical stimulus energy, and (c) the bubbles method captured variance in intelligibility not reflected in a common modulation-based intelligibility metric (spectrotemporal modulation index or STMI). PMID:27586738
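A conceptual sketch of the "bubbles" classification-image estimate described above: random binary masks over the modulation power spectrum (MPS) are applied trial by trial, and the classification image contrasts masks from correct versus incorrect trials. The toy observer and grid dimensions below are invented; the actual procedure filters real speech.

```python
# Toy classification-image ("bubbles") estimate over a simulated MPS grid.
import numpy as np

rng = np.random.default_rng(1)
shape = (20, 30)                       # (spectral, temporal) modulation bins
important = np.zeros(shape)
important[8:12, 5:12] = 1              # pretend "speech-critical" MPS region

n_trials = 2000
masks = rng.random((n_trials,) + shape) < 0.4    # parts of the MPS retained
# Toy observer: correct when enough of the critical region survives filtering.
correct = (masks * important).sum(axis=(1, 2)) > 0.4 * important.sum()

# CImg: mask frequency on correct trials minus mask frequency on errors.
cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print("CImg peaks inside the critical region:",
      cimg[8:12, 5:12].mean() > cimg.mean())
```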
Applying and evaluating computer-animated tutors
NASA Astrophysics Data System (ADS)
Massaro, Dominic W.; Bosseler, Alexis; Stone, Patrick S.; Connors, Pamela
2002-05-01
We have developed computer-assisted speech and language tutors for deaf, hard of hearing, and autistic children. Our language-training program utilizes our computer-animated talking head, Baldi, as the conversational agent, who guides students through a variety of exercises designed to teach vocabulary and grammar, to improve speech articulation, and to develop linguistic and phonological awareness. Baldi is an accurate three-dimensional animated talking head appropriately aligned with either synthesized or natural speech. Baldi has a tongue and palate, which can be displayed by making his skin transparent. Two specific language-training programs have been evaluated to determine if they improve word learning and speech articulation. The results indicate that the programs are effective in teaching receptive and productive language. Advantages of utilizing a computer-animated agent as a language tutor are the popularity of computers and embodied conversational agents with autistic children, the perpetual availability of the program, and individualized instruction. Students enjoy working with Baldi because he offers extreme patience, he doesn't become angry, tired, or bored, and he is in effect a perpetual teaching machine. The results indicate that the psychology and technology of Baldi hold great promise in language learning and speech therapy. [Work supported by NSF Grant Nos. CDA-9726363 and BCS-9905176 and Public Health Service Grant No. PHS R01 DC00236.]
Vocabulary Facilitates Speech Perception in Children With Hearing Aids
Walker, Elizabeth A.; Kirby, Benjamin; McCreery, Ryan W.
2017-01-01
Purpose We examined the effects of vocabulary, lexical characteristics (age of acquisition and phonotactic probability), and auditory access (aided audibility and daily hearing aid [HA] use) on speech perception skills in children with HAs. Method Participants included 24 children with HAs and 25 children with normal hearing (NH), ages 5–12 years. Groups were matched on age, expressive and receptive vocabulary, articulation, and nonverbal working memory. Participants repeated monosyllabic words and nonwords in noise. Stimuli varied on age of acquisition, lexical frequency, and phonotactic probability. Performance in each condition was measured by the signal-to-noise ratio at which the child could accurately repeat 50% of the stimuli. Results Children from both groups with larger vocabularies showed better performance than children with smaller vocabularies on nonwords and late-acquired words but not early-acquired words. Overall, children with HAs showed poorer performance than children with NH. Auditory access was not associated with speech perception for the children with HAs. Conclusions Children with HAs show deficits in sensitivity to phonological structure but appear to take advantage of vocabulary skills to support speech perception in the same way as children with NH. Further investigation is needed to understand the causes of the gap that exists between the overall speech perception abilities of children with HAs and children with NH. PMID:28738138
Auditory verbal habilitation is associated with improved outcome for children with cochlear implant.
Percy-Smith, Lone; Tønning, Tenna Lindbjerg; Josvassen, Jane Lignel; Mikkelsen, Jeanette Hølledig; Nissen, Lena; Dieleman, Eveline; Hallstrøm, Maria; Cayé-Thomasen, Per
2018-01-01
To study the impact of (re)habilitation strategy on speech-language outcomes for early cochlear-implanted children enrolled in different intervention programmes post implant. Data relate to a total of 130 children representing two pediatric cohorts consisting of 94 and 36 subjects, respectively. The two cohorts had different speech and language intervention following cochlear implantation, i.e. standard habilitation vs. auditory verbal (AV) intervention. Three tests of speech and language were applied, covering receptive and productive vocabulary and language understanding. Children in AV intervention outperformed children in standard habilitation on all three tests of speech and language. When the effect of intervention was adjusted for other covariates, children in AV intervention still had higher odds of performing at age-equivalent speech and language levels. Compared with standard intervention, AV intervention is associated with improved outcomes for children with CI. Based on this finding, we recommend that all children with hearing impairment be offered this intervention; it is therefore highly relevant for national boards of health and social affairs to recommend basing habilitation on principles from AV practice. It should be noted that a minority of children use spoken language with sign support. For this group it is, however, still important that educational services provide auditory skills training.
Amplitude modulation detection with concurrent frequency modulation.
Nagaraj, Naveen K
2016-09-01
Human speech consists of concomitant temporal modulations in amplitude and frequency that are crucial for speech perception. In this study, amplitude modulation (AM) detection thresholds were measured for 550 and 5000 Hz carriers with and without concurrent frequency modulation (FM), at AM rates crucial for speech perception. Results indicate that adding 40 Hz FM interferes with AM detection, more so for the 5000 Hz carrier and for frequency deviations exceeding the critical bandwidth of the carrier frequency. These findings suggest that future cochlear implant processors encoding speech fine structure may consider limiting FM to narrow bandwidths and low frequencies.
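A stimulus of the kind described above, a 550 Hz carrier with sinusoidal AM and optional concurrent FM, can be synthesized as follows; the modulation depth and frequency deviation are illustrative values, not the study's exact parameters.

```python
# AM tone with concurrent sinusoidal FM: the FM term enters the phase as the
# integral of the instantaneous frequency fc + dev*sin(2*pi*f_fm*t).
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1 / fs)                 # 500 ms
fc, f_am, f_fm = 550.0, 8.0, 40.0             # carrier, AM rate, FM rate (Hz)
m, dev = 0.5, 100.0                           # AM depth; FM deviation (Hz)

am = 1 + m * np.sin(2 * np.pi * f_am * t)
phase = 2 * np.pi * fc * t - (dev / f_fm) * np.cos(2 * np.pi * f_fm * t)
signal = am * np.sin(phase)                   # AM tone with concurrent FM
```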
Spencer, Caroline; Weber-Fox, Christine
2014-01-01
Purpose In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods Participants were 65 children: 25 children who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9–5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results CWS-Per were less proficient than CWNS and CWS-Rec on measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
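The binary logistic regression reported above, predicting recovery versus persistence from preschool BBTOP-CI and NRT scores, has the following general shape; the scores are simulated and the fitted coefficients are not the study's estimates.

```python
# Hypothetical logistic-regression sketch on fabricated predictor scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 40
bbtop = rng.normal(90, 12, n)          # consonant-production standard score
nrt = rng.normal(75, 10, n)            # nonword-repetition % phonemes correct
recovered = ((bbtop + nrt + rng.normal(0, 10, n)) > 165).astype(int)

X = np.column_stack([bbtop, nrt])
clf = LogisticRegression().fit(X, recovered)   # recovery (1) vs persistence (0)
print("coefficients:", clf.coef_, "accuracy:", clf.score(X, recovered))
```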
ERIC Educational Resources Information Center
Patel, Aniruddh D.; Foxton, Jessica M.; Griffiths, Timothy D.
2005-01-01
Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz &…
Binaural Release from Masking for a Speech Sound in Infants, Preschoolers, and Adults.
ERIC Educational Resources Information Center
Nozza, Robert J.
1988-01-01
Binaural masked thresholds for a speech sound (/ba/) were estimated under two interaural phase conditions in three age groups (infants, preschoolers, adults). Differences as a function of both age and condition and effects of reducing intensity for adults were significant in indicating possible developmental binaural hearing changes, especially…
Cox, Robyn M; Alexander, Genevieve C; Johnson, Jani; Rivera, Izel
2011-01-01
We investigated the prevalence of cochlear dead regions in listeners with hearing losses similar to those of many hearing aid wearers, and explored the impact of these dead regions on speech perception. Prevalence of dead regions was assessed using the Threshold Equalizing Noise test (TEN(HL)). Speech recognition was measured using high-frequency emphasis (HFE) Quick Speech In Noise (QSIN) test stimuli and low-pass filtered HFE QSIN stimuli. About one third of subjects tested positive for a dead region at one or more frequencies. Also, groups without and with dead regions both benefited from additional high-frequency speech cues. PMID:21522068
Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija
2014-09-01
Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], which are critical in the speech perception and language development of CWS, were compared between CWS and TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the mismatch negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings on central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.
Assessment of central auditory processing in a group of workers exposed to solvents.
Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan
2006-12-01
Despite having normal hearing thresholds and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds; a central auditory processing disorder may underlie these difficulties. The aim was to study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents scored lower than the control group and previously reported normative data on the majority of the tests.
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of the AMT and spreading functions for masking is implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
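The auditory-filter equivalent rectangular bandwidths mentioned above are conventionally computed with the Glasberg & Moore (1990) approximation; whether this paper used exactly this formula is an assumption.

```python
# ERB of the auditory filter at centre frequency f, per Glasberg & Moore (1990).
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth (Hz) at centre frequency f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (250, 1000, 4000):
    print(f, "Hz ->", round(erb_hz(f), 1), "Hz ERB")
```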
Optimization of programming parameters in children with the advanced bionics cochlear implant.
Baudhuin, Jacquelyn; Cadieux, Jamie; Firszt, Jill B; Reeder, Ruth M; Maxson, Jerrica L
2012-05-01
Cochlear implants provide access to soft intensity sounds and therefore improved audibility for children with severe-to-profound hearing loss. Speech processor programming parameters, such as threshold (or T-level), input dynamic range (IDR), and microphone sensitivity, contribute to the recipient's program and influence audibility. When soundfield thresholds obtained through the speech processor are elevated, programming parameters can be modified to improve soft sound detection. Adult recipients show improved detection for low-level sounds when T-levels are set at raised levels and show better speech understanding in quiet when wider IDRs are used. Little is known about the effects of parameter settings on detection and speech recognition in children using today's cochlear implant technology. The overall study aim was to assess optimal T-level, IDR, and sensitivity settings in pediatric recipients of the Advanced Bionics cochlear implant. Two experiments were conducted. Experiment 1 examined the effects of two T-level settings on soundfield thresholds and detection of the Ling 6 sounds. One program set T-levels at 10% of most comfortable levels (M-levels) and another at 10 current units (CUs) below the level judged as "soft." Experiment 2 examined the effects of IDR and sensitivity settings on speech recognition in quiet and noise. Participants were 11 children 7-17 yr of age (mean 11.3) implanted with the Advanced Bionics High Resolution 90K or CII cochlear implant system who had speech recognition scores of 20% or greater on a monosyllabic word test. Two T-level programs were compared for detection of the Ling sounds and frequency modulated (FM) tones. Differing IDR/sensitivity programs (50/0, 50/10, 70/0, 70/10) were compared using Ling and FM tone detection thresholds, CNC (consonant-vowel nucleus-consonant) words at 50 dB SPL, and Hearing in Noise Test for Children (HINT-C) sentences at 65 dB SPL in the presence of four-talker babble (+8 signal-to-noise ratio). Outcomes were analyzed using a paired t-test and a mixed-model repeated measures analysis of variance (ANOVA). T-levels set 10 CUs below "soft" resulted in significantly lower detection thresholds for all six Ling sounds and FM tones at 250, 1000, 3000, 4000, and 6000 Hz. When comparing programs differing by IDR and sensitivity, a 50 dB IDR with a 0 sensitivity setting showed significantly poorer thresholds for low frequency FM tones and voiced Ling sounds. Analysis of group mean scores for CNC words in quiet or HINT-C sentences in noise indicated no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise; the optimal program differed across participants. In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selection of programming parameters in both young and older children. American Academy of Audiology.
Potts, Lisa G; Kolb, Kelly A
2014-04-01
Difficulty understanding speech in the presence of background noise is a common report among cochlear implant (CI) recipients. Several speech-processing options designed to improve speech recognition, especially in noise, are currently available in the Cochlear Nucleus CP810 speech processor. These include adaptive dynamic range optimization (ADRO), autosensitivity control (ASC), Beam, and Zoom. The purpose of this study was to evaluate CI recipients' speech-in-noise recognition to determine which currently available processing option or options resulted in best performance in a simulated restaurant environment. Experimental study with one study group. The independent variable was speech-processing option, and the dependent variable was the reception threshold for sentences score. Thirty-two adult CI recipients. Eight processing options were tested: Beam, Beam + ASC, Beam + ADRO, Beam + ASC + ADRO, Zoom, Zoom + ASC, Zoom + ADRO, and Zoom + ASC + ADRO. Participants repeated Hearing in Noise Test sentences presented at a 0° azimuth, with R-Space restaurant noise presented from a 360° eight-loudspeaker array at 70 dB sound pressure level. A one-way repeated-measures analysis of variance was used to analyze differences in Beam options, Zoom options, and Beam versus Zoom options. Among the Beam options, Beam + ADRO was significantly poorer than Beam only, Beam + ASC, and Beam + ASC + ADRO. A 1.6-dB difference was observed between the best (Beam only) and poorest (Beam + ADRO) options. Among the Zoom options, Zoom only and Zoom + ADRO were significantly poorer than Zoom + ASC. A 2.2-dB difference was observed between the best (Zoom + ASC) and poorest (Zoom only) options. The comparison between Beam and Zoom options showed one significant difference, with Zoom only significantly poorer than Beam only. No significant difference was found between the other Beam and Zoom options (Beam + ASC vs Zoom + ASC, Beam + ADRO vs Zoom + ADRO, and Beam + ASC + ADRO vs Zoom + ASC + ADRO). The best processing option varied across subjects, with an almost equal number of participants performing best with a Beam option (n = 15) compared with a Zoom option (n = 17). There were no significant demographic or audiological moderating variables for any option. The results showed no significant differences between adaptive directionality (Beam) and fixed directionality (Zoom) when ASC was active in the R-Space environment. This finding suggests that noise-reduction processing is extremely valuable in loud semidiffuse environments in which the effectiveness of directional filtering might be diminished. However, there was no significant difference between the Beam-only and Beam + ASC options, which is most likely related to the additional noise cancellation performed by the Beam option (i.e., two-stage directional filtering and noise cancellation). In addition, the processing options with ADRO resulted in the poorest performances. This could be related to how the CI recipients were programmed or the loud noise level used in this study. The best processing option varied across subjects, but the majority performed best with directional filtering (Beam or Zoom) in combination with ASC. Therefore in a loud semidiffuse environment, the use of either Beam + ASC or Zoom + ASC is recommended. American Academy of Audiology.
Rader, Tobias; Fastl, Hugo; Baumann, Uwe
2013-01-01
The aim of the study was to measure and compare speech perception in users of electric-acoustic stimulation (EAS) supported by a hearing aid in the unimplanted ear and in bilateral cochlear implant (CI) users under different noise and sound field conditions. Gap listening was assessed by comparing performance in unmodulated and modulated Comité Consultatif International Téléphonique et Télégraphique (CCITT) noise conditions, and binaural interaction was investigated by comparing single source and multisource sound fields. Speech perception in noise was measured using a closed-set sentence test (Oldenburg Sentence Test, OLSA) in a multisource noise field (MSNF) consisting of a four-loudspeaker array with independent noise sources and a single source in frontal position (S0N0). Speech simulating noise (Fastl-noise), CCITT-noise (continuous), and OLSA-noise (pseudo continuous) served as noise sources with different temporal patterns. Speech tests were performed in two groups of subjects who were using either EAS (n = 12) or bilateral CIs (n = 10). All subjects in the EAS group were fitted with a high-power hearing aid in the opposite ear (bimodal EAS). The average group score for monosyllables in quiet was 68.8% (EAS) and 80.5% (bilateral CI). A group of 22 listeners with normal hearing served as controls to compare and evaluate potential gap listening effects in implanted patients. Average speech reception thresholds in the EAS group were significantly lower than those for the bilateral CI group in all test conditions (CCITT 6.1 dB, p = 0.001; Fastl-noise 5.4 dB, p < 0.01; Oldenburg-(OL)-noise 1.6 dB, p < 0.05). Bilateral CI and EAS user groups showed a significant improvement of 4.3 dB (p = 0.004) and 5.4 dB (p = 0.002) between S0N0 and MSNF sound field conditions respectively, which signifies advantages caused by bilateral interaction in both groups. Performance in the control group showed a significant gap listening effect with a difference of 6.5 dB between modulated and unmodulated noise in S0N0, and a difference of 3.0 dB in MSNF. The ability to "glimpse" into short temporal masker gaps was absent in both groups of implanted subjects. Combined EAS in one ear supported by a hearing aid on the contralateral ear provided significantly improved speech perception compared with bilateral cochlear implantation. Although the scores for monosyllabic words in quiet were higher in the bilateral CI group, the EAS group performed better in different noise and sound field conditions. Furthermore, the results indicated that binaural interaction between EAS in one ear and residual acoustic hearing in the opposite ear enhances speech perception in complex noise situations. Neither bilateral CI nor bimodal EAS users benefited from short temporal masker gaps; therefore, the better performance of the EAS group in modulated noise conditions could be explained by the improved transmission of fundamental frequency cues in the lower-frequency region of acoustic hearing, which might foster the grouping of auditory objects.
ERIC Educational Resources Information Center
Mistry, Malini; Barnes, Danielle
2013-01-01
This study examines the use of Makaton®, a language programme based on the use of signing, symbols and speech, as a pedagogic tool to support the development of talk for pupils learning English as an Additional Language (EAL). The research setting was a Reception class with a high percentage of pupils who have EAL in the initial stages of…
Peter, Beate
2013-01-01
This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation) in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and with SSD subtypes defined by motor speeds as a corollary of expressive language abilities. PMID:22411590
Cross-modal enhancement of speech detection in young and older adults: does signal content matter?
Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra
2011-01-01
The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.
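Cross-modal enhancement in designs like this is simply the drop in SNR at threshold from the auditory-only to the audiovisual condition, compared across listeners with a paired test. A sketch with hypothetical per-listener thresholds:

```python
import numpy as np
from scipy import stats

# Hypothetical per-listener thresholds in dB SNR (lower = better detection).
snr_auditory_only = np.array([-8.1, -7.4, -9.0, -8.6, -7.9])
snr_audiovisual = np.array([-9.8, -8.9, -10.4, -9.7, -9.1])

enhancement = snr_auditory_only - snr_audiovisual   # positive = visual benefit
t, p = stats.ttest_rel(snr_auditory_only, snr_audiovisual)
print(f"mean enhancement = {enhancement.mean():.2f} dB (t = {t:.2f}, p = {p:.4f})")
```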
Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily
2017-11-01
This case-control study investigated binaural hearing performance for sentences presented in quiet and in noise in schizophrenia patients. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions by using the Malay version of the Hearing in Noise Test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88) and noise composite (d=0.90), indicate a statistically significant difference between the groups; the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking on the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
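The effect sizes quoted (d = 0.50 to 1.07) follow the usual Cohen's d convention with a pooled standard deviation. A minimal sketch of that computation, assuming two arrays of per-group RTS values:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d for two independent groups, pooled-SD denominator."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) \
                 / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# e.g. cohens_d(patient_rts_quiet, control_rts_quiet) -> ~1.07 in the quiet condition
```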
Mobio, M N A; Ilé, S; Yavo, N; Koffi-Aka, V; Kouassi-Ndjeundo, J; Tea, B
To evaluate patients fitted with hearing aids at the International Center of Auditory Correction in Abidjan. This was a descriptive, cross-sectional study conducted from 07/01/99 to 06/30/10. We included the files of patients that were completely filled in and studied the indications, prosthetic gains, and satisfaction after hearing aid fitting. A total of 536 files were analyzed. The average age was 36 years. The indication was sensorineural hearing loss in 76.1% of cases, and the hearing loss was associated with a language disorder in 13.2% of cases. Hearing loss was bilateral in 496 cases (92.5%). Behind-the-ear aids were chosen in 69% of cases; the technology was analog or digital in 65% and 35% of cases, respectively. The prosthetic pure-tone gain exceeded 30 dB in 66.8% of cases, and the prosthetic speech reception threshold gain exceeded 30 dB in 55.3% of cases. Patients were satisfied, less satisfied, or not satisfied in 68.47%, 22.76%, and 8.7% of cases, respectively. Hearing aids improved audition in most of the indications. The proportion of satisfied patients was consistent with the audiometric results.
DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg
2016-06-01
Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.
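The reported associations (e.g., ECAP channel-interaction width versus electrode-to-modiolus distance) are simple bivariate correlations. A sketch with hypothetical per-channel values, using scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical per-channel estimates for one subject.
cif_width_mm = np.array([2.1, 3.4, 2.8, 4.0, 3.1, 2.5])   # ECAP CIF widths
distance_mm = np.array([0.3, 0.7, 0.5, 0.9, 0.6, 0.4])    # CT electrode-to-modiolus

r, p = stats.pearsonr(cif_width_mm, distance_mm)
print(f"r = {r:.2f}, p = {p:.3f}")   # positive r mirrors the reported association
```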
Ekström, Seth-Reino; Borg, Erik
2011-01-01
The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.
Octave-Band Thresholds for Modeled Reverberant Fields
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Wenzel, Elizabeth M.; Tran, Laura L.; Anderson, Mark R.; Trejo, Leonard J. (Technical Monitor)
1998-01-01
Auditory thresholds for 10 subjects were obtained for speech stimuli in reverberation. The reverberation was produced and manipulated by 3-D audio modeling based on an actual room. The independent variables were octave-band filtering (bypassed; 0.25-2.0 kHz Fc) and reverberation time (0.2-1.1 sec). An ANOVA revealed significant effects (threshold range: -19 to -35 dB re 60 dB SRL).
Noise-induced hearing impairment and handicap
NASA Technical Reports Server (NTRS)
1984-01-01
A permanent, noise-induced hearing loss has a doubly harmful effect on speech communication. First, the elevation in the threshold of hearing means that many speech sounds are too weak to be heard; second, very intense speech sounds may appear to be distorted. The whole question of the impact of noise-induced hearing loss upon the impairments and handicaps experienced by people with such hearing losses was somewhat controversial, partly because of the economic aspects of related practical noise control and workmen's compensation.
Role of maternal gesture use in speech use by children with fragile X syndrome.
Hahn, Laura J; Zimmer, B Jean; Brady, Nancy C; Swinburne Romine, Rebecca E; Fleming, Kandace K
2014-05-01
The purpose of this study was to investigate how maternal gesture relates to speech production by children with fragile X syndrome (FXS). Participants were 27 young children with FXS (23 boys, 4 girls) and their mothers. Videotaped home observations were conducted between the ages of 25 and 37 months (toddler period) and again between the ages of 60 and 71 months (child period). The videos were later coded for types of maternal utterances and maternal gestures that preceded child speech productions. Children were also assessed with the Mullen Scales of Early Learning at both ages. Maternal gesture use in the toddler period was positively related to expressive language scores at both age periods and was related to receptive language scores in the child period. Maternal proximal pointing, in comparison to other gestures, evoked more speech responses from children during the mother-child interactions, particularly when combined with wh-questions. This study adds to the growing body of research on the importance of contextual variables, such as maternal gestures, in child language development. Parental gesture use may be an easily added ingredient to parent-focused early language intervention programs.
The relevance of receptive vocabulary in reading comprehension.
Nalom, Ana Flávia de Oliveira; Soares, Aparecido José Couto; Cárnio, Maria Silvia
2015-01-01
To characterize the performance of students from the 5th year of primary school, with and without indicatives of reading and writing disorders, in receptive vocabulary and reading comprehension of sentences and texts, and to verify possible correlations between them. This study was approved by the Research Ethics Committee of the institution (no. 098/13). Fifty-two students in the 5th year of primary school, with and without indicatives of reading and writing disorders, from two public schools participated in this study. After signing the informed consent and undergoing a speech therapy assessment for the application of inclusion criteria, the students were submitted to a specific standardized test for the evaluation of receptive vocabulary and reading comprehension. The data were studied using statistical analysis through the Kruskal-Wallis test, analysis of variance techniques, and Spearman's rank correlation coefficient, with the significance level set at 0.05. A receiver operating characteristic (ROC) curve was constructed in which reading comprehension was considered the gold standard. The students without indicatives of reading and writing disorders presented better performance in all tests. No significant correlation was found between the sentence and text reading comprehension tests in either group. A correlation was found between reading comprehension of texts and receptive vocabulary in the group without indicatives. In the absence of indicatives of reading and writing disorders, the presence of a good range of vocabulary highly contributes to proficient reading comprehension of texts.
Gifford, René H.; Revit, Lawrence J.
2014-01-01
Background Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. Purpose To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Research Design Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Study Sample Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Intervention Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam™ preprocessing (Cochlear Corporation) or the T-Mic® accessory option (Advanced Bionics). Data Collection and Analysis In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested "Everyday," "Noise," and "Focus" preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. Results The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Conclusion Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments. PMID:20807480
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Zhang, Xuyang; Gantz, Bruce; Froman, Rebecca; Olszewski, Carol
2007-06-01
The purposes of this study were (a) to examine the accuracy of cochlear implant recipients who use different types of devices and signal processing strategies on pitch ranking as a function of interval size and frequency range and (b) to examine the relations between this pitch perception measure and demographic variables, melody recognition, and speech reception in background noise. One hundred fourteen cochlear implant users and 21 normal-hearing adults were tested on a pitch discrimination task (pitch ranking) that required them to determine the direction of pitch change as a function of base frequency and interval size. Three groups were tested: (a) long electrode cochlear implant users (N = 101); (b) short electrode users who received acoustic plus electrical stimulation (A+E) (N = 13); and (c) a normal-hearing (NH) comparison group (N = 21). Pitch ranking was tested at standard frequencies of 131 to 1048 Hz, and the size of the pitch-change intervals ranged from 1 to 4 semitones. A generalized linear mixed model (GLMM) was fit to predict pitch ranking and to determine whether group differences exist as a function of base frequency and interval size. Overall significance effects were measured with chi-square tests, and individual effects were measured with t-tests. Pitch ranking accuracy was correlated with demographic measures (age at time of testing, length of profound deafness, months of implant use), frequency difference limens, familiar melody recognition, and two measures of speech reception in noise. The long electrode recipients performed significantly more poorly on pitch discrimination than the NH and A+E groups. The A+E users performed similarly to the NH listeners as a function of interval size in the lower base frequency range, but their pitch discrimination scores deteriorated slightly in the higher frequency range. The long electrode recipients, although less accurate than participants in the NH and A+E groups, tended to perform with greater accuracy within the higher frequency range. There were statistically significant correlations between pitch ranking and familiar melody recognition, as well as with pure-tone frequency difference limens at 200 and 400 Hz. Low-frequency acoustic hearing improves pitch discrimination as compared with traditional, electric-only cochlear implants. These findings have implications for musical tasks such as familiar melody recognition.
Relationship between Consonant Recognition in Noise and Hearing Threshold
ERIC Educational Resources Information Center
Yoon, Yang-soo; Allen, Jont B.; Gooler, David M.
2012-01-01
Purpose: Although poorer understanding of speech in noise by listeners who are hearing-impaired (HI) is known not to be directly related to audiometric hearing threshold, "HT" (f), grouping HI listeners with "HT" (f) is widely practiced. In this article, the relationship between consonant recognition and "HT" (f) is…
Receptive fields selection for binary feature description.
Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal
2014-06-01
Feature description for local image patches is widely used in computer vision. While the conventional way to design a local descriptor is based on expert experience and knowledge, learning-based methods for designing local descriptors have become more and more popular because of their good performance and data-driven property. This paper proposes a novel data-driven method for designing binary feature descriptors, which we call receptive fields descriptors (RFD). Technically, RFD is constructed by thresholding the responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling areas and Gaussian pooling areas) for selection, we obtain two binary descriptors, RFDR and RFDG, accordingly. Image matching experiments on the well-known patch data set and Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors and is comparable with the best float-valued descriptors at a fraction of the processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.
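The core of the method is thresholding pooled responses into bits and greedily keeping fields that are informative but not redundant. A simplified sketch of that selection step follows; the pooling itself, and the paper's exact distinctiveness and correlation criteria, are abstracted away:

```python
import numpy as np

def select_receptive_fields(responses, n_bits, corr_max=0.8):
    """Pick receptive fields whose thresholded responses are informative
    and mutually decorrelated.

    responses: (n_patches, n_candidates) array of pooled field responses.
    Returns the chosen field indices and the resulting bit matrix.
    """
    # Threshold each field's response at its mean to get one bit per patch.
    bits = (responses > responses.mean(axis=0)).astype(np.uint8)
    # A field is most informative when its bits are balanced between 0 and 1.
    balance = np.abs(bits.mean(axis=0) - 0.5)
    order = np.argsort(balance)               # most balanced first
    chosen = []
    for f in order:
        ok = all(abs(np.corrcoef(bits[:, f].astype(float),
                                 bits[:, g].astype(float))[0, 1]) < corr_max
                 for g in chosen)
        if ok:
            chosen.append(f)
        if len(chosen) == n_bits:
            break
    return chosen, bits[:, chosen]
```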
Ho, Cheng-Yu; Li, Pei-Chun; Chiang, Yuan-Chuan; Young, Shuenn-Tsong; Chu, Woei-Chyn
2015-01-01
Binaural hearing involves using information relating to the differences between the signals that arrive at the two ears, and it can make it easier to detect and recognize signals in a noisy environment. This phenomenon of binaural hearing is quantified in laboratory studies as the binaural masking-level difference (BMLD). Mandarin is one of the most commonly used languages, but there are no published values of the BMLD or of the binaural intelligibility-level difference (BILD) based on Mandarin tones. Therefore, this study investigated the BMLD and BILD of Mandarin tones. The BMLDs of Mandarin tone detection were measured based on the detection threshold differences for the four tones of the voiced vowels /i/ (i.e., /i1/, /i2/, /i3/, and /i4/) and /u/ (i.e., /u1/, /u2/, /u3/, and /u4/) in the presence of speech-spectrum noise when presented interaurally in phase (S0N0) and interaurally in antiphase (SπN0). The BILDs of Mandarin tone recognition in speech-spectrum noise were determined as the differences in the target-to-masker ratio (TMR) required for 50% correct tone recognition between the S0N0 and SπN0 conditions. The detection thresholds for the four tones of /i/ and /u/ differed significantly (p<0.001) between the S0N0 and SπN0 conditions. The average detection thresholds of Mandarin tones were all lower in the SπN0 condition than in the S0N0 condition, and the BMLDs ranged from 7.3 to 11.5 dB. The TMR for 50% correct Mandarin tone recognition differed significantly (p<0.001) between the S0N0 and SπN0 conditions, at -13.4 and -18.0 dB, respectively, with a mean BILD of 4.6 dB. The study showed that the thresholds of Mandarin tone detection and recognition in the presence of speech-spectrum noise are improved when phase inversion is applied to the target speech. The average BILDs of Mandarin tones are smaller than the average BMLDs of Mandarin tones. PMID:25835987
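Both headline quantities are plain threshold differences between the in-phase and antiphase configurations. A sketch using the values reported above:

```python
def bmld(threshold_s0n0_db, threshold_spin0_db):
    """Binaural masking-level difference: detection-threshold drop, S0N0 -> SpiN0."""
    return threshold_s0n0_db - threshold_spin0_db

def bild(tmr_s0n0_db, tmr_spin0_db):
    """Binaural intelligibility-level difference: same contrast on 50%-correct TMRs."""
    return tmr_s0n0_db - tmr_spin0_db

print(bild(-13.4, -18.0))   # 4.6 dB, the mean BILD reported above
```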
Mechanisms of Neuronal Computation in Mammalian Visual Cortex
Priebe, Nicholas J.; Ferster, David
2012-01-01
Orientation selectivity in the primary visual cortex (V1) is a receptive field property that is at once simple enough to make it amenable to experimental and theoretical approaches and yet complex enough to represent a significant transformation in the representation of the visual image. As a result, V1 has become an area of choice for studying cortical computation and its underlying mechanisms. Here we consider the receptive field properties of the simple cells in cat V1—the cells that receive direct input from thalamic relay cells—and explore how these properties, many of which are highly nonlinear, arise. We have found that many receptive field properties of V1 simple cells fall directly out of Hubel and Wiesel’s feedforward model when the model incorporates realistic neuronal and synaptic mechanisms, including threshold, synaptic depression, response variability, and the membrane time constant. PMID:22841306
Bhaumik, Basabi; Mathur, Mona
2003-01-01
We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated in a three-layer visual pathway model consisting of retina, LGN and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean value of hwhh (half width at half the height of maximum response) in simulated cortical cells is 58 degrees when we consider only the linear excitatory contribution from the LGN. We observe a mean improvement of 22.8 degrees in tuning response due to the non-linear spiking mechanisms, which include the effects of threshold voltage and the synaptic scaling factor.
Tzaneva, L
1996-09-01
The discomfort threshold problem is not yet clear from the audiological point of view, and its significance for work physiology and hygiene has not been sufficiently clarified. This paper discusses the results of a study of the discomfort threshold in 385 operators from the State Company "Kremikovtzi", divided into 4 groups (3 groups according to length of service and one control group). The most prominent changes were found in operators with a tonal auditory threshold elevated up to 45 dB and over 50 dB, with high confidence. The observed changes fall into 3 groups: 1. an increased tonal auditory threshold (up to 30 dB) without a decrease of the discomfort threshold; 2. a decreased discomfort threshold (by about 15-20 dB) with an increased tonal auditory threshold (up to 45 dB); 3. a decreased discomfort threshold with a markedly increased (over 50 dB) tonal auditory threshold. The auditory range of the operators belonging to groups III and IV (with the longest length of service) is narrowed, and for the latter distorted. This pathophysiological phenomenon can be explained by an enhanced effect of sound irritation and the presence of a recruitment phenomenon, with possible involvement of the central part of the auditory analyzer. It is concluded that the discomfort threshold is a sensitive indicator of the state of the individual norms for speech-sound-noise discomfort. The comparison of the discomfort threshold with the hygienic standards and the noise levels at each particular working place can be used as a criterion for professional selection for work in conditions of masking noise and its tolerance with respect to reaching the individual discomfort level, depending on the intensity of the speech-sound-noise signals at a particular working place.
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold for visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data were well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our results imply that detection is mediated by local RFs smaller than any tested stimuli, so detection performance is dominated by summation across receptive fields. Discrimination, on the other hand, is dominated by summation within a local RF in the fovea but by cross-RF summation in the periphery.
ERIC Educational Resources Information Center
Gou, J.; Smith, J.; Valero, J.; Rubio, I.
2011-01-01
This paper reports on a clinical trial evaluating outcomes of a frequency-lowering technique for adolescents and young adults with severe to profound hearing impairment. Outcomes were defined by changes in aided thresholds, speech perception, and acceptance. The participants comprised seven young people aged between 13 and 25 years. They were…
Vaerenberg, Bart; Péan, Vincent; Lesbros, Guillaume; De Ceulaer, Geert; Schauwers, Karen; Daemers, Kristin; Gnansia, Dan; Govaerts, Paul J
2013-06-01
To assess the auditory performance of Digisonic® cochlear implant users with electric stimulation (ES) and electro-acoustic stimulation (EAS), with special attention to the processing of low-frequency temporal fine structure. Six patients implanted with a Digisonic® SP implant and showing low-frequency residual hearing were fitted with the Zebra® speech processor providing both electric and acoustic stimulation. Assessment consisted of monosyllabic speech identification tests in quiet and in noise at different presentation levels, and a pitch discrimination task using harmonic and disharmonic intonating complex sounds (Vaerenberg et al., 2011). These tests investigate place and time coding through pitch discrimination. All tasks were performed with ES only and with EAS. Speech results in noise showed significant improvement with EAS when compared to ES. Whereas EAS did not yield better results in the harmonic intonation test, the improvements in the disharmonic intonation test were remarkable, suggesting better coding of pitch cues requiring phase locking. These results suggest that patients with residual hearing in the low-frequency range still have good phase-locking capacities, allowing them to process fine temporal information. ES relies mainly on place coding but provides poor low-frequency temporal coding, whereas EAS also provides temporal coding in the low-frequency range. Patients with residual phase-locking capacities can make use of these cues.
Effectiveness of the Directional Microphone in the Baha® Divino™
Oeding, Kristi; Valente, Michael; Kerckhoff, Jessica
2010-01-01
Background Patients with unilateral sensorineural hearing loss (USNHL) experience great difficulty listening to speech in noisy environments. A directional microphone (DM) could potentially improve speech recognition in this difficult listening environment. It is well known that DMs in behind-the-ear (BTE) and custom hearing aids can provide a greater signal-to-noise ratio (SNR) in comparison to an omnidirectional microphone (OM) to improve speech recognition in noise for persons with hearing impairment. Studies examining the DM in bone anchored auditory osseointegrated implants (Baha), however, have been mixed, with little to no benefit reported for the DM compared to an OM. Purpose The primary purpose of this study was to determine if there are statistically significant differences in the mean reception threshold for sentences (RTS in dB) in noise between the OM and DM in the Baha® Divino™. The RTS of these two microphone modes was measured utilizing two loudspeaker arrays (speech from 0° and noise from 180° or a diffuse eight-loudspeaker array) and with the better ear open or closed with an earmold impression and noise attenuating earmuff. Subjective benefit was assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) to compare unaided and aided (Divino OM and DM combined) problem scores. Research Design A repeated measures design was utilized, with each subject counterbalanced to each of the eight treatment levels for three independent variables: (1) microphone (OM and DM), (2) loudspeaker array (180° and diffuse), and (3) better ear (open and closed). Study Sample Sixteen subjects with USNHL currently utilizing the Baha were recruited from Washington University’s Center for Advanced Medicine and the surrounding area. Data Collection and Analysis Subjects were tested at the initial visit if they entered the study wearing the Divino or after at least four weeks of acclimatization to a loaner Divino. The RTS was determined utilizing Hearing in Noise Test (HINT) sentences in the R-Space™ system, and subjective benefit was determined utilizing the APHAB. A three-way repeated measures analysis of variance (ANOVA) and a paired samples t-test were utilized to analyze results of the HINT and APHAB, respectively. Results Results revealed statistically significant differences within microphone (p < 0.001; directional advantage of 3.2 dB), loudspeaker array (p = 0.046; 180° advantage of 1.1 dB), and better ear conditions (p < 0.001; open ear advantage of 4.9 dB). Results from the APHAB revealed statistically and clinically significant benefit for the Divino relative to unaided on the subscales of Ease of Communication (EC) (p = 0.037), Background Noise (BN) (p < 0.001), and Reverberation (RV) (p = 0.005). Conclusions The Divino’s DM provides a statistically significant improvement in speech recognition in noise compared to the OM for subjects with USNHL. Therefore, it is recommended that audiologists consider selecting a Baha with a DM to provide improved speech recognition performance in noisy listening environments. PMID:21034701
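The three-way repeated-measures ANOVA used for the HINT data (microphone x loudspeaker array x better-ear condition) can be reproduced in outline with statsmodels' AnovaRM; the long-format data frame below is illustrative, not the study's data:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Illustrative long-format data: one RTS per subject per cell of the
# 2 (mic) x 2 (array) x 2 (ear) within-subject design.
df = pd.DataFrame({
    "subject": ["s1"] * 8 + ["s2"] * 8,
    "mic": ["OM", "DM"] * 8,
    "array": (["180"] * 2 + ["diffuse"] * 2) * 4,
    "ear": (["open"] * 4 + ["closed"] * 4) * 2,
    "rts_db": [-2.1, -5.3, -1.0, -4.2, 2.4, -0.6, 3.1, 0.2,
               -1.8, -5.0, -0.7, -4.5, 2.9, -0.2, 3.4, 0.5],
})
print(AnovaRM(df, depvar="rts_db", subject="subject",
              within=["mic", "array", "ear"]).fit())
```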
Detection Performance of an Operator Using Lofar
1976-04-01
Blur on Perimetric Thresholds," Arch. of Ophthalmology, 68:2, pp. 240-51 (1962). 15. Ferree, C.E., Rand, G., Hardy, C., "Refraction for the...Static Perimetric Technique Believed to Test Receptive Field Properties in Clinical Trials," Am. Jour. Ophthalmology, 70:2, pp. 244-272 (Aug. 1970
Smith, Ashlyn L.; Hustad, Katherine C.
2015-01-01
The current study examined parent perceptions of communication, the focus of early intervention goals and strategies, and factors predicting the implementation of augmentative and alternative communication (AAC) for 26 two-year-old children with cerebral palsy. Parents completed a communication questionnaire and provided early intervention plans detailing child speech and language goals. Results indicated that receptive language had the strongest association with parent perceptions of communication. Children who were not talking received a greater number of intervention goals, had a greater variety of goals, and had more AAC goals than children who were emerging and established talkers. Finally, expressive language had the strongest influence on AAC decisions. Results are discussed in terms of the relationship between parent perceptions and language skills, communication as an emphasis in early intervention, AAC intervention decisions, and the importance of receptive language. PMID:26401966
Formant discrimination in noise for isolated vowels
NASA Astrophysics Data System (ADS)
Liu, Chang; Kewley-Port, Diane
2004-11-01
Formant discrimination for isolated vowels presented in noise was investigated for normal-hearing listeners. Discrimination thresholds for F1 and F2, for the seven American English vowels /i, ɪ, ɛ, æ, ʌ, ɑ, u/, were measured under two types of noise, long-term speech-shaped noise (LTSS) and multitalker babble, and also under quiet listening conditions. Signal-to-noise ratios (SNR) varied from -4 to +4 dB in steps of 2 dB. All three factors, formant frequency, signal-to-noise ratio, and noise type, had significant effects on vowel formant discrimination. Significant interactions among the three factors showed that threshold-frequency functions depended on SNR and noise type. The thresholds at the lowest levels of SNR were highly elevated by a factor of about 3 compared to those in quiet. The masking functions (threshold vs SNR) were well described by a negative exponential over F1 and F2 for both LTSS and babble noise. Speech-shaped noise was a slightly more effective masker than multitalker babble, presumably reflecting small benefits (1.5 dB) due to the temporal variation of the babble.
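The negative-exponential masking function mentioned above can be fit directly with nonlinear least squares; the thresholds and starting values below are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def masking_fn(snr_db, a, b, c):
    """Negative exponential: thresholds rise steeply as SNR falls."""
    return a * np.exp(-b * snr_db) + c

snr = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
thresholds_hz = np.array([60.0, 42.0, 31.0, 25.0, 22.0])   # illustrative

params, _ = curve_fit(masking_fn, snr, thresholds_hz, p0=(10.0, 0.4, 20.0))
print(params)
```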
Hornsby, Benjamin W. Y.; Johnson, Earl E.; Picou, Erin
2011-01-01
Objectives The purpose of this study was to examine the effects of degree and configuration of hearing loss on the use of, and benefit from, information in amplified high- and low-frequency speech presented in background noise. Design Sixty-two adults with a wide range of high- and low-frequency sensorineural hearing loss (5–115+ dB HL) participated. To examine the contribution of speech information in different frequency regions, speech understanding in noise was assessed in multiple low- and high-pass filter conditions, as well as a band-pass (713–3534 Hz) and wideband (143–8976 Hz) condition. To increase audibility over a wide frequency range, speech and noise were amplified based on each individual’s hearing loss. A stepwise multiple linear regression approach was used to examine the contribution of several factors to 1) absolute performance in each filter condition and 2) the change in performance with the addition of amplified high- and low-frequency speech components. Results Results from the regression analysis showed that degree of hearing loss was the strongest predictor of absolute performance for low- and high-pass filtered speech materials. In addition, configuration of hearing loss affected both absolute performance for severely low-pass filtered speech and benefit from extending high-frequency (3534–8976 Hz) bandwidth. Specifically, individuals with steeply sloping high-frequency losses made better use of low-pass filtered speech information than individuals with similar low-frequency thresholds but less high-frequency loss. In contrast, given similar high-frequency thresholds, individuals with flat hearing losses received more benefit from extending high-frequency bandwidth than individuals with more sloping losses. Conclusions Consistent with previous work, benefit from speech information in a given frequency region generally decreases as degree of hearing loss in that frequency region increases. However, given a similar degree of loss, the configuration of hearing loss also affects the ability to use speech information in different frequency regions. Except for individuals with steeply sloping high-frequency losses, providing high-frequency amplification (3534–8976 Hz) had either a beneficial effect on, or did not significantly degrade, speech understanding. These findings highlight the importance of extended high-frequency amplification for listeners with a wide range of high-frequency hearing losses, when seeking to maximize intelligibility. PMID:21336138
Spatial Release from Masking in Children: Effects of Simulated Unilateral Hearing Loss
Corbin, Nicole E.; Buss, Emily; Leibold, Lori J.
2016-01-01
Objectives The purpose of this study was twofold: 1) to determine the effect of an acute simulated unilateral hearing loss on children’s spatial release from masking in two-talker speech and speech-shaped noise, and 2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. Design Participants were 12 children (8.7 to 10.9 yrs) and 11 adults (18.5 to 30.4 yrs) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or −90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or − 90°. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. Results All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the simulated-unilateral-hearing-loss conditions, a positive spatial release from masking was observed only when the masker was presented ipsilateral to the simulated unilateral hearing loss. In the speech-shaped noise masker, spatial release from masking in the no-plug condition was similar to that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. In contrast, in the two-talker speech masker, spatial release from masking in the no-plug condition was much larger than that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. When either masker was presented contralateral to the simulated unilateral hearing loss, spatial release from masking was negative. This pattern of results was observed for both children and adults, although children performed more poorly overall. Conclusions Children and adults with normal bilateral hearing experience greater spatial release from masking for a two-talker speech than a speech-shaped noise masker. Testing in a two-talker speech masker revealed listening difficulties in the presence of disrupted binaural input that were not observed in a speech-shaped noise masker. This procedure offers promise for the assessment of spatial release from masking in children with permanent unilateral hearing loss. PMID:27787392
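Spatial release from masking is quantified here exactly as the text states: the co-located threshold minus the spatially separated threshold. A trivial helper makes the sign convention explicit:

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Positive SRM means moving the masker to +/-90 degrees helped."""
    return srt_colocated_db - srt_separated_db

print(spatial_release_from_masking(-2.0, -8.5))   # 6.5 dB of release (illustrative)
```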
Index of FAA Office of Aviation Medicine Reports: 1961 through 1989
1990-01-01
ballistocardiographic research and the current state of the art. AD455651 64-13 Gogel, W. C.: The size cue to visually perceived distance. AD456655 64-14 Capps... aeromedical aspects of marihuana. AD775889 73-13 Tobias, J. V., and Irons, F. M.: Reception of distorted speech. AD777564 73-14 Thackray, R. I., Jones... Mertens, H. W., and Steen, J. A.: Interaction between marihuana and altitude on a complex behavioral task in baboons. ADA020680/5GI 75-7 Melton, C. E
Communication in a noisy environment: Perception of one's own voice and speech enhancement
NASA Astrophysics Data System (ADS)
Le Cocq, Cecile
Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. In consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. In fact this communication problem is a double one: first, the hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of one's own voice perception is partly due to the occlusion effect which is produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, the physiological noises in low frequencies are better perceived; second, the perception of one's own voice is modified. In order to reach a better understanding of this phenomenon, the literature results are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave which excites the buccal cavity does not directly excite the external ear or the rest of the body. The measurement of the hearing threshold with the ear open and occluded was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the inner ear. Speech intelligibility from others is degraded both by the high sound levels of noisy industrial environments and by the speech signal attenuation due to hearing protectors. A possible solution to this problem is to denoise the speech signal and transmit it under the hearing protector. Many denoising techniques are available and are often used for denoising speech in telecommunications. In the framework of this thesis, denoising by wavelet thresholding is considered. A first study of "classical" wavelet denoising techniques was conducted in order to evaluate their performance in noisy industrial environments. The tested speech signals were degraded by industrial noises over a wide range of signal-to-noise ratios, and the denoised speech signals were evaluated with four criteria. A large database was obtained and analyzed with a selection algorithm designed for this purpose. This first study identified the influence of the different parameters of the wavelet denoising method on its quality and identified the "classical" method which gave the best performance in terms of denoising quality. It also generated ideas for designing a new thresholding rule suitable for speech wavelet denoising in a noisy industrial environment. In a second study, this new thresholding rule is presented and evaluated. Its performance is better than that of the best "classical" method found in the first study when the signal-to-noise ratio of the speech signal is between -10 dB and 15 dB.
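As a concrete reference point for the "classical" methods surveyed in the first study, here is a minimal soft-thresholding wavelet denoiser (universal threshold, MAD noise estimate) using PyWavelets; the wavelet and decomposition depth are generic choices, not the thesis's new rule:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db8", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate (MAD) from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```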
McIntyre, Laureen J; Hellsten, Laurie-Ann M; Bidonde, Julia; Boden, Catherine; Doi, Carolyn
2017-04-04
The majority of a child's language development occurs in the first 5 years of life when brain development is most rapid. There are significant long-term benefits to supporting all children's language and literacy development such as maximizing their developmental potential (i.e., cognitive, linguistic, social-emotional), when children are experiencing a critical period of development (i.e., early childhood to 9 years of age). A variety of people play a significant role in supporting children's language development, including parents, guardians, family members, educators, and/or speech-language pathologists. Speech-language pathologists and educators are the professionals who predominantly support children's language development in order for them to become effective communicators and lay the foundation for later developing literacy skills (i.e., reading and writing skills). Therefore, these professionals need formal and informal assessments that provide them information on a child's understanding and/or use of the increasingly complex aspects of language in order to identify and support the receptive and expressive language learning needs of diverse children during their early learning experiences (i.e., aged 1.5 to 9 years). However, evidence on what methods and tools are being used is lacking. The authors will carry out a scoping review of the literature to identify studies and map the receptive and expressive English language assessment methods and tools that have been published and used since 1980. Arksey and O'Malley's (2005) six-stage approach to conducting a scoping review was drawn upon to design the protocol for this investigation: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; (5) collating, summarizing, and reporting the results; and (6) consultation. This information will help these professionals identify and select appropriate assessment methods or tools that can be used to support development and/or identify areas of delay or difficulty and plan, implement, and monitor the progress of interventions supporting the development of receptive and expressive language skills in individuals with diverse language needs (e.g., typically developing children, children with language delays and disorders, children learning English as a second or additional language, Indigenous children who may be speaking dialects of English). Researchers plan to evaluate the effectiveness of the assessment methods or tools identified in the scoping review as an extension of this study.
Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva
2017-03-01
Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients underwent baseline audiological assessments and AP tests and completed the (modified) Amsterdam Inventory for Auditory Disability and the Hearing Handicap Inventory for the Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. The SRM significantly improved when babble was spatially separated from target speech with the FM systems in the patients' ears, compared to without the FM systems. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation: This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems when noise was spatially separated from the speech signal by 90°, compared with unaided listening. Personal FM systems are feasible in stroke patients, and may be of benefit in just under 20% of this population, who are not eligible for conventional hearing aids.
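The SNR for 50% correct recognition is conventionally read off a psychometric function fitted to per-SNR scores. A sketch with a logistic function and illustrative data:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr_db, srt_db, slope):
    """Logistic function passing through 50% correct at snr_db == srt_db."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_db)))

snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])
prop_correct = np.array([0.05, 0.20, 0.55, 0.85, 0.95])   # illustrative scores

(srt, slope), _ = curve_fit(psychometric, snr, prop_correct, p0=(-6.0, 1.0))
print(f"SRT ~ {srt:.1f} dB SNR")
```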
Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin
2012-01-01
Objective The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM, and to determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design Data were collected from 18 normal-hearing young adults and 18 participants who were at least 60 years old, nine normal-hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF) both in quiet and with a speech-babble masker present, in terms of stimulus duration, and in terms of signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results Results showed a significant effect of age, but not of hearing loss among the older listeners, for the FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measure based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for the effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions Results indicate that aging is associated with a reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may in part relate to a diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory processing abilities. PMID:22790319
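For readers unfamiliar with stochastic FM stimuli, the construction described (5-Hz lowpass noise modulating a 1-kHz carrier) can be sketched in a few lines. The sample rate, duration, and excursion below are illustrative assumptions:

```python
# Minimal sketch of a stochastic low-rate FM stimulus: a 1-kHz carrier
# frequency-modulated by 5-Hz lowpass noise. Parameter values are assumed.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000          # sample rate (Hz), assumed
dur = 1.0           # stimulus duration (s), assumed
fc = 1000.0         # carrier frequency (Hz)
delta_f = 40.0      # peak frequency excursion (Hz), assumed

t = np.arange(int(fs * dur)) / fs
noise = np.random.randn(t.size)
b, a = butter(4, 5.0 / (fs / 2))      # 5-Hz lowpass modulator
mod = lfilter(b, a, noise)
mod /= np.max(np.abs(mod))            # normalize modulator to +/-1

# Integrate the instantaneous frequency to get phase, then synthesize.
inst_freq = fc + delta_f * mod
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
stimulus = np.sin(phase)
```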
Neuroanatomical and resting state EEG power correlates of central hearing loss in older adults.
Giroud, Nathalie; Hirsiger, Sarah; Muri, Raphaela; Kegel, Andrea; Dillier, Norbert; Meyer, Martin
2018-01-01
To gain more insight into central hearing loss, we investigated the relationship between cortical thickness and surface area, speech-relevant resting-state EEG power, and suprathreshold auditory measures in older adults and younger controls. Twenty-three older adults and 13 younger controls were tested with an adaptive auditory test battery measuring not only traditional pure-tone thresholds but also suprathreshold temporal and spectral processing. The participants' speech recognition in noise (SiN) was evaluated, and a T1-weighted MRI image was obtained for each participant. We then determined the cortical thickness (CT) and mean cortical surface area (CSA) of auditory and higher speech-relevant regions of interest (ROIs) with FreeSurfer. Further, we obtained resting-state EEG from all participants, as well as data on intrinsic theta and gamma power lateralization, the latter in accordance with predictions of the Asymmetric Sampling in Time hypothesis regarding speech processing (Poeppel, Speech Commun 41:245-255, 2003). Methodological steps involved the calculation of age-related differences in behavior, anatomy, and EEG power lateralization, followed by multiple regressions with anatomical ROIs as predictors of auditory performance. We then determined anatomical regressors for theta and gamma lateralization, and further constructed all regressions to investigate age as a moderator variable. Behavioral results indicated that older adults performed worse in temporal and spectral auditory tasks and in SiN, despite having normal peripheral hearing as indicated by the audiogram. These behavioral age-related distinctions were accompanied by lower CT in all ROIs, while CSA did not differ between the two age groups. Age moderated the regressions specifically in right auditory areas, where a thicker cortex was associated with better auditory performance in older adults. Moreover, a thicker right supratemporal sulcus predicted more rightward theta lateralization, indicating the functional relevance of the right auditory areas in older adults. The question of how age-related cortical thinning and intrinsic EEG architecture relate to central hearing loss has so far not been addressed. Here, we provide the first neuroanatomical and neurofunctional evidence that cortical thinning and lateralization of speech-relevant frequency-band power relate to the extent of age-related central hearing loss in older adults. The results are discussed within current frameworks of speech processing and aging.
Impaired auditory temporal selectivity in the inferior colliculus of aged Mongolian gerbils.
Khouri, Leila; Lesica, Nicholas A; Grothe, Benedikt
2011-07-06
Aged humans show severe difficulties in temporal auditory processing tasks (e.g., speech recognition in noise, low-frequency sound localization, gap detection). A degradation of auditory function with age is also evident in experimental animals. To investigate age-related changes in temporal processing, we compared extracellular responses to temporally variable pulse trains and human speech in the inferior colliculus of young adult (3 month) and aged (3 years) Mongolian gerbils. We observed a significant decrease of selectivity to the pulse trains in neuronal responses from aged animals. This decrease in selectivity led, on the population level, to an increase in signal correlations and therefore a decrease in heterogeneity of temporal receptive fields and a decreased efficiency in encoding of speech signals. A decrease in selectivity to temporal modulations is consistent with a downregulation of the inhibitory transmitter system in aged animals. These alterations in temporal processing could underlie declines in the aging auditory system, which are unrelated to peripheral hearing loss. These declines cannot be compensated by traditional hearing aids (that rely on amplification of sound) but may rather require pharmacological treatment.
Scherer, Nancy J; Baker, Shauna; Kaiser, Ann; Frey, Jennifer R
2018-01-01
Objective This study compares the early speech and language development of children with cleft palate with or without cleft lip who were adopted internationally with children born in the United States. Design Prospective longitudinal description of early speech and language development between 18 and 36 months of age. Participants This study compares four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were adopted internationally with four children (age range = 19 to 38 months) with cleft palate with or without cleft lip who were born in the United States, matched for age, gender, and cleft type across three time points over 10 to 12 months. Main Outcome Measures Children's speech-language skills were analyzed using standardized tests, parent surveys, language samples, and single-word phonological assessments to determine differences between the groups. Results The mean scores for the children in the internationally adopted group were lower than the group born in the United States at all three time points for expressive language and speech sound production measures. Examination of matched pairs demonstrated observable differences for two of the four pairs. No differences were observed in cognitive performance and receptive language measures. Conclusions The results suggest a cumulative effect of later palate repair and/or a variety of health and environmental factors associated with their early circumstances that persist to age 3 years. Early intervention to address the trajectory of speech and language is warranted. Given the findings from this small pilot study, a larger study of the long-term speech and language development of children who are internationally adopted and have cleft palate with or without cleft lip is recommended.
Role of IGF-1 in cortical plasticity and functional deficit induced by sensorimotor restriction.
Mysoet, Julien; Dupont, Erwan; Bastide, Bruno; Canu, Marie-Hélène
2015-09-01
In the adult rat, sensorimotor restriction by hindlimb unloading (HU) is known to induce impairments in motor behavior as well as a disorganization of the somatosensory cortex (shrinkage of the cortical representation of the hindpaw, enlargement of the cutaneous receptive fields, decreased cutaneous sensitivity threshold). Recently, our team demonstrated that the IGF-1 level was decreased in the somatosensory cortex of rats submitted to a 14-day period of HU. To determine whether IGF-1 is involved in these plastic mechanisms, this substance was chronically infused into the cortex by means of an osmotic minipump. When administered to control rats, IGF-1 affects the size of receptive fields and the cutaneous threshold but has no effect on the somatotopic map. In addition, when injected during the whole HU period, IGF-1 is implicated in the cortical changes due to hypoactivity: the shrinkage of the somatotopic representation of the hindlimb is prevented, and the enlargement of receptive fields is reduced. IGF-1 has no effect on the increase in neuronal response to peripheral stimulation. We also explored the functional consequences of IGF-1 level restoration on tactile sensory discrimination. In HU rats, the percentage of paw withdrawals after a light tactile stimulation was decreased, whereas it was similar to control levels in HU-IGF-1 rats. Taken together, the data clearly indicate that IGF-1 plays a key role in the cortical plastic mechanisms and behavioral alterations induced by a decrease in sensorimotor activity. Copyright © 2015 Elsevier B.V. All rights reserved.
Early hearing loss and language abilities in children with Down syndrome.
Laws, Glynis; Hall, Amanda
2014-01-01
Although many children with Down syndrome experience hearing loss, there has been little research to investigate its impact on speech and language development. Studies that have investigated the association give inconsistent results. These have often been based on samples where children with the most severe hearing impairments have been excluded and so results do not generalize to the wider population with Down syndrome. Also, measuring children's hearing at the time of a language assessment does not take into account the fluctuating nature of hearing loss in children with Down syndrome or possible effects of losses in their early years. To investigate the impact of early hearing loss on language outcomes for children with Down syndrome. Retrospective audiology clinic records and parent report for 41 children were used to categorize them as either having had hearing difficulties from 2 to 4 years or more normal hearing. Differences between the groups on measures of language expression and comprehension, receptive vocabulary, a narrative task and speech accuracy were investigated. After accounting for the contributions of chronological age and nonverbal mental age to children's scores, there were significant differences between the groups on all measures. Early hearing loss has a significant impact on the speech and language development of children with Down syndrome. Results suggest that speech and language therapy should be provided when children are found to have ongoing hearing difficulties and that joint audiology and speech and language therapy clinics could be considered for preschool children. © 2014 Royal College of Speech and Language Therapists.
Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.
2013-01-01
The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures of language development: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240
Landwehr, Markus; Fürstenberg, Dirk; Walger, Martin; von Wedel, Hasso; Meister, Hartmut
2014-01-01
Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users, electrodes were selectively deactivated in order to simulate different insertion depths and inter-electrode distances when using the high-definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined, and music quality rating was assessed. For intonation identification, HDCIS was robust against the different electrode configurations, whereas fine structure processing showed significantly worse results when a short insertion depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though they all addressed the reception of F0 cues. Rapid changes in F0, as seen with intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies did not show large effects when F0 information was available over a longer time period, as seen with speaker gender. Music quality relies on additional spectral cues beyond F0 and was poorest when a shallow insertion was simulated.
Masking release due to linguistic and phonetic dissimilarity between the target and masker speech
Calandruccio, Lauren; Brouwer, Susanne; Van Engen, Kristin J.; Dhar, Sumitrajit; Bradlow, Ann R.
2013-01-01
Purpose To investigate masking release for speech maskers for linguistically and phonetically close (English and Dutch) and distant (English and Mandarin) language pairs. Method Twenty monolingual speakers of English with normal-audiometric thresholds participated. Data are reported for an English sentence recognition task in English, Dutch and Mandarin competing speech maskers (Experiment I) and noise maskers (Experiment II) that were matched either to the long-term-average-speech spectra or to the temporal modulations of the speech maskers from Experiment I. Results Results indicated that listener performance increased as the target-to-masker linguistic distance increased (English-in-English < English-in-Dutch < English-in-Mandarin). Conclusions Spectral differences between maskers can account for some, but not all, of the variation in performance between maskers; however, temporal differences did not seem to play a significant role. PMID:23800811
Detection thresholds for gaps, overlaps, and no-gap-no-overlaps.
Heldner, Mattias
2011-07-01
Detection thresholds for gaps and overlaps (that is, acoustic and perceived silences and stretches of overlapping speech at speaker changes) were determined. Subliminal gaps and overlaps were categorized as no-gap-no-overlaps. The established gap and overlap detection thresholds both corresponded to the duration of a long vowel, about 120 ms. These detection thresholds are valuable for mapping the perceptual speaker-change categories of gaps, overlaps, and no-gap-no-overlaps into the acoustic domain. Furthermore, the detection thresholds allow the generation and understanding of gaps, overlaps, and no-gap-no-overlaps in human-like spoken dialogue systems. © 2011 Acoustical Society of America
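Applied to dialogue-system timing, the reported ~120 ms thresholds suggest a simple three-way classifier of speaker-change offsets. A minimal sketch (the threshold constants follow the abstract; the function name and sign convention are hypothetical):

```python
# Minimal sketch, assuming the ~120 ms detection thresholds reported above:
# classify a speaker-change offset (positive = gap duration in seconds,
# negative = overlap duration) into the three perceptual categories.

GAP_THRESHOLD = 0.120      # s; shorter gaps are not perceived as gaps
OVERLAP_THRESHOLD = 0.120  # s; shorter overlaps are not perceived as overlaps

def classify_speaker_change(offset_s: float) -> str:
    if offset_s >= GAP_THRESHOLD:
        return "gap"
    if offset_s <= -OVERLAP_THRESHOLD:
        return "overlap"
    return "no-gap-no-overlap"

print(classify_speaker_change(0.200))   # gap
print(classify_speaker_change(-0.050))  # no-gap-no-overlap
```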
Palmer, Shannon B; Musiek, Frank E
2014-01-01
Temporal processing ability has been linked to speech understanding ability and older adults often complain of difficulty understanding speech in difficult listening situations. Temporal processing can be evaluated using gap detection procedures. There is some research showing that gap detection can be evaluated using an electrophysiological procedure. However, there is currently no research establishing gap detection threshold using the N1-P2 response. The purposes of the current study were to 1) determine gap detection thresholds in younger and older normal-hearing adults using an electrophysiological measure, 2) compare the electrophysiological gap detection threshold and behavioral gap detection threshold within each group, and 3) investigate the effect of age on each gap detection measure. This study utilized an older adult group and younger adult group to compare performance on an electrophysiological and behavioral gap detection procedure. The subjects in this study were 11 younger, normal-hearing adults (mean = 22 yrs) and 11 older, normal-hearing adults (mean = 64.36 yrs). All subjects completed an adaptive behavioral gap detection procedure in order to determine their behavioral gap detection threshold (BGDT). Subjects also completed an electrophysiologic gap detection procedure to determine their electrophysiologic gap detection threshold (EGDT). Older adults demonstrated significantly larger gap detection thresholds than the younger adults. However, EGDT and BGDT were not significantly different in either group. The mean difference between EGDT and BGDT for all subjects was 0.43 msec. Older adults show poorer gap detection ability when compared to younger adults. However, this study shows that gap detection thresholds can be measured using evoked potential recordings and yield results similar to a behavioral measure. American Academy of Audiology.
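A behavioral gap detection threshold of the kind described is typically obtained with a transformed up-down staircase. A minimal sketch of a 2-down/1-up procedure converging on ~70.7% correct, with assumed step sizes and a simulated listener standing in for a subject:

```python
# Minimal sketch (parameters are assumptions, not the study's) of an
# adaptive 2-down/1-up staircase for a behavioral gap detection threshold.
import random

def simulated_listener(gap_ms, true_threshold_ms=6.0):
    """Stand-in for a subject: detects the gap more often as it lengthens."""
    p_correct = 1 / (1 + 10 ** ((true_threshold_ms - gap_ms) / 2.0))
    return random.random() < p_correct

gap_ms, step_ms = 20.0, 2.0
correct_in_a_row, reversals, last_direction = 0, [], None

while len(reversals) < 8:
    if simulated_listener(gap_ms):
        correct_in_a_row += 1
        if correct_in_a_row == 2:          # two correct -> make it harder
            correct_in_a_row = 0
            if last_direction == "up":
                reversals.append(gap_ms)
            gap_ms = max(gap_ms - step_ms, 0.5)
            last_direction = "down"
    else:                                   # one wrong -> make it easier
        correct_in_a_row = 0
        if last_direction == "down":
            reversals.append(gap_ms)
        gap_ms += step_ms
        last_direction = "up"

# Threshold estimate: mean of the last six reversal points.
print(f"Estimated threshold: {sum(reversals[-6:]) / 6:.1f} ms")
```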
Lovett, Rosemary; Summerfield, Quentin; Vickers, Deborah
2013-06-01
The Toy Discrimination Test measures children's ability to discriminate spoken words. Previous assessments of reliability tested children with normal hearing or mild hearing impairment, and most studies used a version of the test without a masking sound. We assessed test-retest reliability for children with hearing impairment using maskers of broadband noise and two-talker babble. Stimuli were presented from a loudspeaker. The signal-to-noise ratio (SNR) was varied adaptively to estimate the speech-reception threshold (SRT) corresponding to 70.7% correct performance. Participants completed each masked condition twice. Fifty-five children with permanent hearing impairment participated, aged 3.0 to 6.3 years. Thirty-four children used acoustic hearing aids; 21 children used cochlear implants. For the noise masker, the within-subject standard deviation of SRTs was 2.4 dB, and the correlation between first and second SRT was + 0.73. For the babble masker, corresponding values were 2.7 dB and + 0.60. Reliability was similar for children with hearing aids and children with cochlear implants. The results can inform the interpretation of scores from individual children. If a child completes a condition twice in different listening situations (e.g. aided and unaided), a difference between scores ≥ 7.5 dB would be statistically significant (p <.05).
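The ≥ 7.5 dB criterion follows from the reported test-retest standard deviations: the 95% critical difference between two measurements is 1.96 × √2 × σ. A minimal check in Python:

```python
# Minimal sketch: the 95% critical difference between two measurements,
# 1.96 * sqrt(2) * sigma, applied to the within-subject SDs reported above.
import math

def critical_difference(within_subject_sd_db, z=1.96):
    return z * math.sqrt(2) * within_subject_sd_db

print(f"{critical_difference(2.4):.1f} dB")  # noise masker:  ~6.7 dB
print(f"{critical_difference(2.7):.1f} dB")  # babble masker: ~7.5 dB
```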
Cat colour vision: one cone process or several?
Daw, N. W.; Pearlman, A. L.
1969-01-01
1. Peripheral mechanisms that might contribute to colour vision in the cat have been investigated by recording from single units in the lateral geniculate and optic tract. Evidence is presented that the input to these cells comes from a single class of cones with a single spectral sensitivity. 2. In cats with pupils dilated a background level of 10-30 cd/m2 was sufficient to saturate the rod system for all units. When the rods were saturated, the spectral sensitivity of all units peaked at 556 nm; this was true both for centre and periphery of the receptive field. The spectral sensitivity curve was slightly narrower than the Dartnall nomogram. It could not be shifted by chromatic adaptation with red, green, blue or yellow backgrounds. 3. In the mesopic range (0·1-10 cd/m2), the threshold could be predicted in terms of two mechanisms, a cone mechanism with spectral sensitivity peaking at 556 nm, and a rod mechanism with spectral sensitivity at 500 nm. The mechanisms were separated and their increment threshold curves measured by testing with one colour against a background of another colour. All units had input from both rods and cones. The changeover from rods to cones occurred at the same level of adaptation in both centre and periphery of the receptive field. Threshold for the cones was between 0·04 and 0·25 cd/m2 with the pupil dilated, for a spot covering the centre of the receptive field. 4. None of the results was found to vary between lateral geniculate and optic tract, with layer in the lateral geniculate, or with distance from area centralis in the visual field. 5. The lack of evidence for more than one cone type suggests that colour discrimination in the cat may be a phenomenon of mesopic vision, based on differences in spectral sensitivity of the rods and a single class of cones. PMID:5767891
Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.
Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga
2015-11-01
Positive signal-to-noise ratios (SNRs) characterize the listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. To this end, a speech-in-noise test was developed and evaluated in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing listeners, as well as 24 older hearing-impaired listeners, took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners completed one of two adaptive methods, which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and threshold estimation strategy resulted in a practical method for measuring the time compression yielding 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum-likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability than younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.
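The maximum-likelihood threshold estimation mentioned can be sketched as fitting a logistic psychometric function to per-trial time-compression factors and scoring outcomes, then reading off the 50% point. The trial data, slope, and search grid below are illustrative assumptions, not the study's procedure details:

```python
# Minimal sketch: maximum-likelihood estimate of the time-compression factor
# yielding 50% recognition, from (assumed) per-trial data.
import numpy as np

def negative_log_likelihood(threshold, slope, x, correct):
    # Logistic psychometric function; slope is negative because higher
    # compression makes recognition harder.
    p = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

# Hypothetical trials: time-compression factors and correct/incorrect scores.
x = np.array([2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4])
correct = np.array([1, 1, 1, 1, 0, 1, 0, 0])

grid = np.linspace(1.5, 4.0, 501)
nll = [negative_log_likelihood(t, slope=-4.0, x=x, correct=correct) for t in grid]
print(f"ML threshold estimate: {grid[int(np.argmin(nll))]:.2f}x compression")
```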
Fostick, Leah; Babkoff, Harvey; Zukerman, Gil
2014-06-01
To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Fifty-five sleep-deprived young adults were compared with 29 aging adults (older than 60 years) and with 18 young controls on auditory temporal order judgment (TOJ) and on speech perception tasks (Experiment 1). The sleep deprived were also compared with 51 dyslexic readers and with the young controls on TOJ and phonological awareness tasks (One-Minute Test for Pseudowords, Phoneme Deletion, Pig Latin, and Spoonerism; Experiment 2). Sleep deprivation resulted in longer TOJ thresholds, poorer speech perception, and poorer nonword reading compared with controls. The TOJ thresholds of the sleep deprived were comparable to those of the aging adults, but their pattern of speech performance differed. They also performed better on TOJ and phonological awareness than dyslexic readers. A variety of linguistic skills are affected by sleep deprivation. The comparison of sleep-deprived individuals with other groups with known difficulties in these linguistic skills might suggest that different groups exhibit common difficulties.
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
2016-01-01
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves, fitted to the data with the Attenuation (A) and Distortion (D) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. PMID:27604782
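A simplified, hedged reading of the Plomp A+D description: attenuation A raises the speech reception threshold in quiet, while distortion D raises both the quiet threshold and the SNR required in noise, producing the characteristic two-branch SRT-versus-noise-level curve. A sketch under these assumptions (all level constants are illustrative, not Plomp's published values):

```python
# Minimal sketch of a simplified Plomp-style attenuation-plus-distortion
# (A+D) SRT curve. Normal-hearing reference constants are assumed.
import numpy as np

SRT_QUIET_NH = 20.0   # dB SPL, normal-hearing SRT in quiet (assumed)
SNR_CRIT_NH = -6.0    # dB, normal-hearing SRT SNR in noise (assumed)

def srt(noise_level_db, A=0.0, D=0.0):
    """SRT in dB SPL versus noise level: A shifts the quiet branch,
    D shifts both the quiet branch and the required SNR in noise."""
    quiet_branch = SRT_QUIET_NH + A + D
    noise_branch = noise_level_db + SNR_CRIT_NH + D
    return np.maximum(quiet_branch, noise_branch)

levels = np.arange(0, 90, 10)
print(srt(levels))                  # normal-hearing reference curve
print(srt(levels, A=30.0, D=4.0))   # hearing-impaired example
```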
Evaluation of the 'Fitting to Outcomes eXpert' (FOX®) with established cochlear implant users.
Buechner, Andreas; Vaerenberg, Bart; Gazibegovic, Dzemal; Brendel, Martina; De Ceulaer, Geert; Govaerts, Paul; Lenarz, Thomas
2015-01-01
To evaluate the possible impact of 'Fitting to Outcomes eXpert (FOX(®))' on cochlear implant (CI) fitting in a clinic with extensive experience of fitting a range of CI systems, as a way to assess whether a software tool such as FOX is able to complement standard clinical procedures. Ten adult post-lingually deafened and unilateral long-term users of the Advanced Bionics(TM) CI system (Clarion CII or HiRes 90K(TM)) underwent speech perception assessment with their current clinical program. One cycle ('iteration') of FOX optimization was performed and the program adjusted accordingly. After a month of using both the clinical and FOX programs, a second iteration of FOX optimization was performed. Following this, the assessments were repeated without further acclimatization. FOX prescribed programming modifications in all subjects. Soundfield-aided thresholds were significantly lower for the FOX program than for the clinical program. Group speech scores in noise were not significantly different between the two programs, but three individual subjects had improved speech scores with the FOX MAP, two had worse speech scores, and five were unchanged. FOX provided a standardized approach to fitting based on outcome measures rather than comfort alone. The results indicated that, for this group of well-fitted patients, FOX improved outcomes in some individuals. There were significant changes, both better and worse, in individual speech perception scores, but median scores remained unchanged. Soundfield-aided thresholds were significantly improved with the FOX program.
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Phonological encoding in speech-sound disorder: evidence from a cross-modal priming experiment.
Munson, Benjamin; Krause, Miriam O P
2017-05-01
Psycholinguistic models of language production provide a framework for determining the locus of the language breakdown that leads to speech-sound disorder (SSD) in children. To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability to phonologically encode lexical items that have been accessed from memory. Thirty-six children (18 with TD, 18 with SSD) viewed pictures while listening to interfering words (IWs) or a non-linguistic auditory stimulus presented over headphones either 150 ms before, concurrent with, or 150 ms after picture presentation. The phonological similarity of the IW and the pictures' names varied. Picture-naming latency, accuracy, and duration were tallied. All children named pictures more quickly in the presence of an IW identical to the picture's name than in the other conditions. At the +150 ms stimulus onset asynchrony, pictures were named more quickly when the IW shared phonemes with the picture's name than when it was phonologically unrelated to the picture's name. The size of this effect was similar for children with SSD and children with TD. Variation across children in the magnitude of inhibition and facilitation on cross-modal priming tasks was more strongly affected by the size of the expressive and receptive lexicons than by speech-production accuracy. Results suggest that SSD is not associated with reduced phonological encoding ability, at least as reflected by cross-modal naming tasks. © 2016 Royal College of Speech and Language Therapists.
Lopez Valdes, Alejandro; Mc Laughlin, Myles; Viani, Laura; Walshe, Peter; Smith, Jaclyn; Zeng, Fan-Gang; Reilly, Richard B.
2014-01-01
Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listener's speech perception, resulting in large performance differences. Non-speech based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and active response that can be difficult or even impossible in special patient populations. Here, a completely objective cortical evoked potential based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential response to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p<0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users. PMID:24599314
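The neural threshold logic described, sweeping ripple density upward until the evoked responses to standard and inverted stimuli no longer differ significantly, can be illustrated with synthetic data (effect sizes, trial counts, and the significance test below are assumptions, not the study's analysis):

```python
# Minimal sketch (synthetic data) of a neural spectral ripple discrimination
# threshold: increase ripples per octave until standard vs. inverted evoked
# responses stop differing significantly.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
ripple_densities = [0.5, 1, 2, 4, 8]    # ripples per octave

neural_threshold = None
for density in ripple_densities:
    effect = max(0.0, 1.5 - 0.4 * density)   # synthetic: shrinks with density
    standard = rng.normal(0.0, 1.0, 40)       # trial-wise response amplitudes
    inverted = rng.normal(effect, 1.0, 40)
    _, p = ttest_ind(standard, inverted)
    if p >= 0.05:                             # no longer discriminable
        neural_threshold = density
        break

print(f"Neural discrimination threshold: {neural_threshold} ripples/octave")
```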
Johannesen, Peter T.; Pérez-González, Patricia; Kalluri, Sridhar; Blanco, José L.
2016-01-01
The aim of this study was to assess the relative importance of cochlear mechanical dysfunction, temporal processing deficits, and age on the ability of hearing-impaired listeners to understand speech in noisy backgrounds. Sixty-eight listeners took part in the study. They were provided with linear, frequency-specific amplification to compensate for their audiometric losses, and intelligibility was assessed for speech-shaped noise (SSN) and a time-reversed two-talker masker (R2TM). Behavioral estimates of cochlear gain loss and residual compression were available from a previous study and were used as indicators of cochlear mechanical dysfunction. Temporal processing abilities were assessed using frequency modulation detection thresholds. Age, audiometric thresholds, and the difference between audiometric threshold and cochlear gain loss were also included in the analyses. Stepwise multiple linear regression models were used to assess the relative importance of the various factors for intelligibility. Results showed that (a) cochlear gain loss was unrelated to intelligibility, (b) residual cochlear compression was related to intelligibility in SSN but not in a R2TM, (c) temporal processing was strongly related to intelligibility in a R2TM and much less so in SSN, and (d) age per se impaired intelligibility. In summary, all factors affected intelligibility, but their relative importance varied across maskers. PMID:27604779
A cocktail-party listening experiment with children
NASA Astrophysics Data System (ADS)
Wightman, Frederic; Callahan, Michael; Kistler, Doris
2003-04-01
In an experiment modeled after one reported recently by Brungart and Simpson [J. Acoust. Soc. Am. 112, 2985-2995 (2002)], 38 children (ages 4-16) and 10 adults responded to a monaural target speech signal in the presence of one or two distracter speech signals. The target speaker was a male and the distracter speakers were females. When two distracters were present, they were in different ears. Performance at several different target-ear signal-to-noise ratios (S/N) was measured, and psychometric functions were fitted to estimate threshold, the 50% performance level. The youngest children required approximately 20 dB higher S/N than adults to achieve threshold with a single distracter. This difference disappeared by age 16. The impact of adding the contralateral distracter, which is thought to contribute only informational masking, was roughly constant across age, however: adult thresholds increased about 11 dB, and thresholds for the youngest children increased about 10 dB. This was surprising given previous experiments that showed much larger informational masking effects in young children. Also inconsistent with previous results is the lack of individual differences: nearly all listeners showed almost the same contralateral distracter effect. [Work supported by NICHD.]
Anderson, Elizabeth S; Oxenham, Andrew J; Nelson, Peggy B; Nelson, David A
2012-12-01
Measures of spectral ripple resolution have become widely used psychophysical tools for assessing spectral resolution in cochlear-implant (CI) listeners. The objective of this study was to compare spectral ripple discrimination and detection in the same group of CI listeners. Ripple detection thresholds were measured over a range of ripple frequencies and were compared to spectral ripple discrimination thresholds previously obtained from the same CI listeners. The data showed that performance on the two measures was correlated, but that individual subjects' thresholds (at a constant spectral modulation depth) for the two tasks were not equivalent. In addition, spectral ripple detection was often found to be possible at higher rates than expected based on the available spectral cues, making it likely that temporal-envelope cues played a role at higher ripple rates. Finally, spectral ripple detection thresholds were compared to previously obtained speech-perception measures. Results confirmed earlier reports of a robust relationship between detection of widely spaced ripples and measures of speech recognition. In contrast, intensity difference limens for broadband noise did not correlate with spectral ripple detection measures, suggesting a dissociation between the ability to detect small changes in intensity across frequency and across time.
Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.
Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth
2017-08-09
Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram
Hossain, Mohammad E.; Jassim, Wissam A.; Zilany, Muhammad S. A.
2016-01-01
Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as bispectrum. The phase coupling of neurogram bispectrum provides a unique insight for the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants. PMID:26967160
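The bispectrum used as the feature here is a third-order spectrum, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], which is sensitive to the phase coupling mentioned above. A minimal direct (FFT-segment-averaged) estimator on a synthetic signal, not the paper's neurogram pipeline:

```python
# Minimal sketch: direct FFT-based bispectrum estimate, averaged over
# segments. Input is a synthetic signal with quadratic phase coupling.
import numpy as np

def bispectrum(signal, seg_len=256):
    segs = signal[: len(signal) // seg_len * seg_len].reshape(-1, seg_len)
    X = np.fft.fft(segs * np.hanning(seg_len), axis=1)
    n = seg_len // 2
    B = np.zeros((n, n), dtype=complex)
    for f1 in range(n):
        for f2 in range(n - f1):
            # B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))]
            B[f1, f2] = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
    return B

# Three phase-locked cosines: 320 + 512 = 832 Hz produces a bispectral peak.
fs = 8192
t = np.arange(fs) / fs
x = (np.cos(2 * np.pi * 320 * t) + np.cos(2 * np.pi * 512 * t)
     + 0.5 * np.cos(2 * np.pi * 832 * t))
print(np.abs(bispectrum(x)).max())
```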
Lee, Soo Jung; Park, Kyung Won; Kim, Lee-Suk; Kim, HyangHee
2016-06-01
Along with auditory function, cognitive function contributes to speech perception in the presence of background noise. Older adults with cognitive impairment might, therefore, have more difficulty perceiving speech-in-noise than their peers who have normal cognitive function. We compared the effects of noise level and cognitive function on speech perception in patients with amnestic mild cognitive impairment (aMCI), cognitively normal older adults, and cognitively normal younger adults. We studied 14 patients with aMCI and 14 age-, education-, and hearing threshold-matched cognitively intact older adults as experimental groups, and 14 younger adults as a control group. We assessed speech perception with monosyllabic word and sentence recognition tests at four noise levels: quiet condition and signal-to-noise ratio +5 dB, 0 dB, and -5 dB. We also evaluated the aMCI group with a neuropsychological assessment. Controlling for hearing thresholds, we found that the aMCI group scored significantly lower than both the older adults and the younger adults only when the noise level was high (signal-to-noise ratio -5 dB). At signal-to-noise ratio -5 dB, both older groups had significantly lower scores than the younger adults on the sentence recognition test. The aMCI group's sentence recognition performance was related to their executive function scores. Our findings suggest that patients with aMCI have more problems communicating in noisy situations in daily life than do their cognitively healthy peers and that older listeners with more difficulties understanding speech in noise should be considered for testing of neuropsychological function as well as hearing.
Frizelle, Pauline; Harte, Jennifer; O'Sullivan, Kathleen; Fletcher, Paul; Gibbon, Fiona
2017-01-01
The receptive language measure of information-carrying word (ICW) level is used extensively by speech and language therapists in the UK and Ireland. Despite this, it has never been validated via its relationship to any other relevant measures. This study aims to validate the ICW measure by investigating the relationship between the receptive ICW score of children with specific language impairment (SLI) and their performance on standardized memory and language assessments. Twenty-seven children with SLI, aged between 5;07 and 8;11, completed a sentence comprehension task in which the instructions gradually increased in the number of ICWs. The children also completed subtests from the Working Memory Test Battery for Children and the Clinical Evaluation of Language Fundamentals-4. Results showed a significant positive relationship between both language and memory measures and children's ICW score. While both receptive and expressive language contributed significantly to children's ICW score, the contribution of memory was solely determined by children's working memory ability. ICW score is in fact a valid measure of the language ability of children with SLI. However, therapists should also be cognisant of its strong association with working memory when using this construct in assessment or intervention methods.
Cycyk, Lauren M; Bitetti, Dana; Hammer, Carol Scheffner
2015-08-01
This study examined the impact of maternal depressive symptomatology and social support on the English and Spanish language growth of young bilingual children from low-income backgrounds. It was hypothesized that maternal depression would slow children's development in both languages but that social support would buffer the negative effect. Longitudinal data were collected from 83 mothers of Puerto Rican descent and their children who were attending Head Start preschool for 2 years. The effects of maternal depressive symptomatology and social support from family and friends on receptive vocabulary and oral comprehension development in both languages were examined. Growth curve modeling revealed that maternal depressive symptomatology negatively affected Spanish receptive vocabulary development only. Maternal depression did not affect children's English receptive vocabulary or their oral comprehension in either language. Social support was not related to maternal depressive symptomatology or child language. These findings suggest that maternal depression is 1 risk factor that contributes to less robust primary language development of bilingual children from low-income households. Speech-language pathologists must (a) increase their awareness of maternal depression in order to provide families with appropriate mental health referrals and (b) consider their roles as supportive adults for children whose mothers may be depressed.
Bitetti, Dana; Hammer, Carol Scheffner
2015-01-01
Purpose This study examined the impact of maternal depressive symptomatology and social support on the English and Spanish language growth of young bilingual children from low-income backgrounds. It was hypothesized that maternal depression would slow children's development in both languages but that social support would buffer the negative effect. Method Longitudinal data were collected from 83 mothers of Puerto Rican descent and their children who were attending Head Start preschool for 2 years. The effects of maternal depressive symptomatology and social support from family and friends on receptive vocabulary and oral comprehension development in both languages were examined. Results Growth curve modeling revealed that maternal depressive symptomatology negatively affected Spanish receptive vocabulary development only. Maternal depression did not affect children's English receptive vocabulary or their oral comprehension in either language. Social support was not related to maternal depressive symptomatology or child language. Conclusions These findings suggest that maternal depression is 1 risk factor that contributes to less robust primary language development of bilingual children from low-income households. Speech-language pathologists must (a) increase their awareness of maternal depression in order to provide families with appropriate mental health referrals and (b) consider their roles as supportive adults for children whose mothers may be depressed. PMID:25863774
Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo
2015-05-01
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual speech perception or unimodal speech perception with irrelevant noise in the other modality. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework based on hierarchical clustering (single-linkage distance) to analyze the connected components of the functional network during speech perception, measured with functional magnetic resonance imaging. For speech perception, bimodal (audiovisual speech cue) or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of a left anterior temporal gyrus-anterior insula component and of right premotor-visual components were observed than in the auditory or visual speech cue conditions, respectively. Interestingly, visual speech under white noise was perceived via tight negative coupling between the left inferior frontal region and a component comprising the right anterior cingulate, the left anterior insula, and bilateral visual regions, including the right middle temporal gyrus and right fusiform. In conclusion, the speech brain network is tightly positively or negatively connected, reflecting efficient or effortful processes during natural audiovisual integration or lip-reading, respectively, in speech perception.
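The persistent-homology step described, tracking connected components of the functional network across all thresholds via single-linkage clustering, can be sketched on a synthetic correlation matrix (ROI count, data, and threshold values below are assumptions):

```python
# Minimal sketch: connected components of a functional network across all
# thresholds, via single-linkage clustering on distance = 1 - correlation.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
signals = rng.normal(size=(6, 200))      # 6 ROIs x 200 time points (synthetic)
signals[1] += 0.8 * signals[0]           # couple ROI 0 and ROI 1
corr = np.corrcoef(signals)

dist = 1.0 - corr                        # small distance = tight coupling
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="single")

# Filtration: count connected components at increasing thresholds.
for thr in (0.2, 0.5, 0.8, 1.1):
    labels = fcluster(Z, t=thr, criterion="distance")
    print(f"threshold {thr:.1f}: {labels.max()} components")
```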
Hoth, S
2016-08-01
The Freiburg speech intelligibility test according to DIN 45621 was introduced around 60 years ago. For decades, and still today, the Freiburg test has been a standard whose relevance extends far beyond pure audiometry. It is used primarily to determine the speech perception threshold (based on two-digit numbers) and the ability to discriminate speech at suprathreshold presentation levels (based on monosyllabic nouns). Moreover, it is a measure of the degree of disability, the requirement for and success of technical hearing aids (auxiliaries directives), and the compensation for disability and handicap (Königstein recommendation). In differential audiological diagnostics, the Freiburg test contributes to the distinction between low- and high-frequency hearing loss, as well as to identification of conductive, sensory, neural, and central disorders. Currently, the phonemic and perceptual balance of the monosyllabic test lists is subject to critical discussions. Obvious deficiencies exist for testing speech recognition in noise. In this respect, alternatives such as sentence or rhyme tests in closed-answer inventories are discussed.
Neural representation of consciously imperceptible speech sound differences.
Allen, J; Kraus, N; Bradlow, A
2000-10-01
The concept of subliminal perception has been a subject of interest and controversy for decades. Of interest in the present investigation was whether a neurophysiologic index of stimulus change could be elicited to speech sound contrasts that were consciously indiscriminable. The stimuli were chosen on the basis of each individual subject's discrimination threshold. The speech stimuli (which varied along an F3 onset frequency continuum from /da/ to /ga/) were synthesized so that the acoustical properties of the stimuli could be tightly controlled. Subthreshold and suprathreshold stimuli were chosen on the basis of behavioral ability demonstrated during psychophysical testing. A significant neural representation of stimulus change, reflected by the mismatch negativity response, was obtained in all but 1 subject in response to subthreshold stimuli. Grand average responses differed significantly from responses obtained in a control condition consisting of physiologic responses elicited by physically identical stimuli. Furthermore, responses to suprathreshold stimuli (close to threshold) did not differ significantly from subthreshold responses with respect to latency, amplitude, or area. These results suggest that neural representation of consciously imperceptible stimulus differences occurs and that this representation occurs at a preattentive level.
2016-07-01
music of varying complexities. We did observe improvement from the first to the last lesson and the subject expressed appreciation for the training...hearing threshold data. C. Collect pre- and post-operative speech perception data. D. Collect music appraisal and pitch data. E. Administer training...localization, and music data. We are also collecting quality of life and functional questionnaire data. In Figure 2, we show post-operative speech
Real-time interactive speech technology at Threshold Technology, Incorporated
NASA Technical Reports Server (NTRS)
Herscher, Marvin B.
1977-01-01
Basic real-time isolated-word recognition techniques are reviewed. Industrial applications of voice technology are described in chronological order of their development. Future research efforts are also discussed.
Integrating cognitive and peripheral factors in predicting hearing-aid processing effectiveness
Kates, James M.; Arehart, Kathryn H.; Souza, Pamela E.
2013-01-01
Individual factors beyond the audiogram, such as age and cognitive abilities, can influence speech intelligibility and speech quality judgments. This paper develops a neural network framework for combining multiple subject factors into a single model that predicts speech intelligibility and quality for a nonlinear hearing-aid processing strategy. The nonlinear processing approach used in the paper is frequency compression, which is intended to improve the audibility of high-frequency speech sounds by shifting them to lower frequency regions where listeners with high-frequency loss have better hearing thresholds. An ensemble averaging approach is used for the neural network to avoid the problems associated with overfitting. Models are developed for two subject groups, one having nearly normal hearing and the other mild-to-moderate sloping losses. PMID:25669257
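A minimal sketch of the ensemble-averaging idea mentioned above, assuming Python with scikit-learn: several small networks are trained from different random initializations and their outputs averaged to temper overfitting. The predictors and target are synthetic stand-ins for the paper's subject factors and intelligibility scores.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 4))   # 60 listeners x 4 factors (age, thresholds, ...) - assumed layout
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.1 * rng.standard_normal(60)  # stand-in scores

# train an ensemble of small networks from different random initializations
ensemble = [
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(10)
]
prediction = np.mean([m.predict(X) for m in ensemble], axis=0)  # ensemble average
print(prediction[:5])
```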
Effect of subliminal visual material on an auditory signal detection task.
Moroney, E; Bross, M
1984-02-01
An experiment assessed the effect of subliminally embedded visual material on an auditory detection task. 22 women and 19 men were presented tachistoscopically with words designated as "emotional" or "neutral" on the basis of prior GSRs and a Word Rating List under four conditions: (a) Unembedded Neutral, (b) Embedded Neutral, (c) Unembedded Emotional, and (d) Embedded Emotional. On each trial subjects made forced choices concerning the presence or absence of an auditory tone (1000 Hz) at threshold level; hit and false-alarm rates were used to compute non-parametric indices of sensitivity (A') and response bias (B"). While overall analyses of variance yielded no significant differences, further examination of the data suggests the presence of subliminally "receptive" and "non-receptive" subpopulations.
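For reference, one common formulation of the nonparametric indices named above (A' after Pollack and Norman; B'' after Grier), computed from hit and false-alarm rates. The example rates are illustrative, not the study's data.

```python
def a_prime(hit, fa):
    """Nonparametric sensitivity A' (Pollack-Norman form)."""
    if hit >= fa:
        return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))
    # symmetric form for the below-chance case
    return 0.5 - ((fa - hit) * (1 + fa - hit)) / (4 * fa * (1 - hit))

def b_double_prime(hit, fa):
    """Grier's nonparametric response bias B''."""
    num = hit * (1 - hit) - fa * (1 - fa)
    den = hit * (1 - hit) + fa * (1 - fa)
    return num / den if den else 0.0

print(a_prime(0.80, 0.30), b_double_prime(0.80, 0.30))  # illustrative rates
```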
Shao, Yu; Chang, Chip-Hong
2007-08-01
We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
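The sketch below captures only the skeleton of such a scheme, assuming Python with PyWavelets: wavelet-packet decomposition followed by per-subband soft thresholding. The paper's psychoacoustic masking model is replaced here by a simple universal threshold per subband, so this illustrates the structure, not the proposed algorithm.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)   # toy signal standing in for speech
noisy = clean + 0.3 * rng.standard_normal(fs)

wp = pywt.WaveletPacket(noisy, wavelet='db8', maxlevel=4)
for node in wp.get_level(4, 'natural'):
    sigma = np.median(np.abs(node.data)) / 0.6745       # robust subband noise estimate
    # universal threshold as a crude stand-in for the time-frequency masking threshold
    thr = sigma * np.sqrt(2 * np.log(node.data.size))
    node.data = pywt.threshold(node.data, thr, mode='soft')
denoised = wp.reconstruct(update=False)[:fs]            # resynthesized signal
```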
ERIC Educational Resources Information Center
Schlauch, Robert S.; Han, Heekyung J.; Yu, Tzu-Ling J.; Carney, Edward
2017-01-01
Purpose: The purpose of this article is to examine explanations for pure-tone average-spondee threshold differences in functional hearing loss. Method: Loudness magnitude estimation functions were obtained from 24 participants for pure tones (0.5 and 1.0 kHz), vowels, spondees, and speech-shaped noise as a function of level (20-90 dB SPL).…
Mary Watson, Rose; Pennington, Lindsay
2015-01-01
Background Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. Aims To describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. Methods & Procedures An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Outcomes & Results Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were: dysphagia, augmentative and alternative communication (AAC)/interaction and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year children and young people with CP and communication disorders received from their service. Conclusions & Implications The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. Lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. PMID:25652139
Code of Federal Regulations, 2014 CFR
2014-07-01
... American Speech-Language-Hearing Association (ASHA) or licensed by a state board of examiners. Baseline... network and a slow response, expressed in the unit dBA. Standard threshold shift. A change in hearing...
He, Shuman; Grose, John H; Teagle, Holly F B; Woodard, Jennifer; Park, Lisa R; Hatch, Debora R; Buchman, Craig A
2013-01-01
This study aimed (1) to investigate the feasibility of recording the electrically evoked auditory event-related potential (eERP), including the onset P1-N1-P2 complex and the electrically evoked auditory change complex (EACC) in response to temporal gaps, in children with auditory neuropathy spectrum disorder (ANSD); and (2) to evaluate the relationship between these measures and speech-perception abilities in these subjects. Fifteen ANSD children who are Cochlear Nucleus device users participated in this study. For each subject, the speech-processor microphone was bypassed and the eERPs were elicited by direct stimulation of one mid-array electrode (electrode 12). The stimulus was a train of biphasic current pulses 800 msec in duration. Two basic stimulation conditions were used to elicit the eERP. In the no-gap condition, the entire pulse train was delivered uninterrupted to electrode 12, and the onset P1-N1-P2 complex was measured relative to the stimulus onset. In the gapped condition, the stimulus consisted of two pulse train bursts, each being 400 msec in duration, presented sequentially on the same electrode and separated by one of five gaps (i.e., 5, 10, 20, 50, and 100 msec). Open-set speech-perception ability of these subjects with ANSD was assessed using the phonetically balanced kindergarten (PBK) word lists presented at 60 dB SPL, using monitored live voice in a sound booth. The eERPs were recorded from all subjects with ANSD who participated in this study. There were no significant differences in test-retest reliability, root mean square amplitude or P1 latency for the onset P1-N1-P2 complex between subjects with good (>70% correct on PBK words) and poorer speech-perception performance. In general, the EACC showed less mature morphological characteristics than the onset P1-N1-P2 response recorded from the same subject. There was a robust correlation between the PBK word scores and the EACC thresholds for gap detection. Subjects with poorer speech-perception performance showed larger EACC thresholds in this study. These results demonstrate the feasibility of recording eERPs from implanted children with ANSD, using direct electrical stimulation. Temporal-processing deficits, as demonstrated by large EACC thresholds for gap detection, might account in part for the poor speech-perception performances observed in a subgroup of implanted subjects with ANSD. This finding suggests that the EACC elicited by changes in temporal continuity (i.e., gap) holds promise as a predictor of speech-perception ability among implanted children with ANSD.
Moore, Brian C J; Stone, Michael A; Füllgrabe, Christian; Glasberg, Brian R; Puria, Sunil
2008-12-01
It is possible for auditory prostheses to provide amplification for frequencies above 6 kHz. However, most current hearing-aid fitting procedures do not give recommended gains for such high frequencies. This study was intended to provide information that could be useful in quantifying appropriate high-frequency gains, and in establishing the population of hearing-impaired people who might benefit from such amplification. The study had two parts. In the first part, wide-bandwidth recordings of normal conversational speech were obtained from a sample of male and female talkers. The recordings were used to determine the mean spectral shape over a wide frequency range, and to determine the distribution of levels (the speech dynamic range) as a function of center frequency. In the second part, audiometric thresholds were measured for frequencies of 0.125, 0.25, 0.5, 1, 2, 3, 4, 6, 8, 10, and 12.5 kHz for both ears of 31 people selected to have mild or moderate cochlear hearing loss. The hearing loss was never greater than 70 dB for any frequency up to 4 kHz. The mean spectrum level of the speech fell progressively with increasing center frequency above about 0.5 kHz. For speech with an overall level of 65 dB SPL, the mean 1/3-octave level was 49 and 37 dB SPL for center frequencies of 1 and 10 kHz, respectively. The dynamic range of the speech was similar for center frequencies of 1 and 10 kHz. The part of the dynamic range below the root-mean-square level was larger than reported in previous studies. The mean audiometric thresholds at high frequencies (10 and 12.5 kHz) were relatively high (69 and 77 dB HL, respectively), even though the mean thresholds for frequencies below 4 kHz were 41 dB HL or better. To partially restore audibility for a hearing loss of 65 dB at 10 kHz would require an effective insertion gain of about 36 dB at 10 kHz. With this gain, audibility could be (partly) restored for 25 of the 62 ears assessed.
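As a back-of-the-envelope illustration of the gain reasoning above, the sketch below estimates the insertion gain needed for the upper part of the speech dynamic range in one band to reach an elevated threshold. The threshold-to-SPL conversion and the 12 dB peak headroom are assumptions for illustration; they are not the paper's exact procedure, even though the example happens to land near its 36 dB figure.

```python
def required_insertion_gain(threshold_spl, band_level_spl, peak_headroom_db=12.0):
    """Gain (dB) needed so speech peaks (band level + headroom) reach threshold."""
    return max(threshold_spl - (band_level_spl + peak_headroom_db), 0.0)

# illustrative values: 1/3-octave speech level of 37 dB SPL at 10 kHz (from the
# abstract) and an assumed threshold of ~85 dB SPL for a 65 dB HL loss at 10 kHz
print(required_insertion_gain(threshold_spl=85.0, band_level_spl=37.0))  # -> 36.0 dB
```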
Watkin, Peter; McCann, Donna; Law, Catherine; Mullee, Mark; Petrou, Stavros; Stevenson, Jim; Worsfold, Sarah; Yuen, Ho Ming; Kennedy, Colin
2007-09-01
The goal was to examine the relationships between management after confirmation, family participation, and speech and language outcomes in the same group of children with permanent childhood hearing impairment. Speech, oral language, and nonverbal abilities, expressed as z scores and adjusted in a regression model, and Family Participation Rating Scale scores were assessed at a mean age of 7.9 years for 120 children with bilateral permanent childhood hearing impairment from a 1992-1997 United Kingdom birth cohort. Ages at institution of management and hearing aid fitting were obtained retrospectively from case notes. Compared with children managed later (> 9 months), those managed early (< or = 9 months) had higher adjusted mean z scores for both receptive and expressive language, relative to nonverbal ability, but not for speech. Compared with children aided later, a smaller group of more-impaired children aided early did not have significantly higher scores for these outcomes. Family Participation Rating Scale scores showed significant positive correlations with language and speech intelligibility scores only for those with confirmation after 9 months and were highest for those with late confirmed, severe/profound, permanent childhood hearing impairment. Early management of permanent childhood hearing impairment results in improved language. Family participation is also an important factor in cases that are confirmed late, especially for children with severe or profound permanent childhood hearing impairment.
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, the simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach of Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments from simulations with an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way that is comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty, with the Attenuation (A) and Distortion (D) parameters of the Plomp approach modeling the individual Plomp curves fitted to the data. The "typical" audiogram shapes from Bisgaard et al., with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, individualizing the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
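A minimal sketch of the Plomp A+D curve that such an approach individualizes, assuming the usual two-branch form in which attenuation A raises the threshold in quiet while distortion D additionally degrades the achievable signal-to-noise ratio. The normal-hearing anchor values are assumptions for illustration.

```python
import numpy as np

def plomp_srt(noise_level_db, A=0.0, D=0.0, srt_quiet_normal=20.0, snr_crit_normal=-8.0):
    """SRT (dB SPL) vs. noise level under a Plomp-style A+D model (sketch)."""
    quiet_term = 10 ** ((srt_quiet_normal + A + D) / 10)              # quiet branch, raised by A + D
    noise_term = 10 ** ((noise_level_db + snr_crit_normal + D) / 10)  # noise branch, raised by D
    return 10 * np.log10(quiet_term + noise_term)

noise_levels = np.arange(20, 81, 10)
print(plomp_srt(noise_levels, A=20.0, D=3.0))  # illustrative hearing-impaired curve
```

At low noise levels the quiet branch dominates (SRT flat at A + D above normal); at high levels the curve tracks the noise level with an SNR penalty of D, which is the behavior the A and D parameters are fitted to capture.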
Carter, Julie Anne; Murira, Grace; Gona, Joseph; Tumaini, Judy; Lees, Janet; Neville, Brian George; Newton, Charles Richard
2013-01-01
This study sought to adapt a battery of Western speech and language assessment tools to a rural Kenyan setting. The tool was developed for children whose first language was KiGiryama, a Bantu language. A total of 539 Kenyan children took part (males = 271, females = 268; ethnicity = 100% KiGiryama). Data were collected from 303 children admitted to hospital with severe malaria and 206 age-matched children recruited from the village communities. The language assessments were based upon the Content, Form and Use (C/F/U) model. The assessment was based upon the adapted versions of the Peabody Picture Vocabulary Test, Test for the Reception of Grammar, Renfrew Action Picture Test, Pragmatics Profile of Everyday Communication Skills in Children, Test of Word Finding, and language-specific tests of lexical semantics and higher-level language. Preliminary measures of construct validity suggested that the theoretical assumptions behind the construction of the assessments were appropriate, and re-test and inter-rater reliability scores were acceptable. These findings illustrate the potential to adapt Western speech and language assessments to other languages and settings, particularly those in which there is a paucity of standardised tools. PMID:24294109
Feeney, Rachel; Desha, Laura; Khan, Asaduzzaman; Ziviani, Jenny
2017-04-01
The trajectory of health-related quality-of-life (HRQoL) for children aged 4-9 years and its relationship with speech and language difficulties (SaLD) was examined using data from the Longitudinal Study of Australian Children (LSAC). Generalized linear latent and mixed modelling was used to analyse data from three waves of the LSAC across four HRQoL domains (physical, emotional, social and school functioning). Four domains of HRQoL, measured using the Paediatric Quality-of-Life Inventory (PedsQL™), were examined to find the contribution of SaLD while accounting for child-specific factors (e.g. gender, ethnicity, temperament) and family characteristics (social ecological considerations and psychosocial stressors). In multivariable analyses, one measure of SaLD, namely parent concern about receptive language, was negatively associated with all HRQoL domains. Covariates positively associated with all HRQoL domains included child's general health, maternal mental health, parental warmth and primary caregiver's engagement in the labour force. Findings suggest that SaLD are associated with reduced HRQoL. For most LSAC study children, having typical speech/language skills was a protective factor positively associated with HRQoL.
Resolving ability and image discretization in the visual system.
Shelepin, Yu E; Bondarko, V M
2004-02-01
Psychophysiological studies were performed to measure the spatial threshold for resolution of two "points" and the thresholds for discriminating their orientations as a function of the distance between the two points. Data were compared with the scattering of the "point" by the eye's optics, the packing density of cones in the fovea, and the characteristics of the receptive fields of ganglion cells in the foveal area of the retina and of neurons in the corresponding projection zones of the primary visual cortex. The effective zone was shown to need to contain the scattering function of several receptors, as preliminary blurring of the image by the eye's optics decreases the discretization noise subsequently created, at the level of receptors, by the receptor matrix. The concordance of these parameters supports the optimal operation of the spatial elements of the neural network determining the resolving ability of the visual system at different levels of visual information processing. It is suggested that the special geometry of the receptive fields of neurons in the striate cortex, which are concordant with the statistics of natural scenes, results in a further increase in the signal:noise ratio.
Langereis, Margreet; Vermeulen, Anneke
2015-06-01
This study aimed to evaluate the long-term effects of CI on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition, after 60 months of CI use, were analyzed. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception of children with CI in mainstream education enables them to acquire language and educational levels that are comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, more children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger
2015-01-01
The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
Expressive and receptive language in Prader-Willi syndrome: report on genetic subtype differences.
Dimitropoulos, Anastasia; Ferranti, Angela; Lemler, Maria
2013-01-01
Prader-Willi syndrome (PWS), most recognized for the hallmark hyperphagia and food preoccupations, is caused by the absence of expression of the paternally active genes in the q11-13 region of chromosome 15. Since the recognition of PWS as a genetic disorder, most research has focused primarily on the medical, genetic, and behavioral aspects of the syndrome. Extensive research has not been conducted on the cognitive, speech, and language abilities in PWS. In addition, language differences with regard to genetic mechanism of PWS have not been well investigated. To date, research indicates overall language ability is markedly below chronological age with expressive language more impaired than receptive language in people with PWS. Thus, the aim of the present study was to further characterize expressive and receptive language ability in 35 participants with PWS and compare functioning by genetic subtype using the Clinical Evaluation of Language Fundamentals-4 (CELF-IV). Results indicate that core language ability is significantly impaired in PWS and both expressive and receptive abilities are significantly lower than verbal intelligence. In addition, participants with the maternal uniparental disomy (mUPD) genetic subtype exhibit discrepant language functioning with higher expressive vs. receptive language abilities. Future research is needed to further examine language functioning in larger genetic subtype participant samples using additional descriptive measures. Further work should also delineate findings with respect to size of the paternal deletion (Type 1 and Type 2 deletions) and explore how overexpression of maternally expressed genes in the 15q11-13 region may relate to verbal ability. After reading this article, the reader will be able to: (1) summarize primary characteristics of Prader-Willi syndrome (PWS), (2) describe differentiating characteristics for the PWS genetic subtypes, (3) recall limited research regarding language functioning in PWS to date, (4) summarize potential genetic variations of language ability in Prader-Willi syndrome, and (5) summarize language ability in PWS with respect to adaptive functioning. Copyright © 2012 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Courtillon-Leclercq, Janine; Papo, Eliane
1977-01-01
An attempt to show that a threshold level would furnish conditions for a renewed methodology and greater student creativity. The acquisition of communicative competence would be constructed around two types of activities: analysis of the conditions of speech production and systematization of two levels of grammar. (Text is in French.) (AMH)
Anderson, Elizabeth S.; Oxenham, Andrew J.; Nelson, Peggy B.; Nelson, David A.
2012-01-01
Measures of spectral ripple resolution have become widely used psychophysical tools for assessing spectral resolution in cochlear-implant (CI) listeners. The objective of this study was to compare spectral ripple discrimination and detection in the same group of CI listeners. Ripple detection thresholds were measured over a range of ripple frequencies and were compared to spectral ripple discrimination thresholds previously obtained from the same CI listeners. The data showed that performance on the two measures was correlated, but that individual subjects’ thresholds (at a constant spectral modulation depth) for the two tasks were not equivalent. In addition, spectral ripple detection was often found to be possible at higher rates than expected based on the available spectral cues, making it likely that temporal-envelope cues played a role at higher ripple rates. Finally, spectral ripple detection thresholds were compared to previously obtained speech-perception measures. Results confirmed earlier reports of a robust relationship between detection of widely spaced ripples and measures of speech recognition. In contrast, intensity difference limens for broadband noise did not correlate with spectral ripple detection measures, suggesting a dissociation between the ability to detect small changes in intensity across frequency and across time. PMID:23231122
2013-01-01
Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including developmental dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), and also from normal controls, utilizing multi-channel reference-free recordings assisted by discrete source analysis, provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from the bilateral posterior superior temporal gyri and the immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track the status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174
Evaluation of an Expert System for the Generation of Speech and Language Therapy Plans.
Robles-Bykbaev, Vladimir; López-Nores, Martín; García-Duque, Jorge; Pazos-Arias, José J; Arévalo-Lucero, Daysi
2016-07-01
Speech and language pathologists (SLPs) deal with a wide spectrum of disorders, arising from many different conditions, that affect voice, speech, language, and swallowing capabilities in different ways. Therefore, the outcomes of Speech and Language Therapy (SLT) are highly dependent on the accurate, consistent, and complete design of personalized therapy plans. However, SLPs often have very limited time to work with their patients and to browse the large (and growing) catalogue of activities and specific exercises that can be put into therapy plans. As a consequence, many plans are suboptimal and fail to address the specific needs of each patient. We aimed to evaluate an expert system that automatically generates plans for speech and language therapy, containing semiannual activities in the five areas of hearing, oral structure and function, linguistic formulation, expressive language and articulation, and receptive language. The goal was to assess whether the expert system speeds up the SLPs' work and leads to more accurate, consistent, and complete therapy plans for their patients. We examined the evaluation results of the SPELTA expert system in supporting the decision making of 4 SLPs treating children in three special education institutions in Ecuador. The expert system was first trained with data from 117 cases, including medical data; diagnosis for voice, speech, language and swallowing capabilities; and therapy plans created manually by the SLPs. It was then used to automatically generate new therapy plans for 13 new patients. The SLPs were finally asked to evaluate the accuracy, consistency, and completeness of those plans. A four-fold cross-validation experiment was also run on the original corpus of 117 cases in order to assess the significance of the results. The evaluation showed that 87% of the outputs provided by the SPELTA expert system were considered valid therapy plans for the different areas. The SLPs rated the overall accuracy, consistency, and completeness of the proposed activities with 4.65, 4.6, and 4.6 points (to a maximum of 5), respectively. The ratings for the subplans generated for the areas of hearing, oral structure and function, and linguistic formulation were nearly perfect, whereas the subplans for expressive language and articulation and for receptive language failed to deal properly with some of the subject cases. Overall, the SLPs indicated that over 90% of the subplans generated automatically were "better than" or "as good as" what the SLPs would have created manually if given the average time they can devote to the task. The cross-validation experiment yielded very similar results. The results show that the SPELTA expert system provides valuable input for SLPs to design proper therapy plans for their patients, in a shorter time and considering a larger set of activities than proceeding manually. The algorithms worked well even in the presence of a sparse corpus, and the evidence suggests that the system will become more reliable as it is trained with more subjects.
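A sketch of the four-fold cross-validation protocol reported above, assuming Python with scikit-learn; the features, labels, and classifier are synthetic stand-ins, not SPELTA's actual knowledge base or inference engine.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
X = rng.standard_normal((117, 10))   # 117 cases x 10 coded diagnostic features (assumed)
y = rng.integers(0, 2, size=117)     # stand-in "plan judged valid" labels

scores = []
for train, test in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(random_state=0).fit(X[train], y[train])
    scores.append(model.score(X[test], y[test]))
print(np.mean(scores))               # mean held-out accuracy across the four folds
```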
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2016-01-01
Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were <24 months at fMRI scanning and <36 months at first implantation. A silent background fMRI acquisition method was performed to acquire fMRI during auditory stimulation. A voxel-based analysis technique was utilized to generate z maps showing significant contrast in brain activation between auditory stimulation conditions (spoken narratives and narrow band noise). CELF-P2 and ESP were administered 2 years after implantation. Because most participants reached a ceiling on ESP, a voxel-wise regression analysis was performed between preimplant fMRI activation and postimplant CELF-P2 scores alone. Age at implantation and preimplant hearing thresholds were controlled in this regression analysis. Four brain regions were found to be significantly correlated with CELF-P2 scores. These clusters of positive correlation encompassed the temporo-parieto-occipital junction, areas in the prefrontal cortex and the cingulate gyrus. For the story versus silence contrast, CELF-P2 core language score demonstrated significant positive correlation with activation in the right angular gyrus (r = 0.95), left medial frontal gyrus (r = 0.94), and left cingulate gyrus (r = 0.96). For the narrow band noise versus silence contrast, the CELF-P2 core language score exhibited significant positive correlation with activation in the left angular gyrus (r = 0.89; for all clusters, corrected p < 0.05). Four brain regions related to language function and attention were identified that correlated with CELF-P2. Children with better oral language performance postimplant displayed greater activation in these regions preimplant. The results suggest that despite auditory deprivation, these regions are more receptive to gains in oral language development performance of children with hearing loss who receive early intervention via cochlear implantation. The present study suggests that oral language outcome following cochlear implant may be predicted by preimplant fMRI with auditory stimulation using natural speech.
Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris).
Ghoul, Asila; Reichmuth, Colleen
2016-01-01
Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.
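For reference, the standard critical-ratio computation implied above: the masked threshold minus the spectrum level of the masker, where the spectrum level can be derived from an overall noise level and its bandwidth. The numbers are illustrative, not the sea otter data.

```python
import math

def spectrum_level(overall_level_db, bandwidth_hz):
    """Noise spectrum level (dB/Hz) from overall band level and bandwidth."""
    return overall_level_db - 10 * math.log10(bandwidth_hz)

def critical_ratio(masked_threshold_db, noise_spectrum_level_db):
    """Critical ratio (dB): masked threshold relative to masker spectrum level."""
    return masked_threshold_db - noise_spectrum_level_db

n0 = spectrum_level(overall_level_db=70.0, bandwidth_hz=1000.0)            # -> 40 dB/Hz
print(critical_ratio(masked_threshold_db=62.0, noise_spectrum_level_db=n0))  # -> 22 dB
```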
Measurements methodology for evaluation of Digital TV operation in VHF high-band
NASA Astrophysics Data System (ADS)
Pudwell Chaves de Almeida, M.; Vladimir Gonzalez Castellanos, P.; Alfredo Cal Braz, J.; Pereira David, R.; Saboia Lima de Souza, R.; Pereira da Soledade, A.; Rodrigues Nascimento Junior, J.; Ferreira Lima, F.
2016-07-01
This paper describes the experimental setup of field measurements carried out to evaluate the operation of the ISDB-TB (Integrated Services Digital Broadcasting, Terrestrial, Brazilian version) digital TV standard in the VHF high band. Measurements were performed in urban and suburban areas of a medium-sized Brazilian city. Besides direct measurements of received power and environmental noise, a procedure involving the injection of additive Gaussian noise was employed to find the signal-to-noise ratio threshold at each measurement site. The analysis includes results of static reception measurements evaluating the received field strength and the signal-to-noise ratio thresholds for correct signal decoding.
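A schematic of the noise-injection procedure described above: calibrated Gaussian noise power is raised until decoding fails, and the SNR at that point is taken as the threshold. Here decode_ok() is a hypothetical receiver check standing in for the actual ISDB-TB decoder, and the dB bookkeeping is a simplification.

```python
import numpy as np

def snr_threshold(signal_dbm, noise_floor_dbm, decode_ok, step_db=0.5):
    """Raise injected noise until decoding fails; return the SNR at failure."""
    injected_dbm = noise_floor_dbm              # start injection near the ambient floor
    while True:
        total_noise_dbm = 10 * np.log10(        # ambient + injected noise powers
            10 ** (noise_floor_dbm / 10) + 10 ** (injected_dbm / 10)
        )
        snr_db = signal_dbm - total_noise_dbm
        if not decode_ok(snr_db):
            return snr_db                       # threshold for correct decoding
        injected_dbm += step_db

# usage with a toy decoder that fails below 19 dB SNR (hypothetical)
print(snr_threshold(-50.0, -95.0, decode_ok=lambda snr: snr >= 19.0))
```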
Working memory capacity may influence perceived effort during aided speech recognition in noise.
Rudner, Mary; Lunner, Thomas; Behrens, Thomas; Thorén, Elisabet Sundewall; Rönnberg, Jerker
2012-09-01
Recently, there has been interest in using subjective ratings as a measure of perceived effort during speech recognition in noise. Perceived effort may be an indicator of cognitive load. Thus, subjective effort ratings during speech recognition in noise may covary with both signal-to-noise ratio (SNR) and individual cognitive capacity. The present study investigated the relation between subjective ratings of the effort involved in listening to speech in noise, speech recognition performance, and individual working memory (WM) capacity in hearing-impaired hearing aid users. In two experiments, participants with hearing loss rated perceived effort during aided speech perception in noise. Noise type and SNR were manipulated in both experiments, and in the second experiment hearing aid compression release settings were also manipulated. Speech recognition performance was measured along with WM capacity. There were 46 participants in all, with bilateral mild-to-moderate sloping hearing loss. In Experiment 1 there were 16 native Danish speakers (eight women and eight men) with a mean age of 63.5 yr (SD = 12.1) and average pure tone (PT) threshold of 47.6 dB (SD = 9.8). In Experiment 2 there were 30 native Swedish speakers (19 women and 11 men) with a mean age of 70 yr (SD = 7.8) and average PT threshold of 45.8 dB (SD = 6.6). A visual analog scale (VAS) was used for effort rating in both experiments. In Experiment 1, effort was rated at individually adapted SNRs while in Experiment 2 it was rated at fixed SNRs. Speech recognition in noise performance was measured using adaptive procedures in both experiments, with Dantale II sentences in Experiment 1 and Hagerman sentences in Experiment 2. WM capacity was measured using a letter-monitoring task in Experiment 1 and the reading span task in Experiment 2. In both experiments, there was a strong and significant relation between rated effort and SNR that was independent of individual WM capacity, whereas the relation between rated effort and noise type seemed to be influenced by individual WM capacity. Experiment 2 showed that hearing aid compression setting influenced rated effort. Subjective ratings of the effort involved in speech recognition in noise reflect SNRs, and individual cognitive capacity seems to influence relative rating of noise type. American Academy of Audiology.
[Research on speech endpoint detection based on the box-coupling generalized dimension].
Wang, Zimei; Yang, Cuirong; Wu, Wei; Fan, Yingle
2008-06-01
In this paper, a new method for calculating the generalized dimension, based on a box-coupling principle, is proposed to overcome edge effects and to improve the performance of speech endpoint detection based on the original generalized-dimension calculation. The new method was applied to speech endpoint detection as follows. First, the length of the overlapping border was determined, and by calculating the generalized dimension while covering the speech signal with overlapped boxes, three-dimensional feature vectors comprising the box dimension, the information dimension, and the correlation dimension were obtained. Second, in light of the relation between feature distance and degree of similarity, feature extraction was conducted using a common distance measure. Finally, a bi-threshold method was used to classify the speech signals. Experimental results indicated that, in comparison with the original generalized dimension (OGD) and spectral entropy (SE) algorithms, the proposed method is more robust and effective for detecting speech signals containing different kinds of noise at different signal-to-noise ratios (SNRs), especially at low SNR.
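A rough sketch of the three-dimensional feature described above: the generalized (Rényi) dimensions for q = 0, 1, 2 (box, information, and correlation dimensions) of one signal frame, estimated by box counting over the normalized time-amplitude plane. The paper's overlapped box-coupling border handling is approximated here by plain partitioning, and the bi-threshold decision stage is omitted.

```python
import numpy as np

def generalized_dims(frame, scales=(4, 8, 16, 32)):
    """Estimate Renyi dimensions D_q (q = 0, 1, 2) of one frame by box counting."""
    x = (frame - frame.min()) / (np.ptp(frame) + 1e-12)  # amplitude to [0, 1]
    t = np.linspace(0.0, 1.0, x.size)                    # normalized time axis
    dims = {}
    for q in (0, 1, 2):
        s_vals, log_eps = [], []
        for n in scales:                                 # cover plane with n x n boxes
            idx = (np.minimum((t * n).astype(int), n - 1) * n
                   + np.minimum((x * n).astype(int), n - 1))
            p = np.bincount(idx, minlength=n * n) / x.size
            p = p[p > 0]                                 # occupied boxes only
            if q == 1:
                s_vals.append(np.sum(p * np.log(p)))     # information-dimension sum
            else:
                s_vals.append(np.log(np.sum(p ** q)) / (q - 1))
            log_eps.append(np.log(1.0 / n))
        dims[q] = np.polyfit(log_eps, s_vals, 1)[0]      # slope vs. log(box size)
    return dims

frame = np.sin(2 * np.pi * np.linspace(0, 5, 400))       # toy frame standing in for speech
print(generalized_dims(frame))
```

In an endpoint detector, this triple would be computed per frame and compared against two thresholds (a higher one to open a speech segment, a lower one to close it), which is the bi-threshold classification the abstract refers to.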