Reduced Capacity in a Dichotic Memory Test for Adult Patients with ADHD
ERIC Educational Resources Information Center
Dige, Niels; Maahr, Eija; Backenroth-Ohsako, Gunnel
2010-01-01
Objective: To evaluate whether a dichotic memory test would reveal deficits in short-term working-memory recall and long-term memory recall in a group of adult patients with ADHD. Methods: A dichotic memory test with ipsilateral backward speech distraction in an adult ADHD group (n = 69) and a control group (n = 66) is used to compare performance…
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing).
Hemispheric Differences in Processing Dichotic Meaningful and Non-Meaningful Words
ERIC Educational Resources Information Center
Yasin, Ifat
2007-01-01
Classic dichotic-listening paradigms reveal a right-ear advantage (REA) for speech sounds as compared to non-speech sounds. This REA is assumed to be associated with a left-hemisphere dominance for meaningful speech processing. This study objectively probed the relationship between ear advantage and hemispheric dominance in a dichotic-listening…
Elliott, D; Weeks, D J
1993-03-01
Adults with Down's syndrome and a group of undifferentiated mentally handicapped persons were examined using a free recall dichotic listening procedure to determine a laterality index for the perception of speech sounds. Subjects also performed both the visual and verbal portions of a standard apraxia battery. As in previous research, subjects with Down's syndrome tended to display a left ear advantage on the dichotic listening test. As well, they performed better on the apraxia battery when movements were cued visually rather than verbally. This verbal-motor disadvantage increased as the left ear dichotic listening advantage became more pronounced. It is argued that the verbal-motor difficulties experienced by persons with Down's syndrome stem from a dissociation of the functional systems responsible for speech perception and movement organization (Elliott and Weeks, 1990).
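The laterality index referred to above is not spelled out in the abstract; a common convention in dichotic-listening work (an assumption here, not necessarily the exact computation Elliott and Weeks used) is the percent ear-advantage score, 100 × (R − L) / (R + L), where R and L are the numbers of correctly reported right- and left-ear items:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Conventional percent ear-advantage laterality index.

    Positive values indicate a right-ear advantage (REA);
    negative values indicate a left-ear advantage (LEA).
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Example: 40 right-ear and 20 left-ear items reported correctly
print(laterality_index(40, 20))  # positive -> right-ear advantage
```

Under this convention, the left ear advantage reported for the Down's syndrome group corresponds to a negative index.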
Interictal and Postictal Performances on Dichotic Listening Test in Children with Focal Epilepsy
ERIC Educational Resources Information Center
Carlsson, G.; Wiegand, G.; Stephani, U.
2011-01-01
The dichotic listening test (DL) is an important tool for disclosing speech dominance in healthy subjects and in clinical cases. The aim of this study was to probe whether focal epilepsy in children produces a corresponding suppression of the ear reports contralateral to the seizure onset site. Thus, 15 children and adolescents with clinically and…
Asynchronous glimpsing of speech: Spread of masking and task set-size
Ozmeral, Erol J.; Buss, Emily; Hall, Joseph W.
2012-01-01
Howard-Jones and Rosen [(1993). J. Acoust. Soc. Am. 93, 2915–2922] investigated the ability to integrate glimpses of speech that are separated in time and frequency using a “checkerboard” masker, with asynchronous amplitude modulation (AM) across frequency. Asynchronous glimpsing was demonstrated only for spectrally wide frequency bands. It is possible that the reduced evidence of spectro-temporal integration with narrower bands was due to spread of masking at the periphery. The present study tested this hypothesis with a dichotic condition, in which the even- and odd-numbered bands of the target speech and asynchronous AM masker were presented to opposite ears, minimizing the deleterious effects of masking spread. For closed-set consonant recognition, thresholds were 5.1–8.5 dB better for dichotic than for monotic asynchronous AM conditions. Results were similar for closed-set word recognition, but for open-set word recognition the benefit of dichotic presentation was more modest and level dependent, consistent with the effects of spread of masking being level dependent. There was greater evidence of asynchronous glimpsing in the open-set than closed-set tasks. Presenting stimuli dichotically supported asynchronous glimpsing with narrower frequency bands than previously shown, though the magnitude of glimpsing was reduced for narrower bandwidths even in some dichotic conditions. PMID:22894234
ERIC Educational Resources Information Center
Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph
2015-01-01
Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…
ERIC Educational Resources Information Center
Turney, Michael T.; And Others
This report on speech research contains papers describing experiments involving both information processing and speech production. The papers concerned with information processing cover such topics as peripheral and central processes in vision, separate speech and nonspeech processing in dichotic listening, and dichotic fusion along an acoustic…
Dlouha, Olga; Novak, Alexej; Vokral, Jan
2007-06-01
The aim of this project was to use central auditory tests to diagnose central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average success rate of children with specific language impairment was 56% in test 1, 64% in test 2, and 63% in test 3; the control group scored 92% in test 1, 93% in test 2, and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
The influence of target-masker similarity on across-ear interference in dichotic listening
NASA Astrophysics Data System (ADS)
Brungart, Douglas; Simpson, Brian
2004-05-01
In most dichotic listening tasks, the comprehension of a target speech signal presented in one ear is unaffected by the presence of irrelevant speech in the opposite ear. However, recent results have shown that contralaterally presented interfering speech signals do influence performance when a second interfering speech signal is present in the same ear as the target speech. In this experiment, we examined the influence of target-masker similarity on this effect by presenting ipsilateral and contralateral masking phrases spoken by the same talker, a different same-sex talker, or a different-sex talker than the one used to generate the target speech. The results show that contralateral target-masker similarity has the greatest influence on performance when an easily segregated different-sex masker is presented in the target ear, and the least influence when a difficult-to-segregate same-talker masker is presented in the target ear. These results indicate that across-ear interference in dichotic listening is not directly related to the difficulty of the segregation task in the target ear, and suggest that contralateral maskers are least likely to interfere with dichotic speech perception when the same general strategy could be used to segregate the target from the masking voices in the ipsilateral and contralateral ears.
Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan
2008-12-01
In this study we present the case of a 71-year-old female patient with sensorineural hearing loss fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting, with difficulties in understanding speech against background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss with threshold values of 51 and 50 dB HL in the right and left ears. The Dichotic Digit Test was administered in a divided-attention mode and with attention focused on each ear. Results on this test are consistent with a central auditory processing disorder.
Dichotic Hearing in Elderly Hearing Aid Users Who Choose to Use a Single-Ear Device
Ribas, Angela; Mafra, Nicoli; Marques, Jair; Mottecy, Carla; Silvestre, Renata; Kozlowski, Lorena
2014-01-01
Introduction Elderly individuals with bilateral hearing loss often do not use hearing aids in both ears. Because of this, dichotic tests to assess hearing in this group may help identify peculiar degenerative processes of aging and hearing aid selection. Objective To evaluate dichotic hearing for a group of elderly hearing aid users who did not adapt to using binaural devices and to verify the correlation between ear dominance and the side chosen to use the device. Methods A cross-sectional descriptive study involving 30 subjects from 60 to 81 years old, of both genders, with an indication for bilateral hearing aids for over 6 months, but using only a single device. Medical history, pure tone audiometry, and dichotic listening tests were all completed. Results All subjects (100%) of the sample failed the dichotic digit test; 94% of the sample preferred to use the device in one ear because bilateral use bothered them and affected speech understanding. In 6%, the concern was aesthetics. In the dichotic digit test, there was significant predominance of the right ear over the left, and there was a significant correlation between the dominant side with the ear chosen by the participant for use of the hearing aid. Conclusion In elderly subjects with bilateral hearing loss who have chosen to use only one hearing aid, there is dominance of the right ear over the left in dichotic listening tasks. There is a correlation between the dominant ear and the ear chosen for hearing aid fitting. PMID:25992120
Markevych, Vladlena; Asbjørnsen, Arve E; Lind, Ola; Plante, Elena; Cone, Barbara
2011-07-01
The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects aged 18 to 39, balanced for gender, with normal hearing and without any known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV syllables in a nonforced condition as well as in attention-right and attention-left conditions. Transient evoked otoacoustic emissions (TEOAEs) were recorded for both ears, with and without the presentation of contralateral broadband noise. The main finding was a strong negative correlation between language laterality as measured with the dichotic listening task and the laterality of the TEOAE responses. The findings support a hypothesis of shared variance between central and peripheral auditory lateralities, and contribute to the attentional theory of auditory lateralization. The results have implications for the understanding of the cortico-fugal efferent control of cochlear activity.
Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.
ERIC Educational Resources Information Center
Wetherby, Amy Miller; And Others
1981-01-01
The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)
Dichotic Word Recognition in Noise and the Right-Ear Advantage
ERIC Educational Resources Information Center
Roup, Christina M.
2011-01-01
Purpose: This study sought to compare dichotic right-ear advantages (REAs) of young adults to older adult data (C. M. Roup, T. L. Wiley, & R. H. Wilson, 2006) after matching for overall levels of recognition performance. Specifically, speech-spectrum noise was introduced in order to reduce dichotic recognition performance of young adults to a…
Central Presbycusis: A Review and Evaluation of the Evidence
Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur
2018-01-01
Background The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. Purpose This report summarizes the review process and presents its findings. Data Collection and Analysis The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. Results For 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed. 
For 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. Conclusions Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738
Christianson, S A; Nilsson, L G; Silfvenius, H
1989-01-01
Dichotic listening tests were used to determine cerebral hemisphere memory functions in patients with complex partial seizures before, 10 days after, and 1-3 yr after right (RTE) or left (LTE) temporal-lobe excisions. Control subjects were also tested on two occasions. The tests consisted of presenting a series of 12-word lists and 7-word lists alternately to the two ears while backward speech was presented to the other ear. Measures of immediate free recall, final free recall, final cued recall, and serial recall were employed. The results revealed: (a) that both groups of patients were inferior to the control group in tests tapping long-term memory functions rather than short-term memory functions, (b) a right-ear advantage for RTE patients at postoperative testing, (c) that the LTE group was more affected by surgery than the RTE group, and (d) a general improvement in recall performance from early to late postoperative testing. Taken together, these results indicate that the present dichotic test can be used as a non-invasive hemisphere memory test to complement invasive techniques for diagnosis of patients considered for epilepsy surgery.
The auditory processing battery: survey of common practices.
Emanuel, Diana C
2002-02-01
A survey of auditory processing (AP) diagnostic practices was mailed to all licensed audiologists in the State of Maryland and sent as an electronic mail attachment to the American Speech-Language-Hearing Association and Educational Audiology Association Internet forums. Common AP protocols (25 from the Internet, 28 from audiologists in Maryland) included requiring basic audiologic testing, using questionnaires, and administering dichotic listening, monaural low-redundancy speech, temporal processing, and electrophysiologic tests. Some audiologists also administer binaural interaction, attention, memory, and speech-language/psychological/educational tests and incorporate a classroom observation. The various AP batteries presently administered appear to be based on the availability of AP tests with well-documented normative data. Resources for obtaining AP tests are listed.
Florida Journal of Communication Disorders, 1998.
ERIC Educational Resources Information Center
Victor, Shelley J., Ed.; Lundy, Donna S., Ed.
1998-01-01
This annual volume is a compilation of research, clinical, and professional articles addressing innovative technology, new diagnostic tests, physiological basis for treatment, and therapeutic ideas in the fields of speech-language pathology and audiology. Featured articles include: (1) "Development of Local Child Norms for the Dichotic Digits…
[Auditory and corporal laterality, logoaudiometry, and monaural hearing aid gain].
Benavides, Mariela; Peñaloza-López, Yolanda R; de la Sancha-Jiménez, Sabino; García Pedroza, Felipe; Gudiño, Paula K
2007-12-01
To identify the auditory or clinical test that best predicts the ear in which a monaural hearing aid should be fitted in symmetric bilateral hearing loss. A total of 37 adult patients with symmetric bilateral hearing loss were examined for correlations between the best score on a speech discrimination test, corporal laterality, auditory laterality measured with dichotic digits in Spanish, and the score for filtered words with a monaural hearing aid. The best correlation was obtained between auditory laterality and gain with the hearing aid (0.940). The dichotic test for auditory laterality is a good tool for identifying the best ear in which to fit a monaural hearing aid. These results suggest the need to administer this test before a hearing aid is indicated.
A comparison of aphasic and non-brain-injured adults on a dichotic CV-syllable listening task.
Shanks, J; Ryan, W
1976-06-01
A dichotic CV-syllable listening task was administered to a group of eleven non-brain-injured adults and to a group of eleven adult aphasics. The results of this study may be summarized as follows: 1) The group of non-brain-injured adults showed a slight right ear advantage for dichotically presented CV-syllables. 2) In comparison with the control group, the aphasic group showed a bilateral deficit in response to the dichotic CV-syllables, superimposed on a non-significant right ear advantage. 3) The aphasic group demonstrated a great deal of intersubject variability on the dichotic task, with six aphasics showing a right ear preference for the stimuli. The non-brain-injured subjects performed more homogeneously on the task. 4) The two subgroups of aphasics, a right ear advantage group and a left ear advantage group, performed significantly differently on the dichotic listening task. 5) Single-correct data analysis proved valuable by restricting the analysis of report accuracy to trials in which there was true competition for the single left-hemisphere speech processor. These results were analyzed in terms of a functional model of auditory processing. In view of this model, the bilateral deficit in dichotic performance of the aphasic group was accounted for by the presence of a lesion within the dominant left hemisphere, where the speech signals from both ears converge for final processing. The right ear advantage shown by one aphasic subgroup was explained by a lesion interfering with the corpus callosal pathways from the left hemisphere; the left ear advantage observed within the other subgroup was explained by a lesion in the area of the auditory processor of the left hemisphere.
ERIC Educational Resources Information Center
Haskins Labs., New Haven, CT.
This collection on speech research presents a number of reports of experiments conducted on neurological, physiological, and phonological questions, using electronic equipment for analysis. The neurological experiments cover auditory and phonetic processes in speech perception, auditory storage, ear asymmetry in dichotic listening, auditory…
Dichotic and dichoptic digit perception in normal adults.
Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T
2011-06-01
Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. To evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. A multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. 
The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction-mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as "speech related output mechanisms" and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects.
Sætrevik, Bjørn
2012-01-01
The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Working memory predicts semantic comprehension in dichotic listening in older adults.
James, Philip J; Krishnan, Saloni; Aydelott, Jennifer
2014-10-01
Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging.
ERIC Educational Resources Information Center
Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth
2015-01-01
The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…
A right-ear bias of auditory selective attention is evident in alpha oscillations.
Payne, Lisa; Rogers, Chad S; Wingfield, Arthur; Sekuler, Robert
2017-04-01
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening. © 2016 Society for Psychophysiological Research.
Roup, Christina M; Leigh, Elizabeth D
2015-06-01
The purpose of the present study was to examine individual differences in binaural processing across the adult life span. Sixty listeners (aged 23-80 years) with symmetrical hearing were tested. Binaural behavioral processing was measured by the Words-in-Noise Test, the 500-Hz masking level difference, and the Dichotic Digit Test. Electrophysiologic responses were assessed by the auditory middle latency response binaural interaction component. No correlations among binaural measures were found. Age accounted for the greatest amount of variability in speech-in-noise performance. Age was significantly correlated with the Words-in-Noise Test binaural advantage and dichotic ear advantage. Partial correlations, however, revealed that this was an effect of hearing status rather than age per se. Inspection of individual results revealed that 20% of listeners demonstrated reduced binaural performance for at least 2 of the binaural measures. The lack of significant correlations among variables suggests that each is an important measurement of binaural abilities. For some listeners, binaural processing was abnormal, reflecting a binaural processing deficit not identified by monaural audiologic tests. The inclusion of a binaural test battery in the audiologic evaluation is supported given that these listeners may benefit from alternative forms of audiologic rehabilitation.
Westerhausen, René; Kompus, Kristiina; Hugdahl, Kenneth
2014-01-01
Functional hemispheric differences for speech and language processing have been traditionally studied by using verbal dichotic-listening paradigms. The commonly observed right-ear preference for the report of dichotically presented syllables is taken to reflect the left hemispheric dominance for speech processing. However, the results of recent functional imaging studies also show that both hemispheres - not only the left - are engaged by dichotic listening, suggesting a more complex relationship between behavioral laterality and functional hemispheric activation asymmetries. In order to more closely examine the hemispheric differences underlying dichotic-listening performance, we report an analysis of functional magnetic resonance imaging (fMRI) data of 104 right-handed subjects, for the first time combining an interhemispheric difference and conjunction analysis. This approach allowed for a distinction of homotopic brain regions which showed symmetrical (i.e., brain region significantly activated in both hemispheres and no activation difference between the hemispheres), relative asymmetrical (i.e., activated in both hemispheres but significantly stronger in one than the other hemisphere), and absolute asymmetrical activation patterns (i.e., activated only in one hemisphere and this activation is significantly stronger than in the other hemisphere). Symmetrical activation was found in large clusters encompassing temporal, parietal, inferior frontal, and medial superior frontal regions. Relative and absolute left-ward asymmetries were found in the posterior superior temporal gyrus, located adjacent to symmetrically activated areas, and creating a lateral-medial gradient from symmetrical towards absolute asymmetrical activation within the peri-Sylvian region. Absolute leftward asymmetry was also found in the post-central and medial superior frontal gyri, while rightward asymmetries were found in middle temporal and middle frontal gyri. 
We conclude that dichotic listening engages a bihemispheric cortical network with a symmetrical and mostly leftward-asymmetrical activation pattern. The functional (a)symmetry map obtained here might serve as a basis for future studies that, by examining the relevance of the regions identified here, clarify the relationship between behavioral laterality measures and hemispheric asymmetry. © 2013 Elsevier Inc. All rights reserved.
Strategies to combat auditory overload during vehicular command and control.
Abel, Sharon M; Ho, Geoffrey; Nakashima, Ann; Smith, Ingrid
2014-09-01
Strategies to combat auditory overload were studied. Normal-hearing males were tested in a sound isolated room in a mock-up of a military land vehicle. Two tasks were presented concurrently, in quiet and vehicle noise. For Task 1 dichotic phrases were delivered over a communications headset. Participants encoded only those beginning with a preassigned call sign (Baron or Charlie). For Task 2, they agreed or disagreed with simple equations presented either over loudspeakers, as text on the laptop monitor, in both the audio and the visual modalities, or not at all. Accuracy was significantly better by 20% on Task 2 when the equations were presented visually or audiovisually. Scores were at least 78% correct for dichotic phrases presented over the headset, with a right ear advantage of 7%, given the 5 dB speech-to-noise ratio. The left ear disadvantage was particularly apparent in noise, where the interaural difference was 12%. Relatively lower scores in the left ear, in noise, were observed for phrases beginning with Charlie. These findings underscore the benefit of delivering higher priority communications to the dominant ear, the importance of selecting speech sounds that are resilient to noise masking, and the advantage of using text in cases of degraded audio. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Study on the application of the time-compressed speech in children.
Padilha, Fernanda Yasmin Odila Maestri Miguel; Pinheiro, Maria Madalena Canina
2017-11-09
To analyze the performance of children without central auditory processing alterations on the Time-compressed Speech Test. This is a descriptive, observational, cross-sectional study. Participants were 22 children aged 7-11 years without central auditory processing disorders. The following instruments were used to assess whether the children presented central auditory processing disorders: the Scale of Auditory Behaviors, a simplified evaluation of central auditory processing, and the Dichotic Test of Digits (binaural integration stage). The Time-compressed Speech Test was then applied to the children without auditory alterations. Participants performed better on the list of monosyllabic words than on the list of disyllabic words, but the difference was not statistically significant. No influence of list presentation order or of the variables gender and ear on test performance was observed. Regarding age, a difference in performance was observed only for the list of disyllabic words. The mean score of the children on the Time-compressed Speech Test was lower than that reported for adults in the national literature. In conclusion, test performance differed only with respect to the age variable for the disyllabic list, with no effect of list presentation order or stimulus type.
[Diagnosis of psychogenic hearing disorders in childhood].
Kothe, C; Fleischer, S; Breitfuss, A; Hess, M
2003-11-01
In comparison with organic hearing loss, which is commonly reported, non-organic hearing loss is under-represented in the literature. The audiological results of 20 patients aged 6 to 17 years (mean 11.3) with psychogenic hearing disturbances were analysed prospectively. In 17 cases the disturbance was bilateral and in three cases unilateral. In no case were objective hearing test results abnormal, whereas single-ear pure-tone audiograms showed reported hearing thresholds between 30 and 100 dB. In 12 cases, single-ear speech audiograms were unremarkable. Suprathreshold tests, such as the dichotic discrimination test or speech audiometry in noise, can support a clearer diagnosis in cases of severe psychogenic auditory impairment. In half of the patients, a conflict situation at school or in the family was evident. After this conflict was addressed, hearing ability returned to normal. Six patients showed no improvement.
Divided listening in noise in a mock-up of a military command post.
Abel, Sharon M; Nakashima, Ann; Smith, Ingrid
2012-04-01
This study investigated divided listening in noise in a mock-up of a vehicular command post. The effects of background noise from the vehicle, unattended speech of coworkers on speech understanding, and a visual cue that directed attention to the message source were examined. Sixteen normal-hearing males participated in sixteen listening conditions, defined by combinations of the absence/presence of vehicle and speech babble noises, availability of a visual cue, and the number of channels (2 or 3, diotic or dichotic, and loudspeakers) over which concurrent series of call sign, color, and number phrases were presented. All wore a communications headset with integrated hearing protection. A computer keyboard was used to encode phrases beginning with an assigned call sign. Subjects achieved close to 100% correct phrase identification when phrases were presented over the headset (with or without vehicle noise) or over the loudspeakers without vehicle noise. In contrast, percentage correct phrase identification was lower by a significant 30 to 35% when phrases were presented over loudspeakers with vehicle noise. Vehicle noise combined with babble noise decreased accuracy by an additional 12% for dichotic listening. Visual cues increased phrase identification accuracy by 7% for diotic listening. Outcomes could be explained by the at-ear energy spectra of the speech and noise.
Differential neural contributions to native- and foreign-language talker identification
Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C.M.
2009-01-01
Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system’s ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance on binaural performance across languages. We found listeners’ right-ear (left hemisphere) performance to be a better predictor of overall accuracy in their native language than a foreign one. The enhanced predictive capacity of the classically language-dominant left-hemisphere on overall talker-identification accuracy demonstrates functionally integrated neural systems for speech and voice perception during natural talker identification. PMID:19968445
Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer
2014-12-01
The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the left ear/right hemisphere (le/RH) produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the right ear/left hemisphere (re/LH). The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Assessment of central auditory processing in a group of workers exposed to solvents.
Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan
2006-12-01
Despite having normal hearing thresholds and speech recognition thresholds, results for central auditory tests were abnormal in a group of workers exposed to solvents. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds. A central auditory processing disorder may underlie these difficulties. To study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents had lower results in comparison with the control group and previously reported normative data, in the majority of the tests.
Sleep quality and communication aspects in children.
de Castro Corrêa, Camila; José, Maria Renata; Andrade, Eduardo Carvalho; Feniman, Mariza Ribeiro; Fukushiro, Ana Paula; Berretin-Felix, Giédre; Maximino, Luciana Paula
2017-09-01
To correlate children's sleep-related quality of life with their oral language skills, auditory processing, and orofacial myofunctional aspects. Nineteen children (12 males and seven females, mean age 9.26 years) undergoing otorhinolaryngological and speech evaluations participated in this study. The OSA-18 questionnaire was applied, followed by verbal and nonverbal sequential memory tests, the dichotic digit test, the nonverbal dichotic test, and the Sustained Auditory Attention Ability Test, related to auditory processing. The Phonological Awareness Profile test, Rapid Automatized Naming, and Phonological Working Memory were used for assessment of phonological processing. Language was assessed with the ABFW Child Language Test at the phonological and lexical levels. Orofacial myofunctional aspects were evaluated with the MBGR Protocol. Statistical tests used were the Mann-Whitney test, Fisher's exact test, and the Spearman correlation. Relating the children's performance across all evaluations to the OSA-18 results, statistically significant correlations were found for phonological working memory for backward digits (p = 0.04), as well as for the breathing item (p = 0.03), posture of the mandible (p = 0.03), and mobility of the lips (p = 0.04). Sleep quality of life was thus correlated with skills related to phonological processing, specifically phonological working memory for backward digits, and with orofacial myofunctional aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Affective Priming with Auditory Speech Stimuli
ERIC Educational Resources Information Center
Degner, Juliane
2011-01-01
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…
1983-12-31
perception as much as binaural backward masking. Dichotic backward masking effects have also been found with more complex stimuli, such as CV syllables...the basis of these results and of binaural masking effects, it has been suggested that an auditory input produces a preperceptual auditory image that...four, in two sessions separated by at least 48 hours. In the "speech" session, subjects were first presented binaurally with the series of [ba] and [ga]
Bruder, Gerard E; Stewart, Jonathan W; McGrath, Patrick J; Deliyannides, Deborah; Quitkin, Frederic M
2004-09-01
Patients having a depressive disorder vary widely in their therapeutic responsiveness to a selective serotonin reuptake inhibitor (SSRI), but there are no clinical predictors of treatment outcome. Studies using dichotic listening, electrophysiologic and neuroimaging measures suggest that pretreatment differences among depressed patients in functional brain asymmetry are related to responsiveness to antidepressants. Two new studies replicate differences in dichotic listening asymmetry between fluoxetine responders and nonresponders, and demonstrate the importance of gender in this context. Right-handed outpatients who met DSM-IV criteria for major depression, dysthymia, or depression not otherwise specified were tested on dichotic fused-words and complex tones tests before completing 12 weeks of fluoxetine treatment. Perceptual asymmetry (PA) scores were compared for 75 patients (38 women) who responded to treatment and 39 patients (14 women) who were nonresponders. Normative data were also obtained for 101 healthy adults (61 women). Patients who responded to fluoxetine differed from nonresponders and healthy adults in favoring left- over right-hemisphere processing of dichotic stimuli, and this difference was dependent on gender and test. Heightened left-hemisphere advantage for dichotic words in responders was present among women but not men, whereas reduced right-hemisphere advantage for dichotic tones in responders was present among men but not women. Pretreatment PA was also predictive of change in depression severity following treatment. Responder vs nonresponder differences for verbal dichotic listening in women and nonverbal dichotic listening in men are discussed in terms of differences in cognitive function, hemispheric organization, and neurotransmitter function.
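Perceptual asymmetry (PA) scores like those compared in the Bruder et al. study are typically derived from left- and right-ear accuracy. The exact formula is not given in the abstract; a hedged sketch using one widely used laterality index, the normalized ear difference (R - L)/(R + L), with hypothetical trial counts:

```python
def perceptual_asymmetry(right_correct, left_correct):
    # Normalized ear difference, scaled to percent (illustrative formula).
    # Positive -> right-ear (left-hemisphere) advantage;
    # negative -> left-ear (right-hemisphere) advantage.
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

pa_words = perceptual_asymmetry(28, 20)   # hypothetical fused-words counts
pa_tones = perceptual_asymmetry(14, 22)   # hypothetical complex-tones counts
```

Under this convention, a responder profile such as the one described (heightened left-hemisphere advantage for words, reduced right-hemisphere advantage for tones) would show up as a more positive PA for the words test and a less negative PA for the tones test.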
Speech processing asymmetry revealed by dichotic listening and functional brain imaging.
Hugdahl, Kenneth; Westerhausen, René
2016-12-01
In this article, we review research in our laboratory from the last 25 to 30 years on the neuronal basis for laterality of speech perception, focusing on the upper, posterior parts of the temporal lobes and their functional and structural connections to other brain regions. We review both behavioral and brain imaging data, with a focus on dichotic listening experiments, using a variety of imaging modalities. The data have come for the most part from healthy individuals and studies of the normally functioning brain, although we also review a few selected clinical examples. We first review and discuss the structural model for the explanation of the right-ear advantage (REA) and left-hemisphere asymmetry for auditory language processing. A common theme across many studies has been our interest in the interaction between bottom-up, stimulus-driven, and top-down, instruction-driven, aspects of hemispheric asymmetry, and how perceptual factors interact with cognitive factors to shape asymmetry of auditory language information processing. In summary, our research has shown laterality for the initial processing of consonant-vowel syllables, first observed as a behavioral REA when subjects are required to report which syllable of a dichotic syllable-pair they perceive. In subsequent work we have corroborated the REA with brain imaging, and have shown that the REA is modulated through both bottom-up manipulations of stimulus properties, like sound intensity, and top-down manipulations of cognitive properties, like attention focus. Copyright © 2015 Elsevier Ltd. All rights reserved.
Putter-Katz, Hanna; Adi-Bensaid, Limor; Feldman, Irit; Hildesheimer, Minka
2008-01-01
Twenty children with central auditory processing disorder [(C)APD] completed a structured intervention program of listening skills in quiet and in noise. Their performance was compared to that of a control group of 10 children with (C)APD who received no special treatment. Pretests were conducted in quiet and in degraded listening conditions (speech noise and competing speech). The (C)APD management approach was integrative and included top-down and bottom-up strategies, focusing on environmental modifications, remediation techniques, and compensatory strategies. Training was conducted with monosyllabic and polysyllabic words, sentences, and phrases in quiet and in noise. Comparisons of pre- and post-management measures indicated an increase in speech recognition performance in background noise and competing speech for the treatment group, exhibited for both ears. Following intervention, a significant difference between ears was found: the left ear improved on both the short and long versions of the competing sentence tests, whereas the right ear performed better only on the long competing sentences. No changes were documented for the control group. These findings add to a growing body of literature suggesting that interactive auditory training can improve listening skills.
Cameron, Sharon; Glyde, Helen; Dillon, Harvey; Whitfield, Jessica; Seymour, John
2016-06-01
The dichotic digits test is one of the most widely used assessment tools for central auditory processing disorder. However, questions remain concerning the impact of cognitive factors on test results. To develop the Dichotic Digits difference Test (DDdT), an assessment tool that could differentiate children with cognitive deficits from children with genuine dichotic deficits based on differential test results. The DDdT consists of four subtests: dichotic free recall (FR), dichotic directed left ear (DLE), dichotic directed right ear (DRE), and diotic. Scores for six conditions are calculated (FR left ear [LE], FR right ear [RE], and FR total, as well as DLE, DRE, and diotic). Scores for four difference measures are also calculated: dichotic advantage, right-ear advantage (REA) FR, REA directed, and attention advantage. Experiment 1 involved development of the DDdT, including error rate analysis. Experiment 2 involved collection of normative and test-retest reliability data. Twenty adults (aged 25 yr 10 mo to 50 yr 7 mo, mean 36 yr 4 mo) took part in the development study; 62 normal-hearing, typically developing, primary-school children (aged 7 yr 1 mo to 11 yr 11 mo, mean 9 yr 4 mo) and 10 adults (aged 25 yr 0 mo to 51 yr 6 mo, mean 34 yr 10 mo) took part in the normative and test-retest reliability study. In Experiment 1, error rate analysis was conducted on the 36 digit-pair combinations of the DDdT. Normative data collected in Experiment 2 were arcsine transformed to achieve a distribution that was closer to a normal distribution and z-scores calculated. Pearson product-moment correlations were used to determine the strength of relationships between DDdT conditions. The development study revealed no significant differences in the adult population between test and retest on any DDdT condition. Error rates on 36 digit pairs ranged from 1.5% to 16.7%. 
The most and the least error-prone digits were removed before commencement of the normative data study, leaving 25 unique digit pairs. Average z-scores calculated from the arcsine-transformed data collected from the 62 children who took part in the normative data study revealed that FR dichotic processing (LE, RE, and total) was highly correlated with diotic processing (r ranging from 0.5 to 0.6; p < 0.0001). Significant improvements in performance on retest occurred for the FR LE, RE, total, and diotic conditions (p ranging from 0.05 to 0.0004), the conditions that would be expected to improve with practice if the participant's response strategies are better the second time around. The addition of a diotic control task, which shares many response demands with the usual dichotic tasks, opens up the possibility of differentiating children who perform below expectations because of poor dichotic processing skills from those who perform poorly because of impaired attention, memory, or other cognitive abilities. The high correlation between dichotic and diotic performance suggests that factors other than dichotic performance play a substantial role in a child's ability to perform a dichotic listening task. This hypothesis is investigated further in the cognitive correlation study that follows in the companion paper (DDdT Study Part 2; Cameron et al, 2016). American Academy of Audiology.
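The arcsine transformation and z-scoring used in the DDdT normative analysis can be sketched in a few lines. This is a numpy-only illustration with hypothetical scores; the plain angular (arcsine-square-root) transform is shown as one common variant, not necessarily the exact transform the authors used:

```python
import numpy as np

# Hypothetical proportion-correct scores for a small normative sample
scores = np.array([0.92, 0.85, 0.60, 0.99, 0.75, 0.88])

# Angular (arcsine-square-root) transform stabilizes variance for
# proportions near floor or ceiling before further statistics.
transformed = np.arcsin(np.sqrt(scores))

# z-scores relative to the normative sample
z = (transformed - transformed.mean()) / transformed.std(ddof=1)

# Pearson correlation between two (hypothetical) test conditions,
# analogous to the dichotic-vs-diotic comparison in the study
other = np.array([0.90, 0.80, 0.65, 0.95, 0.70, 0.85])
r = np.corrcoef(np.arcsin(np.sqrt(other)), transformed)[0, 1]
```

The transform is applied before z-scoring so that extreme scores near 100% correct do not compress the upper tail of the normative distribution.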
Evaluation of central auditory processing in children with Specific Language Impairment.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Piłka, Adam; Skarżyński, Henryk
2015-01-01
Specific Language Impairment (SLI) affects about 7-15% of school-age children, and according to currently accepted diagnostic criteria these children are presumed not to suffer from hearing impairment. The goal of this work was to assess anomalies of central auditory processes in a group of children diagnosed with specific language impairment. The material consisted of 200 children aged 7-10 years (100 children in the study group and 100 in the control group). Selected psychoacoustic tests (Frequency Pattern Test - FPT, Duration Pattern Test - DPT, Dichotic Digit Test - DDT, Time Compressed Sentence Test - CST, Gap Detection Test - GDT) were performed in all children, and the results were subjected to statistical analysis. Mean results obtained in the individual age groups of the study group were significantly lower than in the control group. Based on these findings, we may conclude that children with SLI suffer from disorders of some higher auditory functions, which substantiates a diagnosis of hearing disorders according to the ASHA (American Speech-Language-Hearing Association) guidelines. The use of sound-based rather than verbal tests eliminates the possibility that the observed perceptual problems involve only the perception of speech and therefore do not signify central hearing disorders but rather problems with the understanding of speech. The lack of literature data on the significance of the FPT, DPT, DDT, CST, and GDT tests in children with specific language impairment precludes comparison of the acquired results and makes them unique.
Brungart, Douglas S; Simpson, Brian D
2007-09-01
Similarity between the target and masking voices is known to have a strong influence on performance in monaural and binaural selective attention tasks, but little is known about the role it might play in dichotic listening tasks with a target signal and one masking voice in one ear and a second, independent masking voice in the opposite ear. This experiment examined performance in a dichotic listening task with a target talker in one ear and same-talker, same-sex, or different-sex maskers in both the target and the unattended ears. The results indicate that listeners were most susceptible to across-ear interference with a different-sex within-ear masker and least susceptible with a same-talker within-ear masker, suggesting that the amount of across-ear interference cannot be predicted from the difficulty of selectively attending to the within-ear masking voice. The results also show that the amount of across-ear interference consistently increases when the across-ear masking voice is more similar to the target speech than the within-ear masking voice is, but that no corresponding decline in across-ear interference occurs when the across-ear voice is less similar to the target than the within-ear voice. These results are consistent with an "integrated strategy" model of speech perception where the listener chooses a segregation strategy based on the characteristics of the masker present in the target ear and the amount of across-ear interference is determined by the extent to which this strategy can also effectively be used to suppress the masker in the unattended ear.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and auditory organization in scenes with competing sounds. Concurrent auditory segregation, measured with the competing sentence test (CST) and the dichotic digits test (DDT), was assessed and compared in 30 male older adults (15 normal and 15 cases with right-hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing the mean CST and DDT scores between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for the CST and p<0.0001 for the DDT). The present study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.
Dichotic listening performance predicts language comprehension.
Asbjørnsen, Arve E; Helland, Turid
2006-05-01
Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to the perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and dichotic listening performance was expected. However, since the left-ear score is believed to reflect error when assessing language laterality, covariation was expected for the right-ear scores only. In addition, directing attention to one ear's input was believed to reduce the influence of random factors and thus give a more precise estimate of left-hemisphere language capacity. A stronger correlation between language comprehension skills and dichotic listening performance when attending to the right ear was therefore expected. The analyses yielded a positive correlation between the right-ear dichotic listening score and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.
[fMRI study of the dominant hemisphere for language in patients with brain tumor].
Buklina, S B; Podoprigora, A E; Pronin, I N; Shishkina, L V; Boldyreva, G N; Bondarenko, A A; Fadeeva, L M; Kornienko, V N; Zhukov, V Iu
2013-01-01
This paper describes a study of language lateralization in patients with brain tumors, measured by preoperative functional magnetic resonance imaging (fMRI), and compares the results with tumor histology and the profile of functional asymmetry. Twenty-one patients underwent fMRI scanning; 15 had a tumor in the left hemisphere and 6 in the right. Tumors were localized mainly in the frontal, temporal, and fronto-temporal regions. The histological diagnosis was malignant Grade IV in 8 cases and Grade I-III in 13 cases. fMRI was performed on a "Signa Excite" scanner with a field strength of 1.5 T, using recitation of the months of the year in reverse order as the speech task. The fMRI results were compared with the profile of functional asymmetry obtained from the Annett handedness questionnaire and a dichotic listening test. Broca's area was localized to the left hemisphere in 7 cases; 6 of these had a Grade I-III tumor, and one patient with glioblastoma had a right-hemisphere tumor. Broca's area was localized to the right hemisphere in 3 patients (2 with a left-sided tumor and one with a right-sided tumor); one patient with a left-sided tumor had mild motor aphasia. Bilateral activation in both hemispheres was observed in 6 patients, all of whom had a Grade II-III tumor of the left hemisphere; signs of left-handedness were found in only half of these patients. Broca's area could not be localized in 4 cases, all of whom had large malignant Grade IV tumors. One patient could not complete the study protocol. Notably, the results of the fMRI scans, the Annett questionnaire, and the dichotic listening test frequently disagreed. Bilateral activation during speech tasks may reflect brain plasticity in cases of slowly growing tumors. It is therefore important to consider the full range of clinical data when studying the problem of the dominant hemisphere for language.
Benchmarks for the Dichotic Sentence Identification test in Brazilian Portuguese for ear and age.
Andrade, Adriana Neves de; Gil, Daniela; Iorio, Maria Cecilia Martinelli
2015-01-01
Dichotic listening tests should be administered in local languages and adapted for the population. The aim was to standardize the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners, comparing performance by age and ear. This prospective study included 200 normal listeners divided into four groups according to age: 13-19 years (GI), 20-29 years (GII), 30-39 years (GIII), and 40-49 years (GIV). The Dichotic Sentence Identification test was applied in four stages: training, binaural integration, and directed listening to the right and to the left. Better results for the right ear were observed in the binaural integration stages in all assessed groups. There was a negative correlation between age and the percentage of correct responses in both ears for free report and training. The worst performance in all stages of the test was observed in the 40-49-year age group. Reference values for the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners aged 13-49 years were established according to age, ear, and test stage; they should be used as benchmarks when evaluating individuals with these characteristics. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
The pupil response is sensitive to divided attention during speech processing.
Koelewijn, Thomas; Shinn-Cunningham, Barbara G; Zekveld, Adriana A; Kramer, Sophia E
2014-06-01
Dividing attention over two streams of speech strongly decreases performance compared to focusing on only one. How divided attention affects cognitive processing load, as indexed with pupillometry, during speech recognition has so far not been investigated. In 12 young adults the pupil response was recorded while they focused on either one or both of two sentences that were presented dichotically and masked by fluctuating noise across a range of signal-to-noise ratios. In line with previous studies, performance decreased when processing two target sentences instead of one. Additionally, dividing attention to process two sentences caused larger pupil dilation and later peak pupil latency than processing only one. This suggests an effect of attention on cognitive processing load (pupil dilation) during speech processing in noise. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. The present study examined whether these problems are due to impairment of concurrent auditory segregation, the most basic level of auditory scene analysis and of auditory organization in scenes with competing sounds. Methods: Concurrent auditory segregation was assessed using the competing sentence test (CST) and the dichotic digits test (DDT) and compared in 30 male older adults (15 normal and 15 with right-hemisphere CVA) in the same age range (60-75 years). For the CST, participants were presented with a target message in one ear and a competing message in the other; the task was to listen to the target sentence and repeat it back while ignoring the competing sentence. For the DDT, the stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Results: Comparing the mean CST and DDT scores between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). Conclusion: The study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems. PMID:25679009
Bruder, Gerard E; Haggerty, Agnes; Siegle, Greg J
2017-02-01
There are no commonly used clinical indicators of whether an individual will benefit from cognitive therapy (CT) for depression. A prior study found that a right ear (left hemisphere) advantage for perceiving dichotic words predicted CT response. This study replicates that finding at a different research center, in clinical trials that included clinically representative samples and community therapists. Right-handed individuals with unipolar major depressive disorder who subsequently received 12-14 weeks of CT at the University of Pittsburgh were tested on dichotic fused-words and complex-tones tests. Responders to CT showed twice the mean right ear advantage in dichotic fused-words performance of non-responders. Patients with a right ear advantage greater than the mean for healthy controls had an 81% response rate to CT, whereas those with performance lower than the mean for controls had a 46% response rate. Individuals with a right ear advantage, indicative of strong left-hemisphere language dominance, may be better at utilizing the cognitive processes and left frontotemporal cortical regions critical to the success of CT for depression. Findings at two clinical research centers suggest that verbal dichotic listening may serve as a brief, inexpensive, and easily automated test prognostic for response to CT that could be disseminated across diverse clinical settings. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.
2013-01-01
This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. 
PMID:24098273
Testing the Benefits of Neurofeedback on Selective Attention Measured Through Dichotic Listening.
Gadea, Marien; Aliño, Marta; Garijo, Evelio; Espert, Raul; Salvador, Alicia
2016-06-01
The electrophysiological changes after a single session of neurofeedback training (↑SMR/↓Theta) and its effects on executive attention during a dichotic listening test with forced-attention procedures were measured in a sample of 20 healthy women. A double-blind pre-post design, including a group receiving sham neurofeedback, allowed minimization of extraneous influences. The Moment × Group interaction was significant, indicating enhancement of the SMR band after real neurofeedback. The dichotic listening scores correlated with the amplitude of the Beta band at baseline. Performance in the forced-left attentional condition of dichotic listening improved significantly and correlated positively with the post-training enhancement of the SMR band. However, the sham neurofeedback group also improved its DL scores, so the benefits of neurofeedback training for cognitive performance could not be unambiguously established. It is concluded that the protocol showed good independence and acceptable trainability in modifying the EEG, but limited interpretability regarding cognitive outcomes.
Priming motivation through unattended speech.
Radel, Rémi; Sarrazin, Philippe; Jehu, Marie; Pelletier, Luc
2013-12-01
This study examines whether motivation can be primed through unattended speech. Study 1 used a dichotic-listening paradigm and repeated strength measures. In comparison to the baseline condition, in which the unattended channel was composed only of neutral words, the presence of words related to high (low) intensity of motivation led participants to exert more (less) strength when squeezing a hand dynamometer. In a second study, a barely audible conversation was played while participants' attention was mobilized by a demanding task. Participants who were exposed to a conversation depicting intrinsic motivation performed better and persevered longer in a subsequent word-fragment completion task than those exposed to the same conversation made unintelligible. These findings suggest that motivation can be primed without attention. © 2013 The British Psychological Society.
Dichotic Listening and Left-Right Confusion
ERIC Educational Resources Information Center
Hirnstein, Marco
2011-01-01
The present study examined the relationship between individual differences in dichotic listening (DL) and the susceptibility to left-right confusion (LRC). Thirty-six men and 59 women completed a consonant-vowel DL test, a behavioral LRC task, and an LRC self-rating questionnaire. Significant negative correlations between overall DL accuracy and…
Dichotic Listening Deficits in Children with Dyslexia
ERIC Educational Resources Information Center
Moncrieff, Deborah W.; Black, Jeffrey R.
2008-01-01
Several auditory processing deficits have been reported in children with dyslexia. In order to assess for the presence of a binaural integration type of auditory processing deficit, dichotic listening tests with digits, words and consonant-vowel (CV) pairs were administered to two groups of right-handed 11-year-old children, one group diagnosed…
Extinction of auditory stimuli in hemineglect: Space versus ear.
Spierer, Lucas; Meuli, Reto; Clarke, Stephanie
2007-02-01
Unilateral extinction of auditory stimuli, a key feature of the neglect syndrome, was investigated in 15 patients with right (11), left (3) or bilateral (1) hemispheric lesions using a verbal dichotic condition, in which each ear simultaneously received one word, and an interaural-time-difference (ITD) diotic condition, in which both ears received both words lateralised by means of ITD. Additional investigations included sound localisation, visuo-spatial attention and general cognitive status. Five patients presented a significant asymmetry in the ITD diotic test, due to a decrease in left-hemispace reporting, but no asymmetry was found in dichotic listening. Six other patients presented a significant asymmetry in the dichotic test, due to a significant decrease in left or right ear reporting, but no asymmetry in diotic listening. Ten of the above patients presented mild to severe deficits in sound localisation, and eight showed signs of visuo-spatial neglect (three with selective asymmetry in the diotic and five in the dichotic task). Four other patients presented a significant asymmetry in both the diotic and dichotic listening tasks. Three of them presented moderate deficits in localisation and all four moderate visuo-spatial neglect. Thus, extinction for the left ear and the left hemispace can doubly dissociate, suggesting distinct underlying neural processes. Furthermore, the co-occurrence with sound localisation disturbance and with visuo-spatial hemineglect speaks in favour of the involvement of multisensory attentional representations.
Vanhoucke, Elodie; Cousin, Emilie; Baciu, Monica
2013-03-01
Growing evidence suggests that age affects the interhemispheric representation of language. The dichotic listening test allows assessment of language lateralization for spoken language and generally reveals a right-ear/left-hemisphere (LH) predominance for language in young adults. According to reported results, older adults display increasing LH predominance in some studies and stable LH language lateralization in others. The aim of this study was to characterize the main pattern of results regarding the effect of normal aging on hemispheric specialization for language, using the dichotic listening test. A meta-analysis based on 11 studies was performed. The interhemispheric asymmetry does not appear to increase with age. A supplementary qualitative analysis suggests that the right-ear advantage increases between 40 and 49 years of age and becomes stable or decreases after 55 years, suggesting a right-ear/LH decline.
Mahdavi, Mohammad Ebrahim; Pourbakht, Akram; Parand, Akram; Jalaie, Shohreh
2018-03-01
Evaluation of dichotic listening to digits is a common part of many studies for diagnosing and managing auditory processing disorders in children. Previous researchers have verified the test-retest relative reliability of dichotic digits results in normal children and adults. However, detecting intervention-related changes in the ear scores after dichotic listening training requires information regarding trial-to-trial typical variation of individual ear scores, which is estimated using indices of absolute reliability. Previous studies have not addressed the absolute reliability of dichotic listening results. The aim was to compare the results of the Persian randomized dichotic digits test (PRDDT), and its relative and absolute indices of reliability, between typically achieving (TA) and learning-disabled (LD) children. A repeated-measures observational study. Fifteen LD children aged 7-12 yr were recruited from a previously performed study. The control group consisted of 15 TA schoolchildren aged 8-11 yr. The Persian randomized dichotic digits test was administered to the children under a free recall condition in two test sessions 7-12 days apart. We compared the average ear scores and ear advantage between TA and LD children. Relative indices of reliability included Pearson's correlation and intraclass correlation (ICC(2,1)) coefficients; absolute reliability was evaluated by calculating the standard error of measurement (SEM) and minimal detectable change (MDC) using the raw ear scores. The Pearson correlation coefficient indicated that in both groups of children the ear scores of the test and retest sessions were strongly and positively (greater than +0.8) correlated. The ear scores showed excellent ICC coefficients of consistency (0.78-0.82) and fair to excellent ICC coefficients of absolute agreement (0.62-0.74) in TA children, and excellent ICC coefficients of consistency and absolute agreement in LD children (0.76-0.87).
SEM and SEM% of the ear scores in TA children were 1.46 and 1.44% for the right ear and 4.68 and 5.47% for the left ear. SEM and SEM% of the ear scores in LD children varied from 4.55 and 5.88% for the right ear to 7.56 and 12.81% for the left ear. MDC and MDC% of the ear scores in TA children varied from 4.03 and 3.99% for the right ear to 12.93 and 15.13% for the left ear. MDC and MDC% of the ear scores in LD children varied from 12.57 and 16.25% for the right ear to 20.89 and 35.39% for the left ear. The LD children showed test-retest relative reliability as high as that of the TA children in the ear scores measured by the PRDDT. However, within-subject variation of the ear scores, calculated with indices of absolute reliability, was considerably higher in LD than in TA children. These results could have implications for detecting real training-related changes in the ear scores. American Academy of Audiology
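The abstract reports SEM and MDC values without stating the formulas. A hedged sketch using the conventional definitions, SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM (the smallest change exceeding measurement error at 95% confidence): whether the authors used exactly these variants is an assumption here, and the sample values below are hypothetical.

```python
import math

def sem_from_icc(sd: float, icc: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem: float) -> float:
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM (sqrt(2) reflects two measurements)."""
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical example: between-subject SD = 10 score points, ICC = 0.84
sem = sem_from_icc(10.0, 0.84)   # ≈ 4.0
print(round(mdc95(sem), 2))      # 11.09
```

An observed change smaller than the MDC cannot be distinguished from test-retest noise, which is why the abstract stresses absolute over relative reliability for tracking training effects.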
Hearing in Noise Test, HINT-Brazil, in normal-hearing children.
Novelli, Carolina Lino; Carvalho, Nádia Giulian de; Colella-Santos, Maria Francisca
Auditory processing is related to skills such as speech recognition in noise. The HINT-Brazil test allows measurement of the speech/noise ratio; however, there are no studies in the Brazilian literature that establish parameters for the child population. The aim was to analyze the performance of normal-hearing subjects aged 8-10 years in speech-recognition-in-noise tasks using the HINT test. Sixty schoolchildren were evaluated. They were between 8 and 10 years of age, of both genders, had no auditory or school complaints, and scored within normal limits on the basic audiological assessment and the Dichotic Digits Test. The HINT-Brazil test was applied with headphones, and the speech/noise ratio was investigated under conditions of frontal noise, noise to the right, and noise to the left. The software calculated the Composite Noise, which corresponds to the weighted mean of the tested conditions. There was no statistically significant difference between ears or between genders. There was a statistically significant difference between the 8- and 10-year age groups in the noise conditions and for Composite Noise: the 10-year group performed better than the 8-year group, while the 9-year group did not differ significantly from the other age groups. We propose the following mean (± standard deviation) speech/noise ratios by age: 8 years - Frontal Noise: -2.09 (±1.09); Right Noise: -7.64 (±1.72); Left Noise: -7.53 (±2.80); Composite Noise: -4.86 (±1.31); 9 years - Frontal Noise: -2.82 (±0.74); Right Noise: -8.49 (±2.24); Left Noise: -8.41 (±1.75); Composite Noise: -5.63 (±1.02); 10 years - Frontal Noise: -3.01 (±0.95); Right Noise: -9.47 (±1.43); Left Noise: -9.16 (±1.65); Composite Noise: -6.16 (±0.91). The HINT-Brazil test is simple, fast, and not difficult to perform with normal-hearing children.
The results confirm that it is an efficient test for the age range evaluated. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
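The abstract describes the Composite Noise as a weighted mean of the three noise conditions but does not state the weights. A weighting commonly attributed to the HINT composite doubles the noise-front condition (2:1:1); that weighting is an assumption here, though it reproduces the 9- and 10-year composites reported above to two decimals.

```python
def hint_composite(front: float, right: float, left: float) -> float:
    """Weighted mean of HINT speech/noise ratios (dB), with the
    noise-front condition weighted twice (assumed 2:1:1 weighting)."""
    return (2.0 * front + right + left) / 4.0

# 9-year group means from the abstract:
snr = hint_composite(-2.82, -8.49, -8.41)  # ≈ -5.635 (reported: -5.63)
```

The small discrepancy for the 8-year group (-4.84 computed vs. -4.86 reported) plausibly reflects rounding of the per-condition means before publication.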
Reliability of Laterality Effects in a Dichotic Listening Task with Words and Syllables
ERIC Educational Resources Information Center
Russell, Nancy L.; Voyer, Daniel
2004-01-01
Large and reliable laterality effects have been found using a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations…
Cerebral specialization for speech production in persons with Down syndrome.
Heath, M; Elliott, D
1999-09-01
The study of cerebral specialization in persons with Down syndrome (DS) has revealed an anomalous pattern of organization. Specifically, dichotic listening studies (e.g., Elliott & Weeks, 1993) have suggested a left ear/right hemisphere dominance for speech perception in persons with DS. In the current investigation, cerebral dominance for speech production was examined using the mouth asymmetry technique. In right-handed, nonhandicapped subjects, mouth asymmetry methodology has shown that during speech, the right side of the mouth opens sooner and to a larger degree than the left side (Graves, Goodglass, & Landis, 1982). The phenomenon of right mouth asymmetry (RMA) is believed to reflect the direct access that the musculature on the right side of the face has to the left hemisphere's speech production systems. This direct access may facilitate the transfer of innervatory patterns to the muscles on the right side of the face. In the present study, the lateralization of speech production was investigated in 10 right-handed participants with DS and 10 nonhandicapped subjects. An RMA at the initiation and end of speech production occurred for subjects in both groups. Surprisingly, the degree of asymmetry did not differ between groups, suggesting that the lateralization of speech production is similar for persons with and without DS. These results support the biological dissociation model (Elliott, Weeks, & Elliott, 1987), which holds that persons with DS display a unique dissociation between speech perception (right hemisphere) and speech production (left hemisphere). Copyright 1999 Academic Press.
[The role of the right hemisphere on recovery from Wernicke's aphasia].
Tabuchi, M; Fujii, T; Yamadori, A; Onodera, K; Endou, K
1998-04-01
We report a rare case of severe Wernicke's aphasia with rapid and surprisingly good recovery despite a large infarct involving the left posterior language area. A 68-year-old right-handed woman without a family history of left-handedness developed severe comprehension difficulty and paraphasic output following a large infarct in the left posterior temporoparietal region. However, within 6 weeks, naming, comprehension, and repetition of words became almost normal. Spontaneous speech also became almost normal, although comprehension and repetition of sentences remained slightly impaired. The lesion size remained unchanged. A dichotic listening test 4 months after onset showed clear left-ear superiority. We speculate from these observations that dormant language function in the right hemisphere might have played a role in the rapid and good recovery in this case.
Hypothalamic digoxin, hemispheric chemical dominance, and sarcoidosis.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-11-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin, dolichol, and ubiquinone. This pathway was assessed in patients with systemic sarcoidosis. All 15 patients with sarcoidosis were right-handed/left hemispheric dominant by the dichotic listening test. The pathway was also studied in normal right hemispheric, left hemispheric, and bihemispheric dominant individuals for comparison, to find out the role of hemispheric dominance in the pathogenesis of sarcoidosis. In patients with sarcoidosis there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in these patients. The immune activation induced by the neurotransmitter/digoxin-mediated increase in intracellular calcium, the ubiquinone deficiency-related mitochondrial dysfunction/free radical generation, and the increased dolichol-related altered glycoconjugate metabolism/endogenous self-glycoprotein antigen generation are crucial to the pathogenesis of sarcoidosis. The biochemical patterns obtained in sarcoidosis are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test. But all the patients with sarcoidosis were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Sarcoidosis occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
ERIC Educational Resources Information Center
Gadea, Marien; Marti-Bonmati, Luis; Arana, Estanislao; Espert, Raul; Salvador, Alicia; Casanova, Bonaventura
2009-01-01
This study conducted a follow-up of 13 early-onset, slightly disabled Relapsing-Remitting Multiple Sclerosis (RRMS) patients within a year, evaluating both corpus callosum (CC) area measurements in a midsagittal Magnetic Resonance (MR) image and Dichotic Listening (DL) testing with stop consonant-vowel (C-V) syllables. Patients showed a significant progressive…
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-06-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin, dolichol, and ubiquinone. Since endogenous digoxin can regulate neurotransmitter transport and dolichols can modulate glycoconjugate synthesis important in synaptic connectivity, the pathway was assessed in patients with dyslexia, delayed recovery from global aphasia consequent to a dominant hemispheric thrombotic infarct, and developmental delay of the speech milestone. The pathway was also studied in right hemispheric, left hemispheric, and bihemispheric dominance to find out the role of hemispheric dominance in the pathogenesis of speech disorders. The plasma/serum levels of HMG CoA reductase activity, magnesium, digoxin, dolichol, and ubiquinone, the tryptophan/tyrosine catabolic patterns, and RBC (Na+)-K+ ATPase activity were measured in the above-mentioned groups. Glycoconjugate metabolism and membrane composition were also studied. The study showed that in dyslexia, developmental delay of the speech milestone, and delayed recovery from global aphasia there was an upregulated isoprenoid pathway with increased digoxin and dolichol levels. The membrane (Na+)-K+ ATPase activity and serum magnesium and ubiquinone levels were low. The tryptophan catabolites were increased and the tyrosine catabolites, including dopamine, decreased in the serum, contributing to speech dysfunction. There was an increase in the carbohydrate residues of glycoproteins, glycosaminoglycans, and glycolipids, as well as increased activity of GAG-degrading enzymes and glycohydrolases in the serum. The cholesterol:phospholipid ratio of the RBC membrane increased and membrane glycoconjugates decreased. All of these could contribute to altered synaptic connectivity in these disorders. The patterns correlated with those obtained in right hemispheric chemical dominance. Right hemispheric chemical dominance may play a role in the genesis of these disorders.
Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test.
Dole, Marjorie; Hoen, Michel; Meunier, Fanny
2012-06-01
Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type, presenting single target words against backgrounds made of cocktail-party sounds, modulated speech-derived noise, or stationary noise. We also evaluated the effect of three listening configurations differing in the amount of spatial processing required: in a monaural condition, signal and noise were presented to the same ear; in a dichotic condition, target and concurrent sound were presented to two different ears; and in a spatialised configuration, target and competing signals were presented as if they originated from slightly different positions in the auditory scene. Our results confirm the presence of a speech-in-noise perception deficit in dyslexic adults, in particular when the competing signal is also speech and when both signals are presented to the same ear, an observation potentially relevant to phonological accounts of dyslexia. However, adult dyslexics demonstrated greater spatial release from masking than normal-reading controls when the background was speech, suggesting that they are well able to rely on denoising strategies based on spatial auditory scene analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
Watanabe, S; Tasaki, H; Hojo, K; Yoshimura, I; Sato, T; Nakaoka, T; Iwabuchi, T
1982-06-01
The authors performed neuropsychological studies, using tachistoscopic presentation and the dichotic listening test, on a subject who had undergone transection of the posterior part of the corpus callosum. For tachistoscopic recognition, the stimulus material was composed of various Japanese characters (katakana, hiragana, kanji), various faces (variations of eyebrow and mouth form), and lines of various slopes. Table 1 shows the results for the cases (the subject was the present case; subject 1 and subject 2 were past cases). The performance of the subject on the Japanese-character tasks showed greater right-visual-field superiority than that of subjects 1 and 2. For auditory recognition, the tasks used for the dichotic listening test were as follows (Tables 2, 3, 4). On different digits (three pairs), the subject showed greater right-ear superiority (right ear: 61.1, left ear: 5.9) than subjects 1 and 2.
2015-01-01
An important aspect of speech perception is the ability to group or select formants using cues in the acoustic source characteristics—for example, fundamental frequency (F0) differences between formants promote their segregation. This study explored the role of more radical differences in source characteristics. Three-formant (F1+F2+F3) synthetic speech analogues were derived from natural sentences. In Experiment 1, F1+F3 were generated by passing a harmonic glottal source (F0 = 140 Hz) through second-order resonators (H1+H3); in Experiment 2, F1+F3 were tonal (sine-wave) analogues (T1+T3). F2 could take either form (H2 or T2). In some conditions, the target formants were presented alone, either monaurally or dichotically (left ear = F1+F3; right ear = F2). In others, they were accompanied by a competitor for F2 (F1+F2C+F3; F2), which listeners must reject to optimize recognition. Competitors (H2C or T2C) were created using the time-reversed frequency and amplitude contours of F2. Dichotic presentation of F2 and F2C ensured that the impact of the competitor arose primarily through informational masking. In the absence of F2C, the effect of a source mismatch between F1+F3 and F2 was relatively modest. When F2C was present, intelligibility was lowest when F2 was tonal and F2C was harmonic, irrespective of which type matched F1+F3. This finding suggests that source type and context, rather than similarity, govern the phonetic contribution of a formant. It is proposed that wideband harmonic analogues are more effective informational maskers than narrowband tonal analogues, and so become dominant in across-frequency integration of phonetic information when placed in competition. PMID:25751040
ERIC Educational Resources Information Center
Fernandes, M. A.; Smith, M. L.; Logan, W.; Crawley, A.; McAndrews, M. P.
2006-01-01
We investigated the relationship between ear advantage scores on the Fused Dichotic Words Test (FDWT), and laterality of activation in fMRI using a verb generation paradigm in fourteen children with epilepsy. The magnitude of the laterality index (LI), based on spatial extent and magnitude of activation in classical language areas (BA 44/45,…
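The laterality index (LI) referenced in this record is conventionally computed as the difference in activation between the hemispheres divided by their sum. A minimal sketch of that arithmetic, with invented voxel counts (not data from the study):

```python
def laterality_index(left, right):
    """Laterality index: +1 = fully left-lateralized, -1 = fully right.

    `left` and `right` are activation measures (e.g., suprathreshold
    voxel counts) in homologous language regions of each hemisphere.
    """
    if left + right == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / (left + right)

# Hypothetical counts: 420 left-hemisphere vs 180 right-hemisphere voxels
li = laterality_index(420, 180)  # 0.4 -> left-dominant
```

In practice, LI thresholds (e.g., |LI| > 0.2) are often used to classify a participant as left-, right-, or bilaterally lateralized.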
Hypothalamic digoxin, hemispheric chemical dominance, and interstitial lung disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-10-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin, dolichol, and ubiquinone. The pathway was assessed in patients with idiopathic pulmonary fibrosis and in individuals of differing hemispheric dominance to determine the role of hemispheric dominance in the pathogenesis of idiopathic pulmonary fibrosis. All 15 cases of interstitial lung disease were right-handed/left hemispheric dominant by the dichotic listening test. The isoprenoidal metabolites (digoxin, dolichol, and ubiquinone), RBC membrane Na(+)-K+ ATPase activity, serum magnesium, tyrosine/tryptophan catabolic patterns, free radical metabolism, glycoconjugate metabolism, and RBC membrane composition were assessed in idiopathic pulmonary fibrosis as well as in individuals with differing hemispheric dominance. Patients with idiopathic pulmonary fibrosis showed elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites, as well as an increased cholesterol:phospholipid ratio and a reduced glycoconjugate level in the RBC membrane. Isoprenoid pathway dysfunction contributes to the pathogenesis of idiopathic pulmonary fibrosis. The biochemical patterns obtained in interstitial lung disease are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test. However, all the patients with interstitial lung disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Interstitial lung disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Evaluation of selective attention in patients with misophonia.
Silva, Fúlvia Eduarda da; Sanchez, Tanit Ganz
2018-03-21
Misophonia is characterized by aversion to highly selective sounds, which evoke a strong emotional reaction. It has been inferred that misophonia, like tinnitus, is associated with hyperconnectivity between the auditory and limbic systems. Individuals with bothersome tinnitus may have impaired selective attention, but this has not yet been demonstrated for misophonia. The aim was to characterize a sample of misophonic subjects and compare it with two control groups regarding selective attention: one of individuals with tinnitus (without misophonia) and the other of asymptomatic individuals (without misophonia and without tinnitus). We evaluated 40 normal-hearing participants: 10 with misophonia, 10 with tinnitus (without misophonia), and 20 without tinnitus and without misophonia. Selective attention was evaluated with the dichotic sentence identification test in three conditions: first, the Brazilian Portuguese version of the test alone; then the same test combined with each of two competing sounds, a chewing sound (representing a sound that commonly triggers misophonia) and white noise (representing a common type of tinnitus that causes discomfort to patients). In the dichotic sentence identification test with the chewing sound, the average of correct responses differed between the misophonia group and the asymptomatic group (p=0.027) and between the misophonia and tinnitus (without misophonia) groups (p=0.002), being lower in the misophonia group in both cases. Neither the dichotic sentence identification test alone nor the test with white noise showed differences in the average of correct responses among the three groups (p≥0.452). The misophonia participants presented a lower percentage of correct responses in the dichotic sentence identification test with the chewing sound, suggesting that individuals with misophonia may have impaired selective attention when exposed to sounds that trigger the condition.
Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
The effect of stimulus intensity on the right ear advantage in dichotic listening.
Hugdahl, Kenneth; Westerhausen, René; Alho, Kimmo; Medvedev, Svyatoslav; Hämäläinen, Heikki
2008-01-24
The dichotic listening test is a non-invasive behavioural technique for studying brain lateralization, and it has been shown that its results can be systematically modulated by varying the stimulation properties (bottom-up effects) or attentional instructions (top-down effects) of the testing procedure. The goal of the present study was to further investigate bottom-up modulation by examining the effect of differences in right- or left-ear stimulus intensity on the ear advantage. For this purpose, the interaural intensity difference was gradually varied in steps of 3 dB from -21 dB in favour of the left ear to +21 dB in favour of the right ear, including a no-difference baseline condition. Thirty-three right-handed adult participants with normal hearing acuity were tested. The dichotic listening paradigm was based on consonant-vowel stimulus pairs; only pairs with the same voicing (voiced or non-voiced) of the consonant sound were used. The results showed: (a) a significant right ear advantage (REA) for interaural intensity differences from +21 to -3 dB, (b) no ear advantage (NEA) at the -6 dB difference, and (c) a significant left ear advantage (LEA) for differences from -9 to -21 dB. It is concluded that the right ear advantage in dichotic listening to CV syllables withstands an interaural intensity difference of -9 dB before yielding to a significant left ear advantage. This finding could have implications for theories of auditory laterality and hemispheric asymmetry for phonological processing.
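The ear-advantage classification used in this kind of study reduces to comparing correct-report counts for the two ears at each interaural intensity difference. In practice a significance test is applied; this sketch, with invented counts, only illustrates the bookkeeping:

```python
def ear_advantage(right_correct, left_correct):
    """Classify the ear advantage from correct-report counts.

    Returns "REA" (right ear advantage), "LEA" (left ear advantage),
    or "NEA" (no ear advantage). Real studies test the difference for
    statistical significance; a raw comparison is used here only for
    illustration.
    """
    if right_correct > left_correct:
        return "REA"
    if left_correct > right_correct:
        return "LEA"
    return "NEA"

# Hypothetical sweep: interaural intensity difference (dB, + favours the
# right ear) mapped to (right, left) correct-report counts.
sweep = {+21: (30, 5), 0: (22, 14), -6: (18, 18), -21: (6, 29)}
pattern = {iid: ear_advantage(r, l) for iid, (r, l) in sweep.items()}
# pattern[+21] == "REA"; pattern[-6] == "NEA"; pattern[-21] == "LEA"
```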
Andrade, Adriana Neves de; Silva, Mariane Richetto da; Iorio, Maria Cecilia Martinelli; Gil, Daniela
2015-01-01
To compare performance on the Brazilian Portuguese version of the Dichotic Sentence Identification (DSI) test between the right and left ears and across educational levels in normal-hearing individuals. This investigation assessed 200 normal-hearing, right-handed individuals, divided into seven groups according to years of schooling. All participants underwent basic audiologic evaluation and behavioral auditory processing assessment (sound localization test, memory test for verbal and nonverbal sounds in sequence, dichotic digits test, and DSI). The individuals evaluated had an average of 13.1 years of schooling and results within normal limits on the tests selected for the audiologic and auditory processing assessments. On the DSI test, educational level was related to the percentage of correct answers at each stage of the test and in each ear: there was a statistically significant positive correlation between educational level and the percentage of correct answers for all stages of the DSI test in both ears. There was also an effect of educational level on the results obtained in each condition of the DSI test, with the exception of directed attention to the right ear. Comparing performance across the variables studied in the DSI test, we conclude that there is a right-ear advantage and that the higher the educational level, the better the performance.
Cognitive Conflict and Inhibition in Primed Dichotic Listening
ERIC Educational Resources Information Center
Saetrevik, Bjorn; Specht, Karsten
2009-01-01
In previous behavioral studies, a prime syllable was presented just prior to a dichotic syllable pair, with instructions to ignore the prime and report one syllable from the dichotic pair. When the prime matched one of the syllables in the dichotic pair, response selection was biased towards selecting the unprimed target. The suggested mechanism…
Dichotic listening during forced-attention in a patient with left hemispherectomy.
Wester, K; Hugdahl, K; Asbjørnsen, A
1991-02-01
A young left-handed girl with an extensive posttraumatic lesion in the left hemisphere was tested with dichotic listening (DL) under three different attentional instructions. The major aim of the study was to evaluate structural versus attentional explanations of dichotic listening performance. As both her expressive and receptive language functions were intact after the lesion, it was assumed that the right hemisphere was the language-dominant one. In the free-report condition, she was free to attend to and report from both ear inputs. In the forced-right condition, she was instructed to attend to and report only from the right-ear input; in the forced-left condition, only from the left-ear input. Her performance was compared with data from a previously collected sample of normal left-handed females. Analysis showed that the patient, in contrast to the normal sample, revealed a complete right-ear extinction phenomenon, irrespective of attentional instruction. Furthermore, she showed superior correct reports from the left ear compared with the normal sample, also irrespective of attentional instruction. It is concluded that these results support a structural, rather than attentional, explanation for the right-ear advantage (REA) typically observed in dichotic listening. The utility of validating the dichotic listening technique on patients with brain lesions is discussed.
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132
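The binaural unmasking measured in studies like this one is conventionally quantified as the diotic detection threshold minus the dichotic one. A sketch of that arithmetic, with invented threshold values (not the study's data):

```python
def binaural_unmasking(diotic_threshold_db, dichotic_threshold_db):
    """Binaural masking-level difference (BMLD) in dB.

    A positive value means the dichotic configuration (e.g., signal
    inverted in one ear, N0S-pi) is detected at a lower signal level
    than the diotic one (N0S0), i.e., the listener benefits from
    binaural cues.
    """
    return diotic_threshold_db - dichotic_threshold_db

# Hypothetical thresholds: -12 dB diotic, -21 dB dichotic -> 9 dB unmasking
bmld = binaural_unmasking(-12.0, -21.0)  # 9.0 dB
```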
Speech processing: from peripheral to hemispheric asymmetry of the auditory system.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
2012-01-01
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to offer nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging, which mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Multi-time resolution analysis of speech: evidence from psychophysics
Chait, Maria; Greenberg, Steven; Arai, Takayuki; Simon, Jonathan Z.; Poeppel, David
2015-01-01
How speech signals are analyzed and represented remains a foundational challenge both for cognitive science and neuroscience. A growing body of research, employing various behavioral and neurobiological experimental techniques, now points to the perceptual relevance of both phoneme-sized (10–40 Hz modulation frequency) and syllable-sized (2–10 Hz modulation frequency) units in speech processing. However, it is not clear how information associated with such different time scales interacts in a manner relevant for speech perception. We report behavioral experiments on speech intelligibility employing a stimulus that allows us to investigate how distinct temporal modulations in speech are treated separately and whether they are combined. We created sentences in which the slow (~4 Hz; S_low) and rapid (~33 Hz; S_high) modulations—corresponding to ~250 and ~30 ms, the average duration of syllables and certain phonetic properties, respectively—were selectively extracted. Although S_low and S_high have low intelligibility when presented separately, dichotic presentation of S_high with S_low results in supra-additive performance, suggesting a synergistic relationship between low- and high-modulation frequencies. A second experiment desynchronized presentation of the S_low and S_high signals. Desynchronizing signals relative to one another had no impact on intelligibility when delays were less than ~45 ms. Longer delays resulted in a steep intelligibility decline, providing further evidence of integration or binding of information within restricted temporal windows. Our data suggest that human speech perception uses multi-time resolution processing. Signals are concurrently analyzed on at least two separate time scales, the intermediate representations of these analyses are integrated, and the resulting bound percept has significant consequences for speech intelligibility—a view compatible with recent insights from neuroscience implicating multi-timescale auditory processing.
PMID:26136650
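The separation of an amplitude envelope into slow (~4 Hz) and rapid (~33 Hz) modulation components can be illustrated with a crude FFT-mask decomposition. This is not the authors' extraction method, only a toy version of the band-splitting idea, applied to a synthetic envelope:

```python
import numpy as np

def split_modulations(envelope, fs, cutoff_hz=10.0):
    """Split an amplitude envelope into slow and fast modulation parts.

    Zeroes spectral components above/below `cutoff_hz` and inverts the
    FFT; because the two masks partition the spectrum, the two parts
    sum back to the original envelope.
    """
    spec = np.fft.rfft(envelope)
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    slow = np.fft.irfft(np.where(freqs <= cutoff_hz, spec, 0), n=len(envelope))
    fast = np.fft.irfft(np.where(freqs > cutoff_hz, spec, 0), n=len(envelope))
    return slow, fast

# 1 s synthetic envelope: 4 Hz "syllabic" + 33 Hz "phonemic" modulation
fs = 1000
t = np.arange(fs) / fs
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t) + 0.2 * np.sin(2 * np.pi * 33 * t)
slow, fast = split_modulations(env, fs)
# slow recovers the 4 Hz component (plus DC); fast recovers the 33 Hz one
```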
Díaz-Leines, Sergio; Peñaloza-López, Yolanda R; Serrano-Miranda, Tirzo A; Flores-Ávalos, Blanca; Vidal-Ixta, Martha T; Jiménez-Herrera, Blanca
2013-01-01
Hyperhomocysteinemia as a risk factor for hearing impairment, neuronal damage, and cognitive impairment in elderly patients is controversial, and the evidence is limited by the small number of studies. The aim of this work was to determine whether elderly patients with hyperhomocysteinemia have an increased risk of developing abnormalities of central auditory processing compared with a group of patients with appropriate homocysteine levels, and to define the behaviour of psychoacoustic tests and long-latency potentials (P300) in these patients. This was a cross-sectional, comparative, and analytical study. We formed a group of patients with hyperhomocysteinemia and a control group with normal levels of homocysteine. All patients underwent audiometry, tympanometry, a selection of psychoacoustic tests (dichotic digits, low-pass filtered words, speech in noise, and masking level difference), auditory brainstem evoked potentials, and P300. Patients with hyperhomocysteinemia had higher values on the masking level difference test than the control group (P=.049) and longer P300 latency (P<.001). Hyperhomocysteinemia is a factor that alters central auditory functions. The alterations in psychoacoustic tests and disturbances in electrophysiological tests suggest that the central portion of the auditory pathway is affected in patients with hyperhomocysteinemia. Copyright © 2012 Elsevier España, S.L. All rights reserved.
Aghamollaei, Maryam; Jafari, Zahra; Tahaei, Aliakbar; Toufan, Reyhane; Keyhani, Mohammadreza; Rahimzade, Shadi; Esmaeili, Mahdieh
2013-09-01
The Dichotic Verbal Memory Test (DVMT) is useful in detecting verbal memory deficits and differences in memory function between the brain hemispheres. The purpose of this study was to prepare a Persian version of the DVMT, to obtain its results in 18- to 25-yr-old Iranian individuals, and to examine ear, gender, and serial-position effects. The Persian version of the DVMT consisted of 18 10-word lists. After the 18 lists were prepared, content validity was assessed by a panel of eight experts and the equivalency of the lists was evaluated. The words were then recorded on CD in a dichotic mode such that 10 words were presented to one ear while the same words, reversed, were simultaneously presented to the other ear. The test was then administered to a sample of young, normal Iranian individuals. Thirty normal individuals (no history of neurological, otological, or psychological disease) aged 18 to 25 yr were examined to evaluate the equivalency of the lists, and 110 subjects in the same age range participated in the final stage of the study to obtain normative data for the developed test. There was no significant difference between the mean scores of the 18 developed lists (p > 0.05). The mean content validity index (CVI) score was .96. A significant difference was found between the mean scores of the two ears (p < 0.05) and between female and male participants (p < 0.05). The Persian version of the DVMT has good content validity and can be used for verbal memory assessment in young Iranian adults. American Academy of Audiology.
Auditory memory function in expert chess players.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate auditory memory function in expert chess players using the Persian version of the dichotic auditory-verbal memory test. The test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean scores on the dichotic auditory-verbal memory test differed significantly between the two groups (p ≤ 0.001). The difference between the ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance from playing chess over a long time.
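The independent-samples comparison reported in studies like the one above (run in SPSS by the authors) can be sketched in a few lines of pure Python using Welch's t statistic; the scores below are invented for illustration:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples.

    Uses sample variances (n-1 denominator) and does not assume equal
    variances. A significance decision would additionally require the
    Welch-Satterthwaite degrees of freedom and a t distribution.
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical memory-test scores for players vs. non-players
players = [78, 82, 75, 80, 77]
controls = [70, 68, 73, 66, 71]
t_stat = welch_t(players, controls)  # positive -> players scored higher
```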
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
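The Pearson product-moment correlation used above to probe auditory-visual relationships is a short formula; a self-contained sketch with toy data (not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient.

    r = cov(x, y) / (sd(x) * sd(y)); ranges from -1 (perfect negative
    linear relationship) to +1 (perfect positive).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy scores: a perfectly linear relationship gives r ~ 1.0
r = pearson_r([1, 2, 3, 4], [10, 20, 30, 40])
```

A near-zero r across modalities, as found here, is what motivates the authors' suggestion of independent auditory and visual processing mechanisms.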
Swanson, H L
1987-01-01
Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in sound localization. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of the anatomo-clinical correlates of extinction using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, one word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018.
Published by Elsevier Masson SAS.
Dichotic listening in patients with splenial and nonsplenial callosal lesions.
Pollmann, Stefan; Maertens, Marianne; von Cramon, D Yves; Lepsien, Joeran; Hugdahl, Kenneth
2002-01-01
The authors found splenial lesions to be associated with left-ear suppression in dichotic listening of consonant-vowel syllables. This was found in both a rapid-presentation dichotic monitoring task and a standard dichotic listening task, ruling out attentional limitations in the processing of high stimulus loads as a confounding factor. Moreover, directed attention to the left ear did not improve left-ear target detection in the patients, independent of callosal lesion location. The authors' data may indicate that auditory callosal fibers pass through the splenium more posteriorly than previously thought. However, further studies should investigate whether callosal fibers between primary and secondary auditory cortices, or between higher-level multimodal cortices, are vital for the detection of left-ear targets in dichotic listening.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-11-01
The isoprenoid pathway, including endogenous digoxin, was assessed in systemic lupus erythematosus (SLE). All the patients with SLE were right-handed/left hemispheric dominant by the dichotic listening test. For comparison, the pathway was also studied in patients with right hemispheric and with left hemispheric dominance. The isoprenoid pathway was upregulated, with increased digoxin synthesis, in patients with SLE and in those with right hemispheric dominance. In this group of patients (i) tryptophan catabolites were increased and tyrosine catabolites reduced, (ii) dolichol and glycoconjugate levels were elevated, (iii) lysosomal stability was reduced, (iv) ubiquinone levels were low and free radical levels increased, and (v) membrane cholesterol:phospholipid ratios were increased and membrane glycoconjugates reduced. In patients with left hemispheric dominance, the reverse patterns were obtained. The biochemical patterns obtained in SLE are similar to those obtained in left-handed/right hemispheric chemically dominant individuals, yet all the patients with SLE were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. SLE occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function. The role of the isoprenoid pathway in the pathogenesis of SLE and its relation to hemispheric dominance is discussed.
Weihing, Jeffrey; Guenette, Linda; Chermak, Gail; Brown, Mallory; Ceruti, Julianne; Fitzgerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Brenneman, Lauren; Musiek, Frank
2015-01-01
Although central auditory processing disorder (CAPD) test battery performance has been examined in adults with neurologic lesions of the central auditory nervous system (CANS), similar data on children being referred for CAPD evaluations are sparse. This study characterizes CAPD test battery performance in children using tests commonly administered to diagnose the disorder. Specifically, this study describes failure rates for various test combinations, relationships between CAPD tests used in the battery, and the influence of cognitive function on CAPD test performance and CAPD diagnosis. A comparison is also made between the performance of children with CAPD and data from patients with neurologic lesions of the CANS. A retrospective study. Fifty-six pediatric patients were referred for CAPD testing. Participants were administered four CAPD tests, including frequency patterns (FP), low-pass filtered speech (LPFS), dichotic digits (DD), and competing sentences (CS). In addition, they were given the Wechsler Intelligence Scale for Children (WISC). Descriptive analyses examined the failure rates of various test combinations, as well as how often children with CAPD failed certain combinations when compared with adults with CANS lesions. A principal components analysis was performed to examine interrelationships between tests. Correlations and regressions were conducted to determine the relationship between CAPD test performance and the WISC. Results showed that the FP and LPFS tests were most commonly failed by children with CAPD. Two-test combinations that included one or both of these two tests and excluded DD tended to be failed more often. Including the DD and CS test in a battery benefited specificity. Tests thought to measure interhemispheric transfer tended to be correlated. Compared with adult patients with neurologic lesions, children with CAPD tended to fail LPFS more frequently and DD less frequently. Both groups failed FP with relatively equal frequency. 
The two-test combination that showed the highest failure rate for children with CAPD was LPFS-FP. Comparison with adults with CANS lesions, however, suggests that the mechanisms underlying LPFS performance in children need to be better understood. The two-test combination that showed the next highest failure rate among children with CAPD and did not include LPFS was CS-FP. If it is desirable to use a dichotic measure that has a lower linguistic load than CS, then DD can be substituted for CS, despite the slightly lower failure rate of the DD-FP battery. American Academy of Audiology.
Hypothalamic digoxin, hemispheric chemical dominance, and inflammatory bowel disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-09-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin, dolichol, and ubiquinone. It was considered pertinent to assess the pathway in inflammatory bowel disease (ulcerative colitis and regional ileitis). Since endogenous digoxin can regulate neurotransmitter transport, the pathway and the related cascade were also assessed in individuals with differing hemispheric dominance to find out the role of hemispheric dominance in its pathogenesis. All the patients with inflammatory bowel disease were right-handed/left hemispheric dominant by the dichotic listening test. The following parameters were measured in patients with inflammatory bowel disease and in individuals with differing hemispheric dominance: (1) plasma HMG CoA reductase, digoxin, dolichol, ubiquinone, and magnesium levels; (2) tryptophan/tyrosine catabolic patterns; (3) free-radical metabolism; (4) glycoconjugate metabolism; and (5) membrane composition and RBC membrane Na+-K+ ATPase activity. Statistical analysis was done by ANOVA. In patients with inflammatory bowel disease there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in cholesterol:phospholipid ratio and a reduction in glycoconjugate level of RBC membrane in these groups of patients. Inflammatory bowel disease is associated with an upregulated isoprenoid pathway and elevated digoxin secretion from the hypothalamus. This can contribute to immune activation, defective glycoprotein bowel antigen presentation, autoimmunity, and a schizophreniform psychosis important in its pathogenesis. The biochemical patterns obtained in inflammatory bowel disease are similar to those obtained in left-handed/right hemispheric dominant individuals by the dichotic listening test.
But all the patients with inflammatory bowel disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Inflammatory bowel disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Bruder, Gerard E.; Stewart, Jonathan W.; Hellerstein, David; Alvarenga, Jorge E.; Alschuler, Daniel; McGrath, Patrick J.
2012-01-01
Prior studies have found abnormalities of functional brain asymmetry in patients having a major depressive disorder (MDD). This study aimed to replicate findings of reduced right hemisphere advantage for perceiving dichotic complex tones in depressed patients, and to determine whether patients having “pure” dysthymia show the same abnormality of perceptual asymmetry as MDD. It also examined gender differences in lateralization, and the extent to which abnormalities of perceptual asymmetry in depressed patients are dependent on gender. Unmedicated patients having either a MDD (n=96) or “pure” dysthymic disorder (n=42) and healthy controls (n=114) were tested on dichotic fused-words and complex-tone tests. Patient and control groups differed in right hemisphere advantage for complex tones, but not left hemisphere advantage for words. Reduced right hemisphere advantage for tones was equally present in MDD and dysthymia, but was more evident among depressed men than depressed women. Also, healthy men had greater hemispheric asymmetry than healthy women for both words and tones, whereas this gender difference was not seen for depressed patients. Dysthymia and MDD share a common abnormality of hemispheric asymmetry for dichotic listening. PMID:22397909
Cued Dichotic Listening with Right-Handed, Left-Handed, Bilingual and Learning-Disabled Children.
ERIC Educational Resources Information Center
Obrzut, John E.; And Others
This study used cued dichotic listening to investigate differences in language lateralization among right-handed (control), left-handed, bilingual, and learning-disabled children. Subjects (N=60) ranging in age from 7-13 years were administered a consonant-vowel-consonant dichotic paradigm with three experimental conditions (free recall, directed…
Sex Differences in Dichotic Listening
ERIC Educational Resources Information Center
Voyer, Daniel
2011-01-01
The present study quantified the magnitude of sex differences in perceptual asymmetries as measured with dichotic listening. This was achieved by means of a meta-analysis of the literature dating back to the initial use of dichotic listening as a measure of laterality. The meta-analysis included 249 effect sizes pertaining to sex differences and…
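Pooling effect sizes as in the meta-analysis described above is typically done with inverse-variance weighting. A minimal sketch of the fixed-effect form; the effect sizes and variances below are invented for illustration and are not taken from the study:

```python
# Fixed-effect meta-analytic pooling: each effect size d_i is weighted by
# w_i = 1 / v_i (inverse of its sampling variance).
def weighted_mean_effect(effects):
    """effects: list of (d, v) pairs; returns (pooled d, standard error)."""
    weights = [1.0 / v for _, v in effects]
    pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5  # SE of the pooled estimate
    return pooled, se

# Hypothetical (effect size, variance) pairs for a laterality sex difference
studies = [(0.20, 0.04), (0.10, 0.02), (0.30, 0.08)]
mean_d, se = weighted_mean_effect(studies)
```

A random-effects model would add a between-study variance term to each weight; the fixed-effect version above is the simplest case.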
Auditory memory function in expert chess players
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Background: Chess is a game that involves many aspects of high-level cognition, such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 non-chess players who were matched on various characteristics; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean score of the dichotic auditory-verbal memory test in the two groups, expert chess players and non-chess players, revealed a significant difference (p≤ 0.001). The difference between the ears scores for expert chess players (p= 0.023) and non-chess players (p= 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthened cognitive performance due to playing chess for a long time. PMID:26793666
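The group comparison reported above is an independent-samples t-test (the study ran it in SPSS). A pure-Python sketch of the equal-variance form, using hypothetical memory-test scores rather than the study's data:

```python
import math

# Independent-samples t-test, pooled-variance (equal-variance) form.
def independent_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # pooled variance from the two groups' sums of squared deviations
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

chess = [18, 20, 19, 21, 22]     # hypothetical scores, not the study's data
controls = [15, 16, 17, 14, 16]
t, df = independent_t(chess, controls)
```

The t statistic would then be compared against the t distribution with `df` degrees of freedom; in practice one would use a statistics package (e.g., scipy.stats.ttest_ind) rather than hand-rolling this.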
Age- and Gender-Specific Normative Information from Children Assessed with a Dichotic Words Test.
Moncrieff, Deborah
2015-01-01
The most widely used assessment in the clinical auditory processing disorder (APD) battery is the dichotic listening test. New tests with normative information are helpful for assessment and cross-check of results for reliable diagnosis. The Dichotic Words Test was developed for use in the clinical test battery for diagnosis of APD. The test stimuli were common single syllable words matched for average root-mean-square amplitude and each pair was temporally aligned at both onset and offset. The study was conducted to collect performance results from typically developing children to create normative information for the test. The study follows a cross-sectional design. Typically developing children (n = 416) between the ages of 5 and 12 yr were recruited from schools in the community. There were 217 males and 199 females in the study sample. Only children who passed a hearing screening were eligible to participate. Scores for each ear were recorded during administration of the first free recall version of the test. Ear advantages based on results recorded for left and right ears were used to measure prevalence of right, left, and no ear advantages. Results for each listener's dominant and non-dominant ears and the absolute difference between them were put into the data analysis. Results were analyzed for normality and because no results were normally distributed, all further analyses were done with nonparametric statistical tests. Normative data for dominant and non-dominant ear scores and ear advantages were determined at the 95% confidence interval through bootstrapping methods with 1,000 samples. Children were divided into four age groups based on results in their dominant ears. Females generally performed better than males and the prevalence of a right-ear advantage was ∼60% across all children tested. Normative lower-bound cut-off scores were established for males and females within each age group for dominant and non-dominant ear scores. 
Normative upper-bound cut-off scores were established for males and females within each age group for ear advantage scores. Normative information specific to age group and gender will be useful in clinical assessment for APD. Prevalence of left-ear advantage results in the sample may have been partly due to uncontrolled influences of voice-onset time in arranging the dichotic pairs. American Academy of Audiology.
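The normative cut-offs above were derived with percentile bootstrapping (1,000 resamples). A stdlib-only sketch of the percentile-bootstrap idea applied to a mean; the scores are invented, not the study's normative data:

```python
import random

# Percentile bootstrap: resample the data with replacement many times,
# compute the statistic each time, and read off percentiles of the
# resulting distribution.
def bootstrap_ci(scores, n_boot=1000, lo=2.5, hi=97.5, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(scores) for _ in scores]
        means.append(sum(resample) / len(resample))
    means.sort()
    # approximate percentile bounds of the bootstrap distribution
    return means[int(n_boot * lo / 100)], means[int(n_boot * hi / 100) - 1]

scores = [72, 80, 85, 78, 90, 88, 76, 83, 79, 86]  # hypothetical ear scores
low, high = bootstrap_ci(scores)
```

The study's lower-bound cut-offs correspond to the low tail of such a bootstrap distribution computed per age group and gender; this sketch only shows the resampling mechanics.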
Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S
In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial location using the Subirana test. Both the dichotic test and the Subirana test for body laterality and spatial location were applied in a group of 40 children with dyslexia and in a control group made up of 40 children who were paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. It was seen that 42.5% of the children in the group of dyslexics had mixed auditory laterality, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children and in 2.5% in the control group (p ≤ 0.05), and there was 72.5% spatial disorientation in the group of dyslexics, whereas only 15% (p ≤ 0.05) was found in the control group. The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia. The results of this test exceed those obtained for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.
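The group comparisons above used Pearson's chi-squared test on 2x2 tables. As a sketch, the mixed-auditory-laterality counts implied by the reported percentages (42.5% of 40 = 17 dyslexic children; 7.5% of 40 = 3 controls) can be tested with the closed-form 2x2 statistic:

```python
# Pearson chi-squared for a 2x2 contingency table with rows (a, b), (c, d).
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# mixed laterality: 17 of 40 dyslexic children vs. 3 of 40 controls
chi2 = chi2_2x2(17, 23, 3, 37)  # df = 1; chi2 > 3.84 implies p < 0.05
```

Note this uses the uncorrected statistic; SPSS may also report Yates' continuity correction for 2x2 tables, which would give a somewhat smaller value.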
Evaluation of 2 cognitive abilities tests in a dual-task environment
NASA Technical Reports Server (NTRS)
Vidulich, M. A.; Tsang, P. S.
1986-01-01
Most real world operators are required to perform multiple tasks simultaneously. In some cases, such as flying a high performance aircraft or troubleshooting a failing nuclear power plant, the operator's ability to time-share or process in parallel can be driven to extremes. This has created interest in selection tests of cognitive abilities. Two tests that have been suggested are the Dichotic Listening Task and the Cognitive Failures Questionnaire. Correlations between these test results and time-sharing performance were obtained and the validity of these tests was examined. The primary task was a tracking task with dynamically varying bandwidth. This was performed either alone or concurrently with either another tracking task or a spatial transformation task. The results were: (1) An unexpected negative correlation was detected between the two tests; (2) The lack of correlation between either test and task performance made the predictive utility of the test scores appear questionable; (3) Pilots made more errors on the Dichotic Listening Task than college students.
NASA Astrophysics Data System (ADS)
Lauter, Judith
2002-05-01
As Research Director of CID, Ira emphasized the importance of combining information from biology with rigorous studies of behavior, such as psychophysics, to better understand how the brain and body accomplish the goals of everyday life. In line with this philosophy, my doctoral dissertation sought to explain brain functional asymmetries (studied with dichotic listening) in terms of the physical dimensions of a library of test sounds designed to represent a speech-music continuum. Results highlighted individual differences plus similarities in terms of patterns of relative ear advantages, suggesting an organizational basis for brain asymmetries depending on physical dimensions of stimulus and gesture with analogs in auditory, visual, somatosensory, and motor systems. My subsequent work has employed a number of noninvasive methods (OAEs, EPs, qEEG, PET, MRI) to explore the neurobiological bases of individual differences in general and functional asymmetries in particular. This research has led to (1) the AXS test battery for assessing the neurobiology of human sensory-motor function; (2) the handshaking model of brain function, describing dynamic relations along all three body/brain axes; (3) the four-domain EPIC model of functional asymmetries; and (4) the trimodal brain, a new model of individual differences based on psychoimmunoneuroendocrinology.
Hypothalamic digoxin, hemispheric chemical dominance, and mesenteric artery occlusion.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-12-01
The role of the isoprenoid pathway in vascular thrombosis, especially mesenteric artery occlusion, and its relation to hemispheric dominance was assessed in this study. The following parameters were measured in patients with mesenteric artery occlusion and individuals with right hemispheric, left hemispheric, and bihemispheric dominance: (1) plasma HMG CoA reductase, digoxin, dolichol, ubiquinone, and magnesium levels; (2) tryptophan/tyrosine catabolic patterns; (3) free radical metabolism; (4) glycoconjugate metabolism; and (5) membrane composition. In patients with mesenteric artery occlusion there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. The RBC membrane Na(+)-K+ ATPase activity and serum magnesium were decreased. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites in the serum. There was an increase in cholesterol:phospholipid ratio and a reduction in glycoconjugate level of RBC membrane in these patients. The biochemical patterns obtained in mesenteric artery occlusion are similar to those obtained in left-handed/right hemispheric dominant individuals by the dichotic listening test. But all the patients with mesenteric artery occlusion were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Mesenteric artery occlusion occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function. Hemispheric chemical dominance may thus control the risk for developing vascular thrombosis in individuals.
Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J
2015-06-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Nakamura, Miyoko; Kolinsky, Régine
2014-12-01
We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.
Formant-frequency variation and its effects on across-formant grouping in speech perception.
Roberts, Brian; Summers, Robert J; Bailey, Peter J
2013-01-01
How speech is separated perceptually from other speech remains poorly understood. In a series of experiments, perceptual organisation was probed by presenting three-formant (F1+F2+F3) analogues of target sentences dichotically, together with a competitor for F2 (F2C), or for F2+F3, which listeners must reject to optimise recognition. To control for energetic masking, the competitor was always presented in the opposite ear to the corresponding target formant(s). Sine-wave speech was used initially, and different versions of F2C were derived from F2 using separate manipulations of its amplitude and frequency contours. F2Cs with time-varying frequency contours were highly effective competitors, whatever their amplitude characteristics, whereas constant-frequency F2Cs were ineffective. Subsequent studies used synthetic-formant speech to explore the effects of manipulating the rate and depth of formant-frequency change in the competitor. Competitor efficacy was not tuned to the rate of formant-frequency variation in the target sentences; rather, the reduction in intelligibility increased with competitor rate relative to the rate for the target sentences. Therefore, differences in speech rate may not be a useful cue for separating the speech of concurrent talkers. Effects of competitors whose depth of formant-frequency variation was scaled by a range of factors were explored using competitors derived either by inverting the frequency contour of F2 about its geometric mean (plausibly speech-like pattern) or by using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Competitor efficacy depended on the overall depth of frequency variation, not depth relative to that for the other formants. Furthermore, the triangle-wave competitors were as effective as their more speech-like counterparts. 
Overall, the results suggest that formant-frequency variation is critical for the across-frequency grouping of formants but that this grouping does not depend on speech-specific constraints.
Bellis, Teri James; Billiet, Cassie; Ross, Jody
2011-09-01
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al., 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous, and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]).
Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA, HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD group performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005 ; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.
Dichotic listening in patients with situs inversus: brain asymmetry and situs asymmetry.
Tanaka, S; Kanzaki, R; Yoshibayashi, M; Kamiya, T; Sugishita, M
1999-06-01
In order to investigate the relation between situs asymmetry and functional asymmetry of the human brain, a consonant-vowel syllable dichotic listening test known as the Standard Dichotic Listening Test (SDLT) was administered to nine subjects with situs inversus (SI) who ranged in age from 6 to 46 years old (mean of 21.8 years old, S.D. = 15.6); the four males and five females all exhibited strong right-handedness. The SDLT was also used to study twenty-four age-matched normal subjects who were from 6 to 48 years old (mean 21.7 years old, S.D. = 15.3); the twelve males and twelve females were all strongly right-handed and served as a control group. Eight out of the nine subjects (88.9%) with SI more often reproduced the sounds from the right ear than sounds from the left ear; this is called right ear advantage (REA). The ratio of REA in the control group was almost the same, i.e., nineteen out of the twenty-four subjects (79.1%) showed REA. Results of the present study suggest that the left-right reversal in situs inversus does not involve functional asymmetry of the brain. As such, the system that produces functional asymmetry in the human brain must recognize laterality independently of situs asymmetry.
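Classifying a subject as showing a right ear advantage, as above, comes down to comparing correct reports from each ear; a common summary in the dichotic listening literature is a percent laterality index. A sketch with invented per-subject counts (the formula is a standard convention, not taken from the SDLT itself):

```python
# Percent laterality index: positive values indicate a right ear
# advantage (REA), negative values a left ear advantage.
def laterality_index(right, left):
    return 100.0 * (right - left) / (right + left)

# hypothetical (right-ear, left-ear) correct-report counts per subject
subjects = [(22, 14), (18, 17), (25, 10), (15, 16)]
rea_count = sum(1 for r, l in subjects if laterality_index(r, l) > 0)
```

The group-level REA ratios reported in the abstract (8/9 and 19/24) are simply this per-subject classification aggregated across the sample.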
Hypothalamic digoxin, hemispheric chemical dominance, and chronic bronchitis emphysema.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-09-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin (membrane sodium-potassium ATPase inhibitor, immunomodulator, and regulator of neurotransmitter/amino acid transport), dolichol (regulates N-glycosylation of proteins), and ubiquinone (free radical scavenger). The pathway was assessed in patients with chronic bronchitis emphysema, and also in patients with right hemispheric, left hemispheric, and bihemispheric dominance to find the role of hemispheric dominance in the pathogenesis of chronic bronchitis emphysema. All 15 patients with chronic bronchitis emphysema were right-handed/left hemispheric dominant by the dichotic listening test. In patients with chronic bronchitis emphysema there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in cholesterol:phospholipid ratio and a reduction in glycoconjugate levels of RBC membrane in patients with chronic bronchitis emphysema. The same biochemical patterns were obtained in individuals with right hemispheric dominance. Endogenous digoxin, by activating the calcineurin signal transduction pathway of T-cells, can contribute to immune activation in chronic bronchitis emphysema. Increased free radical generation can also lead to immune activation. Endogenous synthesis of nicotine can contribute to the pathogenesis of the disease. Altered glycoconjugate metabolism and membranogenesis can lead to defective lysosomal stability, contributing to the disease process through increased release of lysosomal proteases. The role of endogenous digoxin and hemispheric dominance in the pathogenesis of chronic bronchitis emphysema and in the regulation of lung structure/function is discussed.
The biochemical patterns obtained in chronic bronchitis emphysema are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test, yet all the patients with chronic bronchitis emphysema were right-handed/left hemispheric dominant by that test. Hemispheric chemical dominance thus has no correlation with handedness or the dichotic listening test. Chronic bronchitis emphysema occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function. Hemispheric chemical dominance can play a role in the regulation of lung function and structure.
Perception of Complex Auditory Scenes
2014-07-02
Simpson, B. D., & Romigh, G. (2014). Ear dominance in a dichotic cocktail party. Journal of the Association for Research in Otolaryngology, Abstract 37, p. 518. Cherry, E. C. (1953). Some
Focused attention in a simple dichotic listening task: an fMRI experiment.
Jäncke, Lutz; Specht, Karsten; Shah, Joni Nadim; Hugdahl, Kenneth
2003-04-01
Whole-head functional magnetic resonance imaging (fMRI) was used in nine neurologically intact subjects to measure the hemodynamic responses in the context of dichotic listening (DL). In order to eliminate the influence of verbal information processing, tones of different frequencies were used as stimuli. Three different dichotic listening tasks were used: the subjects were instructed to concentrate on the stimuli presented either in both ears (DIV), or only in the left (FL) or right (FR) ear, and to monitor the auditory input for a specific target tone. When the target tone was detected, the subjects were required to indicate this by pressing a response button. Compared to the resting state, all dichotic listening tasks evoked strong hemodynamic responses within a distributed network comprising temporal, parietal, and frontal brain areas. Thus, it is clear that dichotic listening makes use of various cognitive functions located within the dorsal and ventral streams of auditory information processing (i.e., the 'what' and 'where' streams). Comparing the three dichotic listening conditions with each other revealed a significant difference only in the pre-SMA and within the left planum temporale area. The pre-SMA was generally more strongly activated during the DIV condition than during the FR and FL conditions. Within the planum temporale, the strongest activation was found during the FR condition and the weakest during the DIV condition. These findings were taken as evidence that even a simple dichotic listening task such as the one used here makes use of a distributed neural network comprising the dorsal and ventral streams of auditory information processing. In addition, these results support the previously made assumption that planum temporale activation is modulated by attentional strategies.
Finally, the present findings uncovered that the pre-SMA, which is mostly thought to be involved in higher-order motor control processes, is also involved in cognitive processes operative during dichotic listening.
Bruder, Gerard E; Schneier, Franklin R; Stewart, Jonathan W; McGrath, Patrick J; Quitkin, Frederic
2004-01-01
Behavioral, electrophysiological, and imaging studies have found evidence that anxiety disorders are associated with left hemisphere dysfunction or higher than normal activation of right hemisphere regions. Few studies, however, have examined hemispheric asymmetries of function in social phobia, and the influence of comorbidity with depressive disorders is unknown. The present study used dichotic listening tests to assess lateralized cognitive processing in patients with social phobia, depression, or comorbid social phobia and depression. The study used a two-by-two factorial design in which one factor was social phobia (present versus absent) and the second factor was depressive disorder (present versus absent). A total of 125 unmedicated patients with social phobia, depressive disorder, or comorbid social phobia and depressive disorder and 44 healthy comparison subjects were tested on dichotic fused-words, consonant-vowel syllable, and complex tone tests. Patients with social phobia with or without a comorbid depressive disorder had a smaller left hemisphere advantage for processing words and syllables, compared with subjects without social phobia, whereas no difference between groups was found in the right hemisphere advantage for processing complex tones. Depressed women had a larger left hemisphere advantage for processing words, compared with nondepressed women, but this difference was not seen among men. The results support the hypothesis that social phobia is associated with dysfunction of left hemisphere regions mediating verbal processing. Given the importance of verbal processes in social interactions, this dysfunction may contribute to the stress and difficulty experienced by patients with social phobia in social situations.
At what time is the cocktail party? A late locus of selective attention to natural speech.
Power, Alan J; Foxe, John J; Forde, Emma-Jane; Reilly, Richard B; Lalor, Edmund C
2012-05-01
Distinguishing between speakers and focusing attention on one speaker in multi-speaker environments is extremely important in everyday life. Exactly how the brain accomplishes this feat and, in particular, the precise temporal dynamics of this attentional deployment are as yet unknown. A long history of behavioral research using dichotic listening paradigms has debated whether selective attention to speech operates at an early stage of processing, based on the physical characteristics of the stimulus, or at a later stage, during semantic processing. With its poor temporal resolution, fMRI has contributed little to the debate, while EEG-ERP paradigms have been hampered by the need to average the EEG in response to discrete stimuli superimposed onto ongoing speech. This presents a number of problems, foremost among which is that early attention effects in the form of endogenously generated potentials can be so temporally broad as to mask later attention effects based on higher-level processing of the speech stream. Here we overcome this issue by utilizing the AESPA (auditory evoked spread spectrum analysis) method, which allows us to extract temporally detailed responses to two concurrently presented speech streams in natural cocktail-party-like attentional conditions without the need for superimposed probes. We show attentional effects on exogenous stimulus processing in the 200-220 ms range in the left hemisphere. We discuss these effects within the context of research on auditory scene analysis and in terms of a flexible locus of attention that can be deployed at a particular processing stage depending on the task. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Effects of the rate of formant-frequency variation on the grouping of formants in speech perception.
Summers, Robert J; Bailey, Peter J; Roberts, Brian
2012-04-01
How speech is separated perceptually from other speech remains poorly understood. Recent research suggests that the ability of an extraneous formant to impair intelligibility depends on the modulation of its frequency, but not its amplitude, contour. This study further examined the effect of formant-frequency variation on intelligibility by manipulating the rate of formant-frequency change. Target sentences were synthetic three-formant (F1 + F2 + F3) analogues of natural utterances. Perceptual organization was probed by presenting stimuli dichotically (F1 + F2C + F3C; F2 + F3), where F2C + F3C constitute a competitor for F2 and F3 that listeners must reject to optimize recognition. Competitors were derived using formant-frequency contours extracted from extended passages spoken by the same talker and processed to alter the rate of formant-frequency variation, such that rate scale factors relative to the target sentences were 0, 0.25, 0.5, 1, 2, and 4 (0 = constant frequencies). Competitor amplitude contours were either constant, or time-reversed and rate-adjusted in parallel with the frequency contour. Adding a competitor typically reduced intelligibility; this reduction increased with competitor rate until the rate was at least twice that of the target sentences. Similarity in the results for the two amplitude conditions confirmed that formant amplitude contours do not influence across-formant grouping. The findings indicate that competitor efficacy is not tuned to the rate of the target sentences; most probably, it depends primarily on the overall rate of frequency variation in the competitor formants. This suggests that, when segregating the speech of concurrent talkers, differences in speech rate may not be a significant cue for across-frequency grouping of formants.
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-07-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.
Stimulus Suffix Effects in Dichotic Memory
ERIC Educational Resources Information Center
Parkinson, Stanley R.; Hubbard, Lora L.
1974-01-01
In the present dichotic memory research, the addition of either a monaural stimulus suffix on the unattended ear or a binaural suffix was shown to selectively impair unattended-ear performance. (Editor)
Current audiological diagnostics
Hoth, Sebastian; Baljić, Izet
2017-01-01
Today’s audiological functional diagnostics is based on a variety of hearing tests; their large number reflects the many possible malfunctions of a complex sensory organ system and the need to examine it in a differentiated manner at any age. The objective is to identify the nature and origin of the hearing loss and to quantify its extent as far as necessary to provide the information needed to initiate adequate medical (conservative or operational) treatment or the provision of technical hearing aids or prostheses. Moreover, audiometry provides the basis for the assessment of impairment and handicap, as well as for calculating the degree of disability. The present overview describes the current state of the method inventory available for practical use, from basic diagnostics to complex special techniques. The presentation is grouped systematically into subjective procedures, based on psychoacoustic exploration, and objective methods, based on physical measurements: preliminary hearing tests, pure-tone threshold, suprathreshold processing of sound intensity, directional hearing, speech understanding in quiet and in noise, dichotic hearing, tympanogram, acoustic reflex, otoacoustic emissions, and auditory evoked potentials. Apart from a few remaining gaps, this method inventory covers the whole spectrum of clinically relevant functional deficits of the auditory system. PMID:29279727
Bykova, L G; Bazylev, V N
1994-01-01
A dichotic test was used to compare brain activity over time in 84 adult students during traditional (36 subjects) and intensive (48 subjects) foreign-language learning. No reliable difference in hemispheric asymmetry was detected between the two learning methods. With both methods, a reliable majority of students showed activation of the hemisphere opposite to the one initially dominant. A correlation was found between the maximum quantitative shift in the right-ear coefficient and the level of success in colloquial practice, given the same initial language level and comparable initial memory capacity. The authors discuss the possibility of composing an individual map for each student, using the results of dichotic tests over time, as an aid in teaching.
Peñaloza López, Yolanda Rebeca; Orozco Peña, Xóchitl Daisy; Pérez Ruiz, Santiago Jesús
2018-04-03
To evaluate central auditory processing disorders in patients with multiple sclerosis, emphasizing auditory laterality, by applying psychoacoustic tests, and to identify their relationship with the functions of the Multiple Sclerosis Disability Scale (EDSS). Depression scales (HADS), the EDSS, and 9 psychoacoustic tests to study CAPD were administered to 26 individuals with multiple sclerosis and 26 controls. Correlation tests were performed between the EDSS and the psychoacoustic tests. Seven of the 9 psychoacoustic tests differed significantly from controls (P<.05) in the right or left ear (14/19 explorations). In dichotic digits there was a left-ear advantage, in contrast to the usual right-ear predominance. Five psychoacoustic tests correlated significantly with specific EDSS functions. The left-ear advantage detected here, interpreted as an expression of deficient corpus callosum and attentional influences in multiple sclerosis, should be investigated further. There was a correlation between psychoacoustic tests and specific EDSS functions. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
Dichotic beats of mistuned consonances.
Feeney, M P
1997-10-01
The beats of mistuned consonances (BMCs) result from the presentation of two sinusoids at frequencies slightly mistuned from a ratio of small integers. Several studies have suggested that the source of dichotic BMCs is an interaction within a binaural critical band. In one case the mechanism has been explained as an aural harmonic of the low-frequency tone (f1) creating binaural beats with the high-frequency tone (f2). The other explanation involves a binaural cross correlation between the excitation pattern of f1 and the contralateral f2--occurring within the binaural critical band centered at f2. This study examined the detection of dichotic BMCs for the octave and fifth. In one experiment with the octave, narrow-band noise centered at f2 was presented to one ear along with f1. The other ear was presented with f2. The noise was used to prevent interactions in the binaural critical band centered at f2. Dichotic BMCs were still detected under these conditions, suggesting that binaural interaction within a critical band does not explain the effect. Localization effects were also observed under this masking condition for phase reversals of tuned dichotic octave stimuli. These findings suggest a new theory of dichotic BMCs as a between-channel phase effect. The modified weighted-image model of localization [Stern and Trahiotis, in Auditory Physiology and Perception, edited by Y. Cazals, L. Demany, and K. Horner (Pergamon, Oxford, 1992), pp. 547-554] was used to provide an explanation of the between-channel mechanism.
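For the octave and fifth discussed in this abstract, the beat rate of a mistuned consonance follows from the nearest small-integer ratio: if f2:f1 is slightly mistuned from p:q, beats occur at roughly |q·f2 − p·f1| Hz. A minimal sketch of this arithmetic (illustrative only, not code from Feeney's study):

```python
from fractions import Fraction

def bmc_rate(f1, f2, max_den=4):
    """Approximate beat rate (Hz) of a mistuned consonance.

    Assumes (f1, f2) is slightly mistuned from a small-integer
    ratio p:q, so that beats occur at |q*f2 - p*f1| Hz.
    Illustrative sketch, not the analysis from Feeney (1997).
    """
    # Find the nearest small-integer frequency ratio p:q
    ratio = Fraction(f2 / f1).limit_denominator(max_den)
    p, q = ratio.numerator, ratio.denominator
    return abs(q * f2 - p * f1)

# A mistuned octave: 440 Hz vs 884 Hz -> 4-Hz beats
print(bmc_rate(440.0, 884.0))  # 4.0
# A mistuned fifth: 440 Hz vs 662 Hz -> |2*662 - 3*440| = 4-Hz beats
print(bmc_rate(440.0, 662.0))  # 4.0
```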
Morton, L L; Siegel, L S
1991-02-01
Twenty reading comprehension-disabled (CD) and 20 reading comprehension and word recognition-disabled (CWRD), right-handed male children were matched with 20 normal-achieving age-matched controls and 20 normal-achieving reading level-matched controls and tested for left ear report on dichotic listening tasks using digits and consonant-vowel combinations (CVs). Left ear report for CVs and digits did not correlate for any of the groups. Both reading-disabled groups showed lower left ear report on digits. On CVs the CD group showed a high left ear report but only when there were no priming precursors, such as directions to attend right first and to process digits first. Priming effects interfered with the processing of both digits and CVs. Theoretically, the CWRD group seems to be characterized by a depressed right hemisphere, whereas the CD group may have a more labile right hemisphere, perhaps tending to overengagement for CV tasks but vulnerable to situational precursors in the form of priming effects. Implications extend to (1) subtyping practices in research with the learning-disabled, (2) inferences drawn from studies using different dichotic stimuli, and (3) the neuropsychology of reading disorders.
[Dichotic perception of Mandarin third tone by Mexican Chinese learners].
Wang, Hongbin
2014-05-01
To investigate the relationship between the advantaged ear (cerebral hemisphere) of Spanish-speaking Mexican learners of Chinese and the Mandarin third tone. Third-tone Chinese vowel syllables were used as experimental materials in a dichotic listening paradigm to test Spanish-speaking Mexican learners of Chinese (20-32 years old) who had studied Chinese for about 20 h. In terms of error rates in identifying the third tone, the learners' responses suggested that the left ear (right cerebral hemisphere) was the advantaged ear (Z=-2.091, P=0.036). The verbal information of the tones influenced the learners' perception of Mandarin tones. In the process of learning Mandarin tones, the learners gradually formed tonal categories.
Auditory Processing Speed and Signal Detection in Schizophrenia
ERIC Educational Resources Information Center
Korboot, P. J.; Damiani, N.
1976-01-01
Two differing explanations of schizophrenic processing deficit were examined: Chapman and McGhie's and Yates'. Thirty-two schizophrenics, classified on the acute-chronic and paranoid-nonparanoid dimensions, and eight neurotics were tested on two dichotic listening tasks. (Editor)
Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users.
Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy
2015-08-26
Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small but significant improvements in subjective ratings of perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings, or their judgments of intended emotion. © The Author(s) 2015.
Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J
2017-08-01
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.
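Stimuli of the kind described here (slow- and fast-rate FM and AM imposed on a 500-Hz carrier) can be sketched as sinusoidally modulated tones. The parameter values below are illustrative assumptions, not those used by Whiteford et al.:

```python
import numpy as np

def fm_tone(fc=500.0, fm=1.0, delta_f=10.0, dur=1.0, fs=48000):
    """Sinusoidal FM: carrier fc (Hz) modulated at rate fm (Hz) with
    peak frequency deviation delta_f (Hz). Parameter values are
    illustrative, not those of the study described above."""
    t = np.arange(int(dur * fs)) / fs
    beta = delta_f / fm  # modulation index (peak phase deviation)
    return np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

def am_tone(fc=500.0, fm=1.0, m=0.2, dur=1.0, fs=48000):
    """Sinusoidal AM: carrier fc (Hz) with modulation depth m at rate fm (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

slow_fm = fm_tone(fm=1.0)   # slow-rate FM, candidate for TFS coding
fast_fm = fm_tone(fm=20.0)  # fast-rate FM, candidate for rate-place coding
```

Presenting identical waveforms to both ears yields the diotic conditions; imposing the modulation as an interaural phase or level disparity yields the dichotic conditions.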
Fritz, Thomas Hans; Renders, Wiske; Müller, Karsten; Schmude, Paul; Leman, Marc; Turner, Robert; Villringer, Arno
2013-10-01
Helmholtz himself speculated about a role of the cochlea in the perception of musical dissonance. Here we indirectly investigated this issue, assessing the valence judgment of musical stimuli with variable consonance/dissonance and presented diotically (exactly the same dissonant signal was presented to both ears) or dichotically (a consonant signal was presented to each ear--both consonant signals were rhythmically identical but differed by a semitone in pitch). Differences in brain organisation underlying inter-subject differences in the percept of dichotically presented dissonance were determined with voxel-based morphometry. Behavioral results showed that diotic dissonant stimuli were perceived as more unpleasant than dichotically presented dissonance, indicating that interactions within the cochlea modulated the valence percept during dissonance. However, the behavioral data also suggested that the dissonance percept did not depend crucially on the cochlea, but also occurred as a result of binaural integration when listening to dichotic dissonance. These results also showed substantial between-participant variation in the valence response to dichotic dissonance. In a voxel-based morphometry analysis, these differences were related to differences in gray matter density in the inferior colliculus, strongly substantiating a key role of the inferior colliculus in consonance/dissonance representation in humans. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Hemispheric Language Dominance of Language-Disordered, Articulation-Disordered, and Normal Children.
ERIC Educational Resources Information Center
Pettit, John M.; Helms, Suzanne B.
1979-01-01
The hemispheric dominance for language of three groups of six- to nine- year-olds (ten language-disordered, ten articulation-disordered, and ten normal children) was compared, and two dichotic listening tests (digits and animal names) were administered. (Author/CL)
Perception of Simultaneous Auditive Contents
NASA Astrophysics Data System (ADS)
Tschinkel, Christian
Based on a model of pluralistic music, we may approach an aesthetic concept of music that employs dichotic listening situations. The concept of dichotic listening stems from neuropsychological test conditions in lateralization experiments on the brain hemispheres, in which each ear is exposed to different auditory content. In the framework of such sound experiments, the primary question concerns a new kind of hearing, which is also conceivable without earphones as a spatial composition, and which may superficially be linked to its degree of complexity. From a psychological perspective, the degree of complexity is correlated with the degree of attention given, with the listener's musical or listening experience, and with the level of their appreciation. Therefore, we may possibly also expect a measurable increase in physical activity. Furthermore, a dialectic interpretation of such "dualistic" music presents itself.
How brain asymmetry relates to performance – a large-scale dichotic listening study
Hirnstein, Marco; Hugdahl, Kenneth; Hausmann, Markus
2014-01-01
All major mental functions including language, spatial and emotional processing are lateralized but how strongly and to which hemisphere is subject to inter- and intraindividual variation. Relatively little, however, is known about how the degree and direction of lateralization affect how well the functions are carried out, i.e., how lateralization and task performance are related. The present study therefore examined the relationship between lateralization and performance in a dichotic listening task for which we had data available from 1839 participants. In this task, consonant-vowel syllables are presented simultaneously to the left and right ear, such that each ear receives a different syllable. When asked which of the two they heard best, participants typically report more syllables from the right ear, which is a marker of left-hemispheric speech dominance. We calculated the degree of lateralization (based on the difference between correct left and right ear reports) and correlated it with overall response accuracy (left plus right ear reports). In addition, we used reference models to control for statistical interdependency between left and right ear reports. The results revealed a u-shaped relationship between degree of lateralization and overall accuracy: the stronger the left or right ear advantage, the better the overall accuracy. This u-shaped asymmetry-performance relationship consistently emerged in males, females, right-/non-right-handers, and different age groups. Taken together, the present study demonstrates that performance on lateralized language functions depends on how strongly these functions are lateralized. The present study further stresses the importance of controlling for statistical interdependency when examining asymmetry-performance relationships in general. PMID:24427151
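The degree-of-lateralization measure described in this abstract is conventionally expressed as a laterality index. The exact formula Hirnstein et al. used is not stated here, so the sketch below assumes the common (R − L)/(R + L) convention:

```python
def laterality_index(left_correct, right_correct):
    """Common laterality-index convention: 100 * (R - L) / (R + L).

    Positive values indicate a right-ear advantage, the typical marker
    of left-hemispheric speech dominance. This is an assumed standard
    formula, not necessarily the one used in the study above.
    """
    total = left_correct + right_correct
    if total == 0:
        raise ValueError("no correct reports")
    return 100.0 * (right_correct - left_correct) / total

def overall_accuracy(left_correct, right_correct, n_trials):
    """Overall accuracy: left plus right ear correct reports over trials."""
    return (left_correct + right_correct) / n_trials

# 14 right-ear vs 8 left-ear correct reports out of 30 dichotic trials
li = laterality_index(8, 14)       # positive -> right-ear advantage
acc = overall_accuracy(8, 14, 30)  # proportion correct overall
```

The u-shaped asymmetry-performance relationship reported above corresponds to overall accuracy rising with the absolute value of such an index, in either direction.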
Hypothalamic digoxin, hemispheric chemical dominance, and peptic ulcer disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-10-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin-like factor (EDLF) (membrane sodium-potassium ATPase inhibitor and regulator of neurotransmitter transport), ubiquinone (free radical scavenger), and dolichol (regulator of glycoconjugate metabolism). The pathway was assessed in peptic ulcer and acid peptic disease and its relation to hemispheric dominance studied. The activity of HMG CoA reductase and serum levels of EDLF, magnesium, tryptophan catabolites, and tyrosine catabolites were measured in acid peptic disease and in right hemispheric dominant, left hemispheric dominant, and bihemispheric dominant individuals. All the patients with peptic ulcer disease were right-handed/left hemispheric dominant by the dichotic listening test. The pathway was upregulated, with increased EDLF synthesis, in peptic ulcer disease (PUD). There was an increase in tryptophan catabolites and a reduction in tyrosine catabolites in these patients. Ubiquinone levels were low and free radical production increased. Dolichol and glycoconjugate levels were increased and lysosomal stability reduced in patients with acid peptic disease (APD). There was an increase in the cholesterol:phospholipid ratio with decreased glycoconjugate levels in membranes of patients with PUD. Acid peptic disease represents an elevated EDLF state, which can modulate gastric acid secretion and the structure of the gastric mucous barrier. It can also lead to persistence of Helicobacter pylori infection. The biochemical pattern obtained in peptic ulcer disease is similar to that obtained in left-handed/right hemispheric chemically dominant individuals, yet all the patients with peptic ulcer disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Peptic ulcer disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Divergent Thinking and Hemispheric Dominance for Language Function among Preschool Children.
ERIC Educational Resources Information Center
Tegano, Deborah Walker; And Others
1983-01-01
An investigation of the relationship between hemispheric dominance (dichotic listening) and divergent thinking (Torrance Tests of Creative Thinking) with 27 preschool children indicates that divergent thinking is associated with right hemispheric dominance in children as young as four years. (Author/PN)
Selective Attention with Human Earphones.
ERIC Educational Resources Information Center
Goodwin, C. James
1988-01-01
Describes a method for demonstrating dichotic listening tasks in the classroom which involves substituting live readers for tape-recorded messages to allow direct student observation of various selective attention phenomena. Concludes that live readers offer pedagogical benefits that make them superior to tape-recorded dichotic listening tasks.…
Maffei, Chiara; Capasso, Rita; Cazzolli, Giulia; Colosimo, Cesare; Dell'Acqua, Flavio; Piludu, Francesca; Catani, Marco; Miceli, Gabriele
2017-12-01
Pure Word Deafness (PWD) is a rare disorder, characterized by selective loss of speech input processing. Its most common cause is temporal damage to the primary auditory cortex of both hemispheres, but it has been reported also following unilateral lesions. In unilateral cases, PWD has been attributed to the disconnection of Wernicke's area from both right and left primary auditory cortex. Here we report behavioral and neuroimaging evidence from a new case of left unilateral PWD with both cortical and white matter damage due to a relatively small stroke lesion in the left temporal gyrus. Selective impairment in auditory language processing was accompanied by intact processing of nonspeech sounds and normal speech, reading, and writing. Performance on dichotic listening was characterized by a reversal of the right-ear advantage typically observed in healthy subjects. Cortical thickness and gyral volume were severely reduced in the left superior temporal gyrus (STG), although abnormalities were not uniformly distributed and residual intact cortical areas were detected, for example in the medial portion of Heschl's gyrus. Diffusion tractography documented partial damage to the acoustic radiations (AR), callosal temporal connections, and intralobar tracts dedicated to single-word comprehension. Behavioral and neuroimaging results in this case are difficult to integrate in a pure cortical or disconnection framework, as damage to primary auditory cortex in the left STG was only partial and Wernicke's area was not completely isolated from left or right-hemisphere input. On the basis of our findings we suggest that in this case of PWD, concurrent partial topological (cortical) and disconnection mechanisms have contributed to a selective impairment of speech sounds. The discrepancy between speech and non-speech sounds suggests selective damage to a language-specific left lateralized network involved in phoneme processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Auditory temporal-order processing of vowel sequences by young and elderly listeners
Fogerty, Daniel; Humes, Larry E.; Kewley-Port, Diane
2010-01-01
This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18–31 years) and older (N=151; 60–88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners were more variable and performed more poorly than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners’ SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences in temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age. PMID:20370033
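The constant-stimuli threshold procedure described in this record can be sketched in a few lines: the SOA threshold is the smallest stimulus onset asynchrony at which identification reaches a criterion level, interpolated between tested SOAs. This is an illustrative reconstruction, not the authors' analysis code; the 75% criterion and the data values are assumptions.

```python
# Illustrative sketch of a constant-stimuli SOA threshold estimate.
# The 75% criterion and psychometric data below are assumptions for
# illustration, not values from the study.

def soa_threshold(soas_ms, prop_correct, criterion=0.75):
    """Interpolate the SOA (ms) at which accuracy first crosses `criterion`."""
    pairs = list(zip(soas_ms, prop_correct))
    for (s0, p0), (s1, p1) in zip(pairs, pairs[1:]):
        if p0 < criterion <= p1:
            # linear interpolation between the bracketing SOAs
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    return None  # criterion never reached within the tested range

# Hypothetical psychometric data: accuracy rises with SOA
print(round(soa_threshold([20, 40, 80, 160], [0.52, 0.65, 0.90, 0.98]), 1))  # → 56.0
```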
Use of the Dichotic Listening Technique with Learning Disabilities
ERIC Educational Resources Information Center
Obrzut, John E.; Mahoney, Emery B.
2011-01-01
Dichotic listening (DL) techniques have been used extensively as a non-invasive procedure to assess language lateralization among children with and without learning disabilities (LD), and with individuals who have other auditory system related brain disorders. Results of studies using DL have indicated that language is lateralized in children with…
Simultaneous Masking in a Dichotic Emotion Detection Task
ERIC Educational Resources Information Center
Voyer, Daniel; Soraggi, Mariana; Brake, Brandy; Wood, Heather-Dawn
2006-01-01
The present study investigated the possible role of ceiling effects in producing laterality effects of small magnitude in dichotic emotion detection. Twenty-two right-handed undergraduate students participated in the present experiment. They were required to detect the presence of a target emotion in expressed tones of happiness, sadness,…
Perspectives on Dichotic Listening and the Corpus Callosum
ERIC Educational Resources Information Center
Musiek, Frank E.; Weihing, Jeffrey
2011-01-01
The present review summarizes historic and recent research which has investigated the role of the corpus callosum in dichotic processing within the context of audiology. Examination of performance by certain clinical groups, including split brain patients, multiple sclerosis cases, and other types of neurological lesions is included. Maturational,…
Dichotic Listening in the Study of Semantic Relations
ERIC Educational Resources Information Center
Kadesh, Irving; And Others
1976-01-01
A study is reported in which pairs of synonyms, antonyms, coordinates, and super-subordinates were presented dichotically to university students. After each pair, the subject reported what he heard. In one condition the two members of a pair were presented simultaneously, and in another they were presented sequentially. (Author/RM)
Binaural Interference and the Effects of Age and Hearing Loss.
Mussoi, Bruna S S; Bentler, Ruth A
2017-01-01
The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. This study examined binaural interference through speech perception tests in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss, using a cross-sectional design. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage).
On the CST, there was significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two signal-to-babble ratios (SBRs) tested. One participant had binaural interference at each SBR. Finally, on the DDT, a significant right-ear advantage was found with group data, and for at least some participants. Regarding age effects, more participants in the pooled elderly groups (33.3%) than in the younger group (16.7%) showed binaural interference on the HINT. The presence of hearing loss yielded overall lower scores, but none of the comparisons between bilateral and unilateral performance were affected by hearing loss. Results of within-subject analyses on the HINT agree with previous findings of binaural interference in ≥17% of listeners. Across all groups, a significant right-ear advantage was also seen on the DDT. HINT results support the notion that the prevalence of binaural interference is likely higher in the elderly population. Hearing loss, however, did not affect the differences between bilateral and better unilateral scores. The possibility of binaural interference should be considered when fitting hearing aids to listeners with symmetric hearing loss. Comparing bilateral to unilateral (unaided) performance on tests such as the HINT may provide the clinician with objective data to support subjective preference for one hearing aid as opposed to two. American Academy of Audiology.
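The within-subject criterion described in this record (comparing the bilateral score against the better single-ear score, relative to a pre-established critical difference) can be sketched as follows. The function name, the 8-point critical difference, and the scores are illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of within-subject binaural-interference classification:
# a listener shows interference when the bilateral score falls below the
# better single-ear score by more than a test-specific critical difference.
# The 8-point critical difference and the scores are illustrative only.

def classify_binaural(left_pct, right_pct, bilateral_pct, critical_diff=8.0):
    better_ear = max(left_pct, right_pct)
    if bilateral_pct < better_ear - critical_diff:
        return "interference"
    if bilateral_pct > better_ear + critical_diff:
        return "advantage"
    return "no significant difference"

print(classify_binaural(70.0, 62.0, 58.0))  # → interference
```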
A Performance Test of Individual Differences in Selective Attention.
ERIC Educational Resources Information Center
Wahl, Otto
A reliable, easily administered performance test of selective attentional ability was sought. A monaural listening task provided a baseline control for adequate hearing and memory; a dichotic listening task then provided indices of ability to focus attention and resist distraction while a simultaneous listening task provided measures of ability to…
Dichotic Listening and School Performance in Dyslexia
ERIC Educational Resources Information Center
Helland, Turid; Asbjornsen, Arve E.; Hushovd, Aud Ellen; Hugdahl, Kenneth
2008-01-01
This study focused on the relationship between school performance and performance on a dichotic listening (DL) task in dyslexic children. Dyslexia is associated with impaired phonological processing, related to functions in the left temporal lobe. DL is a frequently used task to assess functions of the left temporal lobe. Due to the predominance…
ERIC Educational Resources Information Center
Soveri, Anna; Laine, Matti; Hamalainen, Heikki; Hugdahl, Kenneth
2011-01-01
It has been claimed that due to their experience in controlling two languages, bilinguals exceed monolinguals in certain executive functions, especially inhibition of task-irrelevant stimuli. Here we investigated the effects of bilingualism on an executive phonological task, namely the forced-attention dichotic listening task with syllabic…
Hormones and Dichotic Listening: Evidence from the Study of Menstrual Cycle Effects
ERIC Educational Resources Information Center
Cowell, Patricia E.; Ledger, William L.; Wadnerkar, Meghana B.; Skilling, Fiona M.; Whiteside, Sandra P.
2011-01-01
This report presents evidence for changes in dichotic listening asymmetries across the menstrual cycle, which replicate studies from our laboratory and others. Increases in the right ear advantage (REA) were present in women at phases of the menstrual cycle associated with higher levels of ovarian hormones. The data also revealed correlations…
ERIC Educational Resources Information Center
Kimura, Doreen
2011-01-01
In this paper Doreen Kimura gives a personal history of the "right-ear effect" in dichotic listening. The focus is on the early ground-breaking papers, describing how she did the first dichotic listening studies relating the effects to brain asymmetry. The paper also gives a description of the visual half-field technique for lateralized stimulus…
Morton, L L
1994-08-01
Identifying disabilities in word-attack, word-recognition, or reading comprehension allowed for four categories of reading disability: (1) reading comprehension only (RC), (2) word-attack plus comprehension (WA+RC), (3) word-attack, word-recognition, and comprehension (WA+WR+RC), and (4) word-attack but not comprehension (WA-RC). Along with age-matched controls (AMC) and developmental-delay controls (DDC), the disabled groups were tested on a directed-attention dichotic task using consonant-vowel combinations. Laterality results for each place of articulation (i.e., bilabial, alveolar, and velar) selectively attested to greater left-hemisphere involvement or engagement for the RC group and greater right-hemisphere involvement or engagement for the WA+RC group. Performance of the other two disabled groups was consistent with less efficient right-hemisphere involvement or callosal transfer. Implications for theory, research, and remediation are discussed.
ERIC Educational Resources Information Center
Bouma, Anke; Gootjes, Liselotte
2011-01-01
This article presents an overview of our studies in elderly and Alzheimer patients employing Kimura's dichotic digits paradigm as a measure for left hemispheric predominance for processing language stimuli. In addition to structural brain mechanisms, we demonstrated that attention modulates the direction and degree of ear asymmetry in dichotic…
Reliability and Magnitude of Laterality Effects in Dichotic Listening with Exogenous Cueing
ERIC Educational Resources Information Center
Voyer, Daniel
2004-01-01
The purpose of the present study was to replicate and extend to word recognition previous findings of reduced magnitude and reliability of laterality effects when exogenous cueing was used in a dichotic listening task with syllable pairs. Twenty right-handed undergraduate students with normal hearing (10 females, 10 males) completed a dichotic…
The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables
ERIC Educational Resources Information Center
Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth
2008-01-01
Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise…
A Forced-Attention Dichotic Listening fMRI Study on 113 Subjects
ERIC Educational Resources Information Center
Kompus, Kristiina; Specht, Karsten; Ersland, Lars; Juvodden, Hilde T.; van Wageningen, Heidi; Hugdahl, Kenneth; Westerhausen, Rene
2012-01-01
We report fMRI and behavioral data from 113 subjects on attention and cognitive control using a variant of the classic dichotic listening paradigm with pairwise presentations of consonant-vowel syllables. The syllable stimuli were presented in a block-design while subjects were in the MR scanner. The subjects were instructed to pay attention to…
Loudness enhancement - Monaural, binaural, and dichotic
NASA Technical Reports Server (NTRS)
Elmasian, R.; Galambos, R.
1975-01-01
When one tone burst (T) precedes another (S) by 100 msec, variations in the intensity of T systematically influence the loudness of S. When T is more intense than S, S loudness is increased; when T is less intense, S loudness is decreased. This occurs in monaural, binaural, and dichotic paradigms of signal presentation. When T and S are presented to the same ear (monaural or binaural), there is more enhancement, with less intersubject variability, than when they are presented to different ears (dichotic paradigm). Monaural enhancements as large as 30 dB can readily be demonstrated, but decrements rarely exceed 5 dB. Possible physiological mechanisms are discussed for this loudness enhancement, which apparently shares certain characteristics with time-order error, assimilation, and temporal partial masking experiments.
ERIC Educational Resources Information Center
Schepman, Astrid; Rodway, Paul; Geddes, Pauline
2012-01-01
Valence-specific laterality effects have been frequently obtained in facial emotion perception but not in vocal emotion perception. We report a dichotic listening study further examining whether valence-specific laterality effects generalise to vocal emotions. Based on previous literature, we tested whether valence-specific laterality effects were…
Q-Type Factor Analysis of Healthy Aged Men.
ERIC Educational Resources Information Center
Kleban, Morton H.
Q-type factor analysis was used to re-analyze baseline data collected in 1957 on 47 men aged 65-91. Q-type analysis is the use of factor methods to study persons rather than tests. Although 550 variables were originally studied involving psychiatry, medicine, cerebral metabolism and chemistry, personality, audiometry, dichotic and diotic memory,…
Kurup, Ravi Kumar; Kurup, Paramesware Achutha
2003-12-01
This study assessed the changes in the isoprenoid pathway and its metabolites digoxin, dolichol, and ubiquinone in multiple myeloma. The isoprenoid pathway and digoxin status were also studied for comparison in individuals of differing hemispheric dominance, to find out the role of cerebral dominance in the genesis of multiple myeloma and neoplasms. The following parameters were assessed in multiple myeloma, as well as in individuals of differing hemispheric dominance: isoprenoid pathway metabolites, tyrosine and tryptophan catabolites, glycoconjugate metabolism, RBC membrane composition, and free radical metabolism. There was elevation in plasma HMG CoA reductase activity, serum digoxin, and dolichol, and a reduction in RBC membrane Na+-K+ ATPase activity, serum ubiquinone, and magnesium levels. Serum tryptophan, serotonin, nicotine, strychnine, and quinolinic acid were elevated, while tyrosine, dopamine, noradrenaline, and morphine were decreased. The total serum glycosaminoglycans and glycosaminoglycan fractions, the activity of GAG-degrading enzymes and glycohydrolases, carbohydrate residues of glycoproteins, and serum glycolipids were elevated. The RBC membrane glycosaminoglycans, hexose and fucose residues of glycoproteins, cholesterol, and phospholipids were reduced. The activity of all free-radical scavenging enzymes, concentration of glutathione, iron binding capacity, and ceruloplasmin decreased significantly, while the concentration of lipid peroxidation products and nitric oxide increased. Hyperdigoxinemia-related altered intracellular Ca++/Mg++ ratio-mediated oncogene activation, dolichol-induced altered glycoconjugate metabolism, and ubiquinone deficiency-related mitochondrial dysfunction can contribute to the pathogenesis of multiple myeloma. The biochemical patterns obtained in multiple myeloma are similar to those obtained in left-handed/right-hemispheric chemically dominant individuals by the dichotic listening test.
However, all the patients with multiple myeloma were right-handed/left-hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Multiple myeloma occurs in right-hemispheric chemically dominant individuals and is a reflection of altered brain function.
ERIC Educational Resources Information Center
Gadea, Marien; Espert, Raul; Salvador, Alicia; Marti-Bonmati, Luis
2011-01-01
Dichotic Listening (DL) is a valuable tool to study emotional brain lateralization. Regarding the perception of sadness and anger through affective prosody, the main finding has been a left ear advantage (LEA) for the sad but contradictory data for the anger prosody. Regarding an induced mood in the laboratory, its consequences upon DL were a…
Interaction of attention and acoustic factors in dichotic listening for fused words.
McCulloch, Katie; Lachner Bass, Natascha; Dial, Heather; Hiscock, Merrill; Jansen, Ben
2017-07-01
Two dichotic listening experiments examined the degree to which the right-ear advantage (REA) for linguistic stimuli is altered by a "top-down" variable (i.e., directed attention) in conjunction with selected "bottom-up" (acoustic) variables. Halwes fused dichotic words were administered to 99 right-handed adults with instructions to attend to the left or right ear, or to divide attention equally. Stimuli in Experiment 1 were presented without noise or mixed with noise that was high-pass or low-pass filtered, or unfiltered. The stimuli themselves in Experiment 2 were high-pass or low-pass filtered, or unfiltered. The initial consonants of each dichotic pair were categorized according to voice onset time (VOT) and place of articulation (PoA). White noise extinguished both the REA and selective attention, and filtered noise nullified selective attention without extinguishing the REA. Frequency filtering of the words themselves did not alter performance. VOT effects were inconsistent across experiments but PoA analyses indicated that paired velar consonants (/k/ and /g/) yield a left-ear advantage and paradoxical selective-attention results. The findings show that ear asymmetry and the effectiveness of directed attention can be altered by bottom-up variables.
Linguistic Lateralization in Adolescents with Down Syndrome Revealed by a Dichotic Monitoring Test
ERIC Educational Resources Information Center
Shoji, Hiroaki; Koizumi, Natsuko; Ozaki, Hisaki
2009-01-01
Linguistic lateralization in 10 adolescents with Down syndrome (average age: 15.7 years), 15 adolescents with intellectual disabilities of unknown etiology (average age: 17.8 years), 2 groups of children without disabilities (11 children, average age: 4.7 years; 10 children, average age: 8.5 years), and 14 adolescents without disabilities (average…
ERIC Educational Resources Information Center
Bedoin, Nathalie; Ferragne, Emmanuel; Marsico, Egidio
2010-01-01
Dichotic listening experiments show a right-ear advantage (REA), reflecting a left-hemisphere (LH) dominance. However, we found a decrease in REA when the initial stop consonants of two simultaneous French CVC words differed in voicing rather than place of articulation (Experiment 1). This result suggests that the right hemisphere (RH) is more…
What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?
Bamiou, Doris-Eva; Iliadou, Vasiliki Vivian; Zanchetta, Sthella; Spyridakou, Chrysa
2015-01-01
Questionnaires addressing auditory disability may identify and quantify specific symptoms in adult patients with listening difficulties. (1) To assess validity of the Speech, Spatial, and Qualities of Hearing Scale (SSQ), the (Modified) Amsterdam Inventory for Auditory Disability (mAIAD), and the Hyperacusis Questionnaire (HYP) in adult patients experiencing listening difficulties in the presence of a normal audiogram. (2) To examine which individual questionnaire items give the worst scores in clinical participants with an auditory processing disorder (APD). A prospective correlational analysis study. Clinical participants (N = 58) referred for assessment because of listening difficulties in the presence of normal audiometric thresholds to audiology/ear, nose, and throat or audiovestibular medicine clinics. Normal control participants (N = 30). The mAIAD, HYP, and the SSQ were administered to a clinical population of nonneurological adults who were referred for auditory processing (AP) assessment because of hearing complaints, in the presence of normal audiogram and cochlear function, and to a sample of age-matched normal-hearing controls, before the AP testing. Clinical participants with abnormal results in at least one ear and in at least two tests of AP (and at least one of these tests to be nonspeech) were classified as clinical APD (N = 39), and the remaining (16 of whom had a single test abnormality) as clinical non-APD (N = 19). The SSQ correlated strongly with the mAIAD and the HYP, and correlation was similar within the clinical group and the normal controls. All questionnaire total scores and subscores (except sound distinction of mAIAD) were significantly worse in the clinical APD versus the normal group, while questionnaire total scores and most subscores indicated greater listening difficulties for the clinical non-APD versus the normal subgroups.
Overall, the clinical non-APD group tended to give better scores than the APD group in all questionnaires administered. Correlation was strong for the worse-ear gaps-in-noise threshold with the SSQ, mAIAD, and HYP; strong to moderate for the speech in babble and left-ear dichotic digit test scores (at p < 0.01); and weak to moderate for the remaining AP tests except the frequency pattern test that did not correlate. The worst-scored items in all three questionnaires concerned speech-in-noise questions. This is similar to the worst-scored items reported by hearing-impaired participants in the literature. Worst-scored items of the clinical group also included quality aspects of listening questions from the SSQ, which most likely pertain to cognitive aspects of listening, such as ability to ignore other sounds and listening effort. Hearing questionnaires may help assess symptoms of adults with APD. The listening difficulties and needs of adults with APD to some extent overlap with those of hearing-impaired listeners, but there are significant differences. The correlation of the gaps-in-noise and duration pattern (but not frequency pattern) tests with the questionnaire scores indicates that temporal processing deficits may play an important role in clinical presentation. American Academy of Audiology.
Nittrouer, Susan; Tarr, Eric; Bolster, Virginia; Caldwell-Tarr, Amanda; Moberly, Aaron C.; Lowenstein, Joanna H.
2014-01-01
Objective: Using signals processed to simulate speech received through cochlear implants and low-frequency extended hearing aids, this study examined the proposal that low-frequency signals facilitate the perceptual organization of broader, spectrally degraded signals. Design: In two experiments, words and sentences were presented in diotic and dichotic configurations as four-channel noise-vocoded signals (VOC-only), and as those signals combined with the acoustic signal below 250 Hz (LOW-plus). Dependent measures were percent correct recognition scores, and the difference between scores for the two processing conditions given as proportions of recognition scores for VOC-only. The influence of linguistic context was also examined. Study Sample: Participants had normal hearing. In all, 40 adults, 40 7-year-olds, and 20 5-year-olds participated. Results: Participants of all ages showed benefits of adding the low-frequency signal. The effect was greater for sentences than words, but no effect of configuration was found. The influence of linguistic context was similar across age groups, and did not contribute to the low-frequency effect. Listeners who scored more poorly with VOC-only stimuli showed greater low-frequency effects. Conclusion: The benefit of adding a very low-frequency signal to a broader, spectrally degraded signal seems to derive from its facilitative influence on perceptual organization of the sensory input. PMID:24456179
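The dependent measure this record describes, the LOW-plus minus VOC-only difference expressed as a proportion of the VOC-only recognition score, reduces to a one-line computation. The scores below are illustrative, not data from the study.

```python
# Sketch of the proportional low-frequency benefit measure described above.
# Scores are hypothetical percent-correct values, not the study's data.

def low_frequency_benefit(voc_only_pct, low_plus_pct):
    """Benefit of adding the <250 Hz band, relative to the VOC-only score."""
    return (low_plus_pct - voc_only_pct) / voc_only_pct

print(low_frequency_benefit(40.0, 55.0))  # → 0.375
```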
Binaural Pitch Fusion in Bilateral Cochlear Implant Users.
Reiss, Lina A J; Fowler, Jennifer R; Hartling, Curtis L; Oh, Yonghee
Binaural pitch fusion is the fusion of stimuli that evoke different pitches between the ears into a single auditory image. Individuals who use hearing aids or bimodal cochlear implants (CIs) experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3-4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The goal of this study was to determine if adult bilateral CI users also experience broad binaural pitch fusion. Stimuli were pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Bilateral CI listeners had binaural pitch fusion ranges varying from 0 to 12 mm (average 6.1 ± 3.9 mm), where 12 mm indicates fusion over all electrodes in the array. No significant correlations of fusion range were observed with any subject factors related to age, hearing loss history, or hearing device history, or with any electrode factors including interaural electrode pitch mismatch, pitch match bandwidth, or within-ear electrode discrimination abilities. Bilateral CI listeners have abnormally broad fusion, similar to hearing aid and bimodal CI listeners. This broad fusion may explain the variability of binaural benefits for speech perception in quiet and in noise in bilateral CI users.
Hirnstein, Marco; Westerhausen, René; Korsnes, Maria S; Hugdahl, Kenneth
2013-01-01
Men are often believed to have a functionally more asymmetrical brain organization than women, but the empirical evidence for sex differences in lateralization is unclear to date. Over the years we have collected data from a vast number of participants using the same consonant-vowel dichotic listening task, a reliable marker for language lateralization. One dataset comprised behavioral data from 1782 participants (885 females, 125 non-right-handers), who were divided in four age groups (children <10 yrs, adolescents = 10-15 yrs, younger adults = 16-49 yrs, and older adults >50 yrs). In addition, we had behavioral and functional imaging (fMRI) data from another 104 younger adults (49 females, aged 18-45 yrs), who completed the same dichotic listening task in a 3T scanner. This database allowed us to comprehensively test whether there is a sex difference in functional language lateralization. Across all participants and in both datasets a right ear advantage (REA) emerged, reflecting left-hemispheric language lateralization. Accordingly, the fMRI data revealed a leftward asymmetry in superior temporal lobe language processing areas. In the N = 1782 dataset no main effect of sex but a significant sex by age interaction emerged: the REA increased with age in both sexes but as a result of an earlier onset in females the REA was stronger in female than male adolescents. In turn, male younger adults showed greater asymmetry than female younger adults (accounting for <1% of variance). There were no sex differences in children and older adults. The males in the fMRI dataset (N = 104) also had a greater REA than females (accounting for 4% of variance), but no sex difference emerged in the neuroimaging data. Handedness did not affect these findings. Taken together, our findings suggest that sex differences in language lateralization as assessed with dichotic listening exist, but they are (a) not necessarily reflected in fMRI data, (b) age-dependent and (c) relatively small. 
Copyright © 2012 Elsevier Ltd. All rights reserved.
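The right-ear advantage reported in datasets like the one above is often quantified with a laterality index over correct reports from each ear. The formulation below is a standard one from the dichotic listening literature, not necessarily the exact index computed in this study.

```python
# A common laterality index for dichotic listening:
# LI = 100 * (R - L) / (R + L); positive values indicate a right-ear
# advantage (REA). Illustrative sketch, with hypothetical counts.

def laterality_index(right_correct, left_correct):
    total = right_correct + left_correct
    if total == 0:
        return 0.0  # no correct reports: treat as no measurable asymmetry
    return 100.0 * (right_correct - left_correct) / total

print(laterality_index(24, 16))  # → 20.0
```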
From dichoptic to dichotic: historical contrasts between binocular vision and binaural hearing.
Wade, Nicholas J; Ono, Hiroshi
2005-01-01
Phenomena involving vision with two eyes have been commented upon for several thousand years whereas those concerned with hearing with two ears have a much more recent history. Studies of binocular vision and binaural hearing are contrasted with respect to the singleness of the percept, experimental manipulations of dichoptic and dichotic stimuli, eye and ear dominance, spatial localisation, and the instruments used to stimulate the paired organs. One of the principal phenomena that led to studies of dichotic hearing was dichoptic colour mixing. There was similar disagreement regarding whether colours or sounds could be combined when presented to different paired organs. Direction and distance in visual localisation were analysed before those for auditory localisation, partly due to difficulties in controlling the stimuli. Instruments for investigating binocular vision, like the stereoscope and pseudoscope, were invented before those for binaural hearing, like the stethophone and pseudophone.
ERIC Educational Resources Information Center
Di Stefano, Marirosa; Marano, Elena; Viti, Marzia
2004-01-01
The assessment of language laterality by the dichotic fused-words test may be impaired by interference effects, revealed by the dominant report of one member of the stimulus pair. Stimulus dominance and ear asymmetry were evaluated in a normal population (48 subjects of both sexes and handedness) and in 2 patients with a single functional hemisphere.…
Lanzetta-Valdo, Bianca Pinheiro; Oliveira, Giselle Alves de; Ferreira, Jane Tagarro Correa; Palacios, Ester Miyuki Nakamura
2016-01-01
Introduction Children with Attention Deficit Hyperactivity Disorder (ADHD) can present Auditory Processing (AP) Disorder. Objective The study examined AP in ADHD children compared with non-ADHD children, before and after 3 and 6 months of methylphenidate (MPH) treatment in the ADHD children. Methods Drug-naive children diagnosed with ADHD combined subtype, aged 7 to 11 years, recruited from public and private outpatient services or public and private schools, and age- and gender-matched non-ADHD children, participated in an open, non-randomized study from February 2013 to December 2013. They were submitted to a behavioral battery of AP tests comprising Speech with white Noise (SN), Dichotic Digits (DD), and Pitch Pattern Sequence (PPS), and were compared with the non-ADHD children. They were followed over 3 and 6 months of MPH treatment (0.5 mg/kg/day). Results ADHD children made a larger number of errors in the DD (p < 0.01) and gave fewer correct responses in the PPS (p < 0.0001) and SN (p < 0.05) tests when compared with non-ADHD children. Treatment with MPH, especially over 6 months, significantly decreased the mean errors in the DD (p < 0.01) and increased the correct responses in the PPS (p < 0.001) and SN (p < 0.01) tests when compared with performance before MPH treatment. Conclusions ADHD children show inefficient AP in a selected behavioral auditory battery, suggesting impairment of auditory closure, binaural integration, and temporal ordering. Treatment with MPH gradually improved these deficiencies and completely reversed them, with performance similar to non-ADHD children at 6 months of treatment. PMID:28050211
Temporal relation between top-down and bottom-up processing in lexical tone perception
Shuai, Lan; Gong, Tao
2013-01-01
Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we discovered that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right hemisphere activation; in contrast, linguistic processing of lexical tones, as a top-down effect, elicited greater left hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supports a parallel model of lexical tone perception. In contrast to the previous view that language processing is special and performed by dedicated neural circuitry, our study has shown that language processing can be decomposed into general cognitive functions (e.g., sensory and memory) and shares neural resources with these functions. PMID:24723863
Effects of stimulus characteristics and task demands on pilots' perception of dichotic messages
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1986-01-01
The experiment is an initial investigation of pilot performance when auditory advisory messages are presented dichotically, either with or without a concurrent pursuit task requiring visual/motor dexterity. The dependent measures were percent correct and correct reaction times for manual responses to the auditory messages. Two stimulus variables which show facilitatory effects in traditional dichotic-listening paradigms, differences in pitch and semantic content of the messages, were examined to determine their effectiveness during the functional simulation of helicopter pursuit. Pilots accumulated points for responses to the advisory messages based either on accuracy alone or on both accuracy and reaction times faster than their opponent's. In general, the combined effects of the stimulus and task variables are additive. When interactions do occur, they suggest that an increase in task demands can sometimes mitigate, but usually does not remove, any processing advantages accrued from stimulus characteristics. The implications of these results for cockpit displays are discussed.
Standardization of Performance Tests: A Proposal for Further Steps.
1986-07-01
obviously demand substantial attention can sometimes be time-shared perfectly. Wickens describes cases in which skilled pianists can time-share sight-reading... effects of divided attention on information processing in tracking. Journal of Experimental Psychology, 1, 1-13. Wickens, C.D. (1984). Processing resources... attention he regards focused/divided attention tasks (e.g., dichotic listening, dual-task situations) as theoretically useful. From his point of view good
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording of the Mismatch Negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI thus show limitations in auditory processing on complex tasks that involve repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
On the possibility of a place code for the low pitch of high-frequency complex tones
Santurette, Sébastien; Dau, Torsten; Oxenham, Andrew J.
2012-01-01
Harmonics are considered unresolved when they interact with neighboring harmonics and cannot be heard out separately. Several studies have suggested that the pitch derived from unresolved harmonics is coded via temporal fine-structure cues emerging from their peripheral interactions. Such conclusions rely on the assumption that the components of complex tones with harmonic ranks down to at least 9 were indeed unresolved. The present study tested this assumption via three different measures: (1) the effects of relative component phase on pitch matches, (2) the effects of dichotic presentation on pitch matches, and (3) listeners' ability to hear out the individual components. No effects of relative component phase or dichotic presentation on pitch matches were found in the tested conditions. Large individual differences were found in listeners' ability to hear out individual components. Overall, the results are consistent with the coding of individual harmonic frequencies, based on the tonotopic activity pattern or phase locking to individual harmonics, rather than with temporal coding of single-channel interactions. However, they are also consistent with more general temporal theories of pitch involving the across-channel summation of information from resolved and/or unresolved harmonics. Simulations of auditory-nerve responses to the stimuli suggest potential benefits to a spatiotemporal mechanism. PMID:23231119
Sex differences in left-right confusion depend on hemispheric asymmetry.
Hirnstein, Marco; Ocklenburg, Sebastian; Schneider, Daniel; Hausmann, Markus
2009-01-01
Numerous studies have reported that women believe they are more susceptible to left-right confusion than men. Indeed, some studies have also found sex differences in behavioural tasks. It has been suggested that women have more difficulties with left-right discrimination, because they are less lateralised than men and a lower degree of lateralisation might lead to more left-right confusion (LRC). However, those studies reporting more left-right confusion for women have been criticised because the tasks that have been used involved mental rotation, a spatial ability in which men typically excel. In the present study, 34 right-handed women and 31 right-handed men completed two behavioural left-right discrimination tasks, in which mental rotation was either experimentally controlled for or was not needed. To measure the degree of hemispheric asymmetry participants also completed a dichotic listening test. Although women were not less lateralised than men, both tasks consistently revealed that women were more susceptible to left-right confusion than men. However, only women with a significant right ear advantage in the dichotic listening test had more difficulties in LRC tasks than men. There was no sex difference in less lateralised participants. This finding suggests that the impact of functional verbal asymmetries on LRC is mediated by sex.
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More striking, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
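A dichotic pitch stimulus of the kind described above can be sketched by presenting the same white noise to both ears while imposing a sub-millisecond interaural delay on a narrow frequency band in one ear; that band then tends to segregate perceptually as a faint tone. The parameters below (`f0`, `bw`, `itd`) are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def dichotic_pitch_stimulus(fs=44100, dur=1.0, f0=500.0, bw=50.0, itd=0.0005):
    """Stereo noise that is interaurally identical except for a narrow band
    around f0 whose right-ear copy is delayed by itd seconds."""
    n = int(fs * dur)
    noise = np.random.default_rng(0).standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    right = spec.copy()
    band = (freqs > f0 - bw / 2) & (freqs < f0 + bw / 2)
    # a pure delay is a linear phase shift in the frequency domain:
    # exp(-j * 2*pi*f * itd)
    right[band] *= np.exp(-2j * np.pi * freqs[band] * itd)
    return np.stack([np.fft.irfft(spec, n), np.fft.irfft(right, n)])
```

The control stimuli described above, containing no inter-aural timing differences, correspond simply to `itd=0`, for which the two channels are identical.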
Hund-Georgiadis, Margret; Lex, Ulrike; Friederici, Angela D; von Cramon, D Yves
2002-07-01
Language lateralization was assessed by two independent functional techniques, fMRI and a dichotic listening test (DLT), in an attempt to establish a reliable and non-invasive protocol of dominance determination. This should particularly address the high intraindividual variability of language lateralization and allow decision-making in individual cases. Functional MRI of word classification tasks showed robust language lateralization in 17 right-handers and 17 left-handers in terms of activation in the inferior frontal gyrus. The DLT was introduced as a complementary tool to MR mapping for language dominance assessment, providing information on perceptual language processing located in superior temporal cortices. The overall agreement of lateralization assessment between the two techniques was 97.1%. Conflicting results were found in one subject, and diverging indices in ten further subjects. Increasing age, non-familial sinistrality, and a non-dominant writing hand were identified as the main factors explaining the observed mismatch between the two techniques. This finding stresses the concept of an intrahemispheric distribution of language function that is obviously associated with certain behavioral characteristics.
Frequency-shift detectors bind binaural as well as monaural frequency representations.
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2011-12-01
Previous psychophysical work provided evidence for the existence of automatic frequency-shift detectors (FSDs) that establish perceptual links between successive sounds. In this study, we investigated the characteristics of the FSDs with respect to the binaural system. Listeners were presented with sound sequences consisting of a chord of pure tones followed by a single test tone. Two tasks were performed. In the "present/absent" task, the test tone was either identical to one of the chord components or positioned halfway in frequency between two components, and listeners had to discriminate between these two possibilities. In the "up/down" task, the test tone was slightly different in frequency from one of the chord components and listeners had to identify the direction (up or down) of the corresponding shift. When the test tone was a pure tone presented monaurally, either to the same ear as the chord or to the opposite ear, listeners performed the up/down task better than the present/absent task. This paradoxical advantage for directional frequency shifts, providing evidence for FSDs, persisted when the test tone was replaced by a dichotic stimulus consisting of noise but evoking a pitch sensation as a consequence of binaural processing. Performance in the up/down task was similar for the dichotic stimulus and for a monaural narrow-band noise matched in pitch salience to it. Our results indicate that the FSDs are insensitive to sound localization mechanisms and operate on central frequency representations, at or above the level of convergence of the monaural auditory pathways.
An auditory brain-computer interface evoked by natural speech
NASA Astrophysics Data System (ADS)
Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto
2012-06-01
Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs require the subjects to gaze, explore and shift eye-gaze using their muscles, which limits their ultimate use by excluding patients in a complete locked-in state or under the condition of the unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
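Information transmission rates such as the 1.5 and 2.7 bits/min quoted above are conventionally derived from the number of classes, the classification accuracy, and the trial rate. Whether this study used exactly this calculation is not stated in the abstract; a sketch of the standard Wolpaw formula is:

```python
from math import log2

def wolpaw_itr(n_classes: int, accuracy: float, trials_per_min: float) -> float:
    """Information transfer rate in bits/min under the Wolpaw formula."""
    if n_classes < 2 or not 0.0 < accuracy <= 1.0:
        raise ValueError("need >= 2 classes and accuracy in (0, 1]")
    p = accuracy
    bits_per_trial = log2(n_classes)
    if p < 1.0:  # the penalty terms vanish at perfect accuracy
        bits_per_trial += p * log2(p) + (1 - p) * log2((1 - p) / (n_classes - 1))
    return bits_per_trial * trials_per_min

# e.g. a binary (left/right attention) choice at 90% accuracy, 5 trials/min
rate = wolpaw_itr(2, 0.9, 5.0)  # about 2.66 bits/min
```

Shorter trials raise `trials_per_min` but usually lower `accuracy`, which is why an optimal trial length (as reported above) can yield a higher rate than full-length trials.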
A frontal but not parietal neural correlate of auditory consciousness.
Brancucci, Alfredo; Lugli, Victor; Perrucci, Mauro Gianni; Del Gratta, Cosimo; Tommasi, Luca
2016-01-01
Hemodynamic correlates of consciousness were investigated in humans during the presentation of a dichotic sequence inducing illusory auditory percepts with features analogous to visual multistability. The sequence consisted of a variation of the original stimulation eliciting Deutsch's octave illusion, created to maintain a stable illusory percept long enough to allow the detection of the underlying hemodynamic activity using functional magnetic resonance imaging (fMRI). Two mirror-image 500 ms dichotic stimuli (400 and 800 Hz) presented in alternation by means of earphones cause an illusory segregation of pitch and ear of origin which can yield up to four different auditory percepts per dichotic stimulus. Such percepts are maintained stable when one of the two dichotic stimuli is presented repeatedly for 6 s, immediately after the alternation. We observed hemodynamic activity specifically accompanying conscious experience of pitch in a bilateral network including the superior frontal gyrus (SFG; BA9 and BA10), medial frontal gyrus (MFG; BA6 and BA9), insula (BA13), and posterior lateral nucleus of the thalamus. Conscious experience of side (ear of origin) was instead specifically accompanied by bilateral activity in the MFG (BA6), superior temporal gyrus (STG; BA41), parahippocampal gyrus (BA28), and insula (BA13). These results suggest that the neural substrate of auditory consciousness, differently from that of visual consciousness, may rest upon a fronto-temporal rather than a fronto-parietal network. Moreover, they indicate that the neural correlates of consciousness depend on the specific features of the stimulus, and they suggest the SFG-MFG and the insula as important cortical nodes for auditory conscious experience.
Response procedure, memory, and dichotic emotion recognition.
Voyer, Daniel; Dempsey, Danielle; Harding, Jennifer A
2014-03-01
Three experiments investigated the role of memory and rehearsal in a dichotic emotion recognition task by manipulating the response procedure as well as the interval between encoding and retrieval, while taking into account order of report. For all experiments, right-handed undergraduates were presented with dichotic pairs of the words bower, dower, power, and tower pronounced in a sad, angry, happy, or neutral tone of voice. Participants were asked to report the two emotions presented on each trial by clicking on the corresponding drawings or words on a computer screen, either following no delay or a five-second delay. Experiment 1 applied the delay conditions as a between-subjects factor, whereas it was a within-subject factor in Experiment 2. In Experiments 1 and 2, more correct responses occurred for the left than the right ear, reflecting a left ear advantage (LEA) that was slightly larger with a nonverbal than a verbal response. The LEA was also found to be larger with no delay than with the 5-s delay. In addition, participants typically responded first to the left ear stimulus. In fact, the first response produced an LEA whereas the second response produced a right ear advantage. Experiment 3 involved a concurrent task during the delay to prevent rehearsal. In Experiment 3, the pattern of results supported the claim that rehearsal could account for the findings of the first two experiments. The findings are interpreted in the context of the role of rehearsal and memory in models of dichotic listening. Copyright © 2013 Elsevier Inc. All rights reserved.
Phélip, Marion; Donnot, Julien; Vauclair, Jacques
2015-12-18
In their groundbreaking work featuring verbal dichotic listening tasks, Mondor and Bryden showed that tone cues do not enhance children's attentional orienting, in contrast to adults. The magnitude of the children's right-ear advantage was not attenuated when their attention was directed to the left ear. Verbal cues did, however, appear to favour the orientation of attention at around 10 years, although stimulus-onset asynchronies (SOAs), which ranged between 450 and 750 ms, were not rigorously controlled. The aim of our study was therefore to investigate the role of both types of cues in a typical CV-syllable dichotic listening task administered to 8- to 10-year-olds, applying a protocol as similar as possible to that used by Mondor and Bryden, but controlling for SOA as well as for cued ear. Results confirmed that verbal cues are more effective than tone cues in orienting children's attention. However, in contrast to adults, no effect of SOA was observed. We discuss the relative difficulty young children have processing CV syllables, as well as the role of top-down processes in attentional orienting abilities.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-08-01
The isoprenoid pathway produces three key metabolites: digoxin (membrane sodium-potassium ATPase inhibitor and regulator of neurotransmitter transport), dolichol (regulator of N-glycosylation of proteins), and ubiquinone (free radical scavenger). The isoprenoid pathway was assessed in patients with bronchial asthma. The pathway was also assessed in patients with right hemispheric, left hemispheric, and bihemispheric dominance to determine the role of hemispheric dominance in the pathogenesis of bronchial asthma. The pathway was upregulated, with an increase in digoxin synthesis, in bronchial asthma. There was an increase in tryptophan catabolites and a reduction in tyrosine catabolites in patients with bronchial asthma. Ubiquinone levels were low and lipid peroxidation was increased in these patients. There was an increase in dolichol and glycoconjugate levels and a reduction in lysosomal stability in these patients. The cholesterol:phospholipid ratio was increased and glycoconjugate levels were reduced in the membranes of these patients. The patterns noticed in bronchial asthma were similar to those in patients with right hemispheric chemical dominance, suggesting that bronchial asthma occurs in right hemispheric chemically dominant individuals. Ninety percent of the patients with bronchial asthma were right-handed and left hemispheric dominant by the dichotic listening test, yet their biochemical patterns were similar to those obtained in right hemispheric chemical dominance. Hemispheric chemical dominance is thus a different entity and has no correlation with handedness or the dichotic listening test.
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
Figure-ground in a dichotic task and its relation to untrained skills.
Cibian, Aline Priscila; Pereira, Liliane Desgualdo
2015-01-01
To evaluate the effectiveness of auditory training in a dichotic task and to compare the responses of trained skills with those of untrained skills after 4-8 weeks. Nineteen subjects, aged 12-15 years, underwent auditory training based on the dichotic interaural intensity difference (DIID), organized in eight sessions, each lasting 50 min. The assessment of auditory processing was conducted in three stages: before the intervention, in the middle of the training, and at the end of the training. Data from this evaluation were analyzed by disorder group, according to the changes in the auditory processes evaluated: selective attention and temporal processing. The groups were named the selective attention group (SAG), the temporal processing group (TPG), and, for both processes, the selective attention and temporal processing group (SATPG). The training improved both the trained and the untrained auditory closure skill, normalizing all individuals. The untrained solving and temporal ordering skills did not reach normality for the SATPG and TPG. Individuals reached normality for the trained figure-ground skill and for the untrained auditory closure skill. The untrained solving and temporal ordering skills improved in some individuals but failed to reach normality.
A novel procedure for examining pre-lexical phonetic-level analysis
NASA Astrophysics Data System (ADS)
Bashford, James A.; Warren, Richard M.; Lenz, Peter W.
2005-09-01
A recorded word repeated over and over is heard to undergo a series of illusory changes (verbal transformations) to other syllables and words in the listener's lexicon. When a second image of the same repeating word is added through dichotic presentation (with an interaural delay preventing fusion), the two distinct lateralized images of the word undergo independent illusory transformations at the same rate observed for a single image [Lenz et al., J. Acoust. Soc. Am. 107, 2857 (2000)]. However, when the contralateral word differs by even one phoneme, the transformation rate decreases dramatically [Bashford et al., J. Acoust. Soc. Am. 110, 2658 (2001)]. This suppression of transformations did not occur when a nonspeech competitor was employed. The present study found that dichotic suppression of transformation rate is also independent of the top-down influence of a verbal competitor's word frequency, neighborhood density, and lexicality. However, suppression did increase with the extent of feature mismatch at a given phoneme position (e.g., transformations for "dark" were suppressed more by contralateral "hark" than by "bark"). These and additional findings indicate that dichotic verbal transformations can provide experimental access to a pre-lexical phonetic analysis normally obscured by subsequent processing. [Work supported by NIH.]
Objective speech quality evaluation of real-time speech coders
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Huggins, A. W. F.
1984-02-01
This report describes the work performed in two areas: subjective testing of a real-time 16 kbit/s adaptive predictive coder (APC) and objective speech quality evaluation of real-time coders. The speech intelligibility of the APC coder was tested using the Diagnostic Rhyme Test (DRT), and the speech quality was tested using the Diagnostic Acceptability Measure (DAM) test, under eight operating conditions involving channel error, acoustic background noise, and tandem link with two other coders. The test results showed that the DRT and DAM scores of the APC coder equalled or exceeded the corresponding test scores of the 32 kbit/s CVSD coder. In the area of objective speech quality evaluation, the report describes the development, testing, and validation of a procedure for automatically computing several objective speech quality measures, given only the tape-recordings of the input speech and the corresponding output speech of a real-time speech coder.
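The abstract does not name the specific objective measures the procedure computes; segmental SNR between the time-aligned input and coded output is one classic waveform-domain measure of this kind, sketched here with illustrative frame-length and clipping parameters:

```python
import numpy as np

def segmental_snr(clean, coded, fs=8000, frame_ms=20, limits=(-10.0, 35.0)):
    """Frame-averaged segmental SNR in dB between a reference signal and a
    coder's output. Assumes the two 1-D arrays are time-aligned and of equal
    length; per-frame values are clipped to the given limits, a common way
    to keep silent or near-transparent frames from dominating the average."""
    n = int(fs * frame_ms / 1000)
    snrs = []
    for i in range(0, len(clean) - n + 1, n):
        s = clean[i:i + n]
        e = coded[i:i + n] - s
        sig, err = np.sum(s * s), np.sum(e * e)
        if sig > 0 and err > 0:  # skip silent or error-free frames
            snrs.append(np.clip(10 * np.log10(sig / err), *limits))
    return float(np.mean(snrs)) if snrs else float("nan")
```

Higher values indicate output closer to the input waveform; perceptually oriented measures of the kind validated against DAM scores would weight errors differently.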
Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
2015-08-01
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical signs and/or symptoms of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure-tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal and environmental sounds and in sound lateralization, along with strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure-tone audiometry and/or ABR, this should not be diagnosed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Subjective and psychophysiological indices of listening effort in a competing-talker task
Mackersie, Carol L.; Cones, Heather
2010-01-01
Background The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech recognition task that requires inhibition of competing sounds. Purpose The purpose was a) to determine if psychophysiological indices of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand and c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. Research Design A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Study Sample Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24–54). Data Collection and Analysis Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic activity (EMG) were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of tasks demand were: single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the NASA-TLX questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. 
Results: Mean performance was higher than 96% for all listening tasks. There was no significant change in performance across listening conditions for any listener. There was, however, a significant increase in mean skin conductance and EMG activity as task demand increased. Heart rate and skin temperature did not change significantly. There was no strong association between subjective and psychophysiological measures, but all participants with mean normalized effort ratings greater than 4.5 (i.e., effort increased by a factor of at least 4.5) showed significant changes in skin conductance. Conclusions: Even in the absence of substantial performance changes, listeners may experience changes in subjective and psychophysiological responses consistent with activation of a stress response. Skin conductance appears to be the most promising measure for evaluating individual changes in psychophysiological responses during listening tasks. PMID:21463566
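The normalization described above (each effort rating expressed relative to the low-demand reference condition, with a cut-off factor of 4.5) can be sketched in a few lines. The function names and the example data are illustrative, not taken from the study:

```python
# Sketch of the effort-rating normalization described above: a participant's
# NASA-TLX effort rating in a demanding condition is expressed as a multiple
# of their low-demand reference rating. The 4.5 cut-off follows the abstract;
# the data values are invented for illustration.

def normalized_effort(reference_rating, condition_rating):
    """Return condition effort as a multiple of the reference effort."""
    if reference_rating <= 0:
        raise ValueError("reference rating must be positive")
    return condition_rating / reference_rating

def flag_high_effort(participants, cutoff=4.5):
    """Flag participants whose normalized effort exceeds the cutoff."""
    flagged = []
    for pid, (ref, cond) in participants.items():
        if normalized_effort(ref, cond) > cutoff:
            flagged.append(pid)
    return flagged

ratings = {"P01": (1.0, 5.0), "P02": (2.0, 4.0)}  # (reference, high-demand)
print(flag_high_effort(ratings))  # ['P01']: 5.0x > 4.5, while P02 is only 2.0x
```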
Central auditory processing effects induced by solvent exposure.
Fuente, Adrian; McPherson, Bradley
2007-01-01
Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies conducted in workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers, and on the other hand, central auditory damage due to solvent exposure. Taking into account the potential auditory damage induced by solvent exposure due to the neurotoxic properties of such substances, the present research aimed to study the possible auditory processing disorder (APD) and the possible hearing difficulties in daily life listening situations that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers still presented normal hearing thresholds as a group (equal to or better than 20 dB HL). Also, for the HINT, DD, PPS, FS and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily life listening situations than non-exposed workers.
It is concluded that subjects exposed to solvents may acquire an APD and thus the sole use of pure-tone audiometry is insufficient to assess hearing in solvent-exposed populations.
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Neural effects of cognitive control load on auditory selective attention
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali
2014-01-01
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314
Westerhausen, René; Grüner, Renate; Specht, Karsten; Hugdahl, Kenneth
2009-06-01
The midsagittal corpus callosum is topographically organized; that is, several sub-tracts, distinguished by their cortical origin, can be identified within the corpus callosum, each belonging to a specific functional brain network. Recent diffusion tensor tractography studies have also revealed remarkable interindividual differences in the size and exact localization of these tracts. To examine the functional relevance of interindividual variability in callosal tracts, 17 right-handed male participants underwent structural and diffusion tensor magnetic resonance imaging. Probabilistic tractography was carried out to identify the callosal subregions that interconnect left and right temporal lobe auditory processing areas, and the midsagittal size of this tract was taken as an indicator of the (anatomical) strength of this connection. Auditory information transfer was assessed using an auditory speech perception task with dichotic presentation of consonant-vowel syllables (e.g., /ba-ga/). The frequency of correct left-ear reports in this task served as a functional measure of interhemispheric transfer. Statistical analysis showed that a stronger anatomical connection between the superior temporal lobe areas supports better information transfer. This specific structure-function association in the auditory modality supports the general notion that interindividual differences in callosal topography possess functional relevance.
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
The effect of unimodal affective priming on dichotic emotion recognition.
Voyer, Daniel; Myles, Daniel
2017-11-15
The present report concerns two experiments extending to unimodal priming the cross-modal priming effects observed with auditory emotions by Harding and Voyer [(2016). Laterality effects in cross-modal affective priming. Laterality: Asymmetries of Body, Brain and Cognition, 21, 585-605]. Experiment 1 used binaural targets to establish the presence of the priming effect and Experiment 2 used dichotically presented targets to examine auditory asymmetries. In Experiment 1, 82 university students completed a task in which binaural targets consisting of one of four English words inflected in one of four emotional tones were preceded by binaural primes consisting of one of four Mandarin words pronounced in the same (congruent) or different (incongruent) emotional tones. Trials where the prime emotion was congruent with the target emotion showed faster responses and higher accuracy in identifying the target emotion. In Experiment 2, 60 undergraduate students participated and the target was presented dichotically instead of binaurally. Primes congruent with the left ear produced a large left-ear advantage, whereas right-congruent primes produced a right-ear advantage. These results indicate that unimodal priming produces stronger effects than those observed under cross-modal priming. The findings suggest that priming should likely be considered a strong top-down influence on laterality effects.
Carlyon, Robert P.; Long, Christopher J.; Deeks, John M.
2008-01-01
Experiment 1 measured rate discrimination of electric pulse trains by bilateral cochlear implant (CI) users, for standard rates of 100, 200, and 300 pps. In the diotic condition the pulses were presented simultaneously to the two ears. Consistent with previous results with unilateral stimulation, performance deteriorated at higher standard rates. In the signal interval of each trial in the dichotic condition, the standard rate was presented to the left ear and the (higher) signal rate was presented to the right ear; the non-signal intervals were the same as in the diotic condition. For standard rates of 100 and 200 pps, but not at 300 pps, some listeners performed better in the dichotic condition than in the diotic condition. It is concluded that the deterioration in rate discrimination observed for CI users at high rates cannot be alleviated by the introduction of a binaural cue, and is unlikely to be limited solely by central pitch processes. Experiment 2 performed an analogous experiment in which 300-pps acoustic pulse trains were bandpass filtered (3900-5400 Hz) and presented in a noise background to normal-hearing listeners. Unlike the results of experiment 1, performance was superior in the dichotic condition compared with the diotic condition. PMID:18397032
Erb, Julia; Ludwig, Alexandra Annemarie; Kunke, Dunja; Fuchs, Michael; Obleser, Jonas
2018-04-24
Psychoacoustic tests assessed shortly after cochlear implantation are useful predictors of the rehabilitative speech outcome. While largely independent, both spectral and temporal resolution tests are important to provide an accurate prediction of speech recognition. However, rapid tests of temporal sensitivity are currently lacking. Here, we propose a simple amplitude modulation rate discrimination (AMRD) paradigm that is validated by predicting future speech recognition in adult cochlear implant (CI) patients. In 34 newly implanted patients, we used an adaptive AMRD paradigm, where broadband noise was modulated at the speech-relevant rate of ~4 Hz. In a longitudinal study, speech recognition in quiet was assessed using the closed-set Freiburger number test shortly after cochlear implantation (t0) as well as the open-set Freiburger monosyllabic word test 6 months later (t6). Both AMRD thresholds at t0 (r = -0.51) and speech recognition scores at t0 (r = 0.56) predicted speech recognition scores at t6. However, AMRD and speech recognition at t0 were uncorrelated, suggesting that those measures capture partially distinct perceptual abilities. A multiple regression model predicting 6-month speech recognition outcome with deafness duration and speech recognition at t0 improved from adjusted R² = 0.30 to adjusted R² = 0.44 when AMRD threshold was added as a predictor. These findings identify AMRD thresholds as a reliable, nonredundant predictor above and beyond established speech tests for CI outcome. This AMRD test could potentially be developed into a rapid clinical temporal-resolution test to be integrated into the postoperative test battery to improve the reliability of speech outcome prognosis.
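The hierarchical regression logic described above (does adding the AMRD threshold raise the adjusted R² of a model that already contains deafness duration and baseline speech scores?) can be sketched with ordinary least squares. The data below are constructed for illustration only; they are not the study's data, and the coefficient values are arbitrary:

```python
# Sketch of the hierarchical-regression comparison described above:
# compare adjusted R^2 of a base model (duration + baseline speech score)
# against a full model that also includes the AMRD threshold.
# All data here are simulated; only the sample size (n = 34) follows the text.
import numpy as np

def adjusted_r2(X, y):
    """Fit OLS with an intercept and return the adjusted R^2."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
    p = X1.shape[1] - 1  # number of predictors (excluding intercept)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

n = 34
idx = np.arange(n)
duration = 10 + 0.8 * idx                 # years of deafness (simulated)
speech_t0 = 40 + 20 * np.sin(idx / 3.0)   # baseline speech score (simulated)
amrd = 3 + 2 * np.cos(idx / 2.0)          # AMRD threshold (simulated)
noise = np.sin(1.7 * idx)                 # small deterministic "noise"
speech_t6 = 0.3 * speech_t0 - 0.5 * duration - 3.0 * amrd + noise

base = adjusted_r2(np.column_stack([duration, speech_t0]), speech_t6)
full = adjusted_r2(np.column_stack([duration, speech_t0, amrd]), speech_t6)
print(full > base)  # True: AMRD carries real explanatory power in these data
```

The same comparison on real data would normally be accompanied by an F-test on the R² change rather than a bare adjusted-R² comparison.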
Di Berardino, F; Tognola, G; Paglialonga, A; Alpini, D; Grandori, F; Cesarani, A
2010-08-01
To assess whether different compact disk recording protocols, used to prepare speech test material, affect the reliability and comparability of speech audiometry testing. We conducted acoustic analysis of compact disks used in clinical practice, to determine whether speech material had been recorded using similar procedures. To assess the impact of different recording procedures on speech test outcomes, normal hearing subjects were tested using differently prepared compact disks, and their psychometric curves compared. Acoustic analysis revealed that speech material had been recorded using different protocols. The major difference was the gain between the levels at which the speech material and the calibration signal had been recorded. Although correct calibration of the audiometer was performed for each compact disk before testing, speech recognition thresholds and maximum intelligibility thresholds differed significantly between compact disks (p < 0.05), and were influenced by the gain between the recording level of the speech material and the calibration signal. To ensure the reliability and comparability of speech test outcomes obtained using different compact disks, it is recommended to check for possible differences in the recording gains used to prepare the compact disks, and then to compensate for any differences before testing.
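The gain check recommended above (compare the level of the recorded speech material against the calibration signal and compensate for any difference) can be sketched as an RMS-level comparison. The signal arrays are illustrative placeholders, not actual disk recordings:

```python
# Sketch of the recording-gain check described above: measure the RMS
# level of the speech material relative to the calibration signal and
# express the difference in dB. The sample arrays are illustrative.
import math

def rms(samples):
    """Root-mean-square amplitude of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def gain_db(speech, calibration):
    """Level difference (dB) between speech material and calibration signal."""
    return 20.0 * math.log10(rms(speech) / rms(calibration))

calibration = [0.5, -0.5] * 100   # steady calibration signal
speech = [0.25, -0.25] * 100      # speech material recorded at half amplitude
print(round(gain_db(speech, calibration), 1))  # -6.0 dB; compensate before testing
```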
Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.
2017-01-01
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now, no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping on verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important to significantly predict vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test and task switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329
Momen, Nausheen
2009-12-01
The use of computer-based, psychomotor testing systems for personnel selection and classification has gained popularity in the civilian and military worlds in recent years. However, several issues need to be resolved before adopting a computerized, psychomotor test. The purpose of this study was to compare the impact of alternative input devices used for the Test Of Basic Aviation Skills (TBAS) as well as to explore the practice effects of the TBAS. In study 1, participants were administered the TBAS tracking tests once with a throttle and once with foot pedals in a classic test-retest paradigm. The results confirmed that neither of the input devices provided a significant advantage on TBAS performance. In study 2, participants were administered the TBAS twice with a 24-hour interval between testing. The results demonstrated significant practice effects for all the TBAS subtests except for the dichotic listening tests.
Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger
2015-01-01
The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
Effect of technological advances on cochlear implant performance in adults.
Lenarz, Minoo; Joseph, Gert; Sönmez, Hasibe; Büchner, Andreas; Lenarz, Thomas
2011-12-01
To evaluate the effect of technological advances in the past 20 years on the hearing performance of a large cohort of adult cochlear implant (CI) patients. Individual, retrospective, cohort study. According to technological developments in electrode design and speech-processing strategies, we defined five virtual intervals on the time scale between 1984 and 2008. A cohort of 1,005 postlingually deafened adults was selected for this study, and their hearing performance with a CI was evaluated retrospectively according to these five technological intervals. The test battery was composed of four standard German speech tests: Freiburger monosyllabic test, speech tracking test, Hochmair-Schulz-Moser (HSM) sentence test in quiet, and HSM sentence test in 10 dB noise. The direct comparison of the speech perception in postlingually deafened adults, who were implanted during different technological periods, reveals an obvious improvement in the speech perception in patients who benefited from the recent electrode designs and speech-processing strategies. The major influence of technological advances on CI performance seems to be on speech perception in noise. Better speech perception in noisy surroundings is strong proof for demonstrating the success rate of new electrode designs and speech-processing strategies. Standard (internationally comparable) speech tests in noise should become an obligatory part of the postoperative test battery for adult CI patients. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Investigation of potential cognitive tests for use with older adults in audiology clinics.
Vaughan, Nancy; Storzbach, Daniel; Furukawa, Izumi
2008-01-01
Cognitive declines in working memory and processing speed are hallmarks of aging. Deficits in speech understanding also are seen in aging individuals. A clinical test to determine whether cognitive aging changes contribute to age-related speech understanding difficulties would be helpful for determining rehabilitation strategies in audiology clinics. To identify a clinical neurocognitive test or battery of tests that could be used in audiology clinics to help explain deficits in speech recognition in some older listeners. A correlational study examining the association between certain cognitive test scores and speech recognition performance. Speeded (time-compressed) speech was used to increase the cognitive processing load. Two hundred twenty-five adults aged 50 through 75 years participated in this study. A selected battery of neurocognitive tests and a time-compressed speech recognition test battery using various rates of speech were administered to all participants in two separate sessions. Principal component analysis was used to extract the important component factors from each set of tests, and regression models were constructed to examine the association between tests and to identify the neurocognitive test most strongly associated with speech recognition performance. A sequencing working memory test (Letter-Number Sequencing [LNS]) was most strongly associated with rapid speech understanding. The association between the LNS test results and the compressed sentence recognition scores (CSRS) was strong even when age and hearing loss were controlled. The LNS is a sequencing test that provides information about temporal processing at the cognitive level and may prove useful in the diagnosis of speech understanding problems and in the development of aural rehabilitation and training strategies.
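The analysis pipeline described above (principal component analysis to condense a battery of test scores, followed by regression on the extracted components) can be sketched with a singular value decomposition. The battery dimensions are illustrative; this is not the study's code:

```python
# Minimal sketch of the PCA step described above: extract the first k
# principal-component scores from a listeners-by-tests score matrix.
# The simulated battery (225 listeners, matching the study's N, by 5 tests)
# is illustrative only.
import numpy as np

def principal_components(scores, k):
    """Return the first k principal-component scores (rows = listeners)."""
    centered = scores - scores.mean(axis=0)
    # SVD of the centered data yields components without forming a
    # covariance matrix; rows of vt are the component loading vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(1)
battery = rng.normal(size=(225, 5))       # simulated test scores
components = principal_components(battery, k=2)
print(components.shape)  # (225, 2): two component scores per listener
```

These component scores would then serve as predictors in the regression models mentioned in the abstract.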
Loudness enhancement: Monaural, binaural and dichotic
NASA Technical Reports Server (NTRS)
Elmasian, R. O.; Galambos, R.
1975-01-01
It is shown that when one tone burst precedes another by 100 msec, variations in the intensity of the first systematically influence the loudness of the second. When the first burst is more intense than the second, the loudness of the second is increased; when the first burst is less intense, the loudness of the second is decreased. This occurs in monaural, binaural and dichotic paradigms of signal presentation. When both bursts are presented to the same ear there is more enhancement, with less intersubject variability, than when they are presented to different ears. Monaural enhancements as large as 30 dB can readily be demonstrated, but decrements rarely exceed 5 dB. Possible physiological mechanisms are discussed for this loudness enhancement, which apparently shares certain characteristics with time-order-error, assimilation, and temporal partial masking experiments.
Corpus callosum functioning in patients with normal pressure hydrocephalus before and after surgery.
Mataró, Maria; Poca, Maria Antonia; Matarín, Mar; Sahuquillo, Juan; Sebastián, Nuria; Junqué, Carme
2006-05-01
Our aim was to evaluate corpus callosum functioning in a group of patients with normal pressure hydrocephalus (NPH) before and after shunting. Left-ear extinction under a dichotic listening task was evaluated in 23 patients with NPH, 30 patients with Alzheimer's disease, and 30 aged controls. Patients with NPH had higher levels of left-ear extinction than the control and Alzheimer's groups. Sixty-one percent of NPH patients exhibited left-ear suppression, compared with 13% of Alzheimer's patients and 17% of controls. Following surgery, NPH patients showed a significant change in the degree of asymmetry in the dichotic listening task. Hydrocephalus was associated with left-ear extinction, which diminished after surgery. Our results may indicate reversible functional damage in the corpus callosum.
Greene, Beth G; Logan, John S; Pisoni, David B
1986-03-01
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.
Lohmander, A; Willadsen, E; Persson, C; Henningsson, G; Bowden, M; Hutters, B
2009-07-01
To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Description of methodology and a blinded test for speech assessment. Speech samples and instructions for data collection and analysis, allowing comparisons of speech outcomes across the five included languages, were developed and tested. Randomly selected video recordings of ten 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains with nasal consonants. Five experienced speech and language pathologists participated as observers. Narrow phonetic transcription of test consonants was translated into cleft speech characteristics, with ordinal scale ratings of resonance and perceived velopharyngeal closure (VPC). A velopharyngeal composite score (VPC-sum) was extrapolated from the raw data. Intra-rater agreement was assessed: agreement for the consonant analysis ranged from 53% to 89%, agreement for hypernasality on high vowels in single words ranged from 20% to 80%, and agreement between the VPC-sum and the overall rating of VPC was 78%. Pooling data from speakers of different languages in the same trial and comparing speech outcomes across trials seems possible if the assessment of speech concerns consonants and is confined to speech units that are phonetically similar across languages. Agreed conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable, whereas the method for hypernasality evaluation requires further testing.
Development of the Russian matrix sentence test.
Warzybok, Anna; Zokoll, Melanie; Wardenga, Nina; Ozimek, Edward; Boboshko, Maria; Kollmeier, Birger
2015-01-01
To develop the Russian matrix sentence test for speech intelligibility measurements in noise. Test development included recordings, optimization of speech material, and evaluation to investigate the equivalency of the test lists and training. For each of the 500 test items, the speech intelligibility function, speech reception threshold (SRT: signal-to-noise ratio, SNR, that provides 50% speech intelligibility), and slope was obtained. The speech material was homogenized by applying level corrections. In evaluation measurements, speech intelligibility was measured at two fixed SNRs to compare list-specific intelligibility functions. To investigate the training effect and establish reference data, speech intelligibility was measured adaptively. Overall, 77 normal-hearing native Russian listeners. The optimization procedure decreased the spread in SRTs across words from 2.8 to 0.6 dB. Evaluation measurements confirmed that the 16 test lists were equivalent, with a mean SRT of -9.5 ± 0.2 dB and a slope of 13.8 ± 1.6%/dB. The reference SRT, -8.8 ± 0.8 dB for the open-set and -9.4 ± 0.8 dB for the closed-set format, increased slightly for noise levels above 75 dB SPL. The Russian matrix sentence test is suitable for accurate and reliable speech intelligibility measurements in noise.
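The SRT and slope reported above define a logistic intelligibility function, the standard model behind matrix sentence tests. As a hedged illustration (the exact logistic parameterization is a common modeling assumption, not necessarily the authors' fitting procedure; the group-mean closed-set values from the abstract are plugged in):

```python
import math

def intelligibility(snr_db, srt_db, slope_per_db):
    """Logistic psychometric function for speech-in-noise tests.

    snr_db       -- signal-to-noise ratio of the presentation (dB)
    srt_db       -- speech reception threshold, the SNR at 50% correct (dB)
    slope_per_db -- slope of the function at the SRT (fraction correct per dB)
    """
    return 1.0 / (1.0 + math.exp(4.0 * slope_per_db * (srt_db - snr_db)))

# Group-mean closed-set values from the abstract: SRT = -9.4 dB, slope = 13.8%/dB
srt, slope = -9.4, 0.138
print(round(intelligibility(srt, srt, slope), 2))    # 0.5 at the SRT, by definition
print(intelligibility(srt + 2.0, srt, slope) > 0.7)  # 2 dB above SRT: well above 50%
```

The factor 4 in the exponent makes `slope_per_db` equal to the derivative of the function at the 50% point, which is how matrix-test slopes are conventionally quoted.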
Study Guide for Teacher Certification Test in Speech and Language Pathology.
ERIC Educational Resources Information Center
Umberger, Forrest G.
This study guide is designed for individuals preparing to take the Georgia Teacher Certification Test (TCT) in speech and language pathology. The test covers five subareas: (1) fundamentals of speech and language; (2) speech and language disorders; (3) related handicapping conditions; (4) hearing impairment; and (5) program management and…
Böhme, G; Clasen, B
1989-09-01
We carried out a transnasal insufflation test according to Blom and Singer on 27 laryngectomy patients, as well as a speech communication test using reverse speech audiometry, i.e., the post-laryngectomy telephone test according to Zenner and Pfrang. The combined evaluation of both tests provided basic information on the quality of the esophageal voice and the functionality of the speech organs. Both tests can be carried out quickly and easily and allow a differentiated statement on the applicability of an esophageal voice, electronic speech aids, and voice prostheses. Three groups could be identified from our results: 1. The insufflation test and the reverse speech test yielded concordantly good or very good results; the esophageal voice was well understood. 2. Complete failure in both the insufflation and telephone tests calls for further examinations to exclude spasm, stricture, diverticula, and scarred membranous stenosis, as well as tumor recurrence in the region of the pharyngoesophageal segment. 3. Where the insufflation test is normal but speech communication in the telephone test is considerably reduced, organic causes must be sought in the area of the sound source, along with cranial nerve deficits and socially determined causes.
GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.
2012-01-01
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916
Prathanee, Benjamas; Angsupakorn, Nipa; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat
2012-11-01
To determine the reliability of parental/caregiver report and direct testing with the Thai Speech and Language Test for Children Aged 0-4 Years. Five investigators assessed speech and language abilities from video in both contexts: the parental/caregiver report form and the test form of the Thai Speech and Language Test for Children Aged 0-4 Years. Twenty-five typically developing children and 30 children with delayed development, or at risk for delayed speech and language skills, were assessed at age intervals of 3, 6, 9, 12, 15, 18, 24, 30, 36, and 48 months. Agreement between parental/caregiver testing and reporting was moderate (0.41-0.60). Inter-rater reliability among investigators was excellent (0.86-1.00). The parental/caregiver report form of the test was thus an indicator of performance at a moderate level, whereas trained professionals could use both forms of the test as reliable tools at an excellent level.
Continuous multiword recognition performance of young and elderly listeners in ambient noise
NASA Astrophysics Data System (ADS)
Sato, Hiroshi
2005-09-01
Hearing threshold shift due to aging is known to be a dominant factor degrading speech recognition performance in noisy conditions. By contrast, the cognitive factors of aging that relate to speech recognition performance in various speech-to-noise conditions are not well established. In this study, two kinds of speech tests were performed to examine how working memory load relates to speech recognition performance. One is a word recognition test with high-familiarity, four-syllable Japanese words (single-word test): each word was presented to listeners, who were asked to write it down on paper with ample time to answer. In the other test, five words were presented in succession, and listeners were asked to write them down only after all five had been presented (multiword test). Both tests were conducted at various speech-to-noise ratios under 50-dBA Hoth-spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to the scores of both tests, (2) scores on the single-word test correlate well with those on the multiword test, and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in conditions where single-word test scores have reached their ceiling.
Persinger, M A; Moulden, J A; Richards, P M
1999-10-01
Analyses of the data from 212 boys and girls, aged 7-14 years, demonstrated a relatively abrupt and permanent decrease in the numbers of errors for dichotic (left ear) word listening and for toe gnosis after the ninth year. This pattern was not observed for right ear errors, finger gnosis, or indices of finger and foot agility. The results are compatible with the hypothesis that the final differentiation of the paracentral lobules and adjacent corpus callosum by the most distal portions of the Anterior Cerebral Artery occurs around 9 or 10 years of age. Implications for the development of the sense of self, enhanced apprehension, and "the sense of a presence" are discussed.
Hypothalamic digoxin, hemispheric chemical dominance and sarcoidosis.
Ravi Kumar, A; Kurup, Parameswara Achutha
2004-06-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin (membrane sodium-potassium ATPase inhibitor, immunomodulator and regulator of neurotransmitter/amino acid transport), dolichol (regulates N-glycosylation of proteins) and ubiquinone (free radical scavenger). The role of the isoprenoid pathway in the pathogenesis of sarcoidosis in relation to hemispheric dominance was studied. The isoprenoid pathway-related cascade was assessed in patients with systemic sarcoidosis with pulmonary involvement. The pathway was also assessed in patients with right hemispheric, left hemispheric and bihemispheric dominance for comparison to find out the role of hemispheric dominance in the pathogenesis of sarcoidosis. In patients with sarcoidosis there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of red blood cell (RBC) membrane in this group of patients. The same biochemical patterns were obtained in individuals with right hemispheric dominance. In individuals with left hemispheric dominance the patterns were reversed. Endogenous digoxin, by activating the calcineurin signal transduction pathway of T cells, can contribute to immune activation in sarcoidosis. An altered glycoconjugate metabolism can lead to the generation of endogenous self-glycoprotein antigens in the lung as well as other tissues. Increased free radical generation can also lead to immune activation. The role of a dysfunctional isoprenoid pathway and endogenous digoxin in the pathogenesis of sarcoidosis in relation to right hemispheric chemical dominance is discussed. 
All the patients with sarcoidosis were right-handed/left hemispheric dominant according to the dichotic listening test, but their biochemical patterns were suggestive of right hemispheric chemical dominance. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-08-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin (which modulates tryptophan/tyrosine transport), dolichol (important in N-glycosylation of proteins), and ubiquinone (a free radical scavenger). It was considered pertinent to assess the pathway in alcoholic addiction, alcoholic cirrhosis, and acquired hepatocerebral degeneration. Since endogenous digoxin can regulate neurotransmitter transport, the pathway was also assessed in individuals with differing hemispheric dominance to determine the role of hemispheric dominance in pathogenesis. In the patient group there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites, as well as reduced endogenous morphine synthesis from tyrosine. The cholesterol:phospholipid ratio was increased and the glycoconjugate level of the RBC membrane reduced in these groups of patients. The same patterns were obtained in individuals with right hemispheric chemical dominance. Alcoholic cirrhosis, alcoholic addiction, and acquired hepatocerebral degeneration are associated with an upregulated isoprenoid pathway and elevated digoxin secretion from the hypothalamus. This can contribute to NMDA excitotoxicity and the altered connective tissue/lipid metabolism important in pathogenesis. Endogenous morphine deficiency plays a role in alcoholic addiction. Alcoholic cirrhosis, addiction, and acquired hepatocerebral degeneration occur in right hemispheric chemically dominant individuals. Ninety percent of the patients with these conditions were right-handed and left hemispheric dominant by the dichotic listening test; however, their biochemical patterns were similar to those obtained in right hemispheric chemical dominance.
Hemispheric chemical dominance is a different entity and has no correlation with handedness or the dichotic listening test.
Only Behavioral But Not Self-Report Measures of Speech Perception Correlate with Cognitive Abilities
Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.
2016-01-01
Good speech perception and communication skills in everyday life are crucial for participation and well-being, and are therefore an overarching aim of auditory rehabilitation. Both behavioral and self-report measures can be used to assess these skills. However, correlations between behavioral and self-report speech perception measures are often low. One possible explanation is that there is a mismatch between the specific situations used in the assessment of these skills in each method, and a more careful matching across situations might improve consistency of results. The role that cognition plays in specific speech situations may also be important for understanding communication, as speech perception tests vary in their cognitive demands. In this study, the role of executive function, working memory (WM) and attention in behavioral and self-report measures of speech perception was investigated. Thirty existing hearing aid users with mild-to-moderate hearing loss aged between 50 and 74 years completed a behavioral test battery with speech perception tests ranging from phoneme discrimination in modulated noise (easy) to words in multi-talker babble (medium) and keyword perception in a carrier sentence against a distractor voice (difficult). In addition, a self-report measure of aided communication, residual disability from the Glasgow Hearing Aid Benefit Profile, was obtained. Correlations between speech perception tests and self-report measures were higher when specific speech situations across both were matched. Cognition correlated with behavioral speech perception test results but not with self-report. Only the most difficult speech perception test, keyword perception in a carrier sentence with a competing distractor voice, engaged executive functions in addition to WM. In conclusion, any relationship between behavioral and self-report speech perception is not mediated by a shared correlation with cognition. PMID:27242564
Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.
2015-01-01
Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699
[Improving speech comprehension using a new cochlear implant speech processor].
Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A
2009-06-01
The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually, profoundly deaf, experienced CI users who, with their current speech processor, scored at least 80% correct on the Oldenburg sentence test (OLSA) in quiet and who were able to perform adaptive speech threshold testing using the OLSA in noise. Following baseline measures of speech comprehension with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor, both in quiet and in competing background noise. In contrast, the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a sufficiently sensitive tool for comparative subjective self-assessment of hearing benefit with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet, with equivalent performance in noise.
With the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in the signal-to-noise ratio at the speech comprehension threshold (i.e., the signal-to-noise ratio yielding 50% speech comprehension), tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with the speech signal at 0 degrees and noise lateral to the CI at 90 degrees. Given the convincing findings in this multicenter study cohort, a trial with the Freedom 24 sound processor is recommended for all suitable CI users. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be well suited to individual patients.
Autistic traits and attention to speech: Evidence from typically developing individuals.
Korhonen, Vesa; Werner, Stefan
2017-04-01
Individuals with autism spectrum disorder show a preference for attending to non-speech stimuli over speech stimuli. We are interested in whether non-speech preference is a feature only of diagnosed individuals, and whether implicit preference can be tested experimentally. In typically developing individuals, serial recall is disrupted more by speech stimuli than by non-speech stimuli. Since the behaviour of individuals with autistic traits resembles that of individuals with autism, we used serial recall to test whether autistic traits influence task performance during irrelevant speech sounds. Errors made on the serial recall task during speech or non-speech sounds, relative to a no-sound condition, were counted as a measure of speech or non-speech preference. We replicated the serial-order effect and found speech to be more disruptive than non-speech sounds, but were unable to find any association between autism quotient scores and performance during non-speech sounds. Our results may indicate a learnt behavioural response to speech sounds.
Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T
2002-10-01
Since autumn 1998, the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for evaluating speech and hearing development after cochlear implantation in childhood. After a review of the relevant literature, suitable materials were checked for practical applicability, modified, and provided with criteria for administration and discontinuation. For data acquisition, observation forms were developed in preparation for a PC version. The evaluation set contains forms for master data, with supplements relating to postoperative processes. The hearing tests assess suprathreshold hearing with loudness scaling for children; speech comprehension in quiet (Mainz and Göttingen Tests for Speech Comprehension in Childhood) and phonemic differentiation (Oldenburg Rhyme Test for Children); the central auditory processes of detection, discrimination, identification, and recognition (a modification of the "Frankfurt Functional Hearing Test for Children"); and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development cover phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), and communication and pragmatics (observation forms). The modified MAIS and MUSS questionnaires are integrated. The evaluation set serves quality assurance and permits factor analysis, as well as controls for regularity, through multicenter comparison of long-term developmental trends after cochlear implantation.
Hoth, S
2016-08-01
The Freiburg speech intelligibility test according to DIN 45621 was introduced around 60 years ago. For decades, and still today, the Freiburg test has been a standard whose relevance extends far beyond pure audiometry. It is used primarily to determine the speech perception threshold (based on two-digit numbers) and the ability to discriminate speech at suprathreshold presentation levels (based on monosyllabic nouns). Moreover, it is a measure of the degree of disability, the requirement for and success of technical hearing aids (auxiliaries directives), and the compensation for disability and handicap (Königstein recommendation). In differential audiological diagnostics, the Freiburg test contributes to the distinction between low- and high-frequency hearing loss, as well as to identification of conductive, sensory, neural, and central disorders. Currently, the phonemic and perceptual balance of the monosyllabic test lists is subject to critical discussions. Obvious deficiencies exist for testing speech recognition in noise. In this respect, alternatives such as sentence or rhyme tests in closed-answer inventories are discussed.
Norrelgen, Fritjof; Lilja, Anders; Ingvar, Martin; Gisselgård, Jens; Fransson, Peter
2012-01-01
Objective: The aims of this study were to develop and assess a method to map language networks in children with two auditory fMRI protocols in combination with a dichotic listening task (DL). The method is intended for pediatric patients prior to epilepsy surgery. To evaluate the potential clinical usefulness of the method we first wanted to assess data from a group of healthy children. Methods: In a first step, language test materials were developed, intended for subsequent implementation in fMRI protocols. An evaluation of this material was done in 30 children with typical development, 10 each from the 1st, 4th, and 7th grades. The language test material was then adapted and implemented in two fMRI protocols intended to target frontal and posterior language networks. In a second step, language lateralization was assessed in 17 typical 10–11 year olds with fMRI and DL. To reach a conclusion about language lateralization, quantitative analyses of the index data from the two fMRI tasks and the index data from the DL task were first done separately; a set of criteria was then applied to these results. The steps of these analyses are described in detail. Results: The behavioral assessment of the language test material showed that it was well suited for typical children. The language lateralization assessments, based on fMRI data and DL data, showed that for 15 of the 17 subjects (88%) a conclusion could be reached about hemispheric language dominance. In 2 cases (12%) DL provided critical data. Conclusions: The employment of DL combined with language mapping using fMRI for assessing hemispheric language dominance is novel, and it was deemed valuable since it provided additional information compared with the results gained from each method individually. PMID:23284796
Hurkmans, Joost; Jonkers, Roel; Boonstra, Anne M; Stewart, Roy E; Reinders-Messelink, Heleen A
2012-01-01
The number of reliable and valid instruments to measure the effects of therapy in apraxia of speech (AoS) is limited. To evaluate the newly developed Modified Diadochokinesis Test (MDT), which is a task to assess the effects of rate and rhythm therapies for AoS in a multiple baseline across behaviours design. The consistency, accuracy and fluency of speech of 24 adults with AoS and 12 unaffected speakers matched for age, gender and educational level were assessed using the MDT. The reliability and validity of the instrument were considered and outcomes compared with those obtained with existing tests. The results revealed that MDT had a strong internal consistency. Scores were influenced by syllable structure complexity, while distinctive features of articulation had no measurable effect. The test-retest and intra- and inter-rater reliabilities were shown to be adequate, and the discriminant validity was good. For convergent validity different outcomes were found: apart from one correlation, the scores on tests assessing functional communication and AoS correlated significantly with the MDT outcome measures. The spontaneous speech phonology measure of the Aachen Aphasia Test (AAT) correlated significantly with the MDT outcome measures, but no correlations were found for the repetition subtest and the spontaneous speech articulation/prosody measure of the AAT. The study shows that the MDT has adequate psychometric properties, implying that it can be used to measure changes in speech motor control during treatment for apraxia of speech. The results demonstrate the validity and utility of the instrument as a supplement to speech tasks in assessing speech improvement aimed at the level of planning and programming of speech. © 2012 Royal College of Speech and Language Therapists.
Binaural Interaction Effects of 30-50 Hz Auditory Steady State Responses.
Gransier, Robin; van Wieringen, Astrid; Wouters, Jan
Auditory stimuli modulated at frequencies within the 30 to 50 Hz region evoke auditory steady state responses (ASSRs) with high signal-to-noise ratios in adults, and can be used to determine frequency-specific hearing thresholds in adults who cannot give reliable behavioral feedback. To measure ASSRs as efficiently as possible, a multiple-stimulus paradigm can be used, stimulating both ears simultaneously. The response strength of 30 to 50 Hz ASSRs is, however, affected when both ears are stimulated simultaneously. The aim of the present study was to gain insight into the measurement efficiency of 30 to 50 Hz ASSRs evoked with a two-ear stimulation paradigm, by systematically investigating binaural interaction effects of 30 to 50 Hz ASSRs in normal-hearing adults. ASSRs were obtained with a 64-channel EEG system in 23 normal-hearing adults. All participants took part in one diotic, multiple dichotic, and multiple monaural conditions. Stimuli consisted of a modulated one-octave noise band, centered at 1 kHz and presented at 70 dB SPL. The diotic condition contained 40 Hz modulated stimuli presented to both ears. In the dichotic conditions, the modulation frequency of the left-ear stimulus was kept constant at 40 Hz, while the right-ear stimulus was either the unmodulated or the modulated carrier; in the latter case, the modulation frequency varied between 30 and 50 Hz in steps of 2 Hz across conditions. The monaural conditions consisted of all stimuli included in the diotic and dichotic conditions. Modulation frequencies ≥36 Hz produced prominent ASSRs in all participants in the monaural conditions. A significant enhancement effect (on average ~3 dB) was observed in the diotic condition, whereas a significant reduction effect was observed in the dichotic conditions, with no distinct effect of the temporal characteristics of the stimuli on the amount of reduction.
The attenuation exceeded 3 dB in 33% of cases for ASSRs evoked with modulation frequencies ≥40 Hz and in 50% of cases for modulation frequencies ≤36 Hz. The binaural interaction effects observed in the diotic condition are similar to those reported for middle latency responses in the literature, suggesting that these responses share an underlying mechanism. Our data also indicate that 30 to 50 Hz ASSRs are attenuated when presented dichotically and that this attenuation is independent of the stimulus characteristics used in the present study. These findings are important because they show how binaural interaction affects measurement efficiency. For the most favorable modulation frequencies (i.e., ≥40 Hz), the two-ear stimulation paradigm of the present study was more efficient than a one-ear sequential stimulation paradigm in 66% of cases.
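The ~3 dB enhancement and >3 dB attenuations quoted above are ratios of response amplitudes expressed on a decibel scale. A minimal sketch of the conversion (assuming the usual 20·log10 convention for amplitude ratios; the abstract does not spell out its formula):

```python
import math

def amplitude_change_db(amp_test, amp_reference):
    """Change in response amplitude between two conditions, in dB.
    Positive values indicate enhancement, negative values attenuation."""
    return 20.0 * math.log10(amp_test / amp_reference)

# A 3 dB enhancement corresponds to an amplitude ratio of about 1.41:
ratio_for_3db = 10.0 ** (3.0 / 20.0)
print(round(ratio_for_3db, 2))                  # 1.41
print(round(amplitude_change_db(1.0, 2.0), 1))  # halving the amplitude: -6.0 dB
```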
Factors contributing to speech perception scores in long-term pediatric cochlear implant users.
Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A
2011-02-01
The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. 
Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.
ERIC Educational Resources Information Center
Ben-David, Boaz M.; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H. H. M.
2016-01-01
Purpose: Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. Method: We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5…
Automated Speech Rate Measurement in Dysarthria.
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-06-01
In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.
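The agreement measure reported above (correlation between automated and human speech-rate determination) can be sketched as follows. The speech-rate definition and all data values here are hypothetical illustrations, not the paper's actual algorithm, which detects syllables automatically from the signal.

```python
import math

def speech_rate(num_syllables, duration_s):
    """Speech rate in syllables per second (hypothetical definition;
    the study's algorithm detects syllable nuclei automatically)."""
    return num_syllables / duration_s

def pearson_r(xs, ys):
    """Pearson correlation between automated and human SR values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical SR values (syllables/s) for five speakers
auto = [3.1, 2.4, 4.0, 1.8, 2.9]
human = [3.0, 2.5, 3.8, 1.9, 3.0]
r = pearson_r(auto, human)  # high correlation = good agreement
```

A high r in each task/optimization condition is the kind of evidence the study uses to call the algorithm "sufficiently reliable."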
Plasticity in the Human Speech Motor System Drives Changes in Speech Perception
Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.
2014-01-01
Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594
Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan
2009-06-01
The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (age, 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant-Nucleus-Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test, and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and aided thresholds. Group mean CNC scores at 50 dB SPL with the 40 IIDR were significantly higher (p < 0.001) than with the 30 IIDR. Group mean CNC scores at 60 dB SPL, loudness ratings, and the signal-to-noise ratios for 50% correct on the Bamford-Kowal-Bench Speech in Noise Test were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz as well as higher Speech Intelligibility Indices afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising comfort at higher levels of speech sounds or sentence recognition in noise.
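The Speech Intelligibility Index calculation mentioned above can be illustrated with a heavily simplified sketch: per-band audibility is the speech level above threshold, clipped to a 30-dB range, then weighted and summed. This is only a caricature of the ANSI S3.5 procedure (which also models masking, level distortion, and standard band-importance functions); the band levels and uniform weights below are assumptions.

```python
def simple_audibility_index(speech_levels, thresholds, weights=None):
    """Simplified SII-style index (illustrative only, not ANSI S3.5):
    audibility per band = clip((speech - threshold) / 30 dB, 0, 1),
    weighted and summed across bands."""
    n = len(speech_levels)
    if weights is None:
        weights = [1.0 / n] * n  # uniform band weights (assumption)
    index = 0.0
    for s, t, w in zip(speech_levels, thresholds, weights):
        aud = max(0.0, min(1.0, (s - t) / 30.0))
        index += w * aud
    return index

# Hypothetical band levels (dB SPL) for soft speech vs. aided thresholds
soft_speech = [45, 42, 40, 38, 35]
aided_30 = [35, 34, 36, 38, 40]   # narrower IIDR: poorer high-frequency audibility
aided_40 = [30, 29, 30, 31, 33]   # wider IIDR: better aided thresholds
```

Under this toy model, the lower (better) aided thresholds with the 40-dB IIDR yield a higher index for soft speech, mirroring the direction of the study's result.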
Noise-immune multisensor transduction of speech
NASA Astrophysics Data System (ADS)
Viswanathan, Vishu R.; Henry, Claudia M.; Derr, Alan G.; Roucos, Salim; Schwartz, Richard M.
1986-08-01
Two types of configurations of multiple sensors were developed, tested, and evaluated in speech recognition applications for robust performance in high levels of acoustic background noise: one type combines the individual sensor signals to provide a single speech signal input, and the other provides several parallel inputs. For single-input systems, several configurations of multiple sensors were developed and tested. Results from formal speech intelligibility and quality tests in simulated fighter aircraft cockpit noise show that each of the two-sensor configurations tested outperforms the constituent individual sensors in high noise. Also presented are results comparing the performance of two-sensor configurations and individual sensors in speaker-dependent, isolated-word speech recognition tests performed using a commercial recognizer (Verbex 4000) in simulated fighter aircraft cockpit noise.
Bless, Josef J.; Westerhausen, René; Torkildsen, Janne von Koss; Gudmundsen, Magne; Kompus, Kristiina; Hugdahl, Kenneth
2015-01-01
Left-hemispheric language dominance has been suggested by observations in patients with brain damage as early as the 19th century, and has since been confirmed by modern behavioural and brain imaging techniques. Nevertheless, most of these studies have been conducted in small samples with predominantly Anglo-American backgrounds, thus limiting generalization; possible differences between cultural and linguistic backgrounds may therefore be obscured. To overcome this limitation, we conducted a global dichotic listening experiment using a smartphone application for remote data collection. The results from over 4,000 participants with more than 60 different language backgrounds showed that left-hemispheric language dominance is indeed a general phenomenon. However, the degree of lateralization appears to be modulated by linguistic background. These results suggest that more emphasis should be placed on cultural/linguistic specificities of psychological phenomena and on the need to collect more diverse samples. PMID:25588000
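The degree of lateralization discussed above is conventionally summarized in dichotic listening with a laterality index. The formula below is the standard convention in this literature; whether the smartphone study used exactly this variant is an assumption, and the scores are hypothetical.

```python
def laterality_index(right_correct, left_correct):
    """Conventional dichotic-listening laterality index:
    LI = 100 * (R - L) / (R + L).
    Positive values = right-ear advantage (left-hemisphere dominance);
    negative values = left-ear advantage. Range: -100 to +100."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical scores out of 30 dichotic consonant-vowel trials
li = laterality_index(18, 12)  # positive: right-ear advantage
```

Group-level mean LI above zero is the kind of evidence that supports the "general phenomenon" conclusion, while between-language differences in mean LI indicate modulation by linguistic background.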
Symmetry and asymmetry in the human brain
NASA Astrophysics Data System (ADS)
Hugdahl, Kenneth
2005-10-01
Structural and functional asymmetry in the human brain and nervous system is reviewed in a historical perspective, focusing on the pioneering work of Broca, Wernicke, Sperry, and Geschwind. Structural and functional asymmetry is exemplified from work done in our laboratory on auditory laterality using an empirical procedure called dichotic listening. This also involves different ways of validating the dichotic listening procedure against both invasive and non-invasive techniques, including PET and fMRI blood flow recordings. A major argument is that the human brain shows a substantial interaction between structural, or "bottom-up", asymmetry and cognitive, or "top-down", modulation through a focus of attention to the right or left side in auditory space. These results open up a more dynamic and interactive view of functional brain asymmetry than the traditional static view that the brain is lateralized, or asymmetric, only for specific stimuli and stimulus properties.
The influence of musical experience on lateralisation of auditory processing.
Spajdel, Marián; Jariabková, Katarína; Riecanský, Igor
2007-11-01
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
Spyridakou, Chrysa; Luxon, Linda M; Bamiou, Doris E
2012-07-01
To compare self-reported symptoms of difficulty hearing speech in noise and hyperacusis in adults with auditory processing disorders (APDs) and normal controls; and to compare self-reported symptoms to objective test results (speech in babble test, transient evoked otoacoustic emission [TEOAE] suppression test using contralateral noise). A prospective case-control pilot study. Twenty-two participants were recruited in the study: 10 patients with reported hearing difficulty, normal audiometry, and a clinical diagnosis of APD; and 12 normal age-matched controls with no reported hearing difficulty. All participants completed the validated Amsterdam Inventory for Auditory Disability questionnaire, a hyperacusis questionnaire, a speech in babble test, and a TEOAE suppression test using contralateral noise. Patients had significantly worse scores than controls in all domains of the Amsterdam Inventory questionnaire (with the exception of sound detection) and the hyperacusis questionnaire (P < .005). Patients also had worse TEOAE suppression test results in both ears than controls; however, this result was not significant after Bonferroni correction. Strong correlations were observed between self-reported symptoms of difficulty hearing speech in noise and speech in babble test results in the right ear (ρ = 0.624, P = .002), and between self-reported symptoms of hyperacusis and TEOAE suppression test results in the right ear (ρ = -0.597, P = .003). There was no significant correlation between the two tests. In summary, a strong correlation was observed between right-ear speech in babble and patient-reported intelligibility of speech in noise, and between right-ear TEOAE suppression by contralateral noise and the hyperacusis questionnaire. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
Language and Speech Improvement for Kindergarten and First Grade. A Supplementary Handbook.
ERIC Educational Resources Information Center
Cole, Roberta; And Others
The 16-unit language and speech improvement handbook for kindergarten and first grade students contains an introductory section which includes a discussion of the child's developmental speech and language characteristics, a sound development chart, a speech and hearing language screening test, the Henja articulation test, and a general outline of…
Measurement of speech levels in the presence of time varying background noise
NASA Technical Reports Server (NTRS)
Pearsons, K. S.; Horonjeff, R.
1982-01-01
Short-term speech level measurements that could be used to note changes in vocal effort in a time-varying noise environment were studied. Knowing the changes in speech level would in turn allow prediction of intelligibility in the presence of aircraft flyover noise. Tests indicated that it is possible to use two-second samples of speech to estimate long-term root-mean-square speech levels. Other tests were also performed in which people read aloud during aircraft flyover noise. Results of these tests indicate that people do indeed raise their voices during flyovers, at a rate of about 3.5 dB for each 10 dB increase in background level. This finding is in agreement with other tests of speech levels in the presence of steady-state background noise.
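The reported rise of about 3.5 dB in speech level per 10 dB of background noise amounts to a simple linear vocal-effort model. The sketch below uses that slope from the abstract; the baseline speech and noise levels are hypothetical assumptions.

```python
def predicted_speech_level(base_speech_db, base_noise_db, noise_db, slope=0.35):
    """Linear vocal-effort model: speech level rises by `slope` dB per
    dB of background noise above a baseline (slope = 0.35, i.e. the
    ~3.5 dB per 10 dB rise reported above). Baseline levels are
    illustrative assumptions, not values from the study."""
    rise = max(0.0, noise_db - base_noise_db) * slope
    return base_speech_db + rise

# Example: a flyover raising background noise by 10 dB (55 -> 65 dB)
quiet = predicted_speech_level(65.0, 55.0, 55.0)     # no rise
flyover = predicted_speech_level(65.0, 55.0, 65.0)   # ~3.5 dB louder
```

Such a model lets the predicted (elevated) speech level be fed into an intelligibility prediction for the flyover period, which is the application the abstract describes.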
Namasivayam, Aravind Kumar; Pukonen, Margit; Goshulak, Debra; Yu, Vickie Y; Kadis, Darren S; Kroll, Robert; Pang, Elizabeth W; De Nil, Luc F
2013-01-01
The current study was undertaken to investigate the impact of speech motor issues on the speech intelligibility of children with moderate to severe speech sound disorders (SSD) within the context of the PROMPT intervention approach. The word-level Children's Speech Intelligibility Measure (CSIM), the sentence-level Beginner's Intelligibility Test (BIT), and tests of speech motor control and articulation proficiency were administered to 12 children (3;11 to 6;7 years) before and after PROMPT therapy. PROMPT treatment was provided for 45 min twice a week for 8 weeks. Twenty-four naïve adult listeners aged 22-46 years judged the intelligibility of the words and sentences. For the CSIM, each time a recorded word was played, listeners were asked to look at a list of 12 words (multiple-choice format) and circle the word they heard; for the BIT sentences, listeners were asked to write down everything they heard. Words correctly circled (CSIM) or transcribed (BIT) were averaged across three naïve judges to calculate percentage speech intelligibility. Speech intelligibility at both the word and sentence level was significantly correlated with speech motor control, but not articulatory proficiency. Further, the severity of speech motor planning and sequencing issues may be a limiting factor in connected speech intelligibility, which highlights the need to target these issues early and directly in treatment. The reader will be able to: (1) outline the advantages and disadvantages of using word- and sentence-level speech intelligibility tests; (2) describe the impact of speech motor control and articulatory proficiency on speech intelligibility; and (3) describe how speech motor control and speech intelligibility data may provide critical information to aid treatment planning. Copyright © 2013 Elsevier Inc. All rights reserved.
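The percentage-intelligibility scoring described above (correct responses averaged across judges) can be sketched as follows; the counts are hypothetical.

```python
def percent_intelligibility(judge_correct_counts, total_items):
    """Percent intelligibility averaged across judges, in the style of
    the CSIM/BIT scoring described above (data hypothetical)."""
    per_judge = [100.0 * c / total_items for c in judge_correct_counts]
    return sum(per_judge) / len(per_judge)

# Three judges each heard 12 recorded words; they circled 9, 8, and 10
# of them correctly on the multiple-choice form
score = percent_intelligibility([9, 8, 10], 12)
```

The same averaging applies to BIT sentences, with "correct" meaning correctly transcribed words rather than circled ones.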
Hodge, Megan M; Gotzke, Carrie L
2014-01-01
This study evaluated construct-related validity of the Test of Children's Speech (TOCS). Intelligibility scores obtained using open-set word identification tasks (orthographic transcription) for the TOCS word and sentence tests and rate scores for the TOCS sentence test (words per minute or WPM and intelligible words per minute or IWPM) were compared for a group of 15 adults (18-30 years of age) with normal speech production and three groups of children: 48 3-6 year-olds with typical speech development and neurological histories (TDS), 48 3-6 year-olds with a speech sound disorder of unknown origin and no identified neurological impairment (SSD-UNK), and 22 3-10 year-olds with dysarthria and cerebral palsy (DYS). As expected, mean intelligibility scores and rates increased with age in the TDS group. However, word test intelligibility, WPM and IWPM scores for the 6 year-olds in the TDS group were significantly lower than those for the adults. The DYS group had significantly lower word and sentence test intelligibility and WPM and IWPM scores than the TDS and SSD-UNK groups. Compared to the TDS group, the SSD-UNK group also had significantly lower intelligibility scores for the word and sentence tests, and significantly lower IWPM, but not WPM scores on the sentence test. The results support the construct-related validity of TOCS as a tool for obtaining intelligibility and rate scores that are sensitive to group differences in 3-6 year-old children, with and without speech sound disorders, and to 3+ year-old children with speech disorders, with and without dysarthria. Readers will describe the word and sentence intelligibility and speaking rate performance of children with typically developing speech at age levels of 3, 4, 5 and 6 years, as measured by the Test of Children's Speech, and how these compare with adult speakers and two groups of children with speech disorders. 
They will also recognize what measures on this test differentiate children with speech sound disorders of unknown origin from children with cerebral palsy and dysarthria. Copyright © 2014 Elsevier Inc. All rights reserved.
Melodic contour identification and sentence recognition using sung speech
Crew, Joseph D.; Galvin, John J.; Fu, Qian-Jie
2015-01-01
For bimodal cochlear implant users, acoustic and electric hearing have been shown to contribute differently to speech and music perception. However, differences in test paradigms and stimuli between speech and music testing can make it difficult to assess the relative contributions of each device. To address these concerns, the Sung Speech Corpus (SSC) was created. The SSC contains 50 monosyllabic words sung over an octave range and can be used to test both speech and music perception using the same stimuli. Here SSC data are presented with normal-hearing listeners and any advantage of musicianship is examined. PMID:26428838
Su, Qiaotong; Galvin, John J.; Zhang, Guoping; Li, Yongxin
2016-01-01
Cochlear implant (CI) speech performance is typically evaluated using well-enunciated speech produced at a normal rate by a single talker. CI users often have greater difficulty with variations in speech production encountered in everyday listening. Within a single talker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending on the production context. The coarse spectral resolution afforded by the CI limits perception of voice pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese. In this study, sentence recognition from the Mandarin speech perception database was measured in adult and pediatric Mandarin-speaking CI listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speaking rates; whispered speech; voiced emotional speech; and voiced shouted speech. Recognition of Mandarin Hearing in Noise Test sentences was also measured. Results showed that performance was significantly poorer with whispered speech relative to the other speaking styles and that performance was significantly better with slow speech than with fast or emotional speech. Results also showed that adult and pediatric performance was significantly poorer with Mandarin Hearing in Noise Test than with Mandarin speech perception sentences at the normal rate. The results suggest that adult and pediatric Mandarin-speaking CI patients are highly susceptible to whispered speech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated with whispered speech. The results also suggest that test materials may contribute to differences in performance observed between adult and pediatric CI users. PMID:27363714
Lentz, Jennifer J; He, Yuan; Townsend, James T
2014-01-01
This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single-ear vs. two-ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency), most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy. PMID:25202254
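The capacity coefficient referred to above is conventionally computed from cumulative hazard functions of the RT distributions (Townsend's OR-design form). The sketch below uses that standard definition; whether it matches the study's exact estimator is an assumption, and the RT samples are hypothetical.

```python
import math

def cumulative_hazard(rts, t):
    """Estimate H(t) = -ln S(t) from a sample of reaction times (s or ms),
    using the empirical survivor function S(t)."""
    n = len(rts)
    surviving = sum(1 for rt in rts if rt > t)
    s = max(surviving / n, 1.0 / (2 * n))  # clamp to avoid log(0)
    return -math.log(s)

def capacity_coefficient(rts_both, rts_a, rts_b, t):
    """OR-design capacity coefficient:
        C(t) = H_both(t) / (H_a(t) + H_b(t)).
    C(t) > 1 indicates supercapacity (increased efficiency with two
    channels); C(t) < 1 indicates limited capacity."""
    denom = cumulative_hazard(rts_a, t) + cumulative_hazard(rts_b, t)
    if denom == 0:
        return float("nan")
    return cumulative_hazard(rts_both, t) / denom

# Hypothetical RTs (ms): responses with both ears are faster
c = capacity_coefficient([300, 320, 340, 360],
                         [400, 420, 600, 620],
                         [410, 430, 610, 630], 450)
```

With real data, C(t) is examined across a range of t; values reliably above 1 in the dichotic condition correspond to the "significant increase in workload capacity" reported above.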
[Validity criteria of a short test to assess speech and language competence in 4-year-olds].
Euler, H A; Holler-Zittlau, I; Minnen, S; Sick, U; Dux, W; Zaretsky, Y; Neumann, K
2010-11-01
A psychometrically constructed short test as a prerequisite for screening was developed on the basis of a revision of the Marburger Speech Screening to assess speech/language competence among children in Hessen (Germany). A total of 257 children (age 4.0 to 4.5 years) performed the test battery for speech/language competence; 214 children repeated the test 1 year later. Test scores correlated highly with scores of two competing language screenings (SSV, HASE) and with a combined score from four diagnostic tests of individual speech/language competences (Reynell III, patholinguistic diagnostics in impaired language development, PLAKSS, AWST-R). Validity was demonstrated by three comparisons: (1) Children with German family language had higher scores than children with another language. (2) The 3-month-older children achieved higher scores than younger children. (3) The difference between the children with German family language and those with another language was higher for the 3-month-older than for the younger children. The short test assesses the speech/language competence of 4-year-olds quickly, validly, and comprehensively.
Optimizing acoustical conditions for speech intelligibility in classrooms
NASA Astrophysics Data System (ADS)
Yang, Wonyoung
High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. 
In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with SNS = 4 dB and increased to 0.8 and 1.2 s with decreased SNS = 0 dB, for both normal and hearing-impaired listeners. Hearing-impaired listeners required more early energy than normal-hearing listeners. Reflective ceiling barriers and ceiling reflectors---in particular, parallel front-back rows of semi-circular reflectors---achieved the goal of decreasing reverberation with the least speech-level reduction.
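Design targets like the optimal reverberation times above are typically reached with first-order room-acoustics estimates. Sabine's formula is the standard such tool; it is a generic textbook relation, not this study's hybrid prediction program, and the room dimensions below are hypothetical.

```python
def sabine_rt60(volume_m3, absorption_areas_m2):
    """Sabine reverberation time: RT60 = 0.161 * V / A, where A is the
    total equivalent absorption area in m^2 (sum over surfaces of
    area x absorption coefficient). Generic estimate; the study used
    a hybrid prediction program and a 1/8 scale model instead."""
    a_total = sum(absorption_areas_m2)
    return 0.161 * volume_m3 / a_total

# Hypothetical 200 m^3 classroom: ceiling, walls, and furnishings
# contribute 25, 10, and 5 m^2 of equivalent absorption
rt = sabine_rt60(200.0, [25.0, 10.0, 5.0])  # close to the 0.8 s target
```

Adding or removing absorption to move RT60 toward a target such as 0.4-0.8 s is the basic lever behind the ceiling-treatment designs evaluated in the study.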
Zakaria, Mohd Normani; Jalaei, Bahram
2017-11-01
Auditory brainstem responses evoked by complex stimuli such as speech syllables have been studied in normal subjects and subjects with compromised auditory functions. The stability of the speech-evoked auditory brainstem response (speech-ABR) when tested over time has been reported, but the literature is limited. The present study was carried out to determine the test-retest reliability of speech-ABR in healthy children at a low sensation level. Seventeen healthy children (6 boys, 11 girls) aged from 5 to 9 years (mean = 6.8 ± 3.3 years) were tested in two sessions separated by a 3-month period. The stimulus used was a 40-ms syllable /da/ presented at 30 dB sensation level. As revealed by paired t-test and intra-class correlation (ICC) analyses, peak latencies, peak amplitudes, and composite onset measures of speech-ABR were found to be highly replicable. Compared to other parameters, higher ICC values were noted for peak latencies of speech-ABR. The present study was the first to report the test-retest reliability of speech-ABR recorded at low stimulation levels in healthy children. Due to its good stability, it can be used as an objective indicator for assessing the effectiveness of auditory rehabilitation in hearing-impaired children in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.
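The intra-class correlation used above for test-retest reliability can be sketched with the Shrout & Fleiss ICC(3,1) (two-way mixed, consistency) form, a common choice for two-session designs. Whether the study used exactly this variant is an assumption, and the latency values below are hypothetical.

```python
def icc_3_1(data):
    """Shrout & Fleiss ICC(3,1) for an n-subjects x k-sessions matrix:
    (MS_rows - MS_error) / (MS_rows + (k-1) * MS_error), with MS_error
    taken from a two-way (subjects x sessions) decomposition."""
    n = len(data)       # subjects
    k = len(data[0])    # sessions
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ms_rows = k * sum((r - grand) ** 2 for r in row_means) / (n - 1)
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical speech-ABR peak latencies (ms), session 1 vs. 3 months later
latencies = [[6.6, 6.7], [6.9, 6.9], [7.1, 7.0], [6.5, 6.6], [7.3, 7.2]]
reliability = icc_3_1(latencies)  # close to 1: highly replicable
```

ICC values near 1, as in this toy example, are what "highly replicable" means for the latency measures in the abstract.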
Development of a test battery for evaluating speech perception in complex listening environments.
Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R
2014-08-01
In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
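The SRT50 estimation underlying the QuickSIN-style procedure above is, in its published form, a Spearman-Kärber calculation over one descending-SNR list (6 sentences of 5 key words each, 25 down to 0 dB SNR in 5 dB steps). The sketch below follows that published scoring; treat the exact constants as an approximation of the test manual rather than of this study's modified materials.

```python
def quicksin_srt50(words_correct_total, top_snr=25.0, step=5.0,
                   words_per_step=5):
    """Spearman-Karber SRT50 estimate from one QuickSIN-style list:
        SRT50 = top_snr + step/2 - step * total_correct / words_per_step
    With the standard list this reduces to 27.5 - total words correct.
    Illustrative; see Killion et al. (2004) for the exact procedure."""
    return top_snr + step / 2 - step * words_correct_total / words_per_step

srt = quicksin_srt50(23)   # 23 of 30 key words correct -> 4.5 dB SNR
best = quicksin_srt50(30)  # perfect score -> -2.5 dB SNR
```

Running this scoring separately in each of the eight simulated environments yields the per-condition SRT50 values the study compares, alongside the subjective SRT100 and SRT0 estimates.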
Some Effects of Training on the Perception of Synthetic Speech
Schwab, Eileen C.; Nusbaum, Howard C.; Pisoni, David B.
2012-01-01
The present study was conducted to determine the effects of training on the perception of synthetic speech. Three groups of subjects were tested with synthetic speech using the same tasks before and after training. One group was trained with synthetic speech. A second group went through the identical training procedures using natural speech. The third group received no training. Although performance of the three groups was the same prior to training, significant differences on the post-test measures of word recognition were observed: the group trained with synthetic speech performed much better than the other two groups. A six-month follow-up indicated that the group trained with synthetic speech displayed long-term retention of the knowledge and experience gained with prior exposure to synthetic speech generated by a text-to-speech system. PMID:2936671
[Restoration of speech function in oncological patients with maxillary defects].
Matiakin, E G; Chuchkov, V M; Akhundov, A A; Azizian, R I; Romanov, I S; Chuchkov, M V; Agapov, V V
2009-01-01
Speech quality was evaluated in 188 patients with acquired maxillary defects. Prosthetic treatment of 29 patients was preceded by pharmacopsychotherapy. Sixty-three patients had lessons with a logopedist, and 66 practiced self-tuition based on a specially developed test. Thirty patients were examined for quality of speech without preliminary preparation. Speech quality was assessed by auditory and spectral analysis. The main forms of impaired speech quality in the patients with maxillary defects were marked rhinophonia and impaired articulation. The proposed analytical tests were based on a combination of "difficult" vowels and consonants. The use of a removable prosthesis with an obturator failed to correct the affected speech function but created prerequisites for the formation of a correct speech stereotype. Results of the study suggest a relationship between the quality of speech in subjects with maxillary defects and their intellectual faculties, as well as their desire to overcome this drawback. The proposed tests are designed to activate the neuromuscular apparatus responsible for the generation of speech. Lessons with a speech therapist give a powerful emotional incentive to the patients and promote their efforts toward restoration of speaking ability. Pharmacopsychotherapy and self-control are further efficacious tools for the improvement of speech quality in patients with maxillary defects.
Goykhburg, M V; Bakhshinyan, V V; Petrova, I P; Wazybok, A; Kollmeier, B; Tavartkiladze, G A
Deterioration of speech intelligibility in patients using cochlear implant (CI) systems is especially apparent in noisy environments. This explains why phrasal speech tests, such as the Matrix sentence test, have become increasingly popular in speech audiometry during rehabilitation after CI. The Matrix test allows estimation of patients' speech perception in real-life situations. The objective of this study was to assess the effectiveness of audiological rehabilitation of CI patients using the Russian-language version of the Matrix test (RUMatrix) in free field in a noisy environment. Thirty-three patients aged 5 to 40 years, each with more than 3 years of experience using cochlear implants inserted at the National Research Center for Audiology and Hearing Rehabilitation, were included in the study. Five of these patients were implanted bilaterally. The results showed a statistically significant improvement of speech intelligibility in noise after speech processor adjustment; the mean change in signal-to-noise ratio was -1.7 dB (p<0.001). The RUMatrix test is a highly efficient method for estimating speech intelligibility in noise in patients undergoing clinical investigation. The high degree of comparability of the RUMatrix test with the Matrix tests in other languages makes its application in international multicenter studies possible.
Zokoll, Melanie A; Wagener, Kirsten C; Brand, Thomas; Buschermöhle, Michael; Kollmeier, Birger
2012-09-01
A review is given of internationally comparable speech-in-noise tests for hearing screening purposes that were part of the European HearCom project. This report describes the development, optimization, and evaluation of such tests for headphone and telephone presentation, using the example of the German digit triplet test. In order to achieve the highest possible comparability, language- and speaker-dependent factors in speech intelligibility should be compensated for. The tests comprise spoken numbers in background noise and estimate the speech reception threshold (SRT), i.e. the signal-to-noise ratio (SNR) yielding 50% speech intelligibility. The respective reference speech intelligibility functions for headphone and telephone presentation of the German version, for 15 and 10 normal-hearing listeners, are described by an SRT of -9.3 ± 0.2 and -6.5 ± 0.4 dB SNR, and slopes of 19.6 and 17.9%/dB, respectively. The reference speech intelligibility functions of all digit triplet tests optimized within the HearCom project allow cross-language comparability to be investigated despite language-specific differences. The optimization criteria established here should be used for similar screening tests in other languages.
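An SRT and a slope, as reported above, fully determine a reference speech intelligibility function if a logistic shape is assumed. The sketch below makes that assumption and treats the reported slope as the slope at the 50% point; the function name and parameterization are illustrative, not taken from the HearCom materials.

```python
import math

def intelligibility(snr_db, srt_db, slope_per_db):
    """Logistic psychometric function: proportion of speech understood
    at a given SNR. Parameterized by the SRT (the 50% point, in dB SNR)
    and the slope at the SRT in proportion per dB (e.g. 0.196 for the
    reported 19.6%/dB). For a logistic curve the slope at the midpoint
    is 1/(4*sigma), so sigma = 1/(4*slope)."""
    sigma = 1.0 / (4.0 * slope_per_db)
    return 1.0 / (1.0 + math.exp(-(snr_db - srt_db) / sigma))

# With the reported headphone parameters (SRT = -9.3 dB, 19.6%/dB),
# intelligibility is 50% at -9.3 dB SNR and rises steeply around it.
```

The steep slope is what makes digit triplet tests efficient for screening: a 2-3 dB change in SNR moves a normal-hearing listener across most of the performance range.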
Gordon-Salant, Sandra; Cole, Stacey Samuels
2016-01-01
This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. 
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
Akeroyd, Michael A; Arlinger, Stig; Bentler, Ruth A; Boothroyd, Arthur; Dillier, Norbert; Dreschler, Wouter A; Gagné, Jean-Pierre; Lutman, Mark; Wouters, Jan; Wong, Lena; Kollmeier, Birger
2015-01-01
To provide guidelines for the development of two types of closed-set speech-perception tests that can be applied and interpreted in the same way across languages. The guidelines cover the digit triplet and the matrix sentence tests that are most commonly used to test speech recognition in noise. They were developed by a working group on Multilingual Speech Tests of the International Collegium of Rehabilitative Audiology (ICRA). The recommendations are based on reviews of existing evaluations of the digit triplet and matrix tests as well as on the research experience of members of the ICRA Working Group. They represent the results of a consensus process. The resulting recommendations deal with: Test design and word selection; Talker characteristics; Audio recording and stimulus preparation; Masking noise; Test administration; and Test validation. By following these guidelines for the development of any new test of this kind, clinicians and researchers working in any language will be able to perform tests whose results can be compared and combined in cross-language studies.
Smoking modulates language lateralization in a sex-specific way.
Hahn, Constanze; Pogun, Sakire; Güntürkün, Onur
2010-12-01
Smoking affects a widespread network of neuronal functions by altering the properties of cholinergic transmission. Recent studies show that nicotine consumption affects ascending auditory pathways and alters auditory attention, particularly in men. Here we show that smoking affects language lateralization in a sex-specific way. We assessed brain asymmetries of 90 healthy, right-handed participants using a classic consonant-vowel syllable dichotic listening paradigm in a 2×3 experimental design with sex (male, female) and smoking status (non-smoker, light smoker, heavy smoker) as between-subject factors. Our results revealed that male smokers had a significantly less lateralized response pattern than the other groups, owing to a decreased response rate of their right ear. This finding suggests a group-specific impairment of the speech-dominant left hemisphere. In addition, decreased overall response accuracy was observed in male smokers compared to the other experimental groups. Similar adverse effects of smoking were not detected in women. Further, a significant negative correlation was detected between the severity of nicotine dependency and response accuracy in male, but not in female, smokers. Taken together, these results show that smoking modulates functional brain lateralization significantly and in a sexually dimorphic manner. Given that some psychiatric disorders have been associated with altered brain asymmetries and increased smoking prevalence, nicotinergic effects need to be specifically investigated in this context in future studies.
Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas
2017-06-01
Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening ('cocktail party') scenarios. This implies that EEG might provide valuable information for complementing hearing aids with some form of EEG-based control and for installing a level of neuro-feedback. To investigate whether a listener's attentional focus can be detected from single-channel, hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal ('in-Ear-EEG') and additionally from 64 electrodes on the scalp. In two different concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were asked to attend either to one of two dichotically presented, concurrent tone streams or to one of two diotically presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Each individual participant's attentional focus could be detected from single-channel EEG responses recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e., polarity and latency of components) across subjects. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information for identifying a listener's focus of attention.
Ng, Elaine Hoi Ning; Rudner, Mary; Lunner, Thomas; Rönnberg, Jerker
2015-01-01
A hearing aid noise reduction (NR) algorithm reduces the adverse effect of competing speech on memory for target speech in individuals with hearing impairment who have high working memory capacity. In the present study, we investigated whether the positive effect of NR extends to individuals with low working memory capacity, as well as how NR influences recall performance for native target speech when the masker language is non-native. A sentence-final word identification and recall (SWIR) test was administered to 26 experienced hearing aid users. In this test, target spoken native-language (Swedish) sentence lists were presented in competing native (Swedish) or foreign (Cantonese) speech, with or without a binary-masking NR algorithm. After each sentence list, free recall of sentence-final words was prompted. Working memory capacity was measured using a reading span (RS) test. Recall performance was associated with RS. However, the benefit obtained from NR was not associated with RS. Recall performance was more disrupted by native than by foreign speech babble, and NR improved recall performance in native but not in foreign competing speech. Noise reduction improved memory for speech heard in competing speech for hearing aid users. Memory for native speech was more disrupted by native babble than by foreign babble, but the disruptive effect of native speech babble was reduced to that of foreign babble when NR was applied.
Prisoner Fasting as Symbolic Speech: The Ultimate Speech-Action Test.
ERIC Educational Resources Information Center
Sneed, Don; Stonecipher, Harry W.
The ultimate test of the speech-action dichotomy, as it relates to symbolic speech to be considered by the courts, may be the fasting of prison inmates who use hunger strikes to protest the conditions of their confinement or to make political statements. While hunger strikes have been utilized by prisoners for years as a means of protest, it was…
Robust relationship between reading span and speech recognition in noise
Souza, Pamela; Arehart, Kathryn
2015-01-01
Objective: Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to the ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design: The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two working memory tests, one speech-in-noise test, and a reading comprehension test. Study sample: The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results: Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two implementations of reading span. Conclusions: Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360
Effect of classroom acoustics on the speech intelligibility of students.
Rabelo, Alessandra Terra Vasconcelos; Santos, Juliana Nunes; Oliveira, Rafaella Cristina; Magalhães, Max de Castro
2014-01-01
To analyze the acoustic parameters of classrooms and the relationship among equivalent sound pressure level (Leq), reverberation time (T₃₀), the Speech Transmission Index (STI), and the performance of students in speech intelligibility testing. A cross-sectional descriptive study, which analyzed the acoustic performance of 18 classrooms in 9 public schools in Belo Horizonte, Minas Gerais, Brazil, was conducted. The following acoustic parameters were measured: Leq, T₃₀, and the STI. In the schools evaluated, a speech intelligibility test was performed on 273 students, 45.4% of whom were boys, with an average age of 9.4 years. The results of the speech intelligibility test were compared to the values of the acoustic parameters with the help of Student's t-test. The Leq, T₃₀, and STI tests were conducted in empty and furnished classrooms. Children showed better results in speech intelligibility tests conducted in classrooms with less noise, a lower T₃₀, and greater STI values. The majority of classrooms did not meet the recommended regulatory standards for good acoustic performance. Acoustic parameters have a direct effect on the speech intelligibility of students. Noise contributes to a decrease in their understanding of information presented orally, which can lead to negative consequences in their education and their social integration as future professionals.
ERIC Educational Resources Information Center
Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo
2007-01-01
Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…
Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook
2017-01-01
The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal-hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, sentences time-compressed by 30% revealed a clear differentiation between moderate and moderate-to-severe hearing loss. 3) Although all groups showed a longer span on the forward-digit span test than on the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects the recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to inform appropriate rehabilitation strategies for hearing-impaired elderly people who experience difficulty in communication.
Hochmuth, Sabine; Kollmeier, Birger; Brand, Thomas; Jürgens, Tim
2015-01-01
To compare speech reception thresholds (SRTs) in noise using matrix sentence tests in four languages: German, Spanish, Russian, and Polish. The four tests were composed of equivalent five-word sentences and were all designed and optimized using the same principles. Six stationary speech-shaped noises and three non-stationary noises were used as maskers. Forty native listeners with normal hearing participated: 10 for each language. SRTs were about 3 dB higher for the German and Spanish tests than for the Russian and Polish tests when stationary noise was used that matched the long-term frequency spectrum of the respective speech test materials. This general SRT difference was also observed for the other stationary noises. The within-test variability across noise conditions differed between languages. About 56% of the observed variance was predicted by the speech intelligibility index. The observed SRT benefit in fluctuating noise was similar for all tests, with a slightly smaller benefit for the Spanish test. Of the stationary noises employed, noise with the same spectrum as the speech yielded the most effective masking. SRT differences across languages and noises could be attributed in part to spectral differences. These findings demonstrate the feasibility, and the limits, of comparing audiological results across languages.
Reed, Amanda C.; Centanni, Tracy M.; Borland, Michael S.; Matney, Chanel J.; Engineer, Crystal T.; Kilgard, Michael P.
2015-01-01
Objectives: Hearing loss is a commonly experienced disability in a variety of populations, including veterans and the elderly, and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech would be differentially impaired in an animal model after two forms of hearing loss. Design: Sixteen female Sprague-Dawley rats were exposed to one of two types of broadband noise, either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. Results: Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between them. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. Conclusions: These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies. PMID:25072238
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
2016-01-01
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds…
Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users
ERIC Educational Resources Information Center
Hua, Håkan; Johansson, Björn; Magnusson, Lennart; Lyxell, Björn; Ellis, Rachel J.
2017-01-01
Purpose: To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users. Method: Seventeen bimodal CI users (28-74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests employed included the Reading Span Test and the…
The Use of Reported Speech in Children's Narratives: A Priming Study
ERIC Educational Resources Information Center
Serratrice, Ludovica; Hesketh, Anne; Ashworth, Rachel
2015-01-01
This study investigated the long-term effects of structural priming on children's use of indirect speech clauses in a narrative context. Forty-two monolingual English-speaking 5-year-olds in two primary classrooms took part in a story-retelling task including reported speech. Testing took place in three individual sessions (pre-test, post-test 1,…
ERIC Educational Resources Information Center
Johnson, Dale L.
This investigation compares child language obtained with standardized tests and samples of spontaneous speech obtained in natural settings. It was hypothesized that differences would exist between social class and racial groups on the unfamiliar standard tests, but such differences would not be evident on spontaneous speech measures. Also, higher…
Cox, Robyn M; Alexander, Genevieve C; Johnson, Jani; Rivera, Izel
2011-01-01
We investigated the prevalence of cochlear dead regions in listeners with hearing losses similar to those of many hearing aid wearers, and explored the impact of these dead regions on speech perception. Prevalence of dead regions was assessed using the Threshold Equalizing Noise test (TEN(HL)). Speech recognition was measured using high-frequency emphasis (HFE) Quick Speech In Noise (QSIN) test stimuli and low-pass filtered HFE QSIN stimuli. About one third of subjects tested positive for a dead region at one or more frequencies. Also, groups without and with dead regions both benefited from additional high-frequency speech cues. PMID:21522068
Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder
ERIC Educational Resources Information Center
Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.
2006-01-01
Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…
Lateralized goal framing: how selective presentation impacts message effectiveness.
McCormick, Michael; Seta, John J
2012-11-01
We tested whether framing a message as a gain or a loss would alter its effectiveness by using a dichotic listening procedure to selectively present a health-related message to the left or right hemisphere. A significant goal-framing effect (losses > gains) was found when right-hemisphere, but not left-hemisphere, processing was initially enhanced. The results support the position that the contextual processing style of the right hemisphere is especially sensitive to the associative implications of the frame. We discuss the implications of these findings for goal-framing research and for the valence hypothesis. We also discuss how these findings converge with prior valence-framing research and how they can be of potential use to health care providers.
Influence of musical training on understanding voiced and whispered speech in noise.
Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J
2014-01-01
This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.
Bilateral and unilateral cochlear implant users compared on speech perception in noise.
Dunn, Camille C; Noble, William; Tyler, Richard S; Kordus, Monika; Gantz, Bruce J; Ji, Haihong
2010-04-01
To compare speech performance in noise between matched bilateral cochlear implant (CICI) and unilateral cochlear implant (CI-only) users. Thirty CICI and 30 CI-only subjects were tested on a battery of speech perception tests in noise using an eight-loudspeaker array. On average, the CICI subjects' performance with speech in noise was significantly better than that of the CI-only subjects. The CICI group showed significantly better performance on speech perception in noise compared with the CI-only group, supporting the hypothesis that CICI is more beneficial than CI only.
The NTID speech recognition test: NSRT(®).
Bochner, Joseph H; Garrison, Wayne M; Doherty, Karen A
2015-07-01
The purpose of this study was to collect and analyse data necessary for expansion of the NSRT item pool and to evaluate the NSRT adaptive testing software. Participants were administered pure-tone and speech recognition tests including W-22 and QuickSIN, as well as a set of 323 new NSRT items and NSRT adaptive tests in quiet and background noise. Performance on the adaptive tests was compared to pure-tone thresholds and performance on other speech recognition measures. The 323 new items were subjected to Rasch scaling analysis. Seventy adults with mild to moderately severe hearing loss participated in this study. Their mean age was 62.4 years (sd = 20.8). The 323 new NSRT items fit very well with the original item bank, enabling the item pool to be more than doubled in size. Data indicate high reliability coefficients for the NSRT and moderate correlations with pure-tone thresholds (PTA and HFPTA) and other speech recognition measures (W-22, QuickSIN, and SRT). The adaptive NSRT is an efficient and effective measure of speech recognition, providing valid and reliable information concerning respondents' speech perception abilities.
Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis
Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin
2016-01-01
Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance, little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test, and (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically-assessed speech comprehension ability.
In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209
Strand, Edythe A; McCauley, Rebecca J; Weigand, Stephen D; Stoeckel, Ruth E; Baas, Becky S
2013-04-01
In this article, the authors report reliability and validity evidence for the Dynamic Evaluation of Motor Speech Skill (DEMSS), a new test that uses dynamic assessment to aid in the differential diagnosis of childhood apraxia of speech (CAS). Participants were 81 children between 36 and 79 months of age who were referred to the Mayo Clinic for diagnosis of speech sound disorders. Children were given the DEMSS and a standard speech and language test battery as part of routine evaluations. Subsequently, intrajudge, interjudge, and test-retest reliability were evaluated for a subset of participants. Construct validity was explored for all 81 participants through the use of agglomerative cluster analysis, sensitivity measures, and likelihood ratios. The mean percentage of agreement for 171 judgments was 89% for test-retest reliability, 89% for intrajudge reliability, and 91% for interjudge reliability. Agglomerative hierarchical cluster analysis showed that total DEMSS scores largely differentiated clusters of children with CAS vs. mild CAS vs. other speech disorders. Positive and negative likelihood ratios and measures of sensitivity and specificity suggested that the DEMSS does not overdiagnose CAS but sometimes fails to identify children with CAS. The value of the DEMSS in differential diagnosis of severe speech impairments was supported on the basis of evidence of reliability and validity.
Gender differences in lateralization of mismatch negativity in dichotic listening tasks.
Ikezawa, Satoru; Nakagome, Kazuyuki; Mimura, Masaru; Shinoda, Junko; Itoh, Kenji; Homma, Ikuo; Kamijima, Kunitoshi
2008-04-01
With the aim of investigating gender differences in the functional lateralization subserving preattentive processing of language stimuli, we compared auditory mismatch negativities (MMNs) using dichotic listening tasks. Forty-four healthy volunteers, including 23 males and 21 females, participated in the study. MMNs generated by pure-tone and phonetic stimuli were compared, to check for the existence of language-specific gender differences in lateralization. Both EEG amplitude and scalp current density (SCD) data were analyzed. With phonetic MMNs, EEG findings revealed significantly larger amplitude in females than males, especially in the right hemisphere, while SCD findings revealed left hemisphere dominance and contralateral dominance in males alone. With pure-tone MMNs, no significant gender differences in hemispheric lateralization appeared in either EEG or SCD findings. While males exhibited left-lateralized activation with phonetic MMNs, females exhibited more bilateral activity. Further, the contralateral dominance of the SCD distribution associated with the ear receiving deviant stimuli in males indicated that ipsilateral input as well as interhemispheric transfer across the corpus callosum to the ipsilateral side was more suppressed in males than in females. The findings of the present study suggest that functional lateralization subserving preattentive detection of phonetic change differs between the genders. These results underscore the significance of considering the gender differences in the study of MMN, especially when phonetic stimulus is adopted. Moreover, they support the view of Voyer and Flight [Voyer, D., Flight, J., 2001. Gender differences in laterality on a dichotic task: the influence of report strategies. Cortex 37, 345-362.] 
that the gender difference in hemispheric lateralization of language function is observed under a well-managed-attention condition, which fits the condition adopted in MMN measurement: subjects are required to focus attention on a distraction task and thereby ignore the phonetic stimuli that elicit MMN.
Automated Speech Rate Measurement in Dysarthria
ERIC Educational Resources Information Center
Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc
2015-01-01
Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
Speech and Language Consequences of Unilateral Hearing Loss: A Systematic Review.
Anne, Samantha; Lieu, Judith E C; Cohen, Michael S
2017-10-01
Objective Unilateral hearing loss has been shown to have negative consequences for speech and language development in children. The objective of this study was to systematically review the current literature to quantify the impact of unilateral hearing loss on children, with the use of objective measures of speech and language. Data Sources PubMed, EMBASE, Medline, CINAHL, and Cochrane Library were searched from inception to March 2015. Manual searches of references were also completed. Review Methods All studies that described speech and language outcomes for children with unilateral hearing loss were included. Outcome measures included results from any test of speech and language that evaluated or had age-standardized norms. Due to heterogeneity of the data, quantitative analysis could not be completed. Qualitative analysis was performed on the included studies. Two independent evaluators reviewed each abstract and article. Results A total of 429 studies were identified; 13 met inclusion criteria and were reviewed. Overall, 7 studies showed poorer scores on various speech and language tests, with effects more pronounced for children with severe to profound hearing loss. Four studies did not demonstrate any difference in testing results between patients with unilateral hearing loss and those with normal hearing. Two studies that evaluated effects on speech and language longitudinally showed initial speech problems, with improvement in scores over time. Conclusions There are inconsistent data regarding effects of unilateral hearing loss on speech and language outcomes for children. The majority of recent studies suggest poorer speech and language testing results, especially for patients with severe to profound unilateral hearing loss.
Subjective comparison and evaluation of speech enhancement algorithms
Hu, Yi; Loizou, Philipos C.
2007-01-01
Making meaningful comparisons between the performance of the various speech enhancement algorithms proposed over the years has been elusive due to the lack of a common speech database, differences in the types of noise used, and differences in the testing methodology. To facilitate such comparisons, we report on the development of a noisy speech corpus suitable for evaluation of speech enhancement algorithms. This corpus is subsequently used for the subjective evaluation of 13 speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical-model based, and Wiener-type algorithms. The subjective evaluation was performed by Dynastat, Inc. using the ITU-T P.835 methodology designed to evaluate speech quality along three dimensions: signal distortion, noise distortion, and overall quality. This paper reports the results of the subjective tests. PMID:18046463
On the importance of early reflections for speech in rooms.
Bradley, J S; Sato, H; Picard, M
2003-06-01
This paper presents the results of new studies based on speech intelligibility tests in simulated sound fields and analyses of impulse response measurements in rooms used for speech communication. The speech intelligibility test results confirm the importance of early reflections for achieving good conditions for speech in rooms. The addition of early reflections increased the effective signal-to-noise ratio and related speech intelligibility scores for both impaired and nonimpaired listeners. The new results also show that for common conditions where the direct sound is reduced, it is only possible to understand speech because of the presence of early reflections. Analyses of measured impulse responses in rooms intended for speech show that early reflections can increase the effective signal-to-noise ratio by up to 9 dB. A room acoustics computer model is used to demonstrate that the relative importance of early reflections can be influenced by the room acoustics design.
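The early-reflection benefit described in this abstract is commonly quantified with an early-to-late energy ratio such as C50, computed from a measured room impulse response. A minimal sketch of that textbook metric (assuming a sampled impulse response aligned at the direct sound; this is the standard definition, not necessarily the exact measure the authors used):

```python
import numpy as np

def clarity_c50(ir, fs, split_ms=50.0):
    """Early-to-late energy ratio (C50) of a room impulse response, in dB.

    ir: impulse response samples, with index 0 at the direct sound
    fs: sample rate in Hz
    split_ms: boundary between "early" and "late" energy; 50 ms is the
              conventional value for speech
    """
    ir = np.asarray(ir, dtype=float)
    split = int(round(fs * split_ms / 1000.0))
    early = np.sum(ir[:split] ** 2)   # energy arriving within the boundary
    late = np.sum(ir[split:] ** 2)    # reverberant tail energy
    return 10.0 * np.log10(early / late)

# Toy exponentially decaying impulse response: most energy arrives early,
# so C50 comes out positive (roughly +8 dB for a 50 ms decay constant).
fs = 8000
t = np.arange(int(0.3 * fs)) / fs
ir = np.exp(-t / 0.05)
```

A higher C50 means more of the arriving speech energy acts as useful signal rather than masking reverberation, which is the sense in which early reflections raise the effective signal-to-noise ratio.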
Research and Studies Directory for Manpower, Personnel, and Training
1988-01-01
Entries include: Psychophysiological Mapping of Cognitive Processes; Control of Biosonar Behavior by the Auditory Cortex (Suga, N., Washington Univ., St. Louis, MO); and Dichotic Listening to Complex Sounds: Effects of Stimulus Characteristics and …
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in a monaural and a dichotic modality. A total of 10 young subjects participated in the study; they were first instructed to ignore the stimuli, and in a second repetition they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitude values for the condition using frequency-modulated low-frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities was exhibited in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.
Vol'f, N V
1998-01-01
Sexual differences in the hemispheric organization of verbal functions were shown in experiments with dichotic presentation of word lists, in Sternberg's memory scanning task, and in studies of EEG power and coherence while memorizing lists of dichotically presented words. The efficiency of word retrieval and the speed of memory scanning for stimuli presented to the right hemisphere were higher in women. EEG activation while memorizing words was more pronounced in men. There were negative correlations between left-ear word retrieval and EEG activation in women. The findings showed sexual dimorphism in functional connections within the cortical regions of the brain while memorizing verbal information: changes in coherence correlated positively with the efficiency of word retrieval in women and inversely in men. This suggests that the physiological significance of changes in coherence differs between men and women.
Lawler, Marshall; Yu, Jeffrey; Aronoff, Justin M
Although speech perception is the gold standard for measuring cochlear implant (CI) users' performance, speech perception tests often require extensive adaptation to obtain accurate results, particularly after large changes in maps. Spectral ripple tests, which measure spectral resolution, are an alternate measure that has been shown to correlate with speech perception. A modified spectral ripple test, the spectral-temporally modulated ripple test (SMRT) has recently been developed, and the objective of this study was to compare speech perception and performance on the SMRT for a heterogeneous population of unilateral CI users, bilateral CI users, and bimodal users. Twenty-five CI users (eight using unilateral CIs, nine using bilateral CIs, and eight using a CI and a hearing aid) were tested on the Arizona Biomedical Institute Sentence Test (AzBio) with a +8 dB signal to noise ratio, and on the SMRT. All participants were tested with their clinical programs. There was a significant correlation between SMRT and AzBio performance. After a practice block, an improvement of one ripple per octave for SMRT corresponded to an improvement of 12.1% for AzBio. Additionally, there was no significant difference in slope or intercept between any of the CI populations. The results indicate that performance on the SMRT correlates with speech recognition in noise when measured across unilateral, bilateral, and bimodal CI populations. These results suggest that SMRT scores are strongly associated with speech recognition in noise ability in experienced CI users. Further studies should focus on increasing both the size and diversity of the tested participants, and on determining whether the SMRT technique can be used for early predictions of long-term speech scores, or for evaluating differences among different stimulation strategies or parameter settings.
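The reported SMRT-AzBio relation (roughly 12.1 percentage points of AzBio improvement per one ripple per octave of SMRT) is an ordinary linear regression result. A sketch of how such a mapping is recovered by least squares; the scores below are hypothetical, generated to follow the slope reported in the abstract:

```python
import numpy as np

# Hypothetical data: SMRT thresholds (ripples per octave) and AzBio scores
# (% correct) for 25 simulated CI users, constructed so that AzBio improves
# by about 12.1 percentage points per ripple per octave, plus noise.
rng = np.random.default_rng(1)
smrt = rng.uniform(0.5, 5.0, 25)
azbio = 10.0 + 12.1 * smrt + rng.normal(0.0, 3.0, 25)

# Ordinary least-squares fit recovers the slope and intercept, and the
# Pearson correlation quantifies the strength of the association.
slope, intercept = np.polyfit(smrt, azbio, 1)
r = np.corrcoef(smrt, azbio)[0, 1]
```

With the modest noise assumed here the fitted slope lands near the generating value of 12.1 and the correlation is high; the study's actual scatter, and hence its correlation, would of course differ.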
FEENAUGHTY, LYNDA; TJADEN, KRIS; BENEDICT, RALPH H.B.; WEINSTOCK-GUTTMAN, BIANCA
2017-01-01
This preliminary study investigated how cognitive-linguistic status in multiple sclerosis (MS) is reflected in two speech tasks (i.e. oral reading, narrative) that differ in cognitive-linguistic demand. Twenty individuals with MS were selected to comprise High and Low performance groups based on clinical tests of executive function and information processing speed and efficiency. Ten healthy controls were included for comparison. Speech samples were audio-recorded and measures of global speech timing were obtained. Results indicated predicted differences in global speech timing (i.e. speech rate and pause characteristics) for speech tasks differing in cognitive-linguistic demand, but the magnitude of these task-related differences was similar for all speaker groups. Findings suggest that assumptions concerning the cognitive-linguistic demands of reading aloud as compared to spontaneous speech may need to be re-considered for individuals with cognitive impairment. Qualitative trends suggest that additional studies investigating the association between cognitive-linguistic and speech motor variables in MS are warranted. PMID:23294227
Speech-discrimination scores modeled as a binomial variable.
Thornton, A R; Raffin, M J
1978-09-01
Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs. 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
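Under the binomial model described here, a proportion-correct score p on an n-word list has variance p(1-p)/n, so shorter lists are inherently noisier. A minimal sketch of the idea; note the published study tabulated exact binomial limits, whereas this sketch uses the usual two-proportion normal approximation:

```python
import math

def binomial_sd(p, n):
    """Standard deviation of a proportion-correct score on an n-item list,
    under the binomial model (each word scored independently)."""
    return math.sqrt(p * (1.0 - p) / n)

def scores_differ(p1, p2, n1, n2, z=1.96):
    """Check (at the 95% level by default) whether two discrimination
    scores differ by more than binomial chance variability, using the
    standard two-proportion z-test as a normal approximation to the
    exact binomial table in the original study."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return abs(p1 - p2) > z * se
```

This is the crux of the 25- vs. 50-word argument: an 80% vs. 60% gap is within chance variability on two 25-word half-lists, but exceeds it on two 50-word lists.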
Ben-David, Boaz M; Multani, Namita; Shakuf, Vered; Rudzicz, Frank; van Lieshout, Pascal H H M
2016-02-01
Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.
Speech Intelligibility in Persian Hearing Impaired Children with Cochlear Implants and Hearing Aids.
Rezaei, Mohammad; Emadi, Maryam; Zamani, Peyman; Farahani, Farhad; Lotfi, Gohar
2017-04-01
The aim of the present study is to evaluate and compare speech intelligibility in hearing impaired children using cochlear implants (CI) or hearing aids (HA) and in children with normal hearing (NH). The sample consisted of 45 Persian-speaking children aged 3 to 5 years from Hamadan. They were divided into three groups of 15 children each: children with NH, children with CI, and children using HA. Participants were evaluated with a test of speech intelligibility level. Results of ANOVA on the speech intelligibility test showed that NH children performed significantly better than hearing impaired children with CI and HA. Post-hoc analysis, using the Scheffe test, indicated that the mean speech intelligibility score of normal children was higher than those of the HA and CI groups, but the difference between the mean speech intelligibility of children with hearing loss using CI and those using HA was not significant. It is clear that even with remarkable advances in HA technology, many hearing impaired children continue to find speech production a challenging problem. Given that speech intelligibility is a key element in proper communication and social interaction, educational and rehabilitation programs are essential to improve the speech intelligibility of children with hearing loss.
The development and validation of the speech quality instrument.
Chen, Stephanie Y; Griffin, Brianna M; Mancuso, Dean; Shiau, Stephanie; DiMattia, Michelle; Cellum, Ilana; Harvey Boyd, Kelly; Prevoteau, Charlotte; Kohlberg, Gavriel D; Spitzer, Jaclyn B; Lalwani, Anil K
2017-12-08
Although speech perception tests are available to evaluate hearing, there is no standardized validated tool to quantify speech quality. The objective of this study is to develop a validated tool to measure quality of speech heard. Prospective instrument validation study of 35 normal hearing adults recruited at a tertiary referral center. Participants listened to 44 speech clips of male/female voices reciting the Rainbow Passage. Speech clips included original and manipulated excerpts capturing goal qualities such as mechanical and garbled. Listeners rated clips on a 10-point visual analog scale (VAS) of 18 characteristics (e.g. cartoonish, garbled). Skewed distribution analysis identified mean ratings in the upper and lower 2-point limits of the VAS (ratings of 8-10 and 0-2, respectively); items with inconsistent responses were eliminated. The test was pruned to a final instrument of nine speech clips that clearly define qualities of interest: speech-like, male/female, cartoonish, echo-y, garbled, tinny, mechanical, rough, breathy, soothing, hoarse, like, pleasant, natural. Mean ratings were highest for original female clips (8.8) and lowest for the not-speech manipulation (2.1). Factor analysis identified two subsets of characteristics: internal consistency demonstrated Cronbach's alpha of 0.95 and 0.82 per subset. Test-retest reliability of total scores was high, with an intraclass correlation coefficient of 0.76. The Speech Quality Instrument (SQI) is a concise, valid tool for assessing speech quality as an indicator for hearing performance. SQI may be a valuable outcome measure for cochlear implant recipients who, despite achieving excellent speech perception, often experience poor speech quality. Level of Evidence: 2b. Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
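The internal-consistency figures quoted above (Cronbach's alpha of 0.95 and 0.82 per item subset) follow the standard alpha formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on a hypothetical (listeners x items) ratings matrix, not the study's actual data:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (listeners x items) matrix of ratings.

    Standard formula: alpha = k/(k-1) * (1 - sum(item variances) /
    variance(total score)). Perfectly consistent items yield alpha = 1.
    """
    x = np.asarray(ratings, dtype=float)
    k = x.shape[1]                              # number of items
    item_vars = x.var(axis=0, ddof=1).sum()     # per-item variances, summed
    total_var = x.sum(axis=1).var(ddof=1)       # variance of row totals
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Four hypothetical listeners rating three items with perfect consistency:
consistent = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
```

Alpha rises toward 1 as items covary more strongly, which is why the two SQI subsets, each built from related quality characteristics, score 0.95 and 0.82.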
Tracking Change in Children with Severe and Persisting Speech Difficulties
ERIC Educational Resources Information Center
Newbold, Elisabeth Joy; Stackhouse, Joy; Wells, Bill
2013-01-01
Standardised tests of whole-word accuracy are popular in the speech pathology and developmental psychology literature as measures of children's speech performance. However, they may not be sensitive enough to measure changes in speech output in children with severe and persisting speech difficulties (SPSD). To identify the best ways of doing this,…
ERIC Educational Resources Information Center
Davidow, Jason H.; Ingham, Roger J.
2013-01-01
Purpose: This study examined the effect of speech rate on phonated intervals (PIs), in order to test whether a reduction in the frequency of short PIs is an important part of the fluency-inducing mechanism of chorus reading. The influence of speech rate on stuttering frequency, speaker-judged speech effort, and listener-judged naturalness was also…
Shpak, Talma; Most, Tova; Luntz, Michal
2014-01-01
The aim of this study was to examine the role of fundamental frequency (F0) information in improving speech perception of individuals with a cochlear implant (CI) who use a contralateral hearing aid (HA). The authors hypothesized that in bilateral-bimodal (CI/HA) users the perception of natural prosody speech would be superior to the perception of speech with monotonic flattened F0 contour, whereas in unilateral CI users the perception of both speech signals would be similar. They also hypothesized that in the CI/HA listening condition the speech perception scores would improve as a function of the magnitude of the difference between the F0 characteristics of the target speech signal and the F0 characteristics of the competitors, whereas in the CI-alone condition such a pattern would not be recognized, or at least not as clearly. Two tests were administered to 29 experienced CI/HA adult users who, regardless of their residual hearing or speech perception abilities, had chosen to continue using an HA in the nonimplanted ear for at least 75% of their waking hours. In the first test, the difference between the perception of speech characterized by natural prosody and speech characterized by monotonic flattened F0 contour was assessed in the presence of babble noise produced by three competing male talkers. In the second test the perception of semantically unpredictable sentences was evaluated in the presence of a competing reversed speech sentence spoken by different single talkers with different F0 characteristics. Each test was carried out under two listening conditions: CI alone and CI/HA. Under both listening conditions, the perception of speech characterized by natural prosody was significantly better than the perception of speech in which monotonic F0 contour was flattened. Differences between the scores for natural prosody and for monotonic flattened F0 speech contour were significantly greater, however, in the CI/HA condition than with CI alone. 
In the second test, the overall scores for perception of semantically unpredictable sentences were higher in the CI/HA condition in the presence of all competitors. In both listening conditions, scores increased significantly with increasing difference between the F0 characteristics of the target speech signal and those of the competitor. The higher scores obtained in the CI/HA condition than with CI alone in both of the task-specific tests suggest that the use of a contralateral HA provides improved low-frequency information, resulting in better performance by CI/HA users.
Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A
2017-10-01
The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3, the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing impaired military personnel. The results display good test-retest reliability (95% confidence interval (CI) < 2.1 dB) and concurrent validity when compared to the triple-digit test (r ≤ 0.65), and the CRM is sensitive to hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
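Adaptive speech-reception-threshold (SRT) procedures like the one used here typically run a one-up/one-down staircase that converges on the SNR giving 50% correct. A generic sketch with a simulated listener following a logistic psychometric function; the step size, slope, and stopping rule are illustrative assumptions, not the CRM study's exact parameters:

```python
import math
import random

def simulate_srt(true_srt, step_db=2.0, trials=40, start_snr=10.0, seed=0):
    """One-up/one-down adaptive track: lower the SNR after a correct
    response, raise it after an incorrect one, so the track hovers around
    the 50%-correct point. The simulated listener answers correctly with
    probability given by a logistic function of SNR (slope assumed 1/dB).
    Returns the mean SNR at the last six reversals as the SRT estimate."""
    rng = random.Random(seed)
    slope = 1.0                      # assumed psychometric slope, per dB
    snr = start_snr
    reversals, last_dir = [], None
    for _ in range(trials):
        p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt)))
        correct = rng.random() < p_correct
        direction = -1 if correct else +1   # harder if correct, easier if not
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)           # track changed direction
        last_dir = direction
        snr += direction * step_db
    return sum(reversals[-6:]) / len(reversals[-6:])
```

Averaging reversal points keeps the estimate near the true threshold even though individual trials are noisy, which is what makes short adaptive tracks reliable enough for test-retest intervals like the 2.1 dB reported above.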
NASA Astrophysics Data System (ADS)
Lynch, John T.
1987-02-01
The present technique for coping with fading and burst noise on HF channels used in digital voice communications transmits digital voice only during high S/N time intervals, and speeds up the speech when necessary to avoid conversation-hindering delays. On the basis of informal listening tests, four test conditions were selected in order to characterize those conditions of speech interruption which would render it comprehensible or incomprehensible. One of the test conditions, 2 secs on and 1/2-sec off, yielded test scores comparable to the reference continuous speech case and is a reasonable match to the temporal variations of a disturbed ionosphere.
Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.
Ting, H; Yunus, J; Mohd Nordin, M Z
2005-01-01
The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by speech-language pathologists. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. We therefore propose a computer-based Malay auditory discrimination test to automate the whole assessment process and to customize the test according to the client's specific speech error sounds. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds, except for differentiating the /g/-/k/ pair. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.
Effect of gap detection threshold on consistency of speech in children with speech sound disorder.
Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz
2017-02-01
The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD), and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study; it is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and between the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech reflects inconsistency in auditory perception, caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nomura, Yukihiro; Lu, Jianming; Sekiya, Hiroo; Yahagi, Takashi
This paper presents a speech enhancement method based on classifying bands as speech dominant or noise dominant. A new classification scheme is proposed that uses the standard deviation of the spectrum of the observed signal in each band. Two oversubtraction factors are introduced, one for speech-dominant and one for noise-dominant bands, and spectral subtraction is carried out after the classification. The proposed method is tested on several noise types from the Noisex-92 database. Evaluation by segmental SNR, the Itakura-Saito distance measure, inspection of spectrograms, and listening tests shows that the proposed system effectively reduces background noise. Moreover, the speech enhanced by the proposed system contains less musical noise and distortion than that of conventional systems.
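The band-wise scheme described above can be sketched as spectral subtraction with two oversubtraction factors, one selected per band by a classification step. The thresholding rule and parameter values below are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

# Illustrative sketch of band-wise spectral subtraction with two
# oversubtraction factors: speech-dominant bands are subtracted less
# aggressively than noise-dominant bands. The classification rule
# (comparing standard deviations of the magnitude spectra per band)
# and the factor values are assumptions, not the paper's exact scheme.

def spectral_subtract(mag, noise_mag, n_bands=4,
                      alpha_speech=1.5, alpha_noise=3.0, floor=0.01):
    """mag, noise_mag: 1-D magnitude spectra of equal length."""
    mag = np.asarray(mag, dtype=float)
    noise_mag = np.asarray(noise_mag, dtype=float)
    out = np.empty_like(mag)
    for idx in np.array_split(np.arange(mag.size), n_bands):
        # Bands where the observed spectrum varies strongly are treated
        # as speech dominant and get the smaller oversubtraction factor.
        speech_dominant = mag[idx].std() > noise_mag[idx].std()
        alpha = alpha_speech if speech_dominant else alpha_noise
        sub = mag[idx] - alpha * noise_mag[idx]
        out[idx] = np.maximum(sub, floor * mag[idx])  # spectral floor
    return out
```

The spectral floor keeps subtracted bins from going negative, which is the standard remedy for musical noise in subtraction-based enhancers.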
Result on speech perception after conversion from Spectra® to Freedom®.
Magalhães, Ana Tereza de Matos; Goffi-Gomez, Maria Valéria Schmidt; Hoshino, Ana Cristina; Tsuji, Robinson Koji; Bento, Ricardo Ferreira; Brito, Rubens
2012-04-01
New technology in the Freedom® speech processor for cochlear implants was developed to improve how incoming acoustic sound is processed; this applies not only to new users, but also to previous generations of cochlear implants. The aim was to identify the contribution of this technology for users of the Nucleus 22® on speech perception tests in silence and in noise, and on audiometric thresholds. A cross-sectional cohort study was undertaken. Seventeen patients were selected. The last map based on the Spectra® was revised and optimized before starting the tests. Troubleshooting was used to identify malfunction. To identify the contribution of the Freedom® technology for the Nucleus 22®, auditory thresholds and speech perception tests were performed in free field in sound-proof booths. Recorded monosyllables and sentences in silence and in noise (SNR = 0 dB) were presented at 60 dB SPL. The nonparametric Wilcoxon test for paired data was used to compare groups. The Freedom® technology applied to the Nucleus 22® showed a statistically significant difference in all speech perception tests and audiometric thresholds. The Freedom® technology improved the speech perception performance and audiometric thresholds of patients with the Nucleus 22®.
Cortical Correlates of Binaural Temporal Processing Deficits in Older Adults.
Eddins, Ann Clock; Eddins, David A
This study was designed to evaluate binaural temporal processing in young and older adults using a binaural masking level difference (BMLD) paradigm. Using behavioral and electrophysiological measures within the same listeners, a series of stimulus manipulations was used to evaluate the relative contribution of binaural temporal fine-structure and temporal envelope cues. We evaluated the hypotheses that age-related declines in the BMLD task would be more strongly associated with temporal fine-structure than envelope cues and that age-related declines in behavioral measures would be correlated with cortical auditory evoked potential (CAEP) measures. Thirty adults participated in the study, including 10 young normal-hearing, 10 older normal-hearing, and 10 older hearing-impaired adults with bilaterally symmetric, mild-to-moderate sensorineural hearing loss. Behavioral and CAEP thresholds were measured for diotic (So) and dichotic (Sπ) tonal signals presented in continuous diotic (No) narrowband noise (50-Hz wide) maskers. Temporal envelope cues were manipulated by using two different narrowband maskers; Gaussian noise (GN) with robust envelope fluctuations and low-noise noise (LNN) with minimal envelope fluctuations. The potential to use temporal fine-structure cues was controlled by varying the signal frequency (500 or 4000 Hz), thereby relying on the natural decline in phase-locking with increasing frequency. Behavioral and CAEP thresholds were similar across groups for diotic conditions, while the masking release in dichotic conditions was larger for younger than for older participants. Across all participants, BMLDs were larger for GN than LNN and for 500-Hz than for 4000-Hz conditions, where envelope and fine-structure cues were most salient, respectively. Specific age-related differences were demonstrated for 500-Hz dichotic conditions in GN and LNN, reflecting reduced binaural temporal fine-structure coding. 
No significant age effects were observed for 4000-Hz dichotic conditions, consistent with similar use of binaural temporal envelope cues across age in these conditions. For all groups, thresholds and derived BMLD values obtained using the behavioral and CAEP methods were strongly correlated, supporting the notion that CAEP measures may be useful as an objective index of age-related changes in binaural temporal processing. These results demonstrate an age-related decline in the processing of binaural temporal fine-structure cues with preserved temporal envelope coding that was similar with and without mild-to-moderate peripheral hearing loss. Such age-related changes can be reliably indexed by both behavioral and CAEP measures in young and older adults.
May-Mederake, Birgit; Shehata-Dieler, Wafaa
2013-01-01
Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit more from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests, aided free-field, the Mainzer and Göttinger speech discrimination tests, the Monosyllabic-Trochee-Polysyllabic (MTP) test, and the Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German speech development test (SETK), a reception of grammar test (TROG-D), and an active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. SETK showed that the speech development of these children was in general age appropriate. The data suggest that early intervention for hearing loss benefits speech and language development and support the trend towards early cochlear implantation. Furthermore, the data emphasize the potential benefits associated with bilateral implantation.
Stam, Mariska; Smits, Cas; Twisk, Jos W R; Lemke, Ulrike; Festen, Joost M; Kramer, Sophia E
2015-01-01
The first aim of the present study was to determine the change in speech recognition in noise over a period of 5 years in participants ages 18 to 70 years at baseline. The second aim was to investigate whether age, gender, educational level, the level of initial speech recognition in noise, and reported chronic conditions were associated with a change in speech recognition in noise. The baseline and 5-year follow-up data of 427 participants with and without hearing impairment participating in the National Longitudinal Study on Hearing (NL-SH) were analyzed. The ability to recognize speech in noise was measured twice with the online National Hearing Test, a digit-triplet speech-in-noise test. Speech-reception-threshold in noise (SRTn) scores were calculated, corresponding to 50% speech intelligibility. Unaided SRTn scores obtained with the same transducer (headphones or loudspeakers) at both test moments were included. Changes in SRTn were calculated as a raw shift (T1 - T0) and as a shift adjusted for regression towards the mean. Paired t tests and multivariable linear regression analyses were applied. The mean increase (i.e., deterioration) in SRTn was 0.38-dB signal-to-noise ratio (SNR) over 5 years (p < 0.001). Results of the multivariable regression analyses showed that the age group of 50 to 59 years had a significantly larger deterioration in SRTn compared with the age group of 18 to 39 years (raw shift: beta: 0.64-dB SNR; 95% confidence interval: 0.07-1.22; p = 0.028, adjusted for initial speech recognition level - adjusted shift: beta: 0.82-dB SNR; 95% confidence interval: 0.27-1.34; p = 0.004). Gender, educational level, and the number of chronic conditions were not associated with a change in SRTn over time. No significant differences in the increase of SRTn were found between the initial levels of speech recognition (i.e., good, insufficient, or poor) when taking into account the phenomenon of regression towards the mean.
The study results indicate that deterioration of speech recognition in noise over 5 years can be detected even in adults ages 18 to 70 years. Although numerically small, this change may represent a relevant impact on an individual's ability to understand speech in everyday life.
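For illustration, raw and adjusted change scores of the kind described above can be computed as follows. Residualizing each raw shift on the baseline score is one common way to adjust for regression towards the mean; the NL-SH analysis may have used a different formulation.

```python
# Illustrative sketch of a raw change score (T1 - T0) and a change score
# adjusted for regression toward the mean by residualizing each raw
# shift on the baseline score. This is one common approach, not
# necessarily the adjustment used in the NL-SH analysis.

def raw_and_adjusted_shifts(t0, t1):
    """t0, t1: lists of baseline and follow-up SRTn scores (dB SNR)."""
    n = len(t0)
    raw = [b - a for a, b in zip(t0, t1)]
    mean_t0 = sum(t0) / n
    mean_raw = sum(raw) / n
    # Slope of the raw shift regressed on the baseline score.
    cov = sum((a - mean_t0) * (r - mean_raw) for a, r in zip(t0, raw))
    var = sum((a - mean_t0) ** 2 for a in t0)
    slope = cov / var if var else 0.0
    # Remove the baseline-dependent component from each raw shift.
    adjusted = [r - slope * (a - mean_t0) for a, r in zip(t0, raw)]
    return raw, adjusted
```

Note that the adjustment redistributes the shifts across participants but leaves the mean change unchanged, so group-level conclusions about average deterioration are unaffected.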
Production Variability and Single Word Intelligibility in Aphasia and Apraxia of Speech
ERIC Educational Resources Information Center
Haley, Katarina L.; Martin, Gwenyth
2011-01-01
This study was designed to estimate test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and AOS and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were…
The Deaf. Prentice-Hall Foundations of Speech Pathology Series.
ERIC Educational Resources Information Center
Di Carlo, Louis M.
Designed for students of speech pathology and audiology and practicing clinicians, this book presents a historical overview of attempts to teach the deaf from before the 15th century through the 20th century. A discussion of diagnostic procedures for auditory disorders in children included informal testing, play audiometry, speech tests,…
Processing and Comprehension of Accented Speech by Monolingual and Bilingual Children
ERIC Educational Resources Information Center
McDonald, Margarethe; Gross, Megan; Buac, Milijana; Batko, Michelle; Kaushanskaya, Margarita
2018-01-01
This study tested the effect of Spanish-accented speech on sentence comprehension in children with different degrees of Spanish experience. The hypothesis was that earlier acquisition of Spanish would be associated with enhanced comprehension of Spanish-accented speech. Three groups of 5-6-year-old children were tested: monolingual…
2017-03-31
dB Sound Pressure Level (SPL) background pink noise. The speech intelligibility tests shall result in a Modified Rhyme Test (MRT) score as listed below. Speech intelligibility testing shall be measured per ANSI S3.2 for each background pink noise level using a minimum of ten talkers and ten listeners. The test shall be conducted wearing the JSAM-TA using appropriate communication amplification. Test must include the configurations
Perception of environmental sounds by experienced cochlear implant patients.
Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan
2011-01-01
Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. 
HINT and CNC scores in quiet moderately correlated with temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. Present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.
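The "partialling out" step mentioned above is typically a residual-based partial correlation: both variables are regressed on the covariates, and the residuals are then correlated. A generic sketch follows; this is the standard formulation, not necessarily the study's exact statistical procedure.

```python
import numpy as np

# Generic residual-based partial correlation: remove the least-squares
# fit of the covariates (plus an intercept) from both variables, then
# correlate what remains.

def partial_corr(x, y, covariates):
    """x, y: 1-D arrays; covariates: 2-D array (n_samples, n_covars)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones(len(x)), np.asarray(covariates, float)])
    # Residuals of x and y after regressing out the covariates.
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

A partial correlation close to the raw correlation, as reported above, indicates that the covariates account for little of the shared variance between the two measures.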
Automated analysis of free speech predicts psychosis onset in high-risk youths
Bedi, Gillinder; Carrillo, Facundo; Cecchi, Guillermo A; Slezak, Diego Fernández; Sigman, Mariano; Mota, Natália B; Ribeiro, Sidarta; Javitt, Daniel C; Copelli, Mauro; Corcoran, Cheryl M
2015-01-01
Background/Objectives: Psychiatry lacks the objective clinical tests routinely used in other specializations. Novel computerized methods to characterize complex behaviors such as speech could be used to identify and predict psychiatric illness in individuals. Aims: In this proof-of-principle study, our aim was to test automated speech analyses combined with machine learning to predict later psychosis onset in youths at clinical high risk (CHR) for psychosis. Methods: Thirty-four CHR youths (11 females) had baseline interviews and were assessed quarterly for up to 2.5 years; five transitioned to psychosis. Using automated analysis, transcripts of interviews were evaluated for semantic and syntactic features predicting later psychosis onset. Speech features were fed into a convex hull classification algorithm with leave-one-subject-out cross-validation to assess their predictive value for psychosis outcome. The canonical correlation between the speech features and prodromal symptom ratings was computed. Results: Derived speech features included a Latent Semantic Analysis measure of semantic coherence and two syntactic markers of speech complexity: maximum phrase length and use of determiners (e.g., which). These speech features predicted later psychosis development with 100% accuracy, outperforming classification from clinical interviews. Speech features were significantly correlated with prodromal symptoms. Conclusions: Findings support the utility of automated speech analysis to measure subtle, clinically relevant mental state changes in emergent psychosis. Recent developments in computer science, including natural language processing, could provide the foundation for future development of objective clinical tests for psychiatry. PMID:27336038
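The semantic coherence feature can be illustrated with a toy version: embed each sentence as a vector and average the cosine similarity between adjacent sentences. The bag-of-words embedding below is a simplified stand-in for the study's Latent Semantic Analysis vectors, which are derived from a large text corpus.

```python
import numpy as np

# Toy sketch of an LSA-style semantic coherence measure: mean cosine
# similarity between consecutive sentence vectors. The bag-of-words
# embedding is a stand-in for corpus-derived LSA vectors.

def sentence_vector(sentence, vocab):
    """Count-based vector over a fixed vocabulary (word -> index)."""
    vec = np.zeros(len(vocab))
    for word in sentence.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    return vec

def semantic_coherence(sentences, vocab):
    """Mean cosine similarity between consecutive sentence vectors."""
    vecs = [sentence_vector(s, vocab) for s in sentences]
    sims = []
    for a, b in zip(vecs, vecs[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(a @ b / denom if denom else 0.0)
    return float(np.mean(sims))
```

Discourse that drifts between unrelated topics from one sentence to the next yields low adjacent-sentence similarity, which is the intuition behind using coherence as a marker of disorganized speech.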
Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B.
2017-01-01
The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. 
Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of speech but retain the ability to make online feedback corrections; indeed, the patients show an increased sensitivity to feedback. These results indicate that the cerebellum forms a crucial part of the feedforward control system for speech but is not essential for online, feedback control. PMID:28842410
Statistical assessment of speech system performance
NASA Technical Reports Server (NTRS)
Moshier, Stephen L.
1977-01-01
Methods for normalizing the results of performance tests of speech recognition systems are presented. Technological accomplishments in speech recognition systems, as well as planned research activities, are described.
Assessing Auditory Processing Abilities in Typically Developing School-Aged Children.
McDermott, Erin E; Smart, Jennifer L; Boiano, Julie A; Bragg, Lisa E; Colon, Tiffany N; Hanson, Elizabeth M; Emanuel, Diana C; Kelly, Andrea S
2016-02-01
Large discrepancies exist in the literature regarding definition, diagnostic criteria, and appropriate assessment for auditory processing disorder (APD). Therefore, a battery of tests with normative data is needed. The purpose of this study is to collect normative data on a variety of tests for APD on children aged 7-12 yr, and to examine effects of outside factors on test performance. Children aged 7-12 yr with normal hearing, speech and language abilities, cognition, and attention were recruited for participation in this normative data collection. One hundred and forty-seven children were recruited using flyers and word of mouth. Of the participants recruited, 137 children qualified for the study. Participants attended schools located in areas that varied in terms of socioeconomic status, and resided in six different states. Audiological testing included a hearing screening (15 dB HL from 250 to 8000 Hz), word recognition testing, tympanometry, ipsilateral and contralateral reflexes, and transient-evoked otoacoustic emissions. The language, nonverbal IQ, phonological processing, and attention skills of each participant were screened using the Clinical Evaluation of Language Fundamentals-4 Screener, Test of Nonverbal Intelligence, Comprehensive Test of Phonological Processing, and Integrated Visual and Auditory-Continuous Performance Test, respectively. The behavioral APD battery included the following tests: Dichotic Digits Test, Frequency Pattern Test, Duration Pattern Test, Random Gap Detection Test, Compressed and Reverberated Words Test, Auditory Figure Ground (signal-to-noise ratio of +8 and +0), and Listening in Spatialized Noise-Sentences Test. Mean scores and standard deviations of each test were calculated, and analysis of variance tests were used to determine effects of factors such as gender, handedness, and birth history on each test. 
Normative data tables for the test battery were created for the following age groups: 7- and 8-yr-olds (n = 49), 9- and 10-yr-olds (n = 40), and 11- and 12-yr-olds (n = 48). No significant effects were seen for gender or handedness on any of the measures. The data collected in this study are appropriate for use in clinical diagnosis of APD. Use of a low-linguistically loaded core battery with the addition of more language-based tests, when language abilities are known, can provide a well-rounded picture of a child's auditory processing abilities. Screening for language, phonological processing, attention, and cognitive level can provide more information regarding a diagnosis of APD, determine appropriateness of the test battery for the individual child, and may assist with making recommendations or referrals. It is important to use a multidisciplinary approach in the diagnosis and treatment of APD due to the high likelihood of comorbidity with other language, learning, or attention deficits. Although children with other diagnoses may be tested for APD, it is important to establish previously made diagnoses before testing to aid in appropriate test selection and recommendations. American Academy of Audiology.
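Normative tables like these are typically applied by converting a child's raw score to a z score against the mean and standard deviation for the appropriate age group. A minimal sketch follows; the 2-SD cutoff is a common convention, not a criterion prescribed by this study.

```python
# Sketch of how age-group norms (mean, SD) are commonly applied: convert
# a raw score to a z score and flag scores far below the normative mean.
# The 2-SD cutoff is an illustrative convention, not this study's rule.

def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw score against the age-group norm."""
    return (raw - norm_mean) / norm_sd

def flag_below_norms(raw, norm_mean, norm_sd, cutoff_sd=2.0):
    """True if the score falls more than cutoff_sd SDs below the mean."""
    return z_score(raw, norm_mean, norm_sd) < -cutoff_sd
```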
Diagnosing Dyslexia: The Screening of Auditory Laterality.
ERIC Educational Resources Information Center
Johansen, Kjeld
A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…
Cognitive Factors in Sexual Arousal: The Role of Distraction
ERIC Educational Resources Information Center
Geer, James H.; Fuhr, Robert
1976-01-01
Four groups of male undergraduates were instructed to perform complex cognitive operations when randomly presented single digits of a dichotic listening paradigm. An erotic tape recording was played into the nonattended ear. Sexual arousal varied directly as a function of the complexity of the distracting cognitive operations. (Author)
Native and Nonnative Processing of Japanese Pitch Accent
ERIC Educational Resources Information Center
Wu, Xianghua; Tu, Jung-Yueh; Wang, Yue
2012-01-01
The theoretical framework of this study is based on the prevalent debate of whether prosodic processing is influenced by higher level linguistic-specific circuits or reflects lower level encoding of physical properties. Using the dichotic listening technique, the study investigates the hemispheric processing of Japanese pitch accent by native…
Development and preliminary evaluation of a pediatric Spanish-English speech perception task.
Calandruccio, Lauren; Gomez, Bianca; Buss, Emily; Leibold, Lori J
2014-06-01
The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
Halting in Single Word Production: A Test of the Perceptual Loop Theory of Speech Monitoring
ERIC Educational Resources Information Center
Slevc, L. Robert; Ferreira, Victor S.
2006-01-01
The "perceptual loop theory" of speech monitoring (Levelt, 1983) claims that inner and overt speech are monitored by the comprehension system, which detects errors by comparing the comprehension of formulated utterances to originally intended utterances. To test the perceptual loop monitor, speakers named pictures and sometimes attempted to halt…
The Development of the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test
ERIC Educational Resources Information Center
Mealings, Kiri T.; Demuth, Katherine; Buchholz, Jörg; Dillon, Harvey
2015-01-01
Purpose: Open-plan classroom styles are increasingly being adopted in Australia despite evidence that their high intrusive noise levels adversely affect learning. The aim of this study was to develop a new Australian speech perception task (the Mealings, Demuth, Dillon, and Buchholz Classroom Speech Perception Test) and use it in an open-plan…
APEX/SPIN: a free test platform to measure speech intelligibility.
Francart, Tom; Hofmann, Michael; Vanthornhout, Jonas; Van Deun, Lieselot; van Wieringen, Astrid; Wouters, Jan
2017-02-01
Measuring speech intelligibility in quiet and noise is important in clinical practice and research. An easy-to-use free software platform for conducting speech tests, called APEX/SPIN, is presented. The APEX/SPIN platform allows the use of any speech material in combination with any noise. A graphical user interface provides control over a large range of parameters, such as the number of loudspeakers, signal-to-noise ratio, and procedure parameters. An easy-to-use graphical interface is provided for calibration and storage of calibration values. To validate the platform, perception of words in quiet and sentences in noise was measured both with APEX/SPIN and with an audiometer and CD player, the conventional setup in current clinical practice. Five normal-hearing listeners participated in the experimental evaluation. Speech perception results were similar for the APEX/SPIN platform and conventional procedures. APEX/SPIN is a freely available, open-source platform that allows the administration of all kinds of custom speech perception tests and procedures.
Ellis, Rachel J; Rönnberg, Jerker
2015-01-01
Proactive interference (PI) is the capacity to resist interference to the acquisition of new memories from information stored in long-term memory. Previous research has shown that PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss reflects the capability of the tests to index cognitive flexibility.
Adaptation to spectrally-rotated speech.
Green, Tim; Rosen, Stuart; Faulkner, Andrew; Paterson, Ruth
2013-08-01
Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
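Spectral rotation of the kind described can be sketched as ring modulation: low-pass the signal below twice the center frequency, multiply by a carrier at twice the center frequency, and low-pass again to discard the upper image, leaving the spectrum mirrored about the center. This is a rough illustration using brick-wall FFT filtering; actual stimuli would use properly designed filters.

```python
import numpy as np

def spectrally_rotate(x, fs, center_hz=2000.0):
    """Invert the spectrum of a signal around center_hz: a component at
    frequency f (below 2*center_hz) is mapped to 2*center_hz - f."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    X = np.fft.rfft(x)
    X[freqs > 2 * center_hz] = 0.0            # low-pass below 2*center
    t = np.arange(n) / fs
    y = np.fft.irfft(X, n) * np.cos(2 * np.pi * 2 * center_hz * t)
    Y = np.fft.rfft(y)
    Y[freqs > 2 * center_hz] = 0.0            # remove the upper image
    return np.fft.irfft(Y, n)
```

For example, with the 2 kHz center used in the study, energy at 500 Hz ends up at 3500 Hz, which is why spectral shape and its dynamics are so thoroughly disrupted while periodicity cues survive.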
Winn, Matthew B; Won, Jong Ho; Moon, Il Joon
2016-01-01
This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. 
Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
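Quantifying categorization responses with logistic regression, as the authors describe, amounts to fitting a psychometric function whose slope indexes perceptual sensitivity to the acoustic cue. Below is a bare-bones sketch with an illustrative gradient-descent fit; the study's actual model specification may differ.

```python
import numpy as np

def fit_cue_sensitivity(cue, resp, lr=0.1, n_iter=5000):
    """Fit p(response = 'da') = sigmoid(b0 + b1*cue) by gradient descent.
    cue: position along the acoustic continuum (e.g., formant-transition
    step); resp: 0/1 category labels. The slope b1 indexes sensitivity:
    a listener who ignores the cue yields b1 near zero."""
    cue = np.asarray(cue, float)
    resp = np.asarray(resp, float)
    b0 = b1 = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * cue)))
        b0 -= lr * np.mean(p - resp)          # gradient of log-loss wrt b0
        b1 -= lr * np.mean((p - resp) * cue)  # gradient of log-loss wrt b1
    return b0, b1
```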
Laterality, spatial abilities, and accident proneness.
Voyer, Susan D; Voyer, Daniel
2015-01-01
Although handedness as a measure of cerebral specialization has been linked to accident proneness, more direct measures of laterality are rarely considered. The present study aimed to fill that gap in the existing research. In addition, individual difference factors in accident proneness were further examined with the inclusion of mental rotation and navigation abilities measures. One hundred and forty participants were asked to complete the Mental Rotations Test, the Santa Barbara Sense of Direction scale, the Greyscales task, the Fused Dichotic Word Test, the Waterloo Handedness Questionnaire, and a grip strength task before answering questions related to number of accidents in five areas. Results indicated that handedness scores, absolute visual laterality score, absolute response time on the auditory laterality index, and navigation ability were significant predictors of the total number of accidents. Results are discussed with respect to cerebral hemispheric specialization and risk-taking attitudes and behavior.
[Development and equivalence evaluation of spondee lists of Mandarin speech test materials].
Zhang, Hua; Wang, Shuo; Wang, Liang; Chen, Jing; Chen, Ai-ting; Guo, Lian-sheng; Zhao, Xiao-yan; Ji, Chen
2006-06-01
To edit the spondee (disyllabic) word lists as part of the Mandarin speech test materials (MSTM), which will serve as basic speech materials for routine tests in clinics and laboratories. Two groups of professionals (audiologists, Chinese and Mandarin language scientists, a linguist, and a statistician) were first assembled, and the editing principles were established after three round-table meetings. Ten spondee lists, each with 50 words, were edited and recorded onto cassettes. All lists were phonemically balanced along three dimensions: vowels, consonants, and Chinese tones. Seventy-three college students with normal hearing were tested, with speech presented monaurally by earphone. Three statistical methods were used for the equivalence analysis. Correlation analysis showed that all lists were highly correlated, except List 5. Cluster analysis showed that the ten lists could be classified into two groups, but the kappa test showed that the lists' homogeneity was poor. Spondee lists are among the most routinely used speech test materials; their editing, recording, and equivalence evaluation are affected by many factors and require multidisciplinary cooperation. All lists edited in the present study need further modification in recording and testing before clinical and research use, and the phonemic balance should be maintained.
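The list-equivalence question can be probed with simple pairwise statistics, for example by correlating listeners' scores between lists, the kind of analysis that flagged List 5 here. A minimal sketch (the data layout and variable names are illustrative):

```python
import numpy as np

def pairwise_list_correlations(scores):
    """scores: a listeners x lists matrix of percent-correct values.
    Returns the matrix of Pearson correlations between list columns.
    A list whose correlations with the others are low is a candidate
    for revision rather than interchangeable clinical use."""
    return np.corrcoef(scores, rowvar=False)
```

A fuller equivalence workup would add the cluster and kappa analyses the authors mention, since high correlation alone does not establish that lists yield the same absolute scores.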
Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special
ERIC Educational Resources Information Center
Vroomen, Jean; Stekelenburg, Jeroen J.
2011-01-01
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…
Phonological mismatch makes aided speech recognition in noise cognitively taxing.
Rudner, Mary; Foo, Catharina; Rönnberg, Jerker; Lunner, Thomas
2007-12-01
The working memory framework for Ease of Language Understanding predicts that speech processing becomes more effortful, thus requiring more explicit cognitive resources, when there is mismatch between speech input and phonological representations in long-term memory. To test this prediction, we changed the compression release settings in the hearing instruments of experienced users and allowed them to train for 9 weeks with the new settings. After training, aided speech recognition in noise was tested with both the trained settings and orthogonal settings. We postulated that training would lead to acclimatization to the trained setting, which in turn would involve establishment of new phonological representations in long-term memory. Further, we postulated that after training, testing with orthogonal settings would give rise to phonological mismatch, associated with more explicit cognitive processing. Thirty-two participants (mean=70.3 years, SD=7.7) with bilateral sensorineural hearing loss (pure-tone average=46.0 dB HL, SD=6.5), bilaterally fitted for more than 1 year with digital, two-channel, nonlinear signal processing hearing instruments and chosen from the patient population at the Linköping University Hospital were randomly assigned to 9 weeks training with new, fast (40 ms) or slow (640 ms), compression release settings in both channels. Aided speech recognition in noise performance was tested according to a design with three within-group factors: test occasion (T1, T2), test setting (fast, slow), and type of noise (unmodulated, modulated) and one between-group factor: experience setting (fast, slow) for two types of speech materials-the highly constrained Hagerman sentences and the less-predictable Hearing in Noise Test (HINT). Complex cognitive capacity was measured using the reading span and letter monitoring tests. 
We predicted that speech recognition in noise at T2 with mismatched experience and test settings would be associated with more explicit cognitive processing, and thus with stronger correlations with complex cognitive measures, as well as poorer performance if complex cognitive capacity was exceeded. Under mismatch conditions, stronger correlations were found between performance on speech recognition with the Hagerman sentences and reading span, along with poorer speech recognition for participants with low reading span scores. No consistent mismatch effect was found with HINT. The mismatch prediction generated by the working memory framework for Ease of Language Understanding is thus supported for speech recognition in noise with the highly constrained Hagerman sentences but not with the less-predictable HINT.
Speech Perception and Short Term Memory Deficits in Persistent Developmental Speech Disorder
Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.
2008-01-01
Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech perception and short-term memory. Nine adults with a persistent familial developmental speech disorder without language impairment were compared with 20 controls on tasks requiring the discrimination of fine acoustic cues for word identification and on measures of verbal and nonverbal short-term memory. Significant group differences were found in the slopes of the discrimination curves for first formant transitions for word identification with stop gaps of 40 and 20 ms with effect sizes of 1.60 and 1.56. Significant group differences also occurred on tests of nonverbal rhythm and tonal memory, and verbal short-term memory with effect sizes of 2.38, 1.56 and 1.73. No group differences occurred in the use of stop gap durations for word identification. Because frequency-based speech perception and short-term verbal and nonverbal memory deficits both persisted into adulthood in the speech-impaired adults, these deficits may be involved in the persistence of speech disorders without language impairment. PMID:15896836
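The group differences above are reported as effect sizes of roughly 1.5 to 2.4, presumably standardized mean differences. Under that assumption, Cohen's d with a pooled standard deviation is computed as:

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups using the pooled
    standard deviation: d = (m1 - m2) / s_pooled."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                         / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled
```

By the usual rule of thumb, d values above 0.8 are "large", so effect sizes near 2 indicate group distributions with very little overlap.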
Developing a weighted measure of speech sound accuracy.
Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J
2011-02-01
To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
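The core idea of differentially weighting speech sound errors can be sketched as below. The error categories and weights here are invented for illustration only; the published WSSA defines its own weighting across levels of phonetic accuracy.

```python
def weighted_accuracy_score(judgements, weights=None):
    """Toy weighted accuracy: each target sound earns credit according
    to how close the production was, rather than all-or-nothing scoring.
    judgements: one category label per target sound in the sample."""
    if weights is None:
        weights = {"correct": 1.0, "distortion": 0.75,
                   "substitution": 0.5, "omission": 0.0}
    return 100.0 * sum(weights[j] for j in judgements) / len(judgements)
```

The point of graded credit is that a child whose errors are mostly near-misses (distortions) scores higher than one with the same error count but mostly omissions, which a binary percent-consonants-correct measure cannot distinguish.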
Haumann, Sabine; Hohmann, Volker; Meis, Markus; Herzke, Tobias; Lenarz, Thomas; Büchner, Andreas
2012-01-01
Owing to technological progress and a growing body of clinical experience, indication criteria for cochlear implants (CI) are being extended to less severe hearing impairments. It is, therefore, worth reconsidering these indication criteria by introducing novel testing procedures. The diagnostic evidence collected will be evaluated. The investigation includes postlingually deafened adults seeking a CI. Prior to surgery, speech perception tests [Freiburg Speech Test and Oldenburg sentence (OLSA) test] were performed unaided and aided using the Oldenburg Master Hearing Aid (MHA) system. Linguistic skills were assessed with the visual Text Reception Threshold (TRT) test, and general state of health, socio-economic status (SES) and subjective hearing were evaluated through questionnaires. After surgery, the speech tests were repeated aided with a CI. To date, 97 complete data sets are available for evaluation. Statistical analyses showed significant correlations between postsurgical speech reception threshold (SRT) measured with the adaptive OLSA test and pre-surgical data such as the TRT test (r=−0.29), SES (r=−0.22) and (if available) aided SRT (r=0.53). The results suggest that new measures and setups such as the TRT test, SES and speech perception with the MHA provide valuable extra information regarding indication for CI. PMID:26557327
NASA Astrophysics Data System (ADS)
Feenaughty, Lynda
Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, across three connected speech tasks presumed to differ in cognitive-linguistic demand, for four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: Forty-eight speakers participated, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity.
Ten listeners judged each speech sample using the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard, descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables were explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatical and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. 
The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally, correlation analysis revealed moderate relationships between neuropsychological test scores and speech hesitation measures, within the MSCI group. Slower information processing and poorer memory were significantly correlated with more silent pauses and poorer executive function was associated with fewer filled pauses in the Unfamiliar discourse task. Results have both clinical and theoretical implications. Overall, clinicians should demonstrate caution when interpreting global measures of speech timing and perceptual measures in the absence of information about cognitive ability. Results also have implications for a comprehensive model of spoken language incorporating cognitive, linguistic, and motor variables.
NASA Astrophysics Data System (ADS)
Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas
2017-06-01
Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were asked to attend either to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participant’s attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.
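A forward encoding model of the kind used here predicts the EEG at a channel from lagged copies of the stimulus representation (for example, the speech envelope). Below is a minimal temporal-response-function sketch using ridge regression; the lag count, penalty, and data layout are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def fit_forward_model(stim, eeg, n_lags=8, ridge=1.0):
    """Fit weights w such that eeg[t] ~ sum_i w[i] * stim[t - i],
    i.e. a linear encoding model (temporal response function) mapping
    the stimulus to one EEG channel, with ridge regularization."""
    n = len(stim) - n_lags + 1
    # Design matrix: column i holds the stimulus delayed by i samples.
    X = np.stack([stim[n_lags - 1 - i : n_lags - 1 - i + n]
                  for i in range(n_lags)], axis=1)
    y = eeg[n_lags - 1 : n_lags - 1 + n]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ y)
```

Attention decoding then amounts to fitting one such model per talker and asking which talker's predicted response correlates best with the recorded EEG.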
Speech perception in noise in unilateral hearing loss.
Mondelli, Maria Fernanda Capoani Garcia; Dos Santos, Marina de Marchi; José, Maria Renata
2016-01-01
Unilateral hearing loss is characterized by decreased hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulty understanding speech than normal-hearing listeners. To evaluate the speech perception of individuals with unilateral hearing loss with and without competing noise, before and after the hearing aid fitting process. The study included 30 adults of both genders diagnosed with moderate or severe sensorineural unilateral hearing loss, evaluated using the Hearing In Noise Test-Brazil in the following scenarios: silence, frontal noise, noise to the right, and noise to the left, before and after the hearing aid fitting process. The study participants had a mean age of 41.9 years, and most presented right unilateral hearing loss. In all scenarios evaluated with the Hearing In Noise Test-Brazil, individuals with unilateral hearing loss demonstrated better speech perception performance when using hearing aids, both in silence and in situations with competing noise.
Enhancing Speech Intelligibility: Interactions among Context, Modality, Speech Style, and Masker
ERIC Educational Resources Information Center
Van Engen, Kristin J.; Phelps, Jasmine E. B.; Smiljanic, Rajka; Chandrasekaran, Bharath
2014-01-01
Purpose: The authors sought to investigate interactions among intelligibility-enhancing speech cues (i.e., semantic context, clearly produced speech, and visual information) across a range of masking conditions. Method: Sentence recognition in noise was assessed for 29 normal-hearing listeners. Testing included semantically normal and anomalous…
Decoding spectrotemporal features of overt and covert speech from the human cortex
Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.
2014-01-01
Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used electrocorticography intracranial recordings from epileptic patients performing an out loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10−5; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus, pre- and post-central gyrus provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. 
These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
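The evaluation pipeline this abstract describes (realign a reconstruction against the reference with dynamic time warping, then score it as a feature correlation) can be sketched in a toy form. Everything below is an illustrative assumption, not the authors' code: a minimal 1-D DTW, synthetic sine-wave "features", and Pearson correlation as the accuracy score.

```python
import numpy as np

def dtw_path(ref, pred):
    """Dynamic time warping between two 1-D sequences; returns the alignment path."""
    n, m = len(ref), len(pred)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(ref[i - 1] - pred[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Trace the optimal warp back from (n, m) toward (0, 0)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def aligned_correlation(ref, pred):
    """Warp pred onto ref via DTW, then score with Pearson correlation."""
    path = dtw_path(ref, pred)
    r = np.array([ref[i] for i, _ in path])
    p = np.array([pred[j] for _, j in path])
    return np.corrcoef(r, p)[0, 1]

# Toy data: the "reconstruction" is a time-shifted, slightly noisy copy
# of the reference feature trace.
t = np.linspace(0, 2 * np.pi, 100)
reference = np.sin(t)
predicted = np.sin(t - 0.4) + 0.05 * np.random.default_rng(0).normal(size=100)
print(aligned_correlation(reference, predicted))  # high after alignment
```

Real spectrotemporal features are multichannel, so a production version would warp whole spectrogram frames (e.g., with a frame-wise distance) rather than scalars.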
2017-03-01
in an environment 71-115 dB Sound Pressure Level (SPL) background pink noise. The speech intelligibility tests shall result in a Modified Rhyme... Test (MRT) score as listed below. Speech intelligibility testing shall be measured per ANSI S3.2 for each background pink noise level using a...minimum of ten talkers and of ten listeners. The test shall be conducted wearing the JSAM-TA using appropriate communication amplification. Test must
ERIC Educational Resources Information Center
Masapollo, Matthew; Polka, Linda; Ménard, Lucie
2016-01-01
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4-6 months) prefer listening to…
Chest Wall Motion during Speech Production in Patients with Advanced Ankylosing Spondylitis
ERIC Educational Resources Information Center
Kalliakosta, Georgia; Mandros, Charalampos; Tzelepis, George E.
2007-01-01
Purpose: To test the hypothesis that ankylosing spondylitis (AS) alters the pattern of chest wall motion during speech production. Method: The pattern of chest wall motion during speech was measured with respiratory inductive plethysmography in 6 participants with advanced AS (5 men, 1 woman, age 45 plus or minus 8 years, Schober test 1.45 plus or…
Analysis of glottal source parameters in Parkinsonian speech.
Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry
2016-08-01
Diagnosis and monitoring of Parkinson's disease present a number of challenges, as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's disease or act as an objective decision-support tool. Recent research on speech-based measures has demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results, and the area under the ROC curve (AUC) values were used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 of the 7 glottal parameters tested produced AUC values of over 0.9.
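The AUC analysis used above can be illustrated with a self-contained sketch. The group distributions and parameter values below are invented for illustration, and the AUC is computed via the rank-sum identity (probability that a positive case scores above a negative one) rather than by tracing an explicit ROC curve.

```python
import numpy as np

def auc(neg, pos):
    """ROC area via the rank-sum identity: the probability that a randomly
    drawn positive case scores higher than a randomly drawn negative case
    (ties count one half)."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(1)
# Hypothetical values of one glottal parameter, matching the study's group
# sizes: 50 healthy speakers and 16 speakers with Parkinsonian symptoms.
healthy = rng.normal(loc=0.12, scale=0.02, size=50)
parkinsonian = rng.normal(loc=0.18, scale=0.03, size=16)

print(f"AUC = {auc(healthy, parkinsonian):.2f}")
```

An AUC of 0.5 means the parameter carries no discriminative information; the study's threshold of 0.9 corresponds to strongly separated group distributions.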
Stenbäck, Victoria; Hällgren, Mathias; Lyxell, Björn; Larsby, Birgitta
2015-06-01
Cognitive functions and speech-recognition-in-noise were evaluated with a cognitive test battery, assessing response inhibition using the Hayling task, working memory capacity (WMC) and verbal information processing, and an auditory test of speech recognition. The cognitive tests were performed in silence whereas the speech recognition task was presented in noise. Thirty young normally-hearing individuals participated in the study. The aim of the study was to investigate one executive function, response inhibition, and whether it is related to individual working memory capacity (WMC), and how speech-recognition-in-noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. Our findings also suggest that high verbal ability was associated with better performance in the Hayling task. We also present findings suggesting that individuals who perform well on tasks involving response inhibition, and WMC, also perform well on a speech-in-noise task. Our findings indicate that capacity to resist semantic interference can be used to predict performance on speech-in-noise tasks. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Effects of hearing aid settings for electric-acoustic stimulation.
Dillon, Margaret T; Buss, Emily; Pillsbury, Harold C; Adunka, Oliver F; Buchman, Craig A; Adunka, Marcia C
2014-02-01
Cochlear implant (CI) recipients with postoperative hearing preservation may utilize an ipsilateral bimodal listening condition known as electric-acoustic stimulation (EAS). Studies on EAS have reported significant improvements in speech perception abilities over CI-alone listening conditions. Adjustments to the hearing aid (HA) settings to match prescription targets routinely used in the programming of conventional amplification may provide additional gains in speech perception abilities. To investigate the difference in users' speech perception scores when listening with the recommended HA settings for EAS patients versus HA settings adjusted to match National Acoustic Laboratories' nonlinear fitting procedure version 1 (NAL-NL1) targets. Prospective analysis of the influence of HA settings. Nine EAS recipients with greater than 12 months of listening experience with the DUET speech processor. Subjects were tested in the EAS listening condition with two different HA setting configurations. Speech perception materials included consonant-nucleus-consonant (CNC) words in quiet, AzBio sentences in 10-talker speech babble at a signal-to-noise ratio (SNR) of +10 dB, and the Bamford-Kowal-Bench sentences in noise (BKB-SIN) test. The speech perception performance on each test measure was compared between the two HA configurations. Subjects experienced a significant improvement in speech perception abilities with the HA settings adjusted to match NAL-NL1 targets over the recommended HA settings. EAS subjects have been shown to experience improvements in speech perception abilities when listening to ipsilateral combined stimulation. This population's abilities may be underestimated with current HA settings. Tailoring the HA output to the patient's individual hearing loss offers improved outcomes on speech perception measures. American Academy of Audiology.
Qi, Beier; Liu, Bo; Liu, Sha; Liu, Haihong; Dong, Ruijuan; Zhang, Ning; Gong, Shusheng
2011-05-01
To study the effect of cochlear electrode coverage and different insertion regions on speech recognition, especially tone perception, in cochlear implant users whose native language is Mandarin Chinese. Seven test conditions were set with the fitting software; all conditions were created by switching on/off the respective channels in order to simulate different insertion positions. Mandarin CI users then completed four speech tests: a Vowel Identification test, a Consonant Identification test, a Tone Identification test (male speaker), and the Mandarin HINT test (SRS) in quiet and noise. Across all test conditions, the average score for vowel identification differed significantly, from 56% to 91% (rank sum test, P < 0.05). The average score for consonant identification differed significantly, from 72% to 85% (ANOVA, P < 0.05). The average score for tone identification did not differ significantly (ANOVA, P > 0.05); however, the more channels were activated, the higher the scores obtained, from 68% to 81%. This study shows that there is a correlation between insertion depth and speech recognition. Because all parts of the basilar membrane can help CI users improve their speech recognition ability, increasing insertion depth and actively stimulating the apical region of the cochlea is important for enhancing the verbal communication and social interaction abilities of CI users.
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
2009-04-01
The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. 
Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
[Characteristics, advantages, and limits of matrix tests].
Brand, T; Wagener, K C
2017-03-01
Deterioration of communication abilities due to hearing problems is particularly relevant in listening situations with noise. Therefore, speech intelligibility tests in noise are required for audiological diagnostics and evaluation of hearing rehabilitation. This study analyzed the characteristics of matrix tests assessing the 50% speech recognition threshold in noise. What are their advantages and limitations? Matrix tests are based on a matrix of 50 words (10 five-word sentences with the same grammatical structure). In the standard setting, 20 sentences are presented using an adaptive procedure estimating the individual 50% speech recognition threshold in noise. At present, matrix tests in 17 different languages are available, with high international comparability. The German-language matrix test (OLSA, male speaker) has a reference 50% speech recognition threshold of -7.1 (± 1.1) dB SNR. Before using a matrix test for the first time, the test person has to become familiar with the basic speech material using two training lists. Thereafter, matrix tests produce constant results even if repeated many times. Matrix tests are suitable for users of hearing aids and cochlear implants, particularly for assessment of benefit during the fitting process. Matrix tests can be performed in closed form and consequently with non-native listeners, even if the experimenter does not speak the test person's native language. Short versions of matrix tests are available for listeners with a shorter memory span, e.g., children.
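The adaptive tracking of a 50% speech recognition threshold described above can be sketched with a simulation. This is a much-simplified 1-up/1-down staircase against a logistic listener model; the real matrix-test procedure adapts on word-level scores with variable step sizes, so the psychometric slope, step size, and track length below are all illustrative assumptions (only the -7.1 dB reference SRT is taken from the abstract).

```python
import numpy as np

rng = np.random.default_rng(42)

def listener_correct(snr_db, srt_db=-7.1, slope=0.15):
    """Simulated listener: logistic psychometric function over SNR in dB,
    with 50% sentence recognition at srt_db (slope in proportion/dB)."""
    p = 1.0 / (1.0 + np.exp(-4.0 * slope * (snr_db - srt_db)))
    return rng.random() < p

def run_track(n_sentences=20, start_snr=0.0, step_db=2.0):
    """1-up/1-down staircase: make the task harder (lower SNR) after a
    correct response and easier after an error. The track oscillates
    around the 50% point, estimated here as the mean of the second half."""
    snr, track = start_snr, []
    for _ in range(n_sentences):
        track.append(snr)
        snr += -step_db if listener_correct(snr) else step_db
    return float(np.mean(track[n_sentences // 2:]))

estimates = [run_track() for _ in range(200)]
print(f"mean SRT estimate: {np.mean(estimates):.1f} dB SNR")  # near the true -7.1
```

The symmetric up/down rule converges on the 50% point by construction, which is why matrix tests can report a stable SRT from only 20 sentences once the listener is familiar with the material.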
Polat, Zahra; Bulut, Erdoğan; Ataş, Ahmet
2016-09-01
Spoken word recognition and speech perception tests in quiet are routinely used to assess the benefit that child and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high speech perception ability in quiet situations. Although these materials provide valuable information regarding Cochlear Implant (CI) users' performance in optimal listening conditions, they do not give realistic information about performance in the adverse listening conditions of the everyday environment. The aim of this study was to assess the speech intelligibility performance of postlingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Cross-sectional study. Thirty postlingual adult implant users, each with a minimum of one year of device experience, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. The results show a correlation between subjects' Pure Tone Average (PTA) values and Matrix Test Speech Reception Threshold (SRT) values in quiet; hence, it is also possible to estimate the PTA values of CI users using the Matrix Test. However, no correlation was found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions.
The Matrix Test can thus be used to assess the real-world benefit CI users receive from their systems, since it measures intelligibility with material representative of everyday listening and captures the difficulty of speech discrimination in the noisy conditions they must cope with.
Hearing Evaluation in Children (For Parents)
... be used to test hearing, depending on a child's age, development, and health status. During behavioral tests, an audiologist carefully watches a child respond to sounds like calibrated speech (speech that ...
The Influence of Private Speech on Writing Development: A Vygotskian Perspective.
ERIC Educational Resources Information Center
Schimmoeller, Margaret A.
This paper presents a portion of a larger study testing assumptions from Lev Vygotsky's spontaneous private speech theory and the relationship between private speech (overt self-talk) and writing development. Sixteen kindergarten and first-grade children were observed over time in natural classroom settings to note changes in private speech and…
Neural Coding of Formant-Exaggerated Speech in the Infant Brain
ERIC Educational Resources Information Center
Zhang, Yang; Koerner, Tess; Miller, Sharon; Grice-Patil, Zach; Svec, Adam; Akbari, David; Tusler, Liz; Carney, Edward
2011-01-01
Speech scientists have long proposed that formant exaggeration in infant-directed speech plays an important role in language acquisition. This event-related potential (ERP) study investigated neural coding of formant-exaggerated speech in 6-12-month-old infants. Two synthetic /i/ vowels were presented in alternating blocks to test the effects of…
The Tuning of Human Neonates' Preference for Speech
ERIC Educational Resources Information Center
Vouloumanos, Athena; Hauser, Marc D.; Werker, Janet F.; Martin, Alia
2010-01-01
Human neonates prefer listening to speech compared to many nonspeech sounds, suggesting that humans are born with a bias for speech. However, neonates' preference may derive from properties of speech that are not unique but instead are shared with the vocalizations of other species. To test this, thirty neonates and sixteen 3-month-olds were…
Vanryckeghem, Martine; Matthews, Michael; Xu, Peixin
2017-11-08
The aim of this study was to evaluate the usefulness of the Speech Situation Checklist for adults who stutter (SSC) in differentiating people who stutter (PWS) from speakers with no stutter based on self-reports of anxiety and speech disruption in communicative settings. The SSC's psychometric properties were examined, norms were established, and suggestions for treatment were formulated. The SSC was administered to 88 PWS seeking treatment and 209 speakers with no stutter between the ages of 18 and 62. The SSC consists of 2 sections investigating negative emotional reaction and speech disruption in 38 speech situations that are identical in both sections. The SSC-Emotional Reaction and SSC-Speech Disruption data show that these self-report tests differentiate PWS from speakers with no stutter to a statistically significant extent and have great discriminative value. The tests have good internal reliability, content, and construct validity. Age and gender do not affect the scores of the PWS. The SSC-Emotional Reaction and SSC-Speech Disruption seem to be powerful measures to investigate negative emotion and speech breakdown in an array of speech situations. The item scores give direction to treatment by suggesting speech situations that need a clinician's attention in terms of generalization and carry-over of within-clinic therapeutic gains into in vivo settings.
Maxillary dental arch dimensions in 6-year-old children with articulatory speech disorders.
Heliövaara, Arja
2011-01-01
To evaluate maxillary dental arch dimensions in 6-year-old children with articulatory speech disorders and to compare their dental arch dimensions with age- and sex-matched controls without speech disorders. Fifty-two children (15 girls) with errors in the articulation of the sounds /r/, /s/ or /l/ were compared retrospectively with age- and sex-matched controls from dental casts taken at a mean age of 6.4 years (range 5.0-8.4). All children with articulatory speech disorders had been referred to City of Helsinki Health Care, Dental Care Department by a phoniatrician or a speech therapist in order to get oral-motor activators (removable palatal plates) to be used in their speech therapy. A χ2-test and paired Student's t tests were used in the statistical analyses. The children with articulatory speech disorders had similar maxillary dental arch widths but smaller maxillary dental arch length than the controls. This small series suggests that 6-year-old children with articulatory speech disorders may have decreased maxillary dental arch length. Copyright © 2011 S. Karger AG, Basel.
ERIC Educational Resources Information Center
Blood, Gordon W.
1985-01-01
Results of a study involving 76 stutterers and 76 nonstutterers (seven to 15 years old) included (1) a right-ear preference for both groups; (2) differences in dichotic performance between stuttering and nonstuttering subjects; and (3) a relationship between stuttering severity and hemispheric dominance that depended on the manner of data analysis. (Author/CL)
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
ERIC Educational Resources Information Center
Kershner, John R.
2016-01-01
Rapidly changing environments in day-to-day activities, enriched with stimuli competing for attention, require a cognitive control mechanism to select relevant stimuli, ignore irrelevant stimuli, and shift attention between alternative features of the environment. Such attentional orchestration is essential to the acquisition of reading skills. In…
ERIC Educational Resources Information Center
Hahn, Constanze; Neuhaus, Andres H.; Pogun, Sakire; Dettling, Michael; Kotz, Sonja A.; Hahn, Eric; Brune, Martin; Gunturkun, Onur
2011-01-01
Schizophrenia has been associated with deficits in functional brain lateralization. According to some authors, the reduction of asymmetry could even promote this psychosis. At the same time, schizophrenia is accompanied by a high prevalence of nicotine dependency compared to any other population. This association is very interesting, because…
Selective attention and the auditory vertex potential. 1: Effects of stimulus delivery rate
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1975-01-01
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the latency component to stimuli observed for the attended ear.
Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism
ERIC Educational Resources Information Center
Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth
2010-01-01
The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…
Comparison of Psychophysiological and Dual-Task Measures of Listening Effort
ERIC Educational Resources Information Center
Seeman, Scott; Sims, Rebecca
2015-01-01
Purpose: We wished to make a comparison of psychophysiological measures of listening effort with subjective and dual-task measures of listening effort for a diotic-dichotic-digits and a sentences-in-noise task. Method: Three groups of young adults (18-38 years old) with normal hearing participated in three experiments: two psychophysiological…
Selective attention to emotional prosody in social anxiety: a dichotic listening study.
Peschard, Virginie; Gilboa-Schechtman, Eva; Philippot, Pierre
2017-12-01
The majority of evidence on social anxiety (SA)-linked attentional biases to threat comes from research using facial expressions. Emotions are, however, communicated through other channels, such as voice. Despite its importance in the interpretation of social cues, emotional prosody processing in SA has been barely explored. This study investigated whether SA is associated with enhanced processing of task-irrelevant angry prosody. Fifty-three participants with high and low SA performed a dichotic listening task in which pairs of male/female voices were presented, one to each ear, with either the same or different prosody (neutral or angry). Participants were instructed to focus on either the left or right ear and to identify the speaker's gender in the attended side. Our main results show that, once attended, task-irrelevant angry prosody elicits greater interference than does neutral prosody. Surprisingly, high socially anxious participants were less prone to distraction from attended-angry (compared to attended-neutral) prosody than were low socially anxious individuals. These findings emphasise the importance of examining SA-related biases across modalities.
The influence of memory and attention on the ear advantage in dichotic listening.
D'Anselmo, Anita; Marzoli, Daniele; Brancucci, Alfredo
2016-12-01
The role of memory retention and attentional control in hemispheric asymmetry was investigated using a verbal dichotic listening paradigm with the consonant-vowel syllables (/ba/, /da/, /ga/, /ka/, /pa/, and /ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in the FL condition. The REA increased significantly in the NF attention condition at the 3-s compared to the 0-s delay, and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention. Copyright © 2016 Elsevier B.V. All rights reserved.
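The ear advantages discussed above (REA/LEA) are commonly summarized with a laterality index, LI = (R − L) / (R + L), where R and L are correct reports from the right and left ear; positive values indicate a right ear advantage. The formula is standard in dichotic listening work, but the counts below are invented for illustration.

```python
def laterality_index(right_correct, left_correct):
    """Percent-scaled laterality index from correct-report counts:
    100 * (R - L) / (R + L). Positive = right ear advantage (REA),
    negative = left ear advantage (LEA)."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0  # no reports at all: undefined, report neutrality
    return 100.0 * (right_correct - left_correct) / total

print(laterality_index(28, 20))  # positive: REA
print(laterality_index(18, 26))  # negative: LEA
```

Normalizing by the total makes indices comparable across listeners who differ in overall accuracy, which matters when comparing NF, FL, and FR conditions.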
Understanding the abstract role of speech in communication at 12 months.
Martin, Alia; Onishi, Kristine H; Vouloumanos, Athena
2012-04-01
Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the understanding that speech permits the transfer of information about a Communicator's target object to a Recipient. Initially, the Communicator selectively grasped one of two objects. In test, the Communicator could no longer reach the objects. She then turned to the Recipient and produced speech (a nonsense word) or non-speech (coughing). Infants looked longer when the Recipient selected the non-target than the target object when the Communicator had produced speech but not coughing (Experiment 1). Looking time patterns differed from the speech condition when the Recipient rather than the Communicator produced the speech (Experiment 2), and when the Communicator produced a positive emotional vocalization (Experiment 3), but did not differ when the Recipient had previously received information about the target by watching the Communicator's selective grasping (Experiment 4). Thus infants understand the information-transferring properties of speech and recognize some of the conditions under which others' information states can be updated. These results suggest that infants possess an abstract understanding of the communicative function of speech, providing an important potential mechanism for language and knowledge acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Loukina, Anastassia; Buzick, Heather
2017-01-01
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
ERIC Educational Resources Information Center
Paatsch, Louise E.; Blamey, Peter J.; Sarant, Julia Z.; Martin, Lois F.A.; Bow, Catherine P.
2004-01-01
Open-set word and sentence speech-perception test scores are commonly used as a measure of hearing abilities in children and adults using cochlear implants and/or hearing aids. These tests are usually presented auditorily with a verbal response. In the case of children, scores are typically lower and more variable than for adults with hearing…
Parrell, Benjamin; Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B
2017-09-20
The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. 
Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of speech but retain the ability to make online feedback corrections; indeed, the patients show an increased sensitivity to feedback. These results indicate that the cerebellum forms a crucial part of the feedforward control system for speech but is not essential for online, feedback control. Copyright © 2017 the authors 0270-6474/17/379249-10$15.00/0.
Shen, Yi; Kern, Allison B.
2018-01-01
Individual differences in the recognition of monosyllabic words, either in isolation (NU6 test) or in sentence context (SPIN test), were investigated under the theoretical framework of the speech intelligibility index (SII). An adaptive psychophysical procedure, namely the quick-band-importance-function procedure, was developed to enable the fitting of the SII model to individual listeners. Using this procedure, the band importance function (i.e., the relative weights of speech information across the spectrum) and the link function relating the SII to recognition scores can be simultaneously estimated while requiring only 200 to 300 trials of testing. Octave-frequency band importance functions and link functions were estimated separately for NU6 and SPIN materials from 30 normal-hearing listeners who were naïve to speech recognition experiments. For each type of speech material, considerable individual differences in the spectral weights were observed in some but not all frequency regions. At frequencies where the greatest intersubject variability was found, the spectral weights were correlated between the two speech materials, suggesting that the variability in spectral weights reflected listener-originated factors. PMID:29532711
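The SII framework the study fits (a band-importance function plus a link function mapping SII to a recognition score) can be sketched as a weighted sum of per-band audibility. The octave-band weights and the exponential link constant below are illustrative assumptions, not the values fitted by the study's quick-band-importance-function procedure.

```python
import numpy as np

# Hypothetical importance weights for octave bands at 250, 500, 1000,
# 2000, and 4000 Hz. They sum to 1; the study estimates such weights per
# listener and per speech material (NU6 vs. SPIN).
importance = np.array([0.10, 0.20, 0.30, 0.25, 0.15])

def sii(audibility, weights=importance):
    """SII as the importance-weighted sum of per-band audibility values,
    each clipped to [0, 1]."""
    a = np.clip(np.asarray(audibility, dtype=float), 0.0, 1.0)
    return float(np.dot(weights, a))

def recognition_score(s, q=0.4):
    """Link function mapping SII to proportion correct, normalized so a
    fully audible signal (SII = 1) maps to 1. The exponential form and
    the constant q are illustrative assumptions."""
    return float((1 - np.exp(-s / q)) / (1 - np.exp(-1 / q)))

full = sii([1, 1, 1, 1, 1])     # fully audible speech: SII = 1
lowpass = sii([1, 1, 1, 0, 0])  # bands above 1 kHz inaudible
print(full, lowpass, round(recognition_score(lowpass), 3))
```

Individual differences in band importance then show up directly: two listeners with identical audibility profiles but different weight vectors predict different recognition scores for the same filtered speech.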
Davidow, Jason H; Grossman, Heather L; Edge, Robin L
2018-05-01
Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.
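The analysis pipeline above (an omnibus Friedman test over repeated-measures conditions, with follow-up pairwise comparisons when it is significant) can be illustrated with a stdlib-only sketch. The data below are fabricated for demonstration, not the study's measurements, and the simple ranking ignores ties.

```python
def friedman_statistic(*conditions):
    """Friedman chi-square statistic for k related samples (ties not handled)."""
    k, n = len(conditions), len(conditions[0])
    rank_sums = [0.0] * k
    for i in range(n):
        # rank the k condition values within each participant
        order = sorted(range(k), key=lambda j: conditions[j][i])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    mean_ranks = [rs / n for rs in rank_sums]
    return 12 * n / (k * (k + 1)) * sum((r - (k + 1) / 2) ** 2 for r in mean_ranks)

# Invented stuttering frequencies (% syllables) for 12 speakers in the
# control, 5%, 10%, and 15% bouncing conditions.
control = [8.1, 6.5, 9.0, 7.2, 10.3, 5.9, 8.8, 7.7, 6.9, 9.4, 8.2, 7.5]
vs5 = [5.0, 4.2, 6.1, 4.8, 7.0, 3.9, 5.7, 5.1, 4.5, 6.3, 5.4, 4.9]
vs10 = [4.1, 3.5, 5.2, 4.0, 6.1, 3.2, 4.8, 4.3, 3.7, 5.4, 4.5, 4.0]
vs15 = [3.6, 3.0, 4.7, 3.5, 5.5, 2.8, 4.2, 3.8, 3.2, 4.9, 4.0, 3.5]

stat = friedman_statistic(control, vs5, vs10, vs15)
significant = stat > 7.815  # chi-square critical value, df = 3, alpha = .05
```

In practice one would use `scipy.stats.friedmanchisquare` followed by `scipy.stats.wilcoxon` for the pairwise tests, as the study did.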
NASA Technical Reports Server (NTRS)
Simpson, Carol A.
1990-01-01
The U.S. Army Crew Station Research and Development Facility uses vintage 1984 speech recognizers. An evaluation was performed of newer off-the-shelf speech recognition devices to determine whether the performance and capabilities of newer technology are substantially better than those of the Army's current speech recognizers. The Phonetic Discrimination (PD-100) Test was used to compare recognizer performance in two ambient noise conditions: quiet office and helicopter noise. Test tokens were spoken by males and females in both isolated-word and connected-word modes. Better overall recognition accuracy was obtained from the newer recognizers. Recognizer capabilities needed to support the development of human factors design requirements for speech command systems in advanced combat helicopters are listed.
Qualitative Assessment of Speech Perception Performance of Early and Late Cochlear Implantees.
Kant, Anjali R; Pathak, Sonal
2015-09-01
The present study aims to provide a qualitative description and comparison of speech perception performance using model-based tests, the multisyllabic lexical neighborhood test (MLNT) and the lexical neighborhood test (LNT), in early- and late-implanted (prelingual) hearing-impaired children using cochlear implants. The subjects comprised cochlear implantees: Group I (early implantees), n = 15, 3-6 years of age, mean age at implantation 3½ years; Group II (late implantees), n = 15, 7-13 years of age, mean age at implantation 5 years. The tests were presented in a sound-treated room at 70 dB SPL. The children were instructed to repeat the words on hearing them. Responses were scored as the percentage of words correctly repeated, and their means were computed. The late implantees achieved higher scores for words on the MLNT than on the LNT. This may imply that late implantees make use of length cues to aid them in speech perception. The major phonological process used by early implantees was deletion, and by the late implantees, substitution. One needs to wait until the child achieves a score of 20% on the LNT before assessing other aspects of his/her speech perception abilities. There appears to be a need to use speech perception tests that are based on theoretical-empirical models, in order to enable a descriptive analysis of post-implant speech perception performance.
Speech recognition in individuals with sensorineural hearing loss.
de Andrade, Adriana Neves; Iorio, Maria Cecilia Martinelli; Gil, Daniela
2016-01-01
Hearing loss can negatively influence the communication performance of individuals, who should be evaluated with suitable material and in situations of listening close to those found in everyday life. To analyze and compare the performance of patients with mild-to-moderate sensorineural hearing loss in speech recognition tests carried out in silence and with noise, according to the variables ear (right and left) and type of stimulus presentation. The study included 19 right-handed individuals with mild-to-moderate symmetrical bilateral sensorineural hearing loss, submitted to the speech recognition test with words in different modalities and speech test with white noise and pictures. There was no significant difference between right and left ears in any of the tests. The mean number of correct responses in the speech recognition test with pictures, live voice, and recorded monosyllables was 97.1%, 85.9%, and 76.1%, respectively, whereas after the introduction of noise, the performance decreased to 72.6% accuracy. The best performances in the Speech Recognition Percentage Index were obtained using monosyllabic stimuli, represented by pictures presented in silence, with no significant differences between the right and left ears. After the introduction of competitive noise, there was a decrease in individuals' performance. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Dazert, Stefan; Thomas, Jan Peter; Büchner, Andreas; Müller, Joachim; Hempel, John Martin; Löwenheim, Hubert; Mlynski, Robert
2017-03-01
The RONDO is a single-unit cochlear implant audio processor that omits the need for a behind-the-ear (BTE) audio processor. The primary aim was to compare speech perception results in quiet and in noise with the RONDO and the OPUS 2, a BTE audio processor. Secondary aims were to determine subjects' self-assessed levels of sound quality and to gather subjective feedback on RONDO use. All speech perception tests were performed with the RONDO and the OPUS 2 at three test intervals. Subjects were required to use the RONDO between test intervals. Subjects were tested at upgrade from the OPUS 2 to the RONDO and at 1 and 6 months after upgrade. Speech perception was determined using the Freiburg Monosyllables test in quiet and the Oldenburg Sentence Test (OLSA) in noise. Subjective perception was determined using the Hearing Implant Sound Quality Index (HISQUI19) and a RONDO device-specific questionnaire. Fifty subjects participated in the study. Neither speech perception scores nor self-perceived sound quality scores differed significantly at any interval between the RONDO and the OPUS 2. Subjects reported high levels of satisfaction with the RONDO. The RONDO provides speech perception comparable to the OPUS 2 while providing users with high levels of satisfaction and comfort without increasing health risk. The RONDO is a suitable and safe alternative to traditional BTE audio processors.
Influences of selective adaptation on perception of audiovisual speech
Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.
2016-01-01
Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781
Free Speech and GWOT: Back to the Future?
2008-02-29
associated cases from the WWI era) focused on speech as evidence of a substantive crime (there, leaflets were proof that the accused was fomenting the...substantive crime – insurrection within the Army). In Gitlow and Whitney, there was no substantive crime for which speech was the evidence. The...substantive crime was the substance of the speech itself. That this test evaluated the content of the speech itself would later become a major criticism
Development of a speech autocuer
NASA Astrophysics Data System (ADS)
Bedles, R. L.; Kizakvich, P. N.; Lawson, D. T.; McCartney, M. L.
1980-12-01
A wearable, visually based prosthesis for the deaf based upon the proven method for removing lipreading ambiguity known as cued speech was fabricated and tested. Both software and hardware developments are described, including a microcomputer, display, and speech preprocessor.
Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance
ERIC Educational Resources Information Center
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-01-01
Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…
Family Pedigrees of Children with Suspected Childhood Apraxia of Speech
ERIC Educational Resources Information Center
Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy; Taylor, H. Gerry; Iyengar, Sudha; Shriberg, Lawrence D.
2004-01-01
Forty-two children (29 boys and 13 girls), ages 3-10 years, were referred from the caseloads of clinical speech-language pathologists for suspected childhood apraxia of speech (CAS). According to results from tests of speech and oral motor skills, 22 children met criteria for CAS, including a severely limited consonant and vowel repertoire,…
Reference-free automatic quality assessment of tracheoesophageal speech.
Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip
2009-01-01
Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.
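Quality-estimation studies like the one above are typically evaluated by correlating objective feature values against mean listener ratings. A minimal stdlib-only Pearson-correlation helper for such a feature-vs-rating comparison might look like this; the feature values and ratings below are invented, not data from the TE speech databases.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-utterance objective feature values and mean quality ratings
feature = [0.12, 0.30, 0.45, 0.51, 0.70, 0.88]
rating = [1.5, 2.1, 2.9, 3.2, 4.0, 4.6]
r = pearson(feature, rating)  # a strong feature tracks the ratings closely
```

The "correlation up to 0.9" reported above is exactly this statistic, computed between predicted and subjective quality.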
Neumann, K; Holler-Zittlau, I; van Minnen, S; Sick, U; Zaretsky, Y; Euler, H A
2011-01-01
The German Kindersprachscreening (KiSS) is a universal speech and language screening test for large-scale identification of Hessian kindergarten children requiring special educational language training or clinical speech/language therapy. To calculate the procedural screening validity, 257 children (aged 4.0 to 4.5 years) were tested using KiSS and four language tests (Reynell Developmental Language Scales III, Patholinguistische Diagnostik, PLAKSS, AWST-R). The majority or consensus judgements of three speech-language professionals, based on the language test results, served as the reference criterion. The base (fail) rates of the professionals were either self-determined or preset based on known prevalence rates. Screening validity was higher for preset than for self-determined base rates due to higher inter-judge agreement. The confusion matrices of the overall index classification of the KiSS (speech-language abnormalities with educational or clinical needs) against the fixed-base-rate expert judgement of language impairment, including fluency or voice disorders, yielded a sensitivity of 88% and a specificity of 78%; for language impairment alone, 84% and 75%, respectively. Specificities for disorders requiring clinical diagnostics in the KiSS (language impairment alone or combined with fluency/voice disorders), relative to the test-based consensus expert judgement, were approximately 93%. Sensitivities were unsatisfactory because the differentiation between educational and clinical needs requires improvement. Since the judgement concordances between the speech-language professionals were only moderate, the development of a comprehensive German reference test for speech and language disorders with evidence-based algorithmic decision rules, rather than subjective clinical judgement, is advocated.
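The sensitivity and specificity figures above come from a 2×2 confusion matrix of screening outcome against the expert reference judgement. The cell counts below are invented to illustrate the computation; they roughly reproduce the reported 88% sensitivity and 78% specificity for n = 257 but are not the study's actual tabulation.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 confusion-matrix counts."""
    sens = tp / (tp + fn)  # impaired children correctly flagged by the screen
    spec = tn / (tn + fp)  # unimpaired children correctly passed
    return sens, spec

# Hypothetical counts: 50 impaired and 207 unimpaired children (n = 257)
sens, spec = sensitivity_specificity(tp=44, fn=6, tn=161, fp=46)
```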
Effects of interior aircraft noise on speech intelligibility and annoyance
NASA Technical Reports Server (NTRS)
Pearsons, K. S.; Bennett, R. L.
1977-01-01
Recordings of the aircraft ambiance from ten different types of aircraft were used in conjunction with four distinct speech interference tests as stimuli to determine the effects of interior aircraft background levels and speech intelligibility on perceived annoyance in 36 subjects. Both speech intelligibility and background level significantly affected judged annoyance. However, the interaction between the two variables showed that above an 85 dB background level the speech intelligibility results had a minimal effect on annoyance ratings. Below this level, people rated the background as less annoying if there was adequate speech intelligibility.
ERIC Educational Resources Information Center
Ashwell, Tim; Elam, Jesse R.
2017-01-01
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
ERIC Educational Resources Information Center
De Felice, Rachele; Deane, Paul
2012-01-01
This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…
Dietrich, Susanne; Hertrich, Ingo; Müller-Dahlhaus, Florian; Ackermann, Hermann; Belardinelli, Paolo; Desideri, Debora; Seibold, Verena C; Ziemann, Ulf
2018-01-01
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient "virtual lesion" using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified by the percentage of correctly reproduced speech material. For high speech rates, subjects showed decreased performance after cTBS of pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of pre-SMA seems to affect its inhibitory function that normally eliminates erroneous speech material prior to speaking or, in case of perception, prior to encoding into a semantically/pragmatically meaningful message.
Motor functions and adaptive behaviour in children with childhood apraxia of speech.
Tükel, Şermin; Björelius, Helena; Henningsson, Gunilla; McAllister, Anita; Eliasson, Ann Christin
2015-01-01
Undiagnosed motor and behavioural problems have been reported for children with childhood apraxia of speech (CAS). This study aims to understand the extent of these problems by determining the profile of and relationships between speech/non-speech oral, manual and overall body motor functions and adaptive behaviours in CAS. Eighteen children (five girls and 13 boys) with CAS, 4 years 4 months to 10 years 6 months old, participated in this study. The assessments used were the Verbal Motor Production Assessment for Children (VMPAC), Bruininks-Oseretsky Test of Motor Proficiency (BOT-2) and Adaptive Behaviour Assessment System (ABAS-II). Median result of speech/non-speech oral motor function was between -1 and -2 SD of the mean VMPAC norms. For BOT-2 and ABAS-II, the median result was between the mean and -1 SD of test norms. However, on an individual level, many children had co-occurring difficulties (below -1 SD of the mean) in overall and manual motor functions and in adaptive behaviour, despite few correlations between sub-tests. In addition to the impaired speech motor output, children displayed heterogeneous motor problems suggesting the presence of a global motor deficit. The complex relationship between motor functions and behaviour may partly explain the undiagnosed developmental difficulties in CAS.
Hearing in Noise Test Brazil: standardization for young adults with normal hearing.
Sbompato, Andressa Forlevise; Corteletti, Lilian Cassia Bornia Jacob; Moret, Adriane de Lima Mortari; Jacob, Regina Tangerino de Souza
2015-01-01
Individuals with the same ability of speech recognition in quiet can have extremely different results in noisy environments. The aim was to standardize speech perception testing in adults with normal hearing in the free field using the Brazilian Hearing in Noise Test. Contemporary, cross-sectional cohort study. 79 adults with normal hearing and without cognitive impairment participated in the study. Lists of Hearing in Noise Test sentences were presented randomly in the quiet, noise-front, noise-right, and noise-left conditions. There were no significant differences between right and left ears at all frequencies tested (paired t-test), nor were significant differences observed when comparing gender and the interaction between these conditions. A difference was observed among the free-field positions tested, except between the noise-right and noise-left situations. Results of speech perception in adults with normal hearing in the free field during different listening situations in noise indicated poorer performance in the condition with noise and speech in front, i.e., 0°/0°. The values found in the standardization of the Hearing in Noise Test free field can be used as a reference in the development of protocols for tests of speech perception in noise, and for monitoring individuals with hearing impairment. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
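HINT-style speech reception thresholds are commonly measured with a simple adaptive rule: the signal-to-noise ratio is lowered after a correctly repeated sentence and raised after an incorrect one, and the SRT is the mean SNR over the later trials. The sketch below assumes a 1-down/1-up rule with a fixed 2-dB step; the rule parameters and the response sequence are illustrative, not the procedure parameters used in this standardization.

```python
def estimate_srt(responses, start_snr=0.0, step=2.0, skip=4):
    """Estimate an SRT (dB SNR) from a list of booleans (sentence correct?)."""
    snr, track = start_snr, []
    for correct in responses:
        track.append(snr)
        snr += -step if correct else step  # harder after success, easier after failure
    track.append(snr)  # SNR the next trial would have used
    tail = track[skip:]  # discard the initial approach trials
    return sum(tail) / len(tail)

srt = estimate_srt([True, True, False, True, False, False, True, True])
```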
Miyoshi, Masayuki; Fukuhara, Takahiro; Kataoka, Hideyuki; Hagino, Hiroshi
2016-04-01
The use of tracheoesophageal speech with voice prosthesis (T-E speech) after total laryngectomy has increased recently as a method of vocalization following laryngeal cancer. Previous research has not investigated the relationship between quality of life (QOL) and phonatory function in those using T-E speech. This study aimed to demonstrate the relationship between phonatory function and both comprehensive health-related QOL and QOL related to speech in people using T-E speech. The subjects of the study were 20 male patients using T-E speech after total laryngectomy. At a visit to our clinic, the subjects underwent a phonatory function test and completed three questionnaires: the MOS 8-Item Short-Form Health Survey (SF-8), the Voice Handicap Index-10 (VHI-10), and the Voice-Related Quality of Life (V-RQOL) Measure. A significant correlation was observed between the physical component summary (PCS), a summary score of SF-8, and VHI-10. Additionally, a significant correlation was observed between the SF-8 mental component summary (MCS) and both VHI-10 and VRQOL. Significant correlations were also observed between voice intensity in the phonatory function test and both VHI-10 and V-RQOL. Finally, voice intensity was significantly correlated with the SF-8 PCS. QOL questionnaires and phonatory function tests showed that, in people using T-E speech after total laryngectomy, voice intensity was correlated with comprehensive QOL, including physical and mental health. This finding suggests that voice intensity can be used as a performance index for speech rehabilitation.
Motor laterality as an indicator of speech laterality.
Flowers, Kenneth A; Hudson, John M
2013-03-01
The determination of speech laterality, especially where it is anomalous, is both a theoretical issue and a practical problem for brain surgery. Handedness is commonly thought to be related to speech representation, but exactly how is not clearly understood. This investigation analyzed handedness by preference rating and performance on a reliable task of motor laterality in 34 patients undergoing a Wada test, to see if they could provide an indicator of speech laterality. Hand usage preference ratings divided patients into left, right, and mixed in preference. Between-hand differences in movement time on a pegboard task determined motor laterality. Results were correlated (χ²) with speech representation as determined by a standard Wada test. It was found that patients whose between-hand difference in speed on the motor task was small or inconsistent were the ones whose Wada test speech representation was likely to be ambiguous or anomalous, whereas all those with a consistently large between-hand difference showed clear unilateral speech representation in the hemisphere controlling the better hand (χ² = 10.45, df = 1, p < .01, η² = 0.55). This relationship prevailed across hand preference and level of skill in the hands themselves. We propose that motor and speech laterality are related where they both involve central control of motor output sequencing, and that a measure of that aspect of the former will indicate the likely representation of the latter. A between-hand measure of motor laterality based on such a measure may indicate the possibility of anomalous speech representation. PsycINFO Database Record (c) 2013 APA, all rights reserved.
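The association above was tested with a chi-square statistic on a 2×2 contingency table crossing the motor-laterality measure (large vs. small/inconsistent between-hand difference) with the Wada outcome (clear unilateral vs. ambiguous/anomalous speech representation). The cell counts below are invented for illustration; they sum to the study's 34 patients but are not its actual table.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: large vs. small/inconsistent between-hand difference
# cols: clear unilateral vs. ambiguous speech representation
chi2 = chi_square_2x2(a=20, b=0, c=6, d=8)
significant = chi2 > 3.84  # chi-square critical value, df = 1, alpha = .05
```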
Rossi, N F; Giacheti, C M
2017-07-01
Williams syndrome (WS) phenotype is described as unique and intriguing. The aim of this study was to investigate the associations between speech-language abilities, general cognitive functioning and behavioural problems in individuals with WS, considering age effects and speech-language characteristics of WS sub-groups. The study's participants were 26 individuals with WS and their parents. General cognitive functioning was assessed with the Wechsler Intelligence Scale. Peabody Picture Vocabulary Test, Token Test and the Cookie Theft Picture test were used as speech-language measures. Five speech-language characteristics were evaluated from a 30-min conversation (clichés, echolalia, perseverative speech, exaggerated prosody and monotone intonation). The Child Behaviour Checklist (CBCL 6-18) was used to assess behavioural problems. Higher single-word receptive vocabulary and narrative vocabulary were negatively associated with CBCL T-scores for Social Problems, Aggressive Behaviour and Total Problems. Speech rate was negatively associated with the CBCL Withdrawn/Depressed T-score. Monotone intonation was associated with shy behaviour, as well as exaggerated prosody with talkative behaviour. WS with perseverative speech and exaggerated prosody presented higher scores on Thought Problems. Echolalia was significantly associated with lower Verbal IQ. No significant association was found between IQ and behaviour problems. Age-associated effects were observed only for the Aggressive Behaviour scale. Associations reported in the present study may represent an insightful background for future predictive studies of speech-language, cognition and behaviour problems in WS. © 2017 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina
2015-01-01
There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.
Kim, Soo Ji; Jo, Uiri
2013-01-01
Based on the anatomical and functional commonality between singing and speech, various types of musical elements have been employed in music therapy research for speech rehabilitation. This study aimed to develop an accent-based music speech protocol to address the voice problems of stroke patients with mixed dysarthria. Subjects were 6 stroke patients with mixed dysarthria who received individual music therapy sessions. Each session lasted 30 minutes, and 12 sessions, including the pre- and post-test, were administered for each patient. To examine the protocol's efficacy, the measures of maximum phonation time (MPT), fundamental frequency (F0), average intensity (dB), jitter, shimmer, noise-to-harmonics ratio (NHR), and diadochokinesis (DDK) were compared between the pre- and post-test and analyzed with a paired-sample t-test. The results showed that the measures of MPT, F0, dB, and sequential motion rates (SMR) increased significantly after administering the protocol. There were also statistically significant differences in the measures of shimmer and alternating motion rates (AMR) of the syllable /kʌ/ between the pre- and post-test. The results indicated that the accent-based music speech protocol may improve speech motor coordination, including respiration, phonation, articulation, resonance, and prosody, in patients with dysarthria. This suggests the possibility of utilizing the music speech protocol to maximize immediate treatment effects in the course of long-term treatment for patients with dysarthria.
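The pre/post comparison above rests on paired-sample t-tests over each acoustic measure. A stdlib-only paired t statistic can be sketched as follows; the maximum-phonation-time values below are invented to demonstrate the computation, not the patients' measurements.

```python
import math

def paired_t(pre, post):
    """Paired-sample t statistic; compare against the t critical value, df = n - 1."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical maximum phonation times (seconds) for 6 patients
pre_mpt = [6.1, 5.4, 7.0, 4.8, 6.5, 5.9]
post_mpt = [8.2, 7.1, 9.0, 6.3, 8.0, 7.4]
t = paired_t(pre_mpt, post_mpt)
significant = t > 2.571  # two-tailed critical value, df = 5, alpha = .05
```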
Automatic intelligibility classification of sentence-level pathological speech
Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.
2014-01-01
Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544
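The post-classification posterior smoothing described above refines each test sample's posterior using the posteriors of other, acoustically similar test samples. A minimal sketch of one such scheme, a k-nearest-neighbour average in feature space (the paper's exact smoothing rule, distance metric, and features are not specified here and are assumptions):

```python
import numpy as np

def smooth_posteriors(features, posteriors, k=3):
    """Refine each test sample's posterior by averaging it with the
    posteriors of its k nearest neighbours in the acoustic feature space.
    Illustrative k-NN scheme; the published smoothing rule may differ."""
    n = len(posteriors)
    smoothed = np.empty(n)
    for i in range(n):
        # Euclidean distances from sample i to every test sample
        d = np.linalg.norm(features - features[i], axis=1)
        nn = np.argsort(d)[:k + 1]           # nearest k plus the sample itself
        smoothed[i] = posteriors[nn].mean()  # average over the neighbourhood
    return smoothed
```

Samples that sit close together in the acoustic space are pulled toward a common posterior, which is the intuition behind reducing isolated classification errors.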
Smart command recognizer (SCR) - For development, test, and implementation of speech commands
NASA Technical Reports Server (NTRS)
Simpson, Carol A.; Bunnell, John W.; Krones, Robert R.
1988-01-01
The SCR, a rapid prototyping system for the development, testing, and implementation of speech commands in a flight simulator or test aircraft, is described. A single unit performs all functions needed during these three phases of system development, while the use of common software and speech command data structure files greatly reduces the preparation time for successive development phases. As a smart peripheral to a simulation or flight host computer, the SCR interprets the pilot's spoken input and passes command codes to the simulation or flight computer.
NASA Technical Reports Server (NTRS)
Olorenshaw, Lex; Trawick, David
1991-01-01
The purpose was to develop a speech recognition system able to detect incorrectly pronounced speech, given that the text of the spoken speech is known to the recognizer. Better mechanisms are provided for using speech recognition in a literacy tutor application. Using a combination of scoring normalization techniques and cheater-mode decoding, a reasonable acceptance/rejection threshold was obtained. In continuous speech, the system was able to correctly accept above 80% of words while correctly rejecting over 80% of incorrectly pronounced words.
Methods of speech rate analysis: a pilot study.
Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea
2016-01-01
To describe the performance of fluent adults on different measures of speech rate. The study included 24 fluent adults of both genders, speakers of Brazilian Portuguese, born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), the speech rate in phonemes per second and the articulation rate with and without disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was set at 5%. There were significant differences between measures of speech rate involving syllables. The multiple comparisons showed that all three measures differed from one another. There was no effect of age on the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rate with and without disfluencies can be a complementary approach in the evaluation of speech rate.
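The speech-rate measures listed above reduce to ratios of linguistic-unit counts to elapsed time. An illustrative sketch, assuming hypothetical counts and timings (the protocol's exact operational definitions, such as how pauses and disfluencies are segmented, are not reproduced here):

```python
def speech_rate_measures(words, syllables, phonemes, disfluent_syllables,
                         total_time_s, pause_time_s):
    """Speech-rate measures of the kind described in the study.
    Illustrative formulas only; definitions are assumptions, not the
    protocol's exact specification."""
    minutes = total_time_s / 60.0
    return {
        "words_per_min": words / minutes,
        "syllables_per_min": syllables / minutes,
        "phonemes_per_s": phonemes / total_time_s,
        # articulation rate: syllables over speaking time, pauses excluded
        "artic_rate_with_disfl": syllables / (total_time_s - pause_time_s),
        # same, but disfluent syllables removed from the count
        "artic_rate_without_disfl": (syllables - disfluent_syllables)
                                    / (total_time_s - pause_time_s),
    }
```

For a hypothetical one-minute sample of 100 words, 180 syllables, and 420 phonemes with 10 s of pauses, the speech rate is 100 words/min and 7 phonemes/s, while the articulation rate is computed over the remaining 50 s of speaking time.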
Measures for assessing architectural speech security (privacy) of closed offices and meeting rooms.
Gover, Bradford N; Bradley, John S
2004-12-01
Objective measures were investigated as predictors of the speech security of closed offices and rooms. A new signal-to-noise type measure is shown to be a superior indicator for security than existing measures such as the Articulation Index, the Speech Intelligibility Index, the ratio of the loudness of speech to that of noise, and the A-weighted level difference of speech and noise. This new measure is a weighted sum of clipped one-third-octave-band signal-to-noise ratios; various weightings and clipping levels are explored. Listening tests had 19 subjects rate the audibility and intelligibility of 500 English sentences, filtered to simulate transmission through various wall constructions, and presented along with background noise. The results of the tests indicate that the new measure is highly correlated with sentence intelligibility scores and also with three security thresholds: the threshold of intelligibility (below which speech is unintelligible), the threshold of cadence (below which the cadence of speech is inaudible), and the threshold of audibility (below which speech is inaudible). The ratio of the loudness of speech to that of noise, and simple A-weighted level differences are both shown to be well correlated with these latter two thresholds (cadence and audibility), but not well correlated with intelligibility.
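The new measure described above takes the form of a weighted sum of clipped one-third-octave-band signal-to-noise ratios. A minimal sketch of that form with placeholder weights and clipping level (the published weightings, clipping levels, and the direction of clipping are not reproduced here; all numeric choices below are assumptions):

```python
def security_measure(speech_levels_db, noise_levels_db, weights, clip_db=0.0):
    """Weighted sum of clipped one-third-octave-band SNRs.
    Sketch of the measure's general form; weights and the clipping
    ceiling are placeholders, and clipping from above is an assumption."""
    total = 0.0
    for s, n, w in zip(speech_levels_db, noise_levels_db, weights):
        snr = s - n              # band signal-to-noise ratio in dB
        snr = min(snr, clip_db)  # clip: bands above the ceiling count equally
        total += w * snr
    return total
```

Lower (more negative) values correspond to speech that is better masked by the background noise, which is the direction of greater speech security.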
Carroll, Jeff; Zeng, Fan-Gang
2007-01-01
Increasing the number of channels at low frequencies improves discrimination of fundamental frequency (F0) in cochlear implants [Geurts and Wouters 2004]. We conducted three experiments to test whether improved F0 discrimination can be translated into increased speech intelligibility in noise in a cochlear implant simulation. The first experiment measured F0 discrimination and speech intelligibility in quiet as a function of channel density over different frequency regions. The results from this experiment showed a tradeoff in performance between F0 discrimination and speech intelligibility with a limited number of channels. The second experiment tested whether improved F0 discrimination and optimizing this tradeoff could improve speech performance with a competing talker. However, improved F0 discrimination did not improve speech intelligibility in noise. The third experiment identified the critical number of channels needed at low frequencies to improve speech intelligibility in noise. The result showed that, while 16 channels below 500 Hz were needed to observe any improvement in speech intelligibility in noise, even 32 channels did not achieve normal performance. Theoretically, these results suggest that without accurate spectral coding, F0 discrimination and speech perception in noise are two independent processes. Practically, the present results illustrate the need to increase the number of independent channels in cochlear implants. PMID:17604581
Extensions to the Speech Disorders Classification System (SDCS)
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
This report describes three extensions to a classification system for pediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three subtypes of motor speech disorders. Part II describes the Madison Speech Assessment Protocol (MSAP), an approximately two-hour battery of 25 measures that includes 15 speech tests and tasks. Part III describes the Competence, Precision, and Stability Analytics (CPSA) framework, a current set of approximately 90 perceptual- and acoustic-based indices of speech, prosody, and voice used to quantify and classify subtypes of Speech Sound Disorders (SSD). A companion paper, Shriberg, Fourakis, et al. (2010) provides reliability estimates for the perceptual and acoustic data reduction methods used in the SDCS. The agreement estimates in the companion paper support the reliability of SDCS methods and illustrate the complementary roles of perceptual and acoustic methods in diagnostic analyses of SSD of unknown origin. Examples of research using the extensions to the SDCS described in the present report include diagnostic findings for a sample of youth with motor speech disorders associated with galactosemia (Shriberg, Potter, & Strand, 2010) and a test of the hypothesis of apraxia of speech in a group of children with autism spectrum disorders (Shriberg, Paul, Black, & van Santen, 2010). All SDCS methods and reference databases running in the PEPPER (Programs to Examine Phonetic and Phonologic Evaluation Records; [Shriberg, Allen, McSweeny, & Wilson, 2001]) environment will be disseminated without cost when complete. PMID:20831378
ERIC Educational Resources Information Center
Ashtiani, Farshid Tayari; Zafarghandi, Amir Mahdavi
2015-01-01
The present study was an attempt to investigate the impact of English verbal songs on connected speech aspects of adult English learners' speech production. 40 participants were selected based on the results of their performance in a piloted and validated version of NELSON test given to 60 intermediate English learners in a language institute in…
ERIC Educational Resources Information Center
Messaoud-Galusi, Souhila; Hazan, Valerie; Rosen, Stuart
2011-01-01
Purpose: The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 children with dyslexia and 51 average readers matched in age. Method: To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using 8 different…
ERIC Educational Resources Information Center
Strand, Edythe A.; McCauley, Rebecca J.; Weigand, Stephen D.; Stoeckel, Ruth E.; Baas, Becky S.
2013-01-01
Purpose: In this article, the authors report reliability and validity evidence for the Dynamic Evaluation of Motor Speech Skill (DEMSS), a new test that uses dynamic assessment to aid in the differential diagnosis of childhood apraxia of speech (CAS). Method: Participants were 81 children between 36 and 79 months of age who were referred to the…
ERIC Educational Resources Information Center
Macrae, Toby; Tyler, Ann A.
2014-01-01
Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…
ERIC Educational Resources Information Center
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2013-01-01
Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…
Auditory Speech Perception Tests in Relation to the Coding Strategy in Cochlear Implant.
Bazon, Aline Cristine; Mantello, Erika Barioni; Gonçales, Alina Sanches; Isaac, Myriam de Lima; Hyppolito, Miguel Angelo; Reis, Ana Cláudia Mirândola Barbosa
2016-07-01
The objective of evaluating the auditory perception of cochlear implant users is to determine how the acoustic signal is processed, leading to the recognition and understanding of sound. To investigate differences in auditory speech perception in individuals with postlingual hearing loss wearing a cochlear implant, using two different speech coding strategies, and to analyze speech perception and handicap perception in relation to the strategy used. This is a prospective, descriptive, cross-sectional study. We selected ten cochlear implant users, who were characterized by hearing threshold, the application of speech perception tests, and the Hearing Handicap Inventory for Adults. There was no significant difference when comparing the variables subject age, age at acquisition of hearing loss, etiology, time of hearing deprivation, time of cochlear implant use, and mean hearing threshold with the cochlear implant across the shift in speech coding strategy. There was no relationship between the lack of handicap perception and the improvement in speech perception with either speech coding strategy. There was no significant difference between the strategies evaluated, and no relationship was observed between them and the variables studied.
Dwivedi, Raghav C; St Rose, Suzanne; Chisholm, Edward J; Bisase, Brian; Amen, Furrat; Nutting, Christopher M; Clarke, Peter M; Kerawala, Cyrus J; Rhys-Evans, Peter H; Harrington, Kevin J; Kazi, Rehan
2012-06-01
The aim of this study was to explore post-treatment speech impairments using the English version of the Speech Handicap Index (SHI), the first speech-specific questionnaire, in a cohort of oral cavity (OC) and oropharyngeal (OP) cancer patients. Sixty-three consecutive OC and OP cancer patients in follow-up participated in this study. Descriptive analyses are presented as percentages, while the Mann-Whitney U-test and Kruskal-Wallis test were used for the quantitative variables. Statistical Package for the Social Sciences 15 (SPSS Inc., Chicago, IL) was used for the statistical analyses. Over a third (36.1%) of patients reported their speech as either average or bad. Speech intelligibility and articulation were the main speech concerns for 58.8% and 52.9% of OC and 31.6% and 34.2% of OP cancer patients, respectively, while feeling incompetent and being less outgoing were the speech-related psychosocial concerns for 64.7% and 23.5% of OC and 15.8% and 18.4% of OP cancer patients, respectively. Worse speech outcomes were noted for oral tongue and base of tongue cancers vs. tonsillar cancers, with mean (SD) values of 56.7 (31.3) and 52.0 (38.4) vs. 10.9 (14.8) (P<0.001), and for late vs. early T stage cancers, 65.0 (29.9) vs. 29.3 (32.7) (P<0.005). The English version of the SHI is a reliable, valid and useful tool for the evaluation of speech in HNC patients. Over one-third of OC and OP cancer patients reported speech problems in their day-to-day life. Advanced T-stage tumors affecting the oral tongue or base of tongue are particularly associated with poor speech outcomes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Electroacoustic verification of frequency modulation systems in cochlear implant users.
Fidêncio, Vanessa Luisa Destro; Jacob, Regina Tangerino de Souza; Tanamati, Liége Franzini; Bucuvic, Érika Cristina; Moret, Adriane Lima Mortari
2017-12-26
The frequency modulation system is a device that helps to improve speech perception in noise and is considered the most beneficial approach to improving speech recognition in noise for cochlear implant users. According to guidelines, a check must be performed before fitting the frequency modulation system. Although there are recommendations regarding the behavioral tests that should be performed when fitting the frequency modulation system to cochlear implant users, there are no published recommendations regarding the electroacoustic test that should be performed. To perform and determine the validity of an electroacoustic verification test for frequency modulation systems coupled to different cochlear implant speech processors. The sample included 40 participants between 5 and 18 years of age who used four different models of speech processors. For the electroacoustic evaluation, we used the Audioscan Verifit device with the HA-1 coupler and the listening check devices corresponding to each speech processor model. In cases where transparency was not achieved, the frequency modulation gain was adjusted, and we used the Brazilian version of the "Phrases in Noise Test" to evaluate speech perception in competing noise. Transparency between the frequency modulation system and the cochlear implant was observed in 85% of the participants evaluated. After adjusting the gain of the frequency modulation receiver in the remaining participants, the devices showed transparency when the electroacoustic verification test was repeated. Patients also demonstrated better performance in speech perception in noise after the new adjustment; in these cases, electroacoustic transparency produced behavioral transparency. The suggested electroacoustic evaluation protocol was effective in evaluating transparency between the frequency modulation system and the cochlear implant.
Performing the adjustment of the speech processor and the frequency modulation system gain is essential when fitting this device. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de
2017-06-08
To present a guide with recommendations for the translation, adaptation, elaboration, and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines focused on the elaboration, translation, cross-cultural adaptation, and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and cross-cultural adaptation and the other for obtaining evidence of validity, reliability, and accuracy of the tests. A guide with norms for organizing and systematizing the process of elaboration, translation, cross-cultural adaptation, and validation of tests in Speech and Language Pathology was created.
Switching in the Cocktail Party: Exploring Intentional Control of Auditory Selective Attention
ERIC Educational Resources Information Center
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
2011-01-01
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
ERIC Educational Resources Information Center
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were…
Corpus Callosum Size is Linked to Dichotic Deafness and Hemisphericity, Not Sex or Handedness
ERIC Educational Resources Information Center
Morton, Bruce E.; Rafto, Stein E.
2006-01-01
Individuals differ in the number of corpus callosum (CC) nerve fibers interconnecting their cerebral hemispheres by about threefold. Early reports suggested that males had smaller CCs than females. This was often interpreted to support the concept that the male brain is more "lateralized" or "specialized," thus accounting for presumed male…
Examining Lateralized Lexical Ambiguity Processing Using Dichotic and Cross-Modal Tasks
ERIC Educational Resources Information Center
Atchley, Ruth Ann; Grimshaw, Gina; Schuster, Jonathan; Gibson, Linzi
2011-01-01
The individual roles played by the cerebral hemispheres during the process of language comprehension have been extensively studied in tasks that require individuals to read text (for review see Jung-Beeman, 2005). However, it is not clear whether or not some aspects of the theorized laterality models of semantic comprehension are a result of the…
ERIC Educational Resources Information Center
Techentin, Cheryl; Voyer, Daniel; Klein, Raymond M.
2009-01-01
The present study investigated the influence of within- and between-ear congruency on interference and laterality effects in an auditory semantic/prosodic conflict task. Participants were presented dichotically with words (e.g., mad, sad, glad) pronounced in either congruent or incongruent emotional tones (e.g., angry, happy, or sad) and…
Attention and Cognitive Control Networks Assessed in a Dichotic Listening fMRI Study
ERIC Educational Resources Information Center
Falkenberg, Liv E.; Specht, Karsten; Westerhausen, Rene
2011-01-01
A meaningful interaction with our environment relies on the ability to focus on relevant sensory input and to ignore irrelevant information, i.e. top-down control and attention processes are employed to select from competing stimuli following internal goals. In this, the demands for the recruitment of top-down control processes depend on the…
ERIC Educational Resources Information Center
Brancucci, Alfredo; Tommasi, Luca
2011-01-01
For about two decades, neuroscientists have systematically addressed the problem of consciousness: the aim is to discover the neural activity specifically related to conscious perceptions, i.e., the biological properties of what philosophers call qualia. In this view, a neural correlate of consciousness (NCC) is a precise pattern of brain activity…
ERIC Educational Resources Information Center
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W.
2013-01-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in…
ERIC Educational Resources Information Center
Bomba, Marie D.; Singhal, Anthony
2010-01-01
Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes with dichotic listening has shown effects on the late component (Ndl) of the negative difference selective attention waveform, but no effects on the early (Nde) response, suggesting that the Ndl, but not the Nde, is affected by non-spatial…
Kramer, Sophia E; Teunissen, Charlotte E; Zekveld, Adriana A
2016-01-01
Pupillometry is one method that has been used to measure the processing load expended during speech understanding. Notably, speech perception (in noise) tasks can evoke a pupil response. It is not known whether there is concurrent activation of the sympathetic nervous system as indexed by salivary cortisol and chromogranin A (CgA) and whether such activation differs between normally hearing (NH) and hard-of-hearing (HH) adults. Ten NH adults and 10 adults with mild-to-moderate hearing loss (mean age 52 years) participated. Two speech perception tests were administered in random order: one in quiet targeting 100% correct performance and one in noise targeting 50% correct performance. Pupil responses and salivary samples for cortisol and CgA analyses were collected four times: before testing, after each of the two speech perception tests, and at the end of the session. Participants rated their perceived accuracy, effort, and motivation. Effects were examined using repeated-measures analyses of variance. Correlations between outcomes were calculated. HH listeners had smaller peak pupil dilations (PPDs) than NH listeners in the speech-in-noise condition only. No group or condition effects were observed for the cortisol data, but HH listeners tended to have higher cortisol levels across conditions. CgA levels were larger at the pretesting time than at the three other test times. Hearing impairment did not affect CgA. Self-rated motivation correlated most often with cortisol or PPD values. The three physiological indicators of cognitive load and stress (PPD, cortisol, and CgA) are not equally affected by speech testing or hearing impairment. Each of them seems to capture a different dimension of sympathetic nervous system activity.
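The peak pupil dilation (PPD) used above is commonly computed as the maximum pupil diameter during a trial after subtracting a pre-trial baseline. A minimal sketch under that common definition (the study's exact baseline window, trial segmentation, and preprocessing such as blink interpolation are assumptions and are omitted):

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation from a pupil-diameter trace:
    the peak during the trial minus the mean of a pre-trial baseline.
    A common definition; window lengths here are illustrative."""
    n_base = int(baseline_s * fs)
    baseline = np.mean(trace[:n_base])        # mean pre-stimulus diameter
    return float(np.max(trace[n_base:]) - baseline)
```

Larger PPD values are typically interpreted as greater processing load during the listening task.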
Sanguebuche, Taissane Rodrigues; Peixe, Bruna Pias; Bruno, Rúbia Soares; Biaggio, Eliara Pinto Vieira; Garcia, Michele Vargas
2018-01-01
Introduction The auditory system consists of sensory structures and central connections. The evaluation of the auditory pathway at the central level can be performed through behavioral and electrophysiological tests, because they are complementary to each other and provide important information about comprehension. Objective To correlate the findings of speech-evoked brainstem response audiometry with the behavioral tests Random Gap Detection Test and Masking Level Difference in adults with hearing loss. Methods All patients were submitted to a basic audiological evaluation, to the aforementioned behavioral tests, and to an electrophysiological assessment by means of click-evoked and speech-evoked brainstem response audiometry. Results There were no statistically significant correlations between the electrophysiological test and the behavioral tests. However, there was a significant correlation between the V and A waves, as well as between the D and F waves, of the speech-evoked brainstem response audiometry peaks. These correlations are positive, indicating that an increase in one variable implies an increase in the other, and vice versa. Conclusion It was possible to compare the findings of the speech-evoked brainstem response audiometry with those of the behavioral tests Random Gap Detection and Masking Level Difference. However, there was no statistically significant correlation between them. This shows that the electrophysiological evaluation does not depend uniquely on the behavioral skills of temporal resolution and selective attention. PMID:29379574
A Review of Standardized Tests of Nonverbal Oral and Speech Motor Performance in Children
ERIC Educational Resources Information Center
McCauley, Rebecca J.; Strand, Edythe A.
2008-01-01
Purpose: To review the content and psychometric characteristics of 6 published tests currently available to aid in the study, diagnosis, and treatment of motor speech disorders in children. Method: We compared the content of the 6 tests and critically evaluated the degree to which important psychometric characteristics support the tests' use for…
Psychometric Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Flipsen, Peter, Jr.; Ogiela, Diane A.
2015-01-01
Purpose: Our understanding of test construction has improved since the now-classic review by McCauley and Swisher (1984). The current review article examines the psychometric characteristics of current single-word tests of speech sound production in an attempt to determine whether our tests have improved since then. It also provides a resource…
Salivary testosterone levels are unrelated to handedness or cerebral lateralization for language.
Papadatou-Pastou, Marietta; Martin, Maryanne; Mohr, Christine
2017-03-01
Behavioural and cerebral lateralization are thought to be controlled, at least in part, by prenatal testosterone (T) levels, explaining why sex differences are found in both laterality traits. The present study investigated hormonal effects on laterality using adult salivary T levels, to explore the adequacy of competing theories: the Geschwind, Behan and Galaburda, the callosal, and the sexual differentiation hypotheses. Sixty participants (15 right-handers and 15 left-handers of each sex) participated. Behavioural lateralization was studied by means of hand preference tests (i.e., the Edinburgh Handedness Inventory and the Quantification of Hand Preference test) and a hand skill test (i.e., the Peg-Moving test) whereas cerebral lateralization for language was studied using the Consonant-Vowel Dichotic Listening test and the Visual Half-Field Lexical Decision test. Salivary T and cortisol (C) concentrations were measured by luminescence immunoassay. Canonical correlations did not reveal significant relationships between T levels and measures of hand preference, hand skill, or language laterality. Thus, our findings add to the growing literature showing no relationship between T concentrations with behavioural or cerebral lateralization. It is claimed that prenatal T is not a major determinant of individual variability in either behavioural or cerebral lateralization.
Design of a robust baseband LPC coder for speech transmission over 9.6 kbit/s noisy channels
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Russell, W. H.; Higgins, A. L.
1982-04-01
This paper describes the design of a baseband Linear Predictive Coder (LPC) which transmits speech over 9.6 kbit/sec synchronous channels with random bit errors of up to 1%. Presented are the results of our investigation of a number of aspects of the baseband LPC coder with the goal of maximizing the quality of the transmitted speech. Important among these aspects are: bandwidth of the baseband, coding of the baseband residual, high-frequency regeneration, and error protection of important transmission parameters. The paper discusses these and other issues, presents the results of speech-quality tests conducted during the various stages of optimization, and describes the details of the optimized speech coder. This optimized speech coding algorithm has been implemented as a real-time full-duplex system on an array processor. Informal listening tests of the real-time coder have shown that the coder produces good speech quality in the absence of channel bit errors and introduces only a slight degradation in quality for channel bit error rates of up to 1%.
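Baseband LPC coders of this kind rest on linear predictive analysis of each speech frame. A generic textbook sketch of the Levinson-Durbin recursion that obtains LPC coefficients from a frame's autocorrelation (illustrative only, not the specific coder described above):

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Levinson-Durbin recursion for LPC analysis: solve for the
    predictor polynomial a = [1, a1, ..., ap] and the residual energy.
    Generic textbook form, not the paper's optimized coder."""
    # autocorrelation of the (already windowed) frame, lags 0..order
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # reflection coefficient for stage i
        k = -(r[i] + np.dot(a[1:i], r[i-1:0:-1])) / err
        # update predictor coefficients (a[i] picks up k itself)
        a[1:i+1] = a[1:i+1] + k * a[i-1::-1][:i]
        err *= (1.0 - k * k)
    return a, err
```

In a baseband coder, the low-frequency residual obtained by filtering the speech through this predictor is what gets quantized and transmitted; the decoder regenerates the high band and resynthesizes speech through the inverse filter.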
The role of accent imitation in sensorimotor integration during processing of intelligible speech
Adank, Patti; Rueschemeyer, Shirley-Ann; Bekkering, Harold
2013-01-01
Recent theories on how listeners maintain perceptual invariance despite variation in the speech signal allocate a prominent role to imitation mechanisms. Notably, these simulation accounts propose that motor mechanisms support perception of ambiguous or noisy signals. Indeed, imitation of ambiguous signals, e.g., accented speech, has been found to aid effective speech comprehension. Here, we explored the possibility that imitation in speech benefits perception by increasing activation in speech perception and production areas. Participants rated the intelligibility of sentences spoken in an unfamiliar accent of Dutch in a functional Magnetic Resonance Imaging experiment. Next, participants in one group repeated the sentences in their own accent, while a second group vocally imitated the accent. Finally, both groups rated the intelligibility of accented sentences in a post-test. The neuroimaging results showed an interaction between type of training and pre- and post-test sessions in left Inferior Frontal Gyrus, Supplementary Motor Area, and left Superior Temporal Sulcus. Although alternative explanations such as task engagement and fatigue need to be considered as well, the results suggest that imitation may aid effective speech comprehension by supporting sensorimotor integration. PMID:24109447
Performance of a low data rate speech codec for land-mobile satellite communications
NASA Technical Reports Server (NTRS)
Gersho, Allen; Jedrey, Thomas C.
1990-01-01
In an effort to foster the development of new technologies for the emerging land mobile satellite communications services, JPL funded two development contracts in 1984: one to the Univ. of Calif., Santa Barbara and the other to the Georgia Inst. of Technology, to develop algorithms and real time hardware for near toll quality speech compression at 4800 bits per second. Both universities have developed and delivered speech codecs to JPL, and the UCSB codec was extensively tested by JPL in a variety of experimental setups. The basic UCSB speech codec algorithms and the test results of the various experiments performed with this codec are presented.
Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds
Agnew, Z.K.; McGettigan, C.; Scott, S.K.
2012-01-01
Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557
Estimating psycho-physiological state of a human by speech analysis
NASA Astrophysics Data System (ADS)
Ronzhin, A. L.
2005-05-01
Adverse effects of intoxication, fatigue and boredom can degrade the performance of highly trained operators of complex technical systems, with potentially catastrophic consequences. Existing physiological fitness-for-duty tests are time consuming, costly, invasive, and highly unpopular. Known non-physiological tests constitute a secondary task and interfere with the busy workload of the tested operator. Various attempts to assess the current status of the operator by processing "normal operational data" often lead to an excessive amount of computation, poorly justified metrics, and ambiguous results. At the same time, speech analysis presents a natural, non-invasive approach based upon well-established, efficient data processing, and it supports both behavioral and physiological biometrics. This paper presents an approach that facilitates a robust speech analysis/understanding process in spite of natural speech variability and background noise. Automatic speech recognition is suggested as a technique for detecting changes in the psycho-physiological state of a human that typically manifest themselves as changes in the characteristics of the vocal tract and in the semantic-syntactic connectivity of conversation. Preliminary tests confirmed a statistically significant correlation between the error rate of automatic speech recognition and the extent of alcohol intoxication. In addition, the data obtained allowed some interesting correlations to be explored and some quantitative models to be established. It is proposed to utilize this approach as part of a fitness-for-duty test and to compare its efficiency with analyses of iris, face geometry, thermography and other popular non-invasive biometric techniques.
Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.
2012-01-01
Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.
2017-01-01
The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597
Lorens, Artur; Zgoda, Małgorzata; Obrycka, Anita; Skarżynski, Henryk
2010-12-01
Presently, there are only a few studies examining the benefits of fine-structure information in coding strategies. Against this background, this study aims to assess the objective and subjective performance of children experienced with the C40+ cochlear implant using the CIS+ coding strategy who were upgraded to the OPUS 2 processor using FSP and HDCIS. In this prospective study, 60 children with more than 3.5 years of experience with the C40+ cochlear implant were upgraded to the OPUS 2 processor and fit and tested with HDCIS (Interval I). After 3 months of experience with HDCIS, they were fit with the FSP coding strategy (Interval II) and tested with all strategies (FSP, HDCIS, CIS+). After an additional 3-4 months, they were assessed on all three strategies and asked to choose their take-home strategy (Interval III). The children were tested using the Adaptive Auditory Speech Test, which measures speech reception threshold (SRT) in quiet and noise, at each test interval. The children were also asked to rate on a Visual Analogue Scale their satisfaction and coding strategy preference when listening to speech and a pop song. However, since not all tests could be performed in one single visit, some children were not able to complete all tests at all intervals. At the study endpoint, speech in quiet showed a significant difference in SRT of 1.0 dB between FSP and HDCIS, with FSP performing better. FSP proved a better strategy compared with CIS+, showing SRT results 5.2 dB lower. Speech-in-noise tests showed FSP to be significantly better than CIS+ by 0.7 dB, and HDCIS to be significantly better than CIS+ by 0.8 dB. Both satisfaction and coding strategy preference ratings also revealed that the FSP and HDCIS strategies were better than the CIS+ strategy when listening to speech and music. FSP was better than HDCIS when listening to speech.
This study demonstrates that long-term pediatric users of the COMBI 40+ are able to upgrade to a newer processor and coding strategy without compromising their listening performance and even improving their performance with FSP after a short time of experience. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dat, Tran Huy; Takeda, Kazuya; Itakura, Fumitada
We present a multichannel speech enhancement method based on MAP speech spectral magnitude estimation using a generalized gamma model of the speech prior distribution, where the model parameters are adapted from the actual noisy speech in a frame-by-frame manner. The utilization of a more general prior distribution with its online adaptive estimation is shown to be effective for speech spectral estimation in noisy environments. Furthermore, the multichannel information, in terms of cross-channel statistics, is shown to be useful for better adapting the prior distribution parameters to the actual observation, resulting in better performance of the speech enhancement algorithm. We tested the proposed algorithm on an in-car speech database and obtained significant improvements in speech recognition performance, particularly under non-stationary noise conditions such as music, air conditioners and open windows.
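The adaptive MAP estimator with a generalized gamma prior is too involved for a short sketch, but the kind of single-channel baseline it improves on, a fixed-prior spectral gain of the Wiener type, can be shown. Everything below (the function name, the maximum-likelihood a priori SNR estimate, the flooring) is an illustrative assumption, not the authors' method.

```python
import numpy as np

def wiener_enhance(noisy_mag, noise_mag, xi_floor=1e-3):
    """Per-frequency-bin Wiener-type spectral magnitude estimate.

    noisy_mag: magnitude spectrum of the current noisy frame
    noise_mag: estimated noise magnitude spectrum (e.g. from speech pauses)
    """
    noise_pow = np.maximum(noise_mag, 1e-12) ** 2
    # Maximum-likelihood a priori SNR estimate, floored to avoid negative values
    xi = np.maximum(noisy_mag ** 2 / noise_pow - 1.0, xi_floor)
    gain = xi / (1.0 + xi)          # Wiener gain in [0, 1)
    return gain * noisy_mag
```

The gain attenuates bins where the noisy power barely exceeds the noise estimate; the paper's contribution is to replace this fixed Gaussian-style assumption with a speech prior whose shape is re-estimated frame by frame from the observation, additionally informed by cross-channel statistics.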
Air traffic controllers' long-term speech-in-noise training effects: A control group study.
Zaballos, Maria T P; Plasencia, Daniel P; González, María L Z; de Miguel, Angel R; Macías, Ángel R
2016-01-01
Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. 19 ATC and 19 normal-hearing individuals underwent a speech-in-noise test with three signal-to-noise ratios: 5, 0 and -5 dB. Noise and speech were presented through two different loudspeakers in azimuth position. Speech tokens were presented at 65 dB SPL, while white noise files were at 60, 65 and 70 dB, respectively. Air traffic controllers outperformed the control group in all conditions (p<0.05 in ANOVA and Mann-Whitney U tests). Group differences were largest in the most difficult condition, SNR=-5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is achieved much faster than the minimum experience time recorded, 5 years, although intrinsic cognitive abilities cannot be disregarded. ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is indeed useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive qualities are likely to be a basic requirement for this training to be effective.
Effect of signal to noise ratio on the speech perception ability of older adults
Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh
2016-01-01
Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants who had bilateral low-mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. These participants were selected by the available sampling method. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: The independent t-test, ANOVA and the Pearson correlation index were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal hearing at low-mid thresholds. These results support the critical role of SNR in speech perception ability in the elderly. Furthermore, our results revealed that normal-hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
Segmenting words from natural speech: subsegmental variation in segmental cues.
Rytting, C Anton; Brew, Chris; Fosler-Lussier, Eric
2010-06-01
Most computational models of word segmentation are trained and tested on transcripts of speech, rather than the speech itself, and assume that speech is converted into a sequence of symbols prior to word segmentation. We present a way of representing speech corpora that avoids this assumption, and preserves acoustic variation present in speech. We use this new representation to re-evaluate a key computational model of word segmentation. One finding is that high levels of phonetic variability degrade the model's performance. While robustness to phonetic variability may be intrinsically valuable, this finding needs to be complemented by parallel studies of the actual abilities of children to segment phonetically variable speech.
A Comparison of LBG and ADPCM Speech Compression Techniques
NASA Astrophysics Data System (ADS)
Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.
Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. All speech has a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. We implemented the methods using MATLAB 7.0. The methods used in this study gave good results and performance in compressing speech, and listening tests showed that efficient and high-quality coding is achieved.
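The LBG algorithm trains a vector-quantization codebook by starting from a single centroid, repeatedly splitting every codeword into a perturbed pair, and refining with nearest-neighbour (k-means style) iterations. A minimal sketch in Python (the paper used MATLAB; the function name and parameter choices are illustrative, and `n_codewords` is assumed to be a power of two, as in classic LBG):

```python
import numpy as np

def lbg_codebook(training, n_codewords, eps=1e-3, max_iter=50):
    """Train a VQ codebook on rows of `training` by LBG splitting."""
    codebook = training.mean(axis=0, keepdims=True)   # start with one centroid
    while len(codebook) < n_codewords:
        # Split every codeword into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(max_iter):
            # Assign each training vector to its nearest codeword
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # Move each codeword to the centroid of its assigned vectors
            new_cb = np.array([training[labels == k].mean(axis=0)
                               if np.any(labels == k) else codebook[k]
                               for k in range(len(codebook))])
            if np.allclose(new_cb, codebook):
                break
            codebook = new_cb
    return codebook
```

For speech coding, the training vectors would be short blocks of samples (or of LPC residual), and each block is then transmitted as the index of its nearest codeword.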
Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission
2017-12-04
[Recoverable fragments from garbled extraction] … frequency scale (male voice; normal voice effort); Fig. 2: Diagram of a speech communication system (Letowski) … Consonants contain mostly high-frequency (above 1500 Hz) speech energy, but this energy is relatively small in comparison to that of the whole … voices (Letowski et al. 1993). Since the mid-frequency spectral region contains mostly vowel energy while consonants are high-frequency sounds, …
Phonetic Modification of Vowel Space in Storybook Speech to Infants up to 2 Years of Age
ERIC Educational Resources Information Center
Burnham, Evamarie B.; Wieland, Elizabeth A.; Kondaurova, Maria V.; McAuley, J. Devin; Bergeson, Tonya R.; Dilley, Laura C.
2015-01-01
Purpose: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. Method: In 2…
ERIC Educational Resources Information Center
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew
2017-01-01
This report presents an overview of the "SpeechRater" automated scoring engine model building and evaluation process for several item types, with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…
Evaluation of NASA speech encoder
NASA Technical Reports Server (NTRS)
1976-01-01
Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. A computer simulation of the quantizer's behavior was tested with synthesized and real speech signals, and the results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; speech recording, sampling, and storage; and processing results.
Free Field Word recognition test in the presence of noise in normal hearing adults.
Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge
In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The objective was to present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure-tone audiometry, a speech recognition test was applied in a free-field condition with monosyllables and disyllables, using standardized material, in three listening situations: an optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free-field environment was arranged in which speech was presented to the subject from two speakers located at 45° and noise from a third speaker located at 180°. All participants had free-field speech audiometry results between 88% and 100% in the three listening situations. The Word Recognition Test in a Free Field in the Presence of Noise proved easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring noise interference with speech perception abilities. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Assessing Speech Discrimination in Individual Infants
ERIC Educational Resources Information Center
Houston, Derek M.; Horn, David L.; Qi, Rong; Ting, Jonathan Y.; Gao, Sujuan
2007-01-01
Assessing speech discrimination skills in individual infants from clinical populations (e.g., infants with hearing impairment) has important diagnostic value. However, most infant speech discrimination paradigms have been designed to test group effects rather than individual differences. Other procedures suffer from high attrition rates. In this…
Reliability of Speech Diadochokinetic Test Measurement
ERIC Educational Resources Information Center
Gadesmann, Miriam; Miller, Nick
2008-01-01
Background: Measures of articulatory diadochokinesis (DDK) are widely used in the assessment of motor speech disorders and they play a role in detecting abnormality, monitoring speech performance changes and classifying syndromes. Although in clinical practice DDK is generally measured perceptually, without support from instrumental methods that…
An algorithm to improve speech recognition in noise for hearing-impaired listeners
Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang
2013-01-01
Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
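The oracle that the trained masks approximate, the ideal binary mask, can be written down directly. A minimal sketch, assuming magnitude spectrograms of the premixed speech and noise are available (which is exactly what makes it "ideal" and unavailable at test time):

```python
import numpy as np

def ideal_binary_mask(speech_mag, noise_mag, lc_db=0.0):
    """Ideal binary mask over a (freq x time) magnitude spectrogram pair:
    1 where the local speech-to-noise ratio exceeds the local criterion
    lc_db, 0 elsewhere. Requires the premixed signals (oracle knowledge)."""
    snr_db = 20.0 * np.log10(np.maximum(speech_mag, 1e-12)
                             / np.maximum(noise_mag, 1e-12))
    return (snr_db > lc_db).astype(float)
```

At test time, a learned estimate of this mask is multiplied element-wise with the mixture spectrogram before resynthesis, discarding time-frequency units dominated by noise; the study's point is that a mask estimated from training speech (rather than the oracle) still yields large intelligibility gains.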
The organization and reorganization of audiovisual speech perception in the first year of life.
Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F
2017-04-01
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Köbler, S; Rosenhall, U
2002-10-01
Speech intelligibility and horizontal localization of 19 subjects with mild-to-moderate hearing loss were studied in order to evaluate the advantages and disadvantages of bilateral and unilateral hearing aid (HA) fittings. Eight loudspeakers were arranged in a circular array covering the horizontal plane around the subjects. Speech signals of a sentence test were delivered by one, randomly chosen, loudspeaker. At the same time, the other seven loudspeakers emitted noise with the same long-term average spectrum as the speech signals. The subjects were asked to repeat the speech signal and to point out the corresponding loudspeaker. Speech intelligibility was significantly improved by HAs, bilateral amplification being superior to unilateral. Horizontal localization could not be improved by HA amplification. However, bilateral HAs preserved the subjects' horizontal localization, whereas unilateral amplification decreased their horizontal localization abilities. Front-back confusions were common in the horizontal localization test. The results indicate that bilateral HA amplification has advantages compared with unilateral amplification.
Speech Perception Abilities of Adults with Dyslexia: Is There Any Evidence for a True Deficit?
ERIC Educational Resources Information Center
Hazan, Valerie; Messaoud-Galusi, Souhila; Rosen, Stuart; Nouwens, Suzan; Shakespeare, Bethanie
2009-01-01
Purpose: This study investigated whether adults with dyslexia show evidence of a consistent speech perception deficit by testing phoneme categorization and word perception in noise. Method: Seventeen adults with dyslexia and 20 average readers underwent a test battery including standardized reading, language and phonological awareness tests, and…
The Promise of NLP and Speech Processing Technologies in Language Assessment
ERIC Educational Resources Information Center
Chapelle, Carol A.; Chung, Yoo-Ree
2010-01-01
Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…
Stimulus Characteristics of Single-Word Tests of Children's Speech Sound Production
ERIC Educational Resources Information Center
Macrae, Toby
2017-01-01
Purpose: This clinical focus article provides readers with a description of the stimulus characteristics of 12 popular tests of speech sound production. Method: Using significance testing and descriptive analyses, stimulus items were compared in terms of the number of opportunities for production of all consonant singletons, clusters, and rhotic…
Aided speech recognition in single-talker competition by elderly hearing-impaired listeners
NASA Astrophysics Data System (ADS)
Coughlin, Maureen; Humes, Larry
2004-05-01
This study examined the speech-identification performance in one-talker interference conditions that increased in complexity while audibility was ensured over a wide bandwidth (200-4000 Hz). Factorial combinations of three independent variables were used to vary the amount of informational masking. These variables were: (1) competition playback direction (forward or reverse); (2) gender match between target and competition talkers (same or different); and (3) target talker uncertainty (one of three possible talkers from trial to trial). Four groups of listeners, two elderly hearing-impaired groups differing in age (65-74 and 75-84 years) and two young normal-hearing groups, were tested. One of the groups of young normal-hearing listeners was tested under acoustically equivalent test conditions and one was tested under perceptually equivalent test conditions. The effect of each independent variable on speech-identification performance and informational masking was generally consistent with expectations. Group differences in the observed informational masking were most pronounced for the oldest group of hearing-impaired listeners. The eight measures of speech-identification performance were found to be strongly correlated with one another, and individual differences in speech understanding performance among the elderly were found to be associated with age and level of education. [Work supported, in part, by NIA.]
STI: An objective measure for the performance of voice communication systems
NASA Astrophysics Data System (ADS)
Houtgast, T.; Steeneken, H. J. M.
1981-06-01
A measuring device was developed for determining the quality of speech communication systems. It comprises two parts: a signal source, which replaces the talker by producing an artificial speech-like signal, and an analysis part, which replaces the listener and evaluates the signal at the receiving end of the system under test. Each single measurement results in an index (ranging from 0 to 100%) that indicates the effect of that communication system on speech intelligibility. The index is called the STI (Speech Transmission Index). A careful design of the characteristics of the test signal and of the type of signal analysis makes the present approach widely applicable. It was verified experimentally that a given STI implies a given effect on speech intelligibility, irrespective of the nature of the actual disturbance (noise interference, band-pass limiting, peak clipping, etc.).
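The core of the STI computation can be sketched as follows. This is a hedged illustration, not the standardized procedure: it converts each modulation reduction factor m(F) to an apparent signal-to-noise ratio, clips it to ±15 dB, and normalizes to a 0-1 transmission index; the equal band weights and synthetic m-values are assumptions for illustration.

```python
import math

def transmission_index(m):
    """Map one modulation reduction factor m (0 < m < 1) to a 0..1 index."""
    snr_apparent = 10.0 * math.log10(m / (1.0 - m))   # apparent SNR in dB
    snr_clipped = max(-15.0, min(15.0, snr_apparent))  # clip to +/-15 dB
    return (snr_clipped + 15.0) / 30.0                 # normalize to 0..1

def sti(mtf_per_band, band_weights):
    """Average transmission indices per band, then weight across bands."""
    ti = [sum(transmission_index(m) for m in ms) / len(ms) for ms in mtf_per_band]
    return sum(w * t for w, t in zip(band_weights, ti))

# Equal weights over 7 octave bands and 14 modulation frequencies per band
# are simplifying assumptions, not the calibrated values of the method.
weights = [1 / 7] * 7
clean = [[0.999] * 14] * 7     # near-perfect modulation transfer
degraded = [[0.032] * 14] * 7  # heavily degraded modulation transfer
print(round(sti(clean, weights), 2), round(sti(degraded, weights), 2))
```

A clean channel yields an STI near 1 and a heavily degraded one an STI near 0, matching the 0-100% scale described in the abstract.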
Speech Intelligibility Advantages using an Acoustic Beamformer Display
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Sunder, Kaushik; Godfroy, Martine; Otto, Peter
2015-01-01
A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 "Method for Measuring the Intelligibility of Speech Over Communication Systems" was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5 and 10 dB. The signal was delivered at a fixed beam orientation of 135 deg (re 90 deg as the frontal direction of the array) and the noise at 135 deg (co-located) and 0 deg (separated). A significant improvement in intelligibility from 57% to 73% was found for spatial separation for the same signal-noise ratio (0 dB). Significant effects for improved intelligibility due to spatial separation were also found for higher signal-noise ratios (5 and 10 dB).
Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie
2017-03-01
Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment. Cross-sectional data were collected during routine multidisciplinary assessments in a centralized treatment setting by speech and language therapists and clinical psychologists. Participants were children with cleft with palatal involvement, aged 10 years, from three birth cohorts (N = 170), and their parents. Speech: SVANTE-N. Language: Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness). Reading: Word Chain Test and Reading Comprehension Test. Psychological measures: Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and Child Experience Questionnaire. Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was found to be associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft was found to affect speech, language, and reading significantly. Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.
A Spanish matrix sentence test for assessing speech reception thresholds in noise.
Hochmuth, Sabine; Brand, Thomas; Zokoll, Melanie A; Castro, Franz Zenker; Wardenga, Nina; Kollmeier, Birger
2012-07-01
To develop, optimize, and evaluate a new Spanish sentence test in noise. The test comprises a basic matrix of ten names, verbs, numerals, nouns, and adjectives. From this matrix, test lists of ten sentences with the same syntactic structure can be formed at random, with each list containing the whole speech material. The speech material represents the phoneme distribution of the Spanish language. The test was optimized for measuring speech reception thresholds (SRTs) in noise by adjusting the presentation levels of the individual words. Subsequently, the test was evaluated by independent measurements investigating training effects, the comparability of test lists, open-set vs. closed-set test formats, and the performance of listeners of different Spanish varieties. In total, 68 normal-hearing native Spanish-speaking listeners participated. SRTs measured using an adaptive procedure were 6.2 ± 0.8 dB SNR for the open-set and 7.2 ± 0.7 dB SNR for the closed-set test format. The residual training effect was less than 1 dB after two double-lists were used before data collection. No significant differences were found for listeners of different Spanish varieties, indicating that the test is applicable to listeners from Spain as well as Latin America. Test lists can be used interchangeably.
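The adaptive SRT procedure mentioned above can be sketched as a simple simulated track: each five-word matrix sentence is scored, the SNR is stepped down after a mostly correct response and up otherwise, so the track converges toward roughly 50% intelligibility. The logistic listener model, the fixed 1 dB step, and the example SRT values are assumptions for illustration, not the published procedure's exact parameters.

```python
import math
import random

def words_correct(snr_db, true_srt, slope=0.2, rng=random):
    """Simulate how many of 5 matrix words a listener repeats correctly,
    using a logistic psychometric function (an illustrative assumption)."""
    p = 1.0 / (1.0 + math.exp(-slope * 10.0 * (snr_db - true_srt)))
    return sum(rng.random() < p for _ in range(5))

def adaptive_srt(true_srt, start_snr=10.0, step=1.0, sentences=60, seed=1):
    """Run a 1-down/1-up track on the sentence score; estimate the SRT
    as the mean SNR over the final 20 presentations."""
    rng = random.Random(seed)
    snr = start_snr
    track = []
    for _ in range(sentences):
        correct = words_correct(snr, true_srt, rng=rng)
        snr += -step if correct >= 3 else step  # down if >= 3 of 5 correct
        track.append(snr)
    return sum(track[-20:]) / 20.0

print(round(adaptive_srt(true_srt=-6.0), 1))  # converges near the true SRT
```

Because the step rule targets the point where three of five words are correct, the track settles close to the 50%-intelligibility SNR regardless of the starting level.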
Rhythm Perception and Its Role in Perception and Learning of Dysrhythmic Speech.
Borrie, Stephanie A; Lansford, Kaitlin L; Barrett, Tyson S
2017-03-01
The perception of rhythm cues plays an important role in recognizing spoken language, especially in adverse listening conditions. Indeed, this has been shown to hold true even when the rhythm cues themselves are dysrhythmic. This study investigates whether expertise in rhythm perception provides a processing advantage for perception (initial intelligibility) and learning (intelligibility improvement) of naturally dysrhythmic speech, dysarthria. Fifty young adults with typical hearing participated in 3 key tests, including a rhythm perception test, a receptive vocabulary test, and a speech perception and learning test, with standard pretest, familiarization, and posttest phases. Initial intelligibility scores were calculated as the proportion of correct pretest words, while intelligibility improvement scores were calculated by subtracting this proportion from the proportion of correct posttest words. Rhythm perception scores predicted intelligibility improvement scores but not initial intelligibility. On the other hand, receptive vocabulary scores predicted initial intelligibility scores but not intelligibility improvement. Expertise in rhythm perception appears to provide an advantage for processing dysrhythmic speech, but a familiarization experience is required for the advantage to be realized. Findings are discussed in relation to the role of rhythm in speech processing and shed light on processing models that consider the consequence of rhythm abnormalities in dysarthria.
Articulation of sounds in Serbian language in patients who learned esophageal speech successfully.
Vekić, Maja; Veselinović, Mila; Mumović, Gordana; Mitrović, Slobodan M
2014-01-01
Articulation of sounds during the training and subsequent use of esophageal speech is very important because it contributes significantly to the intelligibility and aesthetics of spoken words and sentences, as well as of speech and language itself. The aim of this research was to determine the quality of articulation of the sounds of the Serbian language, by sound groups, in patients who had learned esophageal speech successfully, as well as the effect of age and tooth loss on the quality of articulation. This retrospective-prospective study included 16 patients who had undergone total laryngectomy. Having completed speech rehabilitation, these patients used esophageal voice and speech. The quality of articulation was tested with the "Global test of articulation." Esophageal speech was rated grade 5 in 62.5% of the patients, grade 4 in 31.3%, and grade 3 in one patient. Serbian was the native language of all the patients. The study covered 30 sounds of the Serbian language in 16 subjects (480 sounds in total). Only two patients (12.5%) articulated all sounds properly, whereas 87.5% had incorrect articulation. The articulation of affricates and fricatives, especially the fricative /h/, was found to be the worst in the patients who had successfully mastered esophageal speech. The age and tooth loss of patients who have mastered esophageal speech do not affect the articulation of sounds in Serbian.
Time-compressed speech test in the elderly.
Arceno, Rayana Silva; Scharlach, Renata Coelho
2017-09-28
The present study aimed to evaluate the performance of elderly people on the time-compressed speech test as a function of ear and order of presentation, and to analyze the types of errors made by the volunteers. This observational, descriptive, quantitative, analytical, primary cross-sectional study involved 22 elderly adults with normal hearing or mild sensorineural hearing loss, aged between 60 and 80 years. The participants underwent the time-compressed speech test with a compression ratio of 60%, produced by the electromechanical time-compression method. A list of 50 disyllables was applied to each ear, with the starting side chosen at random. Regarding test performance, the elderly scored below adult norms, and there was no statistical difference between the ears; there was, however, statistical evidence of better performance for whichever ear was tested second. The words most often missed were those beginning with the phonemes /p/ and /d/, and the presence of a consonant cluster in a word also increased the occurrence of errors. The elderly performed worse than adults on auditory closure ability as assessed by the time-compressed speech test. This result suggests that elderly people have difficulty recognizing speech produced at faster rates; therefore, strategies should be used to facilitate communication, regardless of the presence of hearing loss.
Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise
Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther
2016-01-01
Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18–35 years) and 22 older (60–78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. 
This suggests that older adults’ poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access. PMID:27458400
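The hierarchical regression approach described above can be sketched as follows: fit a baseline model (vocabulary size only), then add lexical access time and compare the gain in R². The synthetic data and coefficients below are assumptions for illustration, not the study's data.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = (resid ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 44  # matches the total sample size of the study above
vocab = rng.normal(size=n)
access_time = rng.normal(size=n)
# Assumed data-generating model: the score depends on both predictors.
score = 0.5 * vocab + 0.4 * access_time + rng.normal(scale=0.5, size=n)

r2_step1 = r_squared(vocab[:, None], score)                       # step 1
r2_step2 = r_squared(np.column_stack([vocab, access_time]), score)  # step 2
print(f"delta R^2 from adding access time: {r2_step2 - r2_step1:.3f}")
```

The increment in R² from step 1 to step 2 is the quantity of interest: it shows how much lexical access time explains beyond vocabulary size alone.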
How linguistic closure and verbal working memory relate to speech recognition in noise--a review.
Besser, Jana; Koelewijn, Thomas; Zekveld, Adriana A; Kramer, Sophia E; Festen, Joost M
2013-06-01
The ability to recognize masked speech, commonly measured with a speech reception threshold (SRT) test, is associated with cognitive processing abilities. Two cognitive factors frequently assessed in speech recognition research are the capacity of working memory (WM), measured by means of a reading span (Rspan) or listening span (Lspan) test, and the ability to read masked text (linguistic closure), measured by the text reception threshold (TRT). The current article provides a review of recent hearing research that examined the relationship of TRT and WM span to SRTs in various maskers. Furthermore, modality differences in WM capacity assessed with the Rspan compared to the Lspan test were examined and related to speech recognition abilities in an experimental study with young adults with normal hearing (NH). Span scores were strongly associated with each other, but were higher in the auditory modality. The results of the reviewed studies suggest that TRT and WM span are related to each other, but differ in their relationships with SRT performance. In NH adults of middle age or older, both TRT and Rspan were associated with SRTs in speech maskers, whereas TRT better predicted speech recognition in fluctuating nonspeech maskers. The associations with SRTs in steady-state noise were inconclusive for both measures. WM span was positively related to benefit from contextual information in speech recognition, but better TRTs related to less interference from unrelated cues. Data for individuals with impaired hearing are limited, but larger WM span seems to give a general advantage in various listening situations.
Intra-oral pressure-based voicing control of electrolaryngeal speech with intra-oral vibrator.
Takahashi, Hirokazu; Nakao, Masayuki; Kikuchi, Yataro; Kaga, Kimitaka
2008-07-01
In normal speech, coordinated activities of intrinsic laryngeal muscles suspend the glottal sound during utterance of voiceless consonants, automatically realizing a voicing control. In electrolaryngeal speech, however, the lack of voicing control is one cause of unclear voice, with voiceless consonants tending to be misheard as the corresponding voiced consonants. In the present work, we developed an intra-oral vibrator with an intra-oral pressure sensor that detected utterance of voiceless phonemes during intra-oral electrolaryngeal speech, and demonstrated that an intra-oral pressure-based voicing control could improve the intelligibility of the speech. The test voices were obtained from one electrolaryngeal speaker and one normal speaker. We first investigated, using speech analysis software, how the voice onset time (VOT) and first-formant (F1) transition of the test consonant-vowel syllables contributed to voiceless/voiced contrasts, and developed an adequate voicing control strategy. We then compared the intelligibility of consonant-vowel syllables in intra-oral electrolaryngeal speech with and without online voicing control. The increase of intra-oral pressure, typically with a peak ranging from 10 to 50 gf/cm², could reliably identify utterance of voiceless consonants. The speech analysis and intelligibility test then demonstrated that a short VOT caused misidentification of voiced consonants due to a clear F1 transition. Finally, taking these results together, the online voicing control, which suspended the prosthetic tone while the intra-oral pressure exceeded 2.5 gf/cm² and during the 35 milliseconds that followed, proved effective in improving the voiceless/voiced contrast.
Smith, Sherri L.; Pichora-Fuller, M. Kathleen
2015-01-01
Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769
Jacquin-Courtois, S; Rode, G; Pavani, F; O'Shea, J; Giard, M H; Boisson, D; Rossetti, Y
2010-03-01
Unilateral neglect is a disabling syndrome frequently observed following right hemisphere brain damage. Symptoms range from visuo-motor impairments through to deficient visuo-spatial imagery, but impairment can also affect the auditory modality. A short period of adaptation to a rightward prismatic shift of the visual field is known to improve a wide range of hemispatial neglect symptoms, including visuo-manual tasks, mental imagery, postural imbalance, visuo-verbal measures and number bisection. The aim of the present study was to assess whether the beneficial effects of prism adaptation may generalize to auditory manifestations of neglect. Auditory extinction, whose clinical manifestations are independent of the sensory modalities engaged in visuo-manual adaptation, was examined in neglect patients before and after prism adaptation. Two separate groups of neglect patients (all of whom exhibited left auditory extinction) underwent prism adaptation: one group (n = 6) received a classical prism treatment ('Prism' group), the other group (n = 6) was submitted to the same procedure, but wore neutral glasses creating no optical shift (placebo 'Control' group). Auditory extinction was assessed by means of a dichotic listening task performed three times: prior to prism exposure (pre-test), upon prism removal (0 h post-test) and 2 h later (2 h post-test). The total number of correct responses, the lateralization index (detection asymmetry between the two ears) and the number of left-right fusion errors were analysed. Our results demonstrate that prism adaptation can improve left auditory extinction, thus revealing transfer of benefit to a sensory modality that is orthogonal to the visual, proprioceptive and motor modalities directly implicated in the visuo-motor adaptive process. The observed benefit was specific to the detection asymmetry between the two ears and did not affect the total number of responses. 
This indicates a specific effect of prism adaptation on lateralized processes rather than on general arousal. Our results suggest that the effects of prism adaptation can extend to unexposed sensory systems. The bottom-up approach of visuo-motor adaptation appears to interact with higher order brain functions related to multisensory integration and can have beneficial effects on sensory processing in different modalities. These findings should stimulate the development of therapeutic approaches aimed at bypassing the affected sensory processing modality by adapting other sensory modalities.
The Oral Speech Mechanism Screening Examination (OSMSE).
ERIC Educational Resources Information Center
St. Louis, Kenneth O.; Ruscello, Dennis M.
Although speech-language pathologists are expected to be able to administer and interpret oral examinations, there are currently no screening tests available that provide careful administration instructions and data for intra-examiner and inter-examiner reliability. The Oral Speech Mechanism Screening Examination (OSMSE) is designed primarily for…
The influence of (central) auditory processing disorder in speech sound disorders.
Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein
2016-01-01
Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. The aim was to study phonological measures and (central) auditory processing in children with speech sound disorder. Clinical and experimental study of 21 subjects with speech sound disorder aged between 7;0 and 9;11 (years;months), divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes in children above 7 years of age. The comparison of the tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 showed a strong tendency towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Role of working memory and lexical knowledge in perceptual restoration of interrupted speech.
Nagaraj, Naveen K; Magimairaj, Beula M
2017-12-01
The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for speech noise filled condition than other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. It was only receptive vocabulary that uniquely predicted PR for fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.
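The interrupted-speech stimulus described above can be sketched directly: a 1.5 Hz square-wave gate alternately keeps and silences the signal, and the silent intervals can optionally be filled with noise. The 50% duty cycle, the broadband fill noise (the study used low-frequency speech noise), and the sine-tone stand-in for speech are assumptions for illustration.

```python
import numpy as np

def interrupt(signal, fs, rate_hz=1.5, fill_noise=False, rng=None):
    """Periodically interrupt a signal at rate_hz with a 50% duty cycle;
    optionally fill the silent gaps with noise scaled to the signal level."""
    n = len(signal)
    t = np.arange(n) / fs
    gate = np.floor(t * rate_hz * 2) % 2 == 0  # alternate on/off segments
    out = np.where(gate, signal, 0.0)          # silent-gated condition
    if fill_noise:
        rng = rng or np.random.default_rng(0)
        noise = rng.standard_normal(n) * signal.std()
        out = np.where(gate, signal, noise)    # noise-filled condition
    return out

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s stand-in "speech"
gated = interrupt(tone, fs)                 # silent gated
filled = interrupt(tone, fs, fill_noise=True)  # gaps filled with noise
```

At 1.5 Hz with a 50% duty cycle, each on or off segment lasts one third of a second, alternating kept and interrupted stretches of the signal.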
Fuller, Christina; Free, Rolien; Maat, Bert; Başkent, Deniz
2012-08-01
In normal-hearing listeners, musical background has been observed to change the sound representation in the auditory system and produce enhanced performance in some speech perception tests. Based on these observations, it has been hypothesized that musical background can influence sound and speech perception, and as an extension also the quality of life, by cochlear-implant users. To test this hypothesis, this study explored musical background [using the Dutch Musical Background Questionnaire (DMBQ)], and self-perceived sound and speech perception and quality of life [using the Nijmegen Cochlear Implant Questionnaire (NCIQ) and the Speech Spatial and Qualities of Hearing Scale (SSQ)] in 98 postlingually deafened adult cochlear-implant recipients. In addition to self-perceived measures, speech perception scores (percentage of phonemes recognized in words presented in quiet) were obtained from patient records. The self-perceived hearing performance was associated with the objective speech perception. Forty-one respondents (44% of 94 respondents) indicated some form of formal musical training. Fifteen respondents (18% of 83 respondents) judged themselves as having musical training, experience, and knowledge. No association was observed between musical background (quantified by DMBQ), and self-perceived hearing-related performance or quality of life (quantified by NCIQ and SSQ), or speech perception in quiet.
NASA Astrophysics Data System (ADS)
Kayasith, Prakasith; Theeramunkong, Thanaruk
Measuring the severity of dysarthria by manually evaluating a speaker's speech with available standard assessment methods based on human perception is a tedious and subjective task. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. With consideration of two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce a consistent speech signal for a given word and distinguishable speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before the exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a predictor of speech recognition rate is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square of difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
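A clarity-style index in the spirit of the abstract can be sketched as a ratio of between-word to within-word distances in some feature space: "consistency" means repetitions of the same word lie close together, "distinction" means different words lie far apart. The feature vectors and this particular ratio are illustrative assumptions, not the authors' actual definition of Ψ.

```python
import math

def clarity_index(word_samples):
    """word_samples: dict mapping a word to feature vectors of repetitions.
    Returns a value near 1.0 for consistent, well-separated words and
    near 0.5 when repetitions scatter as widely as the words differ."""
    within, between = [], []
    words = list(word_samples)
    for w in words:  # distances between repetitions of the same word
        reps = word_samples[w]
        within += [math.dist(x, y) for i, x in enumerate(reps) for y in reps[i + 1:]]
    for i, w1 in enumerate(words):  # distances between different words
        for w2 in words[i + 1:]:
            between += [math.dist(x, y)
                        for x in word_samples[w1] for y in word_samples[w2]]
    avg_w = sum(within) / len(within)
    avg_b = sum(between) / len(between)
    return avg_b / (avg_b + avg_w)

# A "clear" speaker: tight repetitions, well-separated words.
clear = {"ba": [(0.0, 0.0), (0.1, 0.0)], "da": [(5.0, 0.0), (5.1, 0.0)]}
# An "unclear" speaker: repetitions scatter as widely as the words differ.
unclear = {"ba": [(0.0, 0.0), (4.0, 0.0)], "da": [(5.0, 0.0), (1.0, 0.0)]}
print(clarity_index(clear) > clarity_index(unclear))
```

Such an index is higher for speakers whose words are both reproducible and mutually distinct, which is the property the abstract links to predictable recognition rates.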
Speech perception and production in severe environments
NASA Astrophysics Data System (ADS)
Pisoni, David B.
1990-09-01
The goal was to acquire new knowledge about speech perception and production in severe environments such as high masking noise, increased cognitive load or sustained attentional demands. Changes were examined in speech production under these adverse conditions through acoustic analysis techniques. One set of studies focused on the effects of noise on speech production. The experiments in this group were designed to generate a database of speech obtained in noise and in quiet. A second set of experiments was designed to examine the effects of cognitive load on the acoustic-phonetic properties of speech. Talkers were required to carry out a demanding perceptual motor task while they read lists of test words. A final set of experiments explored the effects of vocal fatigue on the acoustic-phonetic properties of speech. Both cognitive load and vocal fatigue are present in many applications where speech recognition technology is used, yet their influence on speech production is poorly understood.
Peng, Jianxin; Yan, Nanjie; Wang, Dan
2015-01-01
The present study investigated Chinese speech intelligibility in 28 classrooms from nine different elementary schools in Guangzhou, China. The subjective Chinese speech intelligibility in the classrooms was evaluated with children in grades 2, 4, and 6 (7 to 12 years old). Acoustical measurements were also performed in these classrooms. Subjective Chinese speech intelligibility scores and objective speech intelligibility parameters, such as speech transmission index (STI), were obtained at each listening position for all tests. The relationship between subjective Chinese speech intelligibility scores and STI was revealed and analyzed. The effects of age on Chinese speech intelligibility scores were compared. Results indicate high correlations between subjective Chinese speech intelligibility scores and STI for grades 2, 4, and 6 children. Chinese speech intelligibility scores increase with increase of age under the same STI condition. The differences in scores among different age groups decrease as STI increases. To achieve 95% Chinese speech intelligibility scores, the STIs required for grades 2, 4, and 6 children are 0.75, 0.69, and 0.63, respectively.
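Thresholds like the reported STIs of 0.75, 0.69, and 0.63 for 95% intelligibility can be read off a measured score-versus-STI curve by interpolation. A minimal sketch, using illustrative data points rather than the study's measurements:

```python
def sti_for_target(pairs, target):
    """Linearly interpolate the STI at which the mean intelligibility score
    reaches `target`. `pairs` is a list of (sti, score_percent) measurement
    points; returns None if the target is never reached in the measured range."""
    pairs = sorted(pairs)
    for (s0, y0), (s1, y1) in zip(pairs, pairs[1:]):
        if y0 <= target <= y1:
            if y1 == y0:
                return s0
            return s0 + (s1 - s0) * (target - y0) / (y1 - y0)
    return None
```

With hypothetical measurements `[(0.5, 80), (0.6, 90), (0.7, 96)]`, the STI needed for a 95% score falls between 0.6 and 0.7.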
ERIC Educational Resources Information Center
Iaccino, James F.; Sowa, Stephen J.
Since past studies have shown that females as well as left-handers do not demonstrate a right-ear advantage for verbal materials, suggesting that linguistic functions may not be handled in the left hemisphere exclusively, a study was conducted to examine these laterality effects more closely. Subjects, 24 undergraduate students at a small college…
Cochlear implant rehabilitation outcomes in Waardenburg syndrome children.
de Sousa Andrade, Susana Margarida; Monteiro, Ana Rita Tomé; Martins, Jorge Humberto Ferreira; Alves, Marisa Costa; Santos Silva, Luis Filipe; Quadros, Jorge Manuel Cardoso; Ribeiro, Carlos Alberto Reis
2012-09-01
The purpose of this study was to review the outcomes of children with documented Waardenburg syndrome implanted in the ENT Department of Centro Hospitalar de Coimbra, concerning postoperative speech perception and production, in comparison to the rest of the non-syndromic implanted children. A retrospective chart review was performed for congenitally deaf children who had undergone cochlear implantation with multichannel implants and were diagnosed as having Waardenburg syndrome between 1992 and 2011. Postoperative performance outcomes were assessed and compared with results obtained by children with non-syndromic congenital deafness also implanted in our department. Open-set auditory perception skills were evaluated using European Portuguese speech discrimination tests (vowel test, monosyllabic word test, number word test and words-in-sentence test). The Meaningful Auditory Integration Scale (MAIS) and categories of auditory performance (CAP) were also measured. Speech production was further assessed, including results on the Meaningful Use of Speech Scale (MUSS) and speech intelligibility rating (SIR). To date, 6 implanted children have been clinically identified as having WS type I, and one met the diagnosis of type II. All WS children received multichannel cochlear implants, with a mean age at implantation of 30.6±9.7 months (range 19 to 42 months). Postoperative outcomes in WS children were similar to those of other non-syndromic children. In addition, in the number word and vowel discrimination tests the WS group showed slightly better performance, as it did on the MUSS and MAIS assessments. Our study has shown that cochlear implantation should be considered a rehabilitative option for Waardenburg syndrome children with profound deafness, enabling the development and improvement of speech perception and production abilities in this group of patients and reinforcing their candidacy for this audio-oral rehabilitation method. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Nicotine effects on brain function and functional connectivity in schizophrenia.
Jacobsen, Leslie K; D'Souza, D Cyril; Mencl, W Einar; Pugh, Kenneth R; Skudlarski, Pawel; Krystal, John H
2004-04-15
Nicotine in tobacco smoke can improve functioning in multiple cognitive domains. High rates of smoking among schizophrenic patients may reflect an effort to remediate cognitive dysfunction. Our primary aim was to determine whether nicotine improves cognitive function by facilitating activation of brain regions mediating task performance or by facilitating functional connectivity. Thirteen smokers with schizophrenia and 13 smokers with no mental illness were withdrawn from tobacco and underwent functional magnetic resonance imaging (fMRI) scanning twice, once after placement of a placebo patch and once after placement of a nicotine patch. During scanning, subjects performed an n-back task with two levels of working memory load and of selective attention load. During the most difficult (dichotic 2-back) task condition, nicotine improved performance of schizophrenic subjects and worsened performance of control subjects. Nicotine also enhanced activation of a network of regions, including anterior cingulate cortex and bilateral thalamus, and modulated thalamocortical functional connectivity to a greater degree in schizophrenic than in control subjects during dichotic 2-back task performance. In tasks that tax working memory and selective attention, nicotine may improve performance in schizophrenia patients by enhancing activation of and functional connectivity between brain regions that mediate task performance.
Bruder, Gerard E; Stewart, Jonathan W; McGrath, Patrick J
2017-07-01
The right and left side of the brain are asymmetric in anatomy and function. We review electrophysiological (EEG and event-related potential), behavioral (dichotic and visual perceptual asymmetry), and neuroimaging (PET, MRI, NIRS) evidence of right-left asymmetry in depressive disorders. Recent electrophysiological and fMRI studies of emotional processing have provided new evidence of altered laterality in depressive disorders. EEG alpha asymmetry and neuroimaging findings at rest and during cognitive or emotional tasks are consistent with reduced left prefrontal activity in depressed patients, which may impair downregulation of amygdala response to negative emotional information. Dichotic listening and visual hemifield findings for non-verbal or emotional processing have revealed abnormal perceptual asymmetry in depressive disorders, and electrophysiological findings have shown reduced right-lateralized responsivity to emotional stimuli in occipitotemporal or parietotemporal cortex. We discuss models of neural networks underlying these alterations. Of clinical relevance, individual differences among depressed patients on measures of right-left brain function are related to diagnostic subtype of depression, comorbidity with anxiety disorders, and clinical response to antidepressants or cognitive behavioral therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Understanding the Abstract Role of Speech in Communication at 12 Months
ERIC Educational Resources Information Center
Martin, Alia; Onishi, Kristine H.; Vouloumanos, Athena
2012-01-01
Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the…
Modification of Speech Discrimination in Patients with Binaural Asymmetrical Hearing Loss
ERIC Educational Resources Information Center
Arkebauer, Herbert J.; Mencher, George T.
1971-01-01
Testing of patients with bilateral asymmetrical hearing losses for differences in speech discrimination under varying listening conditions showed detrimental interaction between the ears. Also, when the difference between ears was greater, speech discrimination was better, and the greater the impairment in the better ear, the greater the results obtained by…
Speech production gains following constraint-induced movement therapy in children with hemiparesis.
Allison, Kristen M; Reidy, Teressa Garcia; Boyle, Mary; Naber, Erin; Carney, Joan; Pidcock, Frank S
2017-01-01
The purpose of this study was to investigate changes in speech skills of children who have hemiparesis and speech impairment after participation in a constraint-induced movement therapy (CIMT) program. While case studies have reported collateral speech gains following CIMT, the effect of CIMT on speech production has not previously been directly investigated to the knowledge of these investigators. Eighteen children with hemiparesis and co-occurring speech impairment participated in a 21-day clinical CIMT program. The Goldman-Fristoe Test of Articulation-2 (GFTA-2) was used to assess children's articulation of speech sounds before and after the intervention. Changes in percent of consonants correct (PCC) on the GFTA-2 were used as a measure of change in speech production. Children made significant gains in PCC following CIMT. Gains were similar in children with left and right-sided hemiparesis, and across age groups. This study reports significant collateral gains in speech production following CIMT and suggests benefits of CIMT may also spread to speech motor domains.
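Percent of consonants correct (PCC), the outcome measure used here, is simply the share of scored consonants produced correctly. A minimal illustration; the tuple-based input format is an assumption for the sketch, not the GFTA-2 scoring convention:

```python
def percent_consonants_correct(productions):
    """Compute PCC from per-word scoring records.

    `productions` is a list of (consonants_attempted, consonants_correct)
    tuples, one per scored word; returns PCC as a percentage."""
    attempted = sum(a for a, _ in productions)
    correct = sum(c for _, c in productions)
    return 100.0 * correct / attempted if attempted else 0.0
```

A pre/post comparison, as in this study, is then just the difference between the two PCC values.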
de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias
2018-06-01
To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
Messaoud-Galusi, Souhila; Hazan, Valerie; Rosen, Stuart
2012-01-01
Purpose: The claim that speech perception abilities are impaired in dyslexia was investigated in a group of 62 dyslexic children and 51 average readers matched in age. Method: To test whether there was robust evidence of speech perception deficits in children with dyslexia, speech perception in noise and quiet was measured using eight different tasks involving the identification and discrimination of a complex and highly natural synthetic ‘pea’-‘bee’ contrast (copy synthesised from natural models) and the perception of naturally-produced words. Results: Children with dyslexia, on average, performed more poorly than average readers in the synthetic syllable identification task in quiet and in across-category discrimination (but not when tested using an adaptive procedure). They did not differ from average readers on two tasks of word recognition in noise or identification of synthetic syllables in noise. For all tasks, a majority of individual children with dyslexia performed within norms. Finally, speech perception generally did not correlate with pseudo-word reading or phonological processing, the core skills related to dyslexia. Conclusions: On the tasks and speech stimuli we used, most children with dyslexia do not appear to show a consistent deficit in speech perception. PMID:21930615
Automated Intelligibility Assessment of Pathological Speech Using Phonological Features
NASA Astrophysics Data System (ADS)
Middag, Catherine; Martens, Jean-Pierre; Van Nuffelen, Gwen; De Bodt, Marc
2009-12-01
It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. People have therefore put a lot of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is growing interest in applying objective automatic speech recognition technology to automate intelligibility assessment. Current research is headed towards the design of automated methods that can be shown to produce ratings corresponding well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology built on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
Heimbauer, Lisa A; Beran, Michael J; Owren, Michael J
2011-07-26
A long-standing debate concerns whether humans are specialized for speech perception, which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuographic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users. Experiment 2 tested "impossibly unspeechlike" sine-wave (SW) synthesis, which reduces speech to just three moving tones. Although receiving only intermittent and noncontingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate but improved in experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human. Copyright © 2011 Elsevier Ltd. All rights reserved.
A speech pronunciation practice system for speech-impaired children: A study to measure its success.
Salim, Siti Salwah; Mustafa, Mumtaz Begum Binti Peer; Asemi, Adeleh; Ahmad, Azila; Mohamed, Noraini; Ghazali, Kamila Binti
2016-09-01
The speech pronunciation practice (SPP) system enables children with speech impairments to practise and improve their speech pronunciation. However, little is known about the surrogate measures of the SPP system. This research aims to measure the success and effectiveness of the SPP system using three surrogate measures: usage (frequency of use), performance (recognition accuracy) and satisfaction (children's subjective reactions), and how these measures are aligned with the success of the SPP system, as well as with each other. We measured the absolute change in the word error rate (WER) between pre- and post-training using the ANOVA test. Correlation coefficient (CC) analysis was conducted to test the relation between the surrogate measures, while a Structural Equation Model (SEM) was used to investigate the causal relations between the measures. The CC test results indicate a positive correlation between the surrogate measures. The SEM supports all the proposed hypotheses. The ANOVA results indicate that SPP is effective in reducing the WER of impaired speech. The SPP system is an effective assistive tool, especially for high levels of severity. We found that performance is a mediator of the relation between "usage" and "satisfaction". Copyright © 2016 Elsevier Ltd. All rights reserved.
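WER, the measure whose pre/post change was analysed here, is conventionally computed as an edit distance over word sequences. A self-contained sketch of that standard computation (not the SPP system's own implementation):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    via a standard dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

The pre-to-post change the study analyses is then `word_error_rate(ref, pre) - word_error_rate(ref, post)` for each child.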
Quantitative application of the primary progressive aphasia consensus criteria.
Wicklund, Meredith R; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Whitwell, Jennifer L; Josephs, Keith A
2014-04-01
To determine how well the consensus criteria could classify subjects with primary progressive aphasia (PPA) using a quantitative speech and language battery that matches the test descriptions provided by the consensus criteria. A total of 105 participants with a neurodegenerative speech and language disorder were prospectively recruited and underwent neurologic, neuropsychological, and speech and language testing and MRI in this case-control study. Twenty-one participants with apraxia of speech without aphasia served as controls. Select tests from the speech and language battery were chosen for application of consensus criteria and cutoffs were employed to determine syndromic classification. Hierarchical cluster analysis was used to examine participants who could not be classified. Of the 84 participants, 58 (69%) could be classified as agrammatic (27%), semantic (7%), or logopenic (35%) variants of PPA. The remaining 31% of participants could not be classified. Of the unclassifiable participants, 2 clusters were identified. The speech and language profile of the first cluster resembled mild logopenic PPA and the second cluster semantic PPA. Gray matter patterns of loss of these 2 clusters of unclassified participants also resembled mild logopenic and semantic variants. Quantitative application of consensus PPA criteria yields the 3 syndromic variants but leaves a large proportion unclassified. Therefore, the current consensus criteria need to be modified in order to improve sensitivity.
Saslow, Laura R; McCoy, Shannon; van der Löwe, Ilmo; Cosley, Brandon; Vartan, Arbi; Oveis, Christopher; Keltner, Dacher; Moskowitz, Judith T; Epel, Elissa S
2014-03-01
What can a speech reveal about someone's state? We tested the idea that greater stress reactivity would relate to lower linguistic cognitive complexity while speaking. In Study 1, we tested whether heart rate and emotional stress reactivity to a stressful discussion would relate to lower linguistic complexity. In Studies 2 and 3, we tested whether a greater cortisol response to a standardized stressful task including a speech (Trier Social Stress Test) would be linked to speaking with less linguistic complexity during the task. We found evidence that measures of stress responsivity (emotional and physiological) and chronic stress are tied to variability in the cognitive complexity of speech. Taken together, these results provide evidence that our individual experiences of stress or "stress signatures"-how our body and mind react to stress both in the moment and over the longer term-are linked to how complex our speech is under stress. Copyright © 2013 Society for Psychophysiological Research.
The impact of extrinsic demographic factors on Cantonese speech acquisition.
To, Carol K S; Cheung, Pamela S P; McLeod, Sharynne
2013-05-01
This study modeled the associations between extrinsic demographic factors and children's speech acquisition in Hong Kong Cantonese. The speech of 937 Cantonese-speaking children aged 2;4 to 6;7 in Hong Kong was assessed using a standardized speech test. Demographic information regarding household income, paternal education, maternal education, presence of siblings and having a domestic helper as the main caregiver was collected via parent questionnaires. After controlling for age and sex, higher maternal education and higher household income were significantly associated with better speech skills; however, these variables explained a negligible amount of variance. Paternal education, number of siblings and having a foreign domestic helper did not associate with a child's speech acquisition. Extrinsic factors only exerted minimal influence on children's speech acquisition. A large amount of unexplained variance in speech ability still warrants further research.
Rader, T; Fastl, H; Baumann, U
2017-03-01
After implantation of cochlear implants with hearing preservation for combined electric acoustic stimulation (EAS), the residual acoustic hearing relays fundamental speech frequency information in the low-frequency range. With the help of acoustic simulation of EAS hearing perception, the impact of the frequency and level fine structure of speech signals can be systematically examined. The aim of this study was to measure the speech reception threshold (SRT) under various noise conditions with an acoustic EAS simulation, by varying the frequency and level information of the fundamental frequency f0 of speech, and to determine to what extent the SRT is impaired by modification of the f0 fine structure. Using partial tone time pattern analysis, an acoustic EAS simulation of the speech material from the Oldenburg sentence test (OLSA) was generated, and the f0 curve of the speech material was determined. Subsequently, either the frequency or the level of f0 was fixed in order to remove one of the two fine-contour components of the speech signal. The processed OLSA sentences were used to determine the SRT in background noise under various test conditions. The conditions "f0 fixed frequency" and "f0 fixed level" were each tested in amplitude-modulated background noise and in continuous background noise. A total of 24 subjects with normal hearing participated in the study. The SRT for the condition "f0 fixed frequency" was more favorable (2.7 dB in continuous noise and 0.8 dB in modulated noise) than for the condition "f0 fixed level" (3.7 dB and 2.9 dB, respectively). In the simulation of speech perception with cochlear implants and acoustic components, the level information of the fundamental frequency had a stronger impact on speech intelligibility than the frequency information. The method of simulating the transmission of cochlear implants allows investigation of how various parameters influence speech intelligibility in subjects with normal hearing.
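SRT measurement of the kind used with sentence tests such as the OLSA is typically done with an adaptive track that converges on a target intelligibility. Below is a deliberately simplified 1-up/1-down sketch with a simulated listener; the 50%-correct convergence rule and the averaging heuristic are illustrative assumptions, not the actual OLSA adaptive procedure:

```python
import random

def track_srt(intelligibility, start_db=0.0, step_db=2.0, trials=60):
    """Simple 1-up/1-down adaptive track: lower the SNR after a correct
    response, raise it after an error; this converges near the 50%-correct
    point of the psychometric function. `intelligibility(snr)` maps an SNR
    in dB to the probability of a correct response."""
    snr = start_db
    levels = []
    for _ in range(trials):
        correct = random.random() < intelligibility(snr)
        snr += -step_db if correct else step_db
        levels.append(snr)
    # Estimate the SRT as the mean SNR over the second half of the track,
    # after the staircase has had time to settle.
    half = levels[len(levels) // 2:]
    return sum(half) / len(half)
```

With a simulated listener whose 50%-correct point sits at 0 dB SNR, the track should settle near 0 dB.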
The effect of guessing on the speech reception thresholds of children.
Moodley, A
1990-01-01
Speech audiometry is an essential part of the assessment of hearing impaired children and it is now widely used throughout the United Kingdom. Although instructions are universally agreed upon as an important aspect in the administration of any form of audiometric testing, there has been little, if any, research towards evaluating the influence which instructions that are given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used in the study. It is argued that the type of instruction normally used for Speech Reception Threshold in audiometric testing may not provide a sufficient amount of control for guessing and the implications of this, using data obtained in the study, are examined.
Nathan, Liz; Stackhouse, Joy; Goulandris, Nata; Snowling, Margaret J
2004-06-01
Children with speech difficulties may have associated educational problems. This paper reports a study examining the educational attainment of children at Key Stage 1 of the National Curriculum who had previously been identified with a speech difficulty. (1) To examine the educational attainment at Key Stage 1 of children diagnosed with speech difficulties two/three years prior to the present study. (2) To compare the Key Stage 1 assessment results of children whose speech problems had resolved at the time of assessment with those whose problems persisted. Data were available at age 7 from 39 children who had an earlier diagnosis of speech difficulties at age 4/5 (from an original cohort of 47). A control group of 35 children identified and matched at preschool on age, nonverbal ability and gender provided comparative data. Results of Statutory Assessment Tests (SATs) in reading, reading comprehension, spelling, writing and maths, administered to children at the end of Year 2 of school, were analysed. Performance across the two groups was compared, and was also compared to published statistics on national levels of attainment. Children with a history of speech difficulties performed less well than controls on reading, spelling and maths. However, children whose speech problems had resolved by the time of assessment performed no differently from controls. Children with persisting speech problems performed less well than controls on tests of literacy and maths, and spelling was a particular area of difficulty for them. Children with speech difficulties are likely to perform less well than expected on literacy and maths SATs at age 7. Performance is related to whether the speech problem resolves early on and whether associated language problems exist. Whilst it is unclear whether poorer performance on maths is because of the language components of this task, the results indicate that speech problems, especially persisting ones, can affect the ability to access the National Curriculum to expected levels.
Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan
2014-01-01
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise. PMID:25566159
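The SNR conditions used in experiments like this are set by scaling the noise relative to the speech. A minimal sketch of the two standard computations (illustrative only, not the AIST stimulus pipeline):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, from mean power of the sample sequences."""
    p_sig = sum(x * x for x in signal) / len(signal)
    p_noise = sum(x * x for x in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

def noise_gain_for_snr(signal, noise, target_db):
    """Amplitude gain to apply to `noise` so that mixing it with `signal`
    yields `target_db`. Scaling amplitude by g changes noise power by g**2,
    hence the factor of 20 rather than 10."""
    current = snr_db(signal, noise)
    return 10.0 ** ((current - target_db) / 20.0)
```

For instance, a signal of constant amplitude 1.0 against noise of constant amplitude 0.1 sits at +20 dB SNR; attenuating to +10 dB requires amplifying the noise by a factor of √10 ≈ 3.16.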
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developed controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face than for speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.