Dichotic listening performance predicts language comprehension.
Asbjørnsen, Arve E; Helland, Turid
2006-05-01
Dichotic listening performance is considered a reliable and valid procedure for the assessment of language lateralisation in the brain. However, the documentation of a relationship between language functions and dichotic listening performance is sparse, although it is accepted that dichotic listening measures language perception. In particular, language comprehension should show close correspondence to perception of language stimuli. In the present study, we tested samples of reading-impaired and normally achieving children between 10 and 13 years of age with tests of reading skills, language comprehension, and dichotic listening to consonant-vowel (CV) syllables. A high correlation between the language scores and the dichotic listening performance was expected. However, since the left ear score is believed to reflect error when assessing language laterality, covariation was expected for the right ear scores only. In addition, directing attention to one ear input was believed to reduce the influence of random factors, and thus provide a more precise estimate of left hemisphere language capacity. Thus, a stronger correlation between language comprehension skills and dichotic listening performance when attending to the right ear was expected. The analyses yielded a positive correlation between the right ear score in DL and language comprehension, an effect that was stronger when attending to the right ear. The present results confirm the assumption that dichotic listening with CV syllables measures an aspect of language perception and language skills that is related to general language comprehension.
Bruder, Gerard E; Stewart, Jonathan W; McGrath, Patrick J; Deliyannides, Deborah; Quitkin, Frederic M
2004-09-01
Patients having a depressive disorder vary widely in their therapeutic responsiveness to a selective serotonin reuptake inhibitor (SSRI), but there are no clinical predictors of treatment outcome. Studies using dichotic listening, electrophysiologic and neuroimaging measures suggest that pretreatment differences among depressed patients in functional brain asymmetry are related to responsiveness to antidepressants. Two new studies replicate differences in dichotic listening asymmetry between fluoxetine responders and nonresponders, and demonstrate the importance of gender in this context. Right-handed outpatients who met DSM-IV criteria for major depression, dysthymia, or depression not otherwise specified were tested on dichotic fused-words and complex tones tests before completing 12 weeks of fluoxetine treatment. Perceptual asymmetry (PA) scores were compared for 75 patients (38 women) who responded to treatment and 39 patients (14 women) who were nonresponders. Normative data were also obtained for 101 healthy adults (61 women). Patients who responded to fluoxetine differed from nonresponders and healthy adults in favoring left- over right-hemisphere processing of dichotic stimuli, and this difference was dependent on gender and test. Heightened left-hemisphere advantage for dichotic words in responders was present among women but not men, whereas reduced right-hemisphere advantage for dichotic tones in responders was present among men but not women. Pretreatment PA was also predictive of change in depression severity following treatment. Responder vs nonresponder differences for verbal dichotic listening in women and nonverbal dichotic listening in men are discussed in terms of differences in cognitive function, hemispheric organization, and neurotransmitter function.
Markevych, Vladlena; Asbjørnsen, Arve E; Lind, Ola; Plante, Elena; Cone, Barbara
2011-07-01
The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects aged 18 to 39 years, balanced for gender, with normal hearing and no known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV syllables under nonforced, attention-right, and attention-left conditions. Transient evoked otoacoustic emissions (TEOAEs) were recorded for both ears, with and without the presentation of contralateral broadband noise. The main finding was a strong negative correlation between language laterality as measured with the dichotic listening task and the TEOAE responses. The findings support a hypothesis of shared variance between central and peripheral auditory lateralities, and contribute to the attentional theory of auditory lateralization. The results have implications for the understanding of the corticofugal efferent control of cochlear activity.
Benchmarks for the Dichotic Sentence Identification test in Brazilian Portuguese for ear and age.
Andrade, Adriana Neves de; Gil, Daniela; Iorio, Maria Cecilia Martinelli
2015-01-01
Dichotic listening tests should be administered in the local language and adapted for the population. The aim was to standardize the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners, comparing performance by age and ear. This prospective study included 200 normal listeners divided into four groups according to age: 13-19 years (GI), 20-29 years (GII), 30-39 years (GIII), and 40-49 years (GIV). The Dichotic Sentence Identification test was applied in four stages: training, binaural integration, and directed attention to the right and to the left. Better results for the right ear were observed in the binaural integration stages in all assessed groups. There was a negative correlation between age and percentage of correct responses in both ears for free report and training. The worst performance in all stages of the test was observed for the 40-49 year age group. Reference values for the Brazilian Portuguese version of the Dichotic Sentence Identification test in normal listeners aged 13-49 years were established according to age, ear, and test stage; they should be used as benchmarks when evaluating individuals with these characteristics.
Interictal and Postictal Performances on Dichotic Listening Test in Children with Focal Epilepsy
ERIC Educational Resources Information Center
Carlsson, G.; Wiegand, G.; Stephani, U.
2011-01-01
The dichotic listening test (DL) is an important tool for disclosing speech dominance in healthy subjects and in clinical cases. The aim of this study was to probe whether focal epilepsy in children produces a corresponding suppression of ear reports contralateral to the seizure onset site. Thus, 15 children and adolescents with clinically and…
Testing the Benefits of Neurofeedback on Selective Attention Measured Through Dichotic Listening.
Gadea, Marien; Aliño, Marta; Garijo, Evelio; Espert, Raul; Salvador, Alicia
2016-06-01
The electrophysiological changes after a single session of neurofeedback training (↑SMR/↓Theta) and their effects on executive attention during a dichotic listening test with forced-attention procedures were measured in a sample of 20 healthy women. A double-blind pre-post design, including a group receiving sham neurofeedback, minimized extraneous influences. The Moment × Group interaction was significant, indicating an enhancement of the SMR band after real neurofeedback. The dichotic listening scores correlated with the amplitude of the Beta band at baseline. Performance in the forced-left attentional condition of dichotic listening improved significantly and correlated positively with the post-training enhancement of the SMR band. The sham neurofeedback group also improved DL scores, so the benefits of neurofeedback training for cognitive performance could not be unambiguously established. It is concluded that the protocol showed good independence and acceptable trainability in modifying the EEG, but limited interpretability regarding cognitive outcomes.
Dichotic Listening and Left-Right Confusion
ERIC Educational Resources Information Center
Hirnstein, Marco
2011-01-01
The present study examined the relationship between individual differences in dichotic listening (DL) and the susceptibility to left-right confusion (LRC). Thirty-six men and 59 women completed a consonant-vowel DL test, a behavioral LRC task, and an LRC self-rating questionnaire. Significant negative correlations between overall DL accuracy and…
Dichotic Listening Deficits in Children with Dyslexia
ERIC Educational Resources Information Center
Moncrieff, Deborah W.; Black, Jeffrey R.
2008-01-01
Several auditory processing deficits have been reported in children with dyslexia. In order to assess for the presence of a binaural integration type of auditory processing deficit, dichotic listening tests with digits, words and consonant-vowel (CV) pairs were administered to two groups of right-handed 11-year-old children, one group diagnosed…
Vanhoucke, Elodie; Cousin, Emilie; Baciu, Monica
2013-03-01
Growing evidence suggests that age affects the interhemispheric representation of language. The dichotic listening test allows assessment of lateralization for spoken language and generally reveals a right-ear/left-hemisphere (LH) predominance for language in young adults. Published results are mixed: some studies report increasing LH predominance in the elderly, others stable LH language lateralization. The aim of this study was to characterize the main pattern of results concerning the effect of normal aging on hemispheric specialization for language, as assessed with the dichotic listening test. A meta-analysis based on 11 studies was performed. Interhemispheric asymmetry does not appear to increase with age. A supplementary qualitative analysis suggests that the right-ear advantage increases between 40 and 49 years of age and becomes stable or decreases after 55 years, suggesting a right-ear/LH decline.
ERIC Educational Resources Information Center
Gadea, Marien; Marti-Bonmati, Luis; Arana, Estanislao; Espert, Raul; Salvador, Alicia; Casanova, Bonaventura
2009-01-01
This study conducted a one-year follow-up of 13 early-onset, slightly disabled Relapsing-Remitting Multiple Sclerosis (RRMS) patients, evaluating both corpus callosum (CC) area measurements on a midsagittal Magnetic Resonance (MR) image and Dichotic Listening (DL) testing with stop consonant-vowel (C-V) syllables. Patients showed a significant progressive…
Hypothalamic digoxin, hemispheric chemical dominance, and sarcoidosis.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-11-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin, dolichol, and ubiquinone. This pathway was assessed in patients with systemic sarcoidosis. All 15 patients with sarcoidosis were right-handed/left hemispheric dominant by the dichotic listening test. The pathway was also studied in normal right hemispheric, left hemispheric, and bihemispheric dominant individuals for comparison, to determine the role of hemispheric dominance in the pathogenesis of sarcoidosis. In patients with sarcoidosis there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in these patients. The neurotransmitter/digoxin-mediated increase in intracellular calcium inducing immune activation, ubiquinone deficiency-related mitochondrial dysfunction/free radical generation, and increased dolichol-related altered glycoconjugate metabolism/endogenous self-glycoprotein antigen generation are crucial to the pathogenesis of sarcoidosis. The biochemical patterns obtained in sarcoidosis are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test, yet all the patients with sarcoidosis were right-handed/left hemispheric dominant by that test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Sarcoidosis occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Dichotic listening during forced-attention in a patient with left hemispherectomy.
Wester, K; Hugdahl, K; Asbjørnsen, A
1991-02-01
A young left-handed girl with an extensive posttraumatic lesion in the left hemisphere was tested with dichotic listening (DL) under three different attentional instructions. The major aim of the study was to evaluate a structural vs attentional explanation for dichotic listening. As both her expressive and receptive language functions were intact after the lesion, it was assumed that the right hemisphere was the language-dominant one. In the free-report condition, she was free to divert attention to and to report from both ear inputs. In the forced-right condition, she was instructed to attend to and report only from the right ear input. In the forced-left condition, she was instructed to attend to and to report only from the left-ear input. Her performance was compared with data from a previously collected sample of normal left-handed females. Analysis showed that the patient, in contrast to the normal sample, revealed a complete right-ear extinction phenomenon, irrespective of attentional instruction. Furthermore, she showed superior correct reports from the left ear, compared with those of the normal sample, also irrespective of attentional instruction. It is concluded that these results support a structural, rather than attentional explanation for the right-ear advantage (REA) typically observed in dichotic listening. The utility of validating the dichotic listening technique on patients with brain lesions is discussed.
Sex Differences in Dichotic Listening
ERIC Educational Resources Information Center
Voyer, Daniel
2011-01-01
The present study quantified the magnitude of sex differences in perceptual asymmetries as measured with dichotic listening. This was achieved by means of a meta-analysis of the literature dating back from the initial use of dichotic listening as a measure of laterality. The meta-analysis included 249 effect sizes pertaining to sex differences and…
Dichotic and dichoptic digit perception in normal adults.
Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T
2011-06-01
Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and to make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to this issue; herein, we use dichotic listening and dichoptic viewing of visually presented digits to make this comparison. The purpose was to evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. The design was a multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Participants were thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index.
The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction-mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as "speech related output mechanisms" and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects.
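Several of the studies above reduce right- and left-ear correct scores to a single laterality index. A minimal sketch of the formula commonly used in dichotic-listening work, LI = (R − L) / (R + L) × 100, follows; this is an assumed, generic metric for illustration, not necessarily the exact index computed in any particular study cited here:

```python
# Generic laterality index for dichotic-listening scores (assumed formula):
#   LI = (R - L) / (R + L) * 100
# Positive values indicate a right-ear advantage (REA);
# negative values indicate a left-ear advantage (LEA).

def laterality_index(right_correct: int, left_correct: int) -> float:
    """Return the percentage laterality index for one listener."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Example: 20 correct right-ear reports vs. 12 correct left-ear reports
print(laterality_index(20, 12))  # -> 25.0 (a right-ear advantage)
```

A positive index of this kind is what the abstracts refer to as a right-ear advantage; group analyses then correlate it with other measures (e.g., language comprehension or TEOAE asymmetry).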
Dichotic listening in patients with splenial and nonsplenial callosal lesions.
Pollmann, Stefan; Maertens, Marianne; von Cramon, D Yves; Lepsien, Joeran; Hugdahl, Kenneth
2002-01-01
The authors found splenial lesions to be associated with left ear suppression in dichotic listening of consonant-vowel syllables. This was found in both a rapid presentation dichotic monitoring task and a standard dichotic listening task, ruling out attentional limitations in the processing of high stimulus loads as a confounding factor. Moreover, directed attention to the left ear did not improve left ear target detection in the patients, independent of callosal lesion location. The authors' data may indicate that auditory callosal fibers pass through the splenium more posteriorly than previously thought. However, further studies should investigate whether callosal fibers between primary and secondary auditory cortices, or between higher level multimodal cortices, are vital for the detection of left ear targets in dichotic listening.
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of anatomo-clinical correlates of extinction by using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Both tasks consist of the simultaneous presentation of word pairs: in the dichotic task, one word is presented to each ear; in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role.
Watanabe, S; Tasaki, H; Hojo, K; Yoshimura, I; Sato, T; Nakaoka, T; Iwabuchi, T
1982-06-01
The authors carried out neuropsychological studies using a tachistoscope and the dichotic listening test on a subject who had undergone transection of the posterior part of the corpus callosum. For tachistoscopic recognition, the stimulus material comprised various Japanese characters (katakana, hiragana, kanji), various faces (variations of eyebrow and mouth form), and various line slopes. Table 1 shows the results for the cases (the subject was the present case; subject 1 and subject 2 were previously reported cases). The present subject's performance on the Japanese character tasks showed greater right-visual-field superiority than that of subjects 1 and 2. For auditory recognition, the tasks used for the dichotic listening test were as follows (Tables 2, 3, 4). On different digits (three pairs), the subject showed greater right-ear superiority (right ear: 61.1; left ear: 5.9) than subjects 1 and 2.
Hypothalamic digoxin, hemispheric chemical dominance, and interstitial lung disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-10-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin, dolichol, and ubiquinone. This pathway was assessed in patients with idiopathic pulmonary fibrosis and in individuals of differing hemispheric dominance, to determine the role of hemispheric dominance in the pathogenesis of idiopathic pulmonary fibrosis. All 15 cases of interstitial lung disease were right-handed/left hemispheric dominant by the dichotic listening test. The isoprenoidal metabolites (digoxin, dolichol, and ubiquinone), RBC membrane Na(+)-K+ ATPase activity, serum magnesium, tyrosine/tryptophan catabolic patterns, free radical metabolism, glycoconjugate metabolism, and RBC membrane composition were assessed in idiopathic pulmonary fibrosis as well as in individuals with differing hemispheric dominance. In patients with idiopathic pulmonary fibrosis there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in patients with idiopathic pulmonary fibrosis. Isoprenoid pathway dysfunction contributes to the pathogenesis of idiopathic pulmonary fibrosis. The biochemical patterns obtained in interstitial lung disease are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test. However, all the patients with interstitial lung disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Interstitial lung disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Extinction of auditory stimuli in hemineglect: Space versus ear.
Spierer, Lucas; Meuli, Reto; Clarke, Stephanie
2007-02-01
Unilateral extinction of auditory stimuli, a key feature of the neglect syndrome, was investigated in 15 patients with right (11), left (3) or bilateral (1) hemispheric lesions using a verbal dichotic condition, in which each ear simultaneously received one word, and an interaural-time-difference (ITD) diotic condition, in which both ears received both words lateralised by means of ITD. Additional investigations included sound localisation, visuo-spatial attention and general cognitive status. Five patients presented a significant asymmetry in the ITD diotic test, due to a decrease of left hemispace reporting, but no asymmetry in dichotic listening. Six other patients presented a significant asymmetry in the dichotic test, due to a significant decrease of left or right ear reporting, but no asymmetry in diotic listening. Ten of the above patients presented mild to severe deficits in sound localisation, and eight showed signs of visuo-spatial neglect (three with selective asymmetry in the diotic and five in the dichotic task). Four other patients presented a significant asymmetry in both the diotic and dichotic listening tasks; three of them presented moderate deficits in localisation and all four moderate visuo-spatial neglect. Thus, extinction for left ear and left hemispace can doubly dissociate, suggesting distinct underlying neural processes. Furthermore, the co-occurrence with sound localisation disturbance and with visuo-spatial hemineglect speaks in favour of the involvement of multisensory attentional representations.
Elliott, D; Weeks, D J
1993-03-01
Adults with Down's syndrome and a group of undifferentiated mentally handicapped persons were examined using a free recall dichotic listening procedure to determine a laterality index for the perception of speech sounds. Subjects also performed both the visual and verbal portions of a standard apraxia battery. As in previous research, subjects with Down's syndrome tended to display a left ear advantage on the dichotic listening test. As well, they performed better on the apraxia battery when movements were cued visually rather than verbally. This verbal-motor disadvantage increased as the left ear dichotic listening advantage became more pronounced. It is argued that the verbal-motor difficulties experienced by persons with Down's syndrome stem from a dissociation of the functional systems responsible for speech perception and movement organization (Elliott and Weeks, 1990).
The effect of stimulus intensity on the right ear advantage in dichotic listening.
Hugdahl, Kenneth; Westerhausen, René; Alho, Kimmo; Medvedev, Svyatoslav; Hämäläinen, Heikki
2008-01-24
The dichotic listening test is a non-invasive behavioural technique for studying brain lateralization, and it has been shown that its results can be systematically modulated by varying stimulus properties (bottom-up effects) or attentional instructions (top-down effects). The goal of the present study was to further investigate bottom-up modulation by examining the effect of differences in right- or left-ear stimulus intensity on the ear advantage. For this purpose, the interaural intensity difference was varied in steps of 3 dB from -21 dB in favour of the left ear to +21 dB in favour of the right ear, including a no-difference baseline condition. Thirty-three right-handed adult participants with normal hearing acuity were tested. The dichotic listening paradigm was based on consonant-vowel stimulus pairs; only pairs with the same voicing (voiced or non-voiced) of the consonant sound were used. The results showed: (a) a significant right ear advantage (REA) for interaural intensity differences from +21 to -3 dB, (b) no ear advantage (NEA) at the -6 dB difference, and (c) a significant left ear advantage (LEA) for differences from -9 to -21 dB. It is concluded that the right ear advantage in dichotic listening to CV syllables withstands an interaural intensity difference of -9 dB before yielding to a significant left ear advantage. This finding could have implications for theories of auditory laterality and hemispheric asymmetry for phonological processing.
Focused attention in a simple dichotic listening task: an fMRI experiment.
Jäncke, Lutz; Specht, Karsten; Shah, Joni Nadim; Hugdahl, Kenneth
2003-04-01
Whole-head functional magnetic resonance imaging (fMRI) was used in nine neurologically intact subjects to measure the hemodynamic responses in the context of dichotic listening (DL). To eliminate the influence of verbal information processing, tones of different frequencies were used as stimuli. Three different dichotic listening tasks were used: subjects were instructed to concentrate either on the stimuli presented to both ears (DIV) or only to the left (FL) or right (FR) ear, and to monitor the auditory input for a specific target tone. When the target tone was detected, subjects indicated this by pressing a response button. Compared to the resting state, all dichotic listening tasks evoked strong hemodynamic responses within a distributed network comprising temporal, parietal, and frontal brain areas. Thus, it is clear that dichotic listening draws on various cognitive functions located within the dorsal and ventral streams of auditory information processing (i.e., the 'what' and 'where' streams). Comparing the three dichotic listening conditions with each other revealed significant differences only in the pre-SMA and within the left planum temporale area. The pre-SMA was generally more strongly activated during the DIV condition than during the FR and FL conditions. Within the planum temporale, the strongest activation was found during the FR condition and the weakest during the DIV condition. These findings were taken as evidence that even a simple dichotic listening task such as the one used here makes use of a distributed neural network comprising the dorsal and ventral streams of auditory information processing. In addition, these results support the previous assumption that planum temporale activation is modulated by attentional strategies. Finally, the present findings revealed that the pre-SMA, which is mostly thought to be involved in higher-order motor control processes, is also involved in cognitive processes operative during dichotic listening.
Dichotic Hearing in Elderly Hearing Aid Users Who Choose to Use a Single-Ear Device
Ribas, Angela; Mafra, Nicoli; Marques, Jair; Mottecy, Carla; Silvestre, Renata; Kozlowski, Lorena
2014-01-01
Introduction: Elderly individuals with bilateral hearing loss often do not use hearing aids in both ears. Dichotic tests to assess hearing in this group may therefore help identify degenerative processes peculiar to aging and guide hearing aid selection. Objective: To evaluate dichotic hearing in a group of elderly hearing aid users who did not adapt to binaural devices, and to verify the correlation between ear dominance and the side chosen for the device. Methods: A cross-sectional descriptive study involving 30 subjects from 60 to 81 years old, of both genders, with an indication for bilateral hearing aids for over 6 months but using only a single device. Medical history, pure tone audiometry, and dichotic listening tests were completed. Results: All subjects (100%) of the sample failed the dichotic digit test; 94% preferred to use the device in one ear because bilateral use bothered them and affected speech understanding, and for 6% the concern was aesthetics. In the dichotic digit test there was significant predominance of the right ear over the left, and a significant correlation between the dominant side and the ear chosen by the participant for use of the hearing aid. Conclusion: In elderly subjects with bilateral hearing loss who have chosen to use only one hearing aid, there is dominance of the right ear over the left in dichotic listening tasks, and a correlation between the dominant ear and the ear chosen for hearing aid fitting. PMID:25992120
Cued Dichotic Listening with Right-Handed, Left-Handed, Bilingual and Learning-Disabled Children.
ERIC Educational Resources Information Center
Obrzut, John E.; And Others
This study used cued dichotic listening to investigate differences in language lateralization among right-handed (control), left handed, bilingual, and learning disabled children. Subjects (N=60) ranging in age from 7-13 years were administered a consonant-vowel-consonant dichotic paradigm with three experimental conditions (free recall, directed…
Auditory temporal-order processing of vowel sequences by young and elderly listeners.
Fogerty, Daniel; Humes, Larry E; Kewley-Port, Diane
2010-04-01
This project focused on the individual differences underlying observed variability in temporal processing among older listeners. Four measures of vowel temporal-order identification were completed by young (N=35; 18-31 years) and older (N=151; 60-88 years) listeners. Experiments used forced-choice, constant-stimuli methods to determine the smallest stimulus onset asynchrony (SOA) between brief (40 or 70 ms) vowels that enabled identification of a stimulus sequence. Four words (pit, pet, pot, and put) spoken by a male talker were processed to serve as vowel stimuli. All listeners identified the vowels in isolation with better than 90% accuracy. Vowel temporal-order tasks included the following: (1) monaural two-item identification, (2) monaural four-item identification, (3) dichotic two-item vowel identification, and (4) dichotic two-item ear identification. Results indicated that older listeners were more variable and performed more poorly than young listeners on vowel-identification tasks, although a large overlap in distributions was observed. Both age groups performed similarly on the dichotic ear-identification task. For both groups, the monaural four-item and dichotic two-item tasks were significantly harder than the monaural two-item task. Older listeners' SOA thresholds improved with additional stimulus exposure and shorter dichotic stimulus durations. Individual differences in temporal-order performance among the older listeners demonstrated the influence of cognitive measures, but not audibility or age.
A Performance Test of Individual Differences in Selective Attention.
ERIC Educational Resources Information Center
Wahl, Otto
A reliable, easily administered performance test of selective attentional ability was sought. A monaural listening task provided a baseline control for adequate hearing and memory; a dichotic listening task then provided indices of ability to focus attention and resist distraction while a simultaneous listening task provided measures of ability to…
Perception of Simultaneous Auditive Contents
NASA Astrophysics Data System (ADS)
Tschinkel, Christian
Based on a model of pluralistic music, we may approach an aesthetic concept of music that employs dichotic listening situations. The concept of dichotic listening stems from neuropsychological test conditions in lateralization experiments on the brain hemispheres, in which each ear is exposed to different auditory content. In the framework of such sound experiments, the question that primarily arises concerns a new kind of hearing, one that is also conceivable without earphones as a spatial composition, and that may superficially be linked to its degree of complexity. From a psychological perspective, the degree of complexity is correlated with the degree of attention given, with the listener's musical or listening experience, and with their level of appreciation. Therefore, we may also expect a measurable increase in physical activity. Furthermore, a dialectic interpretation of such "dualistic" music presents itself.
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-07-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were assessed in the dyslexic children. We presented the dyslexic children with a phonological short-term memory task and a phonemic awareness task to quantify their phonological skills. Visual attention spans correlated positively with individual scores obtained on the dichotic listening task while phonological skills did not correlate with either dichotic scores or visual attention span measures. Moreover, all the dyslexic children with a dichotic listening deficit showed a simultaneous visual processing deficit, and a substantial number of dyslexic children exhibited phonological processing deficits whether or not they exhibited low dichotic listening scores. These findings suggest that processing simultaneous auditory stimuli may be impaired in dyslexic children regardless of phonological processing difficulties and be linked to similar problems in the visual modality.
Reliability of Laterality Effects in a Dichotic Listening Task with Words and Syllables
ERIC Educational Resources Information Center
Russell, Nancy L.; Voyer, Daniel
2004-01-01
Large and reliable laterality effects have been found using a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations…
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of the semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should therefore take into consideration the semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response conditions for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.
Dlouha, Olga; Novak, Alexej; Vokral, Jan
2007-06-01
The aim of this project was to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2, and 63% in test 3; results of the control group were 92% in test 1, 93% in test 2, and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
Swanson, H L
1987-01-01
Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
Mahdavi, Mohammad Ebrahim; Pourbakht, Akram; Parand, Akram; Jalaie, Shohreh
2018-03-01
Evaluation of dichotic listening to digits is a common part of many studies for diagnosing and managing auditory processing disorders in children. Previous researchers have verified the test-retest relative reliability of dichotic digits results in normal children and adults. However, detecting intervention-related changes in the ear scores after dichotic listening training requires information regarding the typical trial-to-trial variation of individual ear scores, which is estimated using indices of absolute reliability. Previous studies have not addressed the absolute reliability of dichotic listening results. The aim was to compare the results of the Persian randomized dichotic digits test (PRDDT), and its relative and absolute indices of reliability, between typically achieving (TA) and learning-disabled (LD) children in a repeated measures observational study. Fifteen LD children aged 7-12 yr were recruited from a previously performed study. The control group consisted of 15 TA schoolchildren aged 8-11 yr. The PRDDT was administered to the children under a free recall condition in two test sessions 7-12 days apart. We compared the average ear scores and ear advantage between TA and LD children. Relative indices of reliability included Pearson's correlation and intraclass correlation (ICC 2,1) coefficients; absolute reliability was evaluated by calculating the standard error of measurement (SEM) and minimal detectable change (MDC) from the raw ear scores. The Pearson correlation coefficient indicated that in both groups of children the ear scores of the test and retest sessions were strongly and positively (greater than +0.8) correlated. The ear scores showed excellent ICC coefficients of consistency (0.78-0.82) and fair to excellent ICC coefficients of absolute agreement (0.62-0.74) in TA children, and excellent ICC coefficients of consistency and absolute agreement in LD children (0.76-0.87).
SEM and SEM% of the ear scores in TA children were 1.46 and 1.44% for the right ear and 4.68 and 5.47% for the left ear. SEM and SEM% of the ear scores in LD children varied from 4.55 and 5.88% for the right ear to 7.56 and 12.81% for the left ear. MDC and MDC% of the ear scores in TA children varied from 4.03 and 3.99% for the right ear to 12.93 and 15.13% for the left ear. MDC and MDC% of the ear scores in LD children varied from 12.57 and 16.25% for the right ear to 20.89 and 35.39% for the left ear. The LD children showed test-retest relative reliability as high as that of the TA children in the ear scores measured by the PRDDT. However, within-subject variations of the ear scores, calculated by indices of absolute reliability, were considerably higher in LD children than in TA children. The results of the current study could have implications for detecting real training-related changes in the ear scores. American Academy of Audiology
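The absolute-reliability indices used in this abstract follow the standard definitions: SEM = SD × √(1 − ICC), and the minimal detectable change at the 95% confidence level, MDC95 = 1.96 × √2 × SEM. A minimal sketch of these formulas, using illustrative numbers rather than the study's raw ear scores:

```python
import math

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value: float) -> float:
    """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Illustrative values only (not the study's data):
sd_scores = 3.3   # between-subject SD of an ear score, in percentage points
icc = 0.80        # test-retest intraclass correlation
s = sem(sd_scores, icc)
print(round(s, 2))         # -> 1.48 (trial-to-trial measurement noise)
print(round(mdc95(s), 2))  # -> 4.09 (smallest change exceeding that noise)
```

Any observed score change smaller than the MDC is indistinguishable from test-retest noise, which is why the abstract's larger MDC values for LD children imply that bigger training effects are needed before a change can be called real.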
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-11-01
The isoprenoid pathway, including endogenous digoxin, was assessed in systemic lupus erythematosus (SLE). All the patients with SLE were right-handed/left hemispheric dominant by the dichotic listening test. This was also studied for comparison in patients with right hemispheric and left hemispheric dominance. The isoprenoid pathway was upregulated with increased digoxin synthesis in patients with SLE and in those with right hemispheric dominance. In this group of patients (i) the tryptophan catabolites were increased and the tyrosine catabolites reduced, (ii) the dolichol and glycoconjugate levels were elevated, (iii) lysosomal stability was reduced, (iv) ubiquinone levels were low and free radical levels increased, and (v) the membrane cholesterol:phospholipid ratios were increased and membrane glycoconjugates reduced. On the other hand, in patients with left hemispheric dominance the reverse patterns were obtained. The biochemical patterns obtained in SLE are similar to those obtained in left-handed/right hemispheric chemically dominant individuals. But all the patients with SLE were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. SLE occurs in right hemispheric chemically dominant individuals, and is a reflection of altered brain function. The role of the isoprenoid pathway in the pathogenesis of SLE and its relation to hemispheric dominance is discussed.
Selective Attention with Human Earphones.
ERIC Educational Resources Information Center
Goodwin, C. James
1988-01-01
Describes a method for demonstrating dichotic listening tasks in the classroom which involves substituting live readers for tape recorded messages to allow direct student observation of various selective attention phenomena. Concludes that live readers offer pedagogical benefits that make them superior to tape recorded dichotic listening tasks.…
Hypothalamic digoxin, hemispheric chemical dominance, and inflammatory bowel disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-09-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin, dolichol, and ubiquinone. It was considered pertinent to assess the pathway in inflammatory bowel disease (ulcerative colitis and regional ileitis). Since endogenous digoxin can regulate neurotransmitter transport, the pathway and the related cascade were also assessed in individuals with differing hemispheric dominance to find out the role of hemispheric dominance in its pathogenesis. All the patients with inflammatory bowel disease were right-handed/left hemispheric dominant by the dichotic listening test. The following parameters were measured in patients with inflammatory bowel disease and in individuals with differing hemispheric dominance: (1) plasma HMG CoA reductase, digoxin, dolichol, ubiquinone, and magnesium levels; (2) tryptophan/tyrosine catabolic patterns; (3) free-radical metabolism; (4) glycoconjugate metabolism; and (5) membrane composition and RBC membrane Na+-K+ ATPase activity. Statistical analysis was done by ANOVA. In patients with inflammatory bowel disease there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in these groups of patients. Inflammatory bowel disease is associated with an upregulated isoprenoid pathway and elevated digoxin secretion from the hypothalamus. This can contribute to immune activation, defective glycoprotein bowel antigen presentation, and autoimmunity, as well as a schizophreniform psychosis, important in its pathogenesis. The biochemical patterns obtained in inflammatory bowel disease are similar to those obtained in left-handed/right hemispheric dominant individuals by the dichotic listening test.
But all the patients with inflammatory bowel disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Inflammatory bowel disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Roup, Christina M; Leigh, Elizabeth D
2015-06-01
The purpose of the present study was to examine individual differences in binaural processing across the adult life span. Sixty listeners (aged 23-80 years) with symmetrical hearing were tested. Binaural behavioral processing was measured by the Words-in-Noise Test, the 500-Hz masking level difference, and the Dichotic Digit Test. Electrophysiologic responses were assessed by the auditory middle latency response binaural interaction component. No correlations among binaural measures were found. Age accounted for the greatest amount of variability in speech-in-noise performance. Age was significantly correlated with the Words-in-Noise Test binaural advantage and dichotic ear advantage. Partial correlations, however, revealed that this was an effect of hearing status rather than age per se. Inspection of individual results revealed that 20% of listeners demonstrated reduced binaural performance for at least 2 of the binaural measures. The lack of significant correlations among variables suggests that each is an important measurement of binaural abilities. For some listeners, binaural processing was abnormal, reflecting a binaural processing deficit not identified by monaural audiologic tests. The inclusion of a binaural test battery in the audiologic evaluation is supported given that these listeners may benefit from alternative forms of audiologic rehabilitation.
ERIC Educational Resources Information Center
Fernandes, M. A.; Smith, M. L.; Logan, W.; Crawley, A.; McAndrews, M. P.
2006-01-01
We investigated the relationship between ear advantage scores on the Fused Dichotic Words Test (FDWT), and laterality of activation in fMRI using a verb generation paradigm in fourteen children with epilepsy. The magnitude of the laterality index (LI), based on spatial extent and magnitude of activation in classical language areas (BA 44/45,…
Cognitive Conflict and Inhibition in Primed Dichotic Listening
ERIC Educational Resources Information Center
Saetrevik, Bjorn; Specht, Karsten
2009-01-01
In previous behavioral studies, a prime syllable was presented just prior to a dichotic syllable pair, with instructions to ignore the prime and report one syllable from the dichotic pair. When the prime matched one of the syllables in the dichotic pair, response selection was biased towards selecting the unprimed target. The suggested mechanism…
Use of the Dichotic Listening Technique with Learning Disabilities
ERIC Educational Resources Information Center
Obrzut, John E.; Mahoney, Emery B.
2011-01-01
Dichotic listening (DL) techniques have been used extensively as a non-invasive procedure to assess language lateralization among children with and without learning disabilities (LD), and with individuals who have other auditory system related brain disorders. Results of studies using DL have indicated that language is lateralized in children with…
Dichotic listening in patients with situs inversus: brain asymmetry and situs asymmetry.
Tanaka, S; Kanzaki, R; Yoshibayashi, M; Kamiya, T; Sugishita, M
1999-06-01
In order to investigate the relation between situs asymmetry and functional asymmetry of the human brain, a consonant-vowel syllable dichotic listening test known as the Standard Dichotic Listening Test (SDLT) was administered to nine subjects with situs inversus (SI) who ranged in age from 6 to 46 years (mean 21.8 years, S.D. = 15.6); the four males and five females all exhibited strong right-handedness. The SDLT was also used to study twenty-four age-matched normal subjects aged 6 to 48 years (mean 21.7 years, S.D. = 15.3); the twelve males and twelve females were all strongly right-handed and served as a control group. Eight of the nine subjects with SI (88.9%) reproduced sounds from the right ear more often than sounds from the left ear, a pattern called the right ear advantage (REA). The rate of REA in the control group was almost the same: nineteen of the twenty-four subjects (79.1%) showed an REA. The results of the present study suggest that the left-right reversal in situs inversus does not involve functional asymmetry of the brain. As such, the system that produces functional asymmetry in the human brain must establish laterality independently of situs asymmetry.
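Ear advantage in consonant-vowel dichotic tests like the SDLT is commonly summarized with a laterality index, LI = 100 × (R − L)/(R + L), where R and L are the numbers of correct reports from each ear and positive values indicate an REA. A minimal sketch under that common convention (the scores below are hypothetical, not taken from the study):

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Laterality index in percent; positive = right ear advantage (REA)."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical subject: 22 correct right-ear reports, 14 correct left-ear reports
li = laterality_index(22, 14)
print(round(li, 2))  # -> 22.22, i.e., a right ear advantage
print(li > 0)        # -> True
```

Classifying each subject by the sign of this index is one common way to obtain the kind of REA counts (e.g., eight of nine subjects) that the abstract reports.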
Dichotic Listening and School Performance in Dyslexia
ERIC Educational Resources Information Center
Helland, Turid; Asbjornsen, Arve E.; Hushovd, Aud Ellen; Hugdahl, Kenneth
2008-01-01
This study focused on the relationship between school performance and performance on a dichotic listening (DL) task in dyslexic children. Dyslexia is associated with impaired phonological processing, related to functions in the left temporal lobe. DL is a frequently used task to assess functions of the left temporal lobe. Due to the predominance…
ERIC Educational Resources Information Center
Soveri, Anna; Laine, Matti; Hamalainen, Heikki; Hugdahl, Kenneth
2011-01-01
It has been claimed that due to their experience in controlling two languages, bilinguals exceed monolinguals in certain executive functions, especially inhibition of task-irrelevant stimuli. Here we investigated the effects of bilingualism on an executive phonological task, namely the forced-attention dichotic listening task with syllabic…
Hormones and Dichotic Listening: Evidence from the Study of Menstrual Cycle Effects
ERIC Educational Resources Information Center
Cowell, Patricia E.; Ledger, William L.; Wadnerkar, Meghana B.; Skilling, Fiona M.; Whiteside, Sandra P.
2011-01-01
This report presents evidence for changes in dichotic listening asymmetries across the menstrual cycle, which replicate studies from our laboratory and others. Increases in the right ear advantage (REA) were present in women at phases of the menstrual cycle associated with higher levels of ovarian hormones. The data also revealed correlations…
ERIC Educational Resources Information Center
Kimura, Doreen
2011-01-01
In this paper Doreen Kimura gives a personal history of the "right-ear effect" in dichotic listening. The focus is on the early ground-breaking papers, describing how she did the first dichotic listening studies relating the effects to brain asymmetry. The paper also gives a description of the visual half-field technique for lateralized stimulus…
Hypothalamic digoxin, hemispheric chemical dominance, and mesenteric artery occlusion.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-12-01
The role of the isoprenoid pathway in vascular thrombosis, especially mesenteric artery occlusion, and its relation to hemispheric dominance was assessed in this study. The following parameters were measured in patients with mesenteric artery occlusion and individuals with right hemispheric, left hemispheric, and bihemispheric dominance: (1) plasma HMG CoA reductase, digoxin, dolichol, ubiquinone, and magnesium levels; (2) tryptophan/tyrosine catabolic patterns; (3) free radical metabolism; (4) glycoconjugate metabolism; and (5) membrane composition. In patients with mesenteric artery occlusion there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, low ubiquinone, and elevated free radical levels. The RBC membrane Na(+)-K+ ATPase activity and serum magnesium were decreased. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites in the serum. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in these patients. The biochemical patterns obtained in mesenteric artery occlusion are similar to those obtained in left-handed/right hemispheric dominant individuals by the dichotic listening test. But all the patients with mesenteric artery occlusion were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Mesenteric artery occlusion occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function. Hemispheric chemical dominance may thus control the risk of developing vascular thrombosis in individuals.
Reliability and Magnitude of Laterality Effects in Dichotic Listening with Exogenous Cueing
ERIC Educational Resources Information Center
Voyer, Daniel
2004-01-01
The purpose of the present study was to replicate and extend to word recognition previous findings of reduced magnitude and reliability of laterality effects when exogenous cueing was used in a dichotic listening task with syllable pairs. Twenty right-handed undergraduate students with normal hearing (10 females, 10 males) completed a dichotic…
The Effects of Background Noise on Dichotic Listening to Consonant-Vowel Syllables
ERIC Educational Resources Information Center
Sequeira, Sarah Dos Santos; Specht, Karsten; Hamalainen, Heikki; Hugdahl, Kenneth
2008-01-01
Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise…
A Forced-Attention Dichotic Listening fMRI Study on 113 Subjects
ERIC Educational Resources Information Center
Kompus, Kristiina; Specht, Karsten; Ersland, Lars; Juvodden, Hilde T.; van Wageningen, Heidi; Hugdahl, Kenneth; Westerhausen, Rene
2012-01-01
We report fMRI and behavioral data from 113 subjects on attention and cognitive control using a variant of the classic dichotic listening paradigm with pairwise presentations of consonant-vowel syllables. The syllable stimuli were presented in a block-design while subjects were in the MR scanner. The subjects were instructed to pay attention to…
Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users.
Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy
2015-08-26
Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them, thus providing greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference for each piece in different listening modes. Results indicated that dichotic presentation produced small but significant improvements in subjective ratings of perceived clarity and preference. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings or their judgments of intended emotion. © The Author(s) 2015.
Hypothalamic digoxin, hemispheric chemical dominance, and chronic bronchitis emphysema.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-09-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin (membrane sodium-potassium ATPase inhibitor, immunomodulator, and regulator of neurotransmitter/amino acid transport), dolichol (regulates N-glycosylation of proteins), and ubiquinone (free radical scavenger). The pathway was assessed in patients with chronic bronchitis emphysema. It was also assessed in patients with right hemispheric, left hemispheric, and bihemispheric dominance to find the role of hemispheric dominance in the pathogenesis of chronic bronchitis emphysema. All 15 patients with chronic bronchitis emphysema were right-handed/left hemispheric dominant by the dichotic listening test. In patients with chronic bronchitis emphysema there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate levels of the RBC membrane in patients with chronic bronchitis emphysema. The same biochemical patterns were obtained in individuals with right hemispheric dominance. Endogenous digoxin, by activating the calcineurin signal transduction pathway of T cells, can contribute to immune activation in chronic bronchitis emphysema. Increased free radical generation can also lead to immune activation. Endogenous synthesis of nicotine can contribute to the pathogenesis of the disease. Altered glycoconjugate metabolism and membranogenesis can lead to defective lysosomal stability, contributing to the disease process by increased release of lysosomal proteases. The role of endogenous digoxin and hemispheric dominance in the pathogenesis of chronic bronchitis emphysema and in the regulation of lung structure/function is discussed.
The biochemical patterns obtained in chronic bronchitis emphysema are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test. However, all the patients with chronic bronchitis emphysema were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Chronic bronchitis emphysema occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function. Hemispheric chemical dominance may play a role in the regulation of lung function and structure.
ERIC Educational Resources Information Center
Gadea, Marien; Espert, Raul; Salvador, Alicia; Marti-Bonmati, Luis
2011-01-01
Dichotic Listening (DL) is a valuable tool to study emotional brain lateralization. Regarding the perception of sadness and anger through affective prosody, the main finding has been a left ear advantage (LEA) for the sad but contradictory data for the anger prosody. Regarding an induced mood in the laboratory, its consequences upon DL were a…
Evaluation of 2 cognitive abilities tests in a dual-task environment
NASA Technical Reports Server (NTRS)
Vidulich, M. A.; Tsang, P. S.
1986-01-01
Most real-world operators are required to perform multiple tasks simultaneously. In some cases, such as flying a high-performance aircraft or troubleshooting a failing nuclear power plant, the operator's ability to time-share or "process in parallel" can be driven to extremes. This has created interest in selection tests of cognitive abilities. Two tests that have been suggested are the Dichotic Listening Task and the Cognitive Failures Questionnaire. Correlations between these test results and time-sharing performance were obtained, and the validity of these tests was examined. The primary task was a tracking task with dynamically varying bandwidth. This was performed either alone or concurrently with either another tracking task or a spatial transformation task. The results were: (1) an unexpected negative correlation was detected between the two tests; (2) the lack of correlation between either test and task performance made the predictive utility of the test scores appear questionable; (3) pilots made more errors on the Dichotic Listening Task than college students.
Hemispheric Differences in Processing Dichotic Meaningful and Non-Meaningful Words
ERIC Educational Resources Information Center
Yasin, Ifat
2007-01-01
Classic dichotic-listening paradigms reveal a right-ear advantage (REA) for speech sounds as compared to non-speech sounds. This REA is assumed to be associated with a left-hemisphere dominance for meaningful speech processing. This study objectively probed the relationship between ear advantage and hemispheric dominance in a dichotic-listening…
Sætrevik, Bjørn
2012-01-01
The dichotic listening task is typically administered by presenting a consonant-vowel (CV) syllable to each ear and asking the participant to report the syllable heard most clearly. The results tend to show more reports of the right ear syllable than of the left ear syllable, an effect called the right ear advantage (REA). The REA is assumed to be due to the crossing over of auditory fibres and the processing of language stimuli being lateralised to left temporal areas. However, the tendency for most dichotic listening experiments to use only CV syllable stimuli limits the extent to which the conclusions can be generalised to also apply to other speech phonemes. The current study re-examines the REA in dichotic listening by using both CV and vowel-consonant (VC) syllables and combinations thereof. Results showed a replication of the REA response pattern for both CV and VC syllables, thus indicating that the general assumption of left-side localisation of processing can be applied for both types of stimuli. Further, on trials where a CV is presented in one ear and a VC is presented in the other ear, the CV is selected more often than the VC, indicating that these phonemes have an acoustic or processing advantage.
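The right ear advantage described above is conventionally quantified with a laterality index computed from correct left- and right-ear reports. A minimal sketch of this standard computation (the function name and example counts are illustrative, not taken from the study):

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Percent laterality index for dichotic listening.

    Positive values indicate a right ear advantage (REA),
    negative values a left ear advantage (LEA).
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports to score")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical participant: 38 correct right-ear vs 26 correct left-ear reports
li = laterality_index(38, 26)
print(li)  # 18.75 -> positive, i.e. an REA
```

The normalisation by the total keeps the index comparable across participants with different overall accuracy, which matters when comparing clinical groups with a bilateral deficit.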
Bruder, Gerard E; Haggerty, Agnes; Siegle, Greg J
2017-02-01
There are no commonly used clinical indicators of whether an individual will benefit from cognitive therapy (CT) for depression. A prior study found that a right ear (left hemisphere) advantage for perceiving dichotic words predicted CT response. This study replicates that finding at a different research center, in clinical trials that included clinically representative samples and community therapists. Right-handed individuals with unipolar major depressive disorder who subsequently received 12-14 weeks of CT at the University of Pittsburgh were tested on dichotic fused-words and complex-tones tests. Responders to CT showed twice the mean right ear advantage in dichotic fused-words performance of non-responders. Patients with a right ear advantage greater than the mean for healthy controls had an 81% response rate to CT, whereas those with performance lower than the control mean had a 46% response rate. Individuals with a right ear advantage, indicative of strong left hemisphere language dominance, may be better at utilizing the cognitive processes and left frontotemporal cortical regions critical to the success of CT for depression. Findings at two clinical research centers suggest that verbal dichotic listening may be a brief, inexpensive, easily automated, and readily disseminable test prognostic for response to CT across diverse clinical settings.
A comparison of aphasic and non-brain-injured adults on a dichotic CV-syllable listening task.
Shanks, J; Ryan, W
1976-06-01
A dichotic CV-syllable listening task was administered to a group of eleven non-brain-injured adults and to a group of eleven adult aphasics. The results of this study may be summarized as follows: (1) the group of non-brain-injured adults showed a slight right ear advantage for dichotically presented CV-syllables; (2) in comparison with the control group, the aphasic group showed a bilateral deficit in response to the dichotic CV-syllables, superimposed on a non-significant right ear advantage; (3) the aphasic group demonstrated a great deal of intersubject variability on the dichotic task, with six aphasics showing a right ear preference for the stimuli, whereas the non-brain-injured subjects performed more homogeneously; (4) the two subgroups of aphasics, a right ear advantage group and a left ear advantage group, performed significantly differently on the dichotic listening task; (5) single-correct data analysis proved valuable by restricting accuracy of report to trials in which there was true competition for the single left-hemispheric speech processor. These results were analyzed in terms of a functional model of auditory processing. In view of this model, the bilateral deficit in dichotic performance of the aphasic group was accounted for by the presence of a lesion within the dominant left hemisphere, where the speech signals from both ears converge for final processing. The right ear advantage shown by one aphasic subgroup was explained by a lesion interfering with the corpus callosal pathways from the left hemisphere; the left ear advantage observed within the other subgroup was explained by a lesion in the area of the auditory processor of the left hemisphere.
Auditory Processing Speed and Signal Detection in Schizophrenia
ERIC Educational Resources Information Center
Korboot, P. J.; Damiani, N.
1976-01-01
Two differing explanations of schizophrenic processing deficit were examined: Chapman and McGhie's and Yates'. Thirty-two schizophrenics, classified on the acute-chronic and paranoid-nonparanoid dimensions, and eight neurotics were tested on two dichotic listening tasks. (Editor)
Hemispheric Language Dominance of Language-Disordered, Articulation-Disordered, and Normal Children.
ERIC Educational Resources Information Center
Pettit, John M.; Helms, Suzanne B.
1979-01-01
The hemispheric dominance for language of three groups of six- to nine- year-olds (ten language-disordered, ten articulation-disordered, and ten normal children) was compared, and two dichotic listening tests (digits and animal names) were administered. (Author/CL)
Christianson, S A; Nilsson, L G; Silfvenius, H
1989-01-01
Dichotic listening tests were used to determine cerebral hemisphere memory functions in patients with complex partial seizures before, 10 days after, and 1-3 years after right (RTE) or left (LTE) temporal-lobe excisions. Control subjects were also tested on two occasions. The tests consisted of presenting a series of 12-word and 7-word lists alternately to the two ears while backward speech was presented to the other ear. Measures of immediate free recall, final free recall, final cued recall, and serial recall were employed. The results revealed: (a) both groups of patients were inferior to the control group on tests tapping long-term rather than short-term memory functions; (b) a right-ear advantage for RTE patients at postoperative testing; (c) the LTE group was more affected by surgery than the RTE group; and (d) a general improvement in recall performance from early to late postoperative testing. Taken together, these results indicate that the present dichotic test can be used as a non-invasive hemispheric memory test to complement invasive techniques in the diagnosis of patients considered for epilepsy surgery.
Hypothalamic digoxin, hemispheric chemical dominance, and peptic ulcer disease.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-10-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin-like factor (EDLF) (membrane sodium-potassium ATPase inhibitor and regulator of neurotransmitter transport), ubiquinone (free radical scavenger), and dolichol (regulator of glycoconjugate metabolism). The pathway was assessed in peptic ulcer and acid peptic disease, and its relation to hemispheric dominance was studied. The activity of HMG CoA reductase and the serum levels of EDLF, magnesium, tryptophan catabolites, and tyrosine catabolites were measured in acid peptic disease and in right hemispheric dominant, left hemispheric dominant, and bihemispheric dominant individuals. All the patients with peptic ulcer disease were right-handed/left hemispheric dominant by the dichotic listening test. The pathway was upregulated, with increased EDLF synthesis, in peptic ulcer disease (PUD). There was an increase in tryptophan catabolites and a reduction in tyrosine catabolites in these patients. Ubiquinone levels were low and free radical production was increased. Dolichol and glycoconjugate levels were increased and lysosomal stability was reduced in patients with acid peptic disease (APD). There was an increase in the cholesterol:phospholipid ratio, with decreased glycoconjugate levels, in the membranes of patients with PUD. Acid peptic disease represents an elevated EDLF state, which can modulate gastric acid secretion and the structure of the gastric mucous barrier. It can also lead to persistence of Helicobacter pylori infection. The biochemical patterns obtained in peptic ulcer disease are similar to those obtained in left-handed/right hemispheric chemically dominant individuals. However, all the patients with peptic ulcer disease were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Peptic ulcer disease occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Divergent Thinking and Hemispheric Dominance for Language Function among Preschool Children.
ERIC Educational Resources Information Center
Tegano, Deborah Walker; And Others
1983-01-01
An investigation of the relationship of hemispheric dominance (dichotic listening) and divergent thinking (Torrance Tests of Creative Thinking) with 27 preschool children indicates that divergent thinking is associated with right hemispheric dominance in children as young as four years. (Author/PN)
Morton, L L; Siegel, L S
1991-02-01
Twenty reading comprehension-disabled (CD) and 20 reading comprehension and word recognition-disabled (CWRD), right-handed male children were matched with 20 normal-achieving age-matched controls and 20 normal-achieving reading level-matched controls and tested for left ear report on dichotic listening tasks using digits and consonant-vowel combinations (CVs). Left ear report for CVs and digits did not correlate for any of the groups. Both reading-disabled groups showed lower left ear report on digits. On CVs the CD group showed a high left ear report but only when there were no priming precursors, such as directions to attend right first and to process digits first. Priming effects interfered with the processing of both digits and CVs. Theoretically, the CWRD group seems to be characterized by a depressed right hemisphere, whereas the CD group may have a more labile right hemisphere, perhaps tending to overengagement for CV tasks but vulnerable to situational precursors in the form of priming effects. Implications extend to (1) subtyping practices in research with the learning-disabled, (2) inferences drawn from studies using different dichotic stimuli, and (3) the neuropsychology of reading disorders.
The influence of target-masker similarity on across-ear interference in dichotic listening
NASA Astrophysics Data System (ADS)
Brungart, Douglas; Simpson, Brian
2004-05-01
In most dichotic listening tasks, the comprehension of a target speech signal presented in one ear is unaffected by the presence of irrelevant speech in the opposite ear. However, recent results have shown that contralaterally presented interfering speech signals do influence performance when a second interfering speech signal is present in the same ear as the target speech. In this experiment, we examined the influence of target-masker similarity on this effect by presenting ipsilateral and contralateral masking phrases spoken by the same talker, a different same-sex talker, or a different-sex talker than the one used to generate the target speech. The results show that contralateral target-masker similarity has the greatest influence on performance when an easily segregated different-sex masker is presented in the target ear, and the least influence when a difficult-to-segregate same-talker masker is presented in the target ear. These results indicate that across-ear interference in dichotic listening is not directly related to the difficulty of the segregation task in the target ear, and suggest that contralateral maskers are least likely to interfere with dichotic speech perception when the same general strategy could be used to segregate the target from the masking voices in the ipsilateral and contralateral ears.
ERIC Educational Resources Information Center
Lavie, Limor; Banai, Karen; Karni, Avi; Attias, Joseph
2015-01-01
Purpose: We tested whether using hearing aids can improve unaided performance in speech perception tasks in older adults with hearing impairment. Method: Unaided performance was evaluated in dichotic listening and speech-in-noise tests in 47 older adults with hearing impairment; 36 participants in 3 study groups were tested before hearing aid…
Perspectives on Dichotic Listening and the Corpus Callosum
ERIC Educational Resources Information Center
Musiek, Frank E.; Weihing, Jeffrey
2011-01-01
The present review summarizes historic and recent research which has investigated the role of the corpus callosum in dichotic processing within the context of audiology. Examination of performance by certain clinical groups, including split brain patients, multiple sclerosis cases, and other types of neurological lesions is included. Maturational,…
Dichotic Listening in the Study of Semantic Relations
ERIC Educational Resources Information Center
Kadesh, Irving; And Others
1976-01-01
A study is reported in which pairs of synonyms, antonyms, coordinates, and super- and subordinates were presented dichotically to university students. After each pair the subject reported what he heard. In one condition the two members of a pair were presented simultaneously, and in another they were presented sequentially. (Author/RM)
Frequency-shift detectors bind binaural as well as monaural frequency representations.
Carcagno, Samuele; Semal, Catherine; Demany, Laurent
2011-12-01
Previous psychophysical work provided evidence for the existence of automatic frequency-shift detectors (FSDs) that establish perceptual links between successive sounds. In this study, we investigated the characteristics of the FSDs with respect to the binaural system. Listeners were presented with sound sequences consisting of a chord of pure tones followed by a single test tone. Two tasks were performed. In the "present/absent" task, the test tone was either identical to one of the chord components or positioned halfway in frequency between two components, and listeners had to discriminate between these two possibilities. In the "up/down" task, the test tone was slightly different in frequency from one of the chord components and listeners had to identify the direction (up or down) of the corresponding shift. When the test tone was a pure tone presented monaurally, either to the same ear as the chord or to the opposite ear, listeners performed the up/down task better than the present/absent task. This paradoxical advantage for directional frequency shifts, providing evidence for FSDs, persisted when the test tone was replaced by a dichotic stimulus consisting of noise but evoking a pitch sensation as a consequence of binaural processing. Performance in the up/down task was similar for the dichotic stimulus and for a monaural narrow-band noise matched in pitch salience to it. Our results indicate that the FSDs are insensitive to sound localization mechanisms and operate on central frequency representations, at or above the level of convergence of the monaural auditory pathways.
Westerhausen, René; Kompus, Kristiina; Hugdahl, Kenneth
2014-01-01
Functional hemispheric differences for speech and language processing have traditionally been studied using verbal dichotic-listening paradigms. The commonly observed right-ear preference for the report of dichotically presented syllables is taken to reflect left hemispheric dominance for speech processing. However, the results of recent functional imaging studies show that both hemispheres, not only the left, are engaged by dichotic listening, suggesting a more complex relationship between behavioral laterality and functional hemispheric activation asymmetries. In order to examine more closely the hemispheric differences underlying dichotic-listening performance, we report an analysis of functional magnetic resonance imaging (fMRI) data from 104 right-handed subjects, for the first time combining an interhemispheric difference analysis with a conjunction analysis. This approach allowed a distinction between homotopic brain regions showing symmetrical activation (i.e., significantly activated in both hemispheres with no activation difference between them), relative asymmetry (i.e., activated in both hemispheres but significantly more strongly in one), and absolute asymmetry (i.e., activated in only one hemisphere, with that activation significantly stronger than in the other). Symmetrical activation was found in large clusters encompassing temporal, parietal, inferior frontal, and medial superior frontal regions. Relative and absolute leftward asymmetries were found in the posterior superior temporal gyrus, located adjacent to symmetrically activated areas and creating a lateral-to-medial gradient from symmetrical towards absolute asymmetrical activation within the peri-Sylvian region. Absolute leftward asymmetry was also found in the post-central and medial superior frontal gyri, while rightward asymmetries were found in the middle temporal and middle frontal gyri.
We conclude that dichotic listening engages a bihemispheric cortical network showing a symmetrical and mostly leftward-asymmetrical pattern. The functional (a)symmetry map obtained here might serve as a basis for future studies which, by examining the relevance of the regions identified here, clarify the relationship between behavioral laterality measures and hemispheric asymmetry.
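The three-way distinction drawn in this abstract (symmetrical, relative asymmetry, absolute asymmetry) amounts to a simple decision rule applied to each homotopic region pair. The following sketch encodes that rule; the boolean-flag interface is a schematic assumption, not the authors' analysis pipeline:

```python
def classify_asymmetry(left_active: bool, right_active: bool,
                       diff_significant: bool) -> str:
    """Classify a homotopic region pair from significance flags.

    left_active / right_active: region significantly activated in that hemisphere.
    diff_significant: the left-right activation difference is significant.
    """
    if left_active and right_active:
        # Active in both hemispheres: symmetric unless one side is
        # significantly stronger, in which case the asymmetry is relative.
        return "relative asymmetry" if diff_significant else "symmetric"
    if (left_active or right_active) and diff_significant:
        # Active in only one hemisphere, and significantly stronger there.
        return "absolute asymmetry"
    return "not activated"

print(classify_asymmetry(True, True, False))   # symmetric
print(classify_asymmetry(True, True, True))    # relative asymmetry
print(classify_asymmetry(True, False, True))   # absolute asymmetry
```

The design point is that the conjunction analysis supplies the "active in both" test while the interhemispheric difference analysis supplies the "difference significant" test; neither alone distinguishes all three categories.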
Hirnstein, Marco; Westerhausen, René; Korsnes, Maria S; Hugdahl, Kenneth
2013-01-01
Men are often believed to have a functionally more asymmetrical brain organization than women, but the empirical evidence for sex differences in lateralization is unclear to date. Over the years we have collected data from a vast number of participants using the same consonant-vowel dichotic listening task, a reliable marker for language lateralization. One dataset comprised behavioral data from 1782 participants (885 females, 125 non-right-handers), who were divided in four age groups (children <10 yrs, adolescents = 10-15 yrs, younger adults = 16-49 yrs, and older adults >50 yrs). In addition, we had behavioral and functional imaging (fMRI) data from another 104 younger adults (49 females, aged 18-45 yrs), who completed the same dichotic listening task in a 3T scanner. This database allowed us to comprehensively test whether there is a sex difference in functional language lateralization. Across all participants and in both datasets a right ear advantage (REA) emerged, reflecting left-hemispheric language lateralization. Accordingly, the fMRI data revealed a leftward asymmetry in superior temporal lobe language processing areas. In the N = 1782 dataset no main effect of sex but a significant sex by age interaction emerged: the REA increased with age in both sexes but as a result of an earlier onset in females the REA was stronger in female than male adolescents. In turn, male younger adults showed greater asymmetry than female younger adults (accounting for <1% of variance). There were no sex differences in children and older adults. The males in the fMRI dataset (N = 104) also had a greater REA than females (accounting for 4% of variance), but no sex difference emerged in the neuroimaging data. Handedness did not affect these findings. Taken together, our findings suggest that sex differences in language lateralization as assessed with dichotic listening exist, but they are (a) not necessarily reflected in fMRI data, (b) age-dependent and (c) relatively small. 
Phélip, Marion; Donnot, Julien; Vauclair, Jacques
2015-12-18
In their groundbreaking work featuring verbal dichotic listening tasks, Mondor and Bryden showed that tone cues do not enhance children's attentional orienting, in contrast to adults. The magnitude of the children's right-ear advantage was not attenuated when their attention was directed to the left ear. Verbal cues did, however, appear to favour the orientation of attention at around 10 years, although stimulus-onset asynchronies (SOAs), which ranged between 450 and 750 ms, were not rigorously controlled. The aim of our study was therefore to investigate the role of both types of cues in a typical CV-syllable dichotic listening task administered to 8- to 10-year-olds, applying a protocol as similar as possible to that used by Mondor and Bryden, but controlling for SOA as well as for cued ear. Results confirmed that verbal cues are more effective than tone cues in orienting children's attention. However, in contrast to adults, no effect of SOA was observed. We discuss the relative difficulty young children have processing CV syllables, as well as the role of top-down processes in attentional orienting abilities.
Bruder, Gerard E; Schneier, Franklin R; Stewart, Jonathan W; McGrath, Patrick J; Quitkin, Frederic
2004-01-01
Behavioral, electrophysiological, and imaging studies have found evidence that anxiety disorders are associated with left hemisphere dysfunction or higher than normal activation of right hemisphere regions. Few studies, however, have examined hemispheric asymmetries of function in social phobia, and the influence of comorbidity with depressive disorders is unknown. The present study used dichotic listening tests to assess lateralized cognitive processing in patients with social phobia, depression, or comorbid social phobia and depression. The study used a two-by-two factorial design in which one factor was social phobia (present versus absent) and the second factor was depressive disorder (present versus absent). A total of 125 unmedicated patients with social phobia, depressive disorder, or comorbid social phobia and depressive disorder and 44 healthy comparison subjects were tested on dichotic fused-words, consonant-vowel syllable, and complex tone tests. Patients with social phobia with or without a comorbid depressive disorder had a smaller left hemisphere advantage for processing words and syllables, compared with subjects without social phobia, whereas no difference between groups was found in the right hemisphere advantage for processing complex tones. Depressed women had a larger left hemisphere advantage for processing words, compared with nondepressed women, but this difference was not seen among men. The results support the hypothesis that social phobia is associated with dysfunction of left hemisphere regions mediating verbal processing. Given the importance of verbal processes in social interactions, this dysfunction may contribute to the stress and difficulty experienced by patients with social phobia in social situations.
ERIC Educational Resources Information Center
Schepman, Astrid; Rodway, Paul; Geddes, Pauline
2012-01-01
Valence-specific laterality effects have been frequently obtained in facial emotion perception but not in vocal emotion perception. We report a dichotic listening study further examining whether valence-specific laterality effects generalise to vocal emotions. Based on previous literature, we tested whether valence-specific laterality effects were…
ERIC Educational Resources Information Center
Bouma, Anke; Gootjes, Liselotte
2011-01-01
This article presents an overview of our studies in elderly and Alzheimer patients employing Kimura's dichotic digits paradigm as a measure for left hemispheric predominance for processing language stimuli. In addition to structural brain mechanisms, we demonstrated that attention modulates the direction and degree of ear asymmetry in dichotic…
Cameron, Sharon; Glyde, Helen; Dillon, Harvey; Whitfield, Jessica; Seymour, John
2016-06-01
The dichotic digits test is one of the most widely used assessment tools for central auditory processing disorder. However, questions remain concerning the impact of cognitive factors on test results. The aim was to develop the Dichotic Digits difference Test (DDdT), an assessment tool that could differentiate children with cognitive deficits from children with genuine dichotic deficits on the basis of differential test results. The DDdT consists of four subtests: dichotic free recall (FR), dichotic directed left ear (DLE), dichotic directed right ear (DRE), and diotic. Scores for six conditions are calculated (FR left ear [LE], FR right ear [RE], and FR total, as well as DLE, DRE, and diotic). Scores for four difference measures are also calculated: dichotic advantage, right-ear advantage (REA) FR, REA directed, and attention advantage. Experiment 1 involved development of the DDdT, including error-rate analysis. Experiment 2 involved collection of normative and test-retest reliability data. Twenty adults (aged 25 yr 10 mo to 50 yr 7 mo, mean 36 yr 4 mo) took part in the development study; 62 normal-hearing, typically developing primary-school children (aged 7 yr 1 mo to 11 yr 11 mo, mean 9 yr 4 mo) and 10 adults (aged 25 yr 0 mo to 51 yr 6 mo, mean 34 yr 10 mo) took part in the normative and test-retest reliability study. In Experiment 1, error-rate analysis was conducted on the 36 digit-pair combinations of the DDdT. Normative data collected in Experiment 2 were arcsine transformed, to achieve a distribution closer to normal, and z-scores were calculated. Pearson product-moment correlations were used to determine the strength of relationships between DDdT conditions. The development study revealed no significant differences in the adult population between test and retest on any DDdT condition. Error rates on the 36 digit pairs ranged from 1.5% to 16.7%.
The most and the least error-prone digits were removed before commencement of the normative data study, leaving 25 unique digit pairs. Average z-scores calculated from the arcsine-transformed data of the 62 children in the normative data study revealed that FR dichotic processing (LE, RE, and total) was highly correlated with diotic processing (r ranging from 0.5 to 0.6; p < 0.0001). Significant improvements in performance on retest occurred for the FR LE, RE, total, and diotic conditions (p ranging from 0.05 to 0.0004), the conditions that would be expected to improve with practice if the participant's response strategies are better the second time around. The addition of a diotic control task, which shares many response demands with the usual dichotic tasks, opens up the possibility of differentiating children who perform below expectations because of poor dichotic processing skills from those who perform poorly because of impaired attention, memory, or other cognitive abilities. The high correlation between dichotic and diotic performance suggests that factors other than dichotic processing play a substantial role in a child's ability to perform a dichotic listening task. This hypothesis is investigated further in the cognitive correlation study that follows in the companion paper (DDdT Study Part 2; Cameron et al, 2016).
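The normative scoring described above, arcsine-transforming proportion-correct scores before computing z-scores against group norms, is a standard variance-stabilising step for proportion data near floor or ceiling. A minimal sketch; the norm mean and SD below are placeholders, not the published DDdT norms:

```python
import math

def arcsine_transform(proportion: float) -> float:
    """Arcsine-square-root transform, the usual variance-stabilising
    transform for proportion-correct scores (maps [0, 1] to [0, pi/2])."""
    if not 0.0 <= proportion <= 1.0:
        raise ValueError("proportion must lie in [0, 1]")
    return math.asin(math.sqrt(proportion))

def z_score(value: float, norm_mean: float, norm_sd: float) -> float:
    """Standardise a transformed score against normative mean and SD."""
    return (value - norm_mean) / norm_sd

# Hypothetical child scoring 80% correct on an FR condition, scored
# against placeholder norms (norm_mean and norm_sd are illustrative).
t = arcsine_transform(0.80)
z = z_score(t, norm_mean=1.10, norm_sd=0.15)
```

Because the transform compresses differences near 100% and stretches them near the extremes less unevenly than raw percentages, z-scores computed on the transformed scale are less distorted for children performing near ceiling.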
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-12-01
This study assessed the changes in the isoprenoid pathway and its metabolites digoxin, dolichol, and ubiquinone in multiple myeloma. The isoprenoid pathway and digoxin status were also studied for comparison in individuals of differing hemispheric dominance, to determine the role of cerebral dominance in the genesis of multiple myeloma and neoplasms. The following parameters were assessed in multiple myeloma, as well as in individuals of differing hemispheric dominance: isoprenoid pathway metabolites, tyrosine and tryptophan catabolites, glycoconjugate metabolism, RBC membrane composition, and free radical metabolism. There was an elevation in plasma HMG CoA reductase activity, serum digoxin, and dolichol, and a reduction in RBC membrane Na+-K+ ATPase activity, serum ubiquinone, and magnesium levels. Serum tryptophan, serotonin, nicotine, strychnine, and quinolinic acid were elevated, while tyrosine, dopamine, noradrenaline, and morphine were decreased. Total serum glycosaminoglycans and glycosaminoglycan fractions, the activity of GAG-degrading enzymes and glycohydrolases, carbohydrate residues of glycoproteins, and serum glycolipids were elevated. RBC membrane glycosaminoglycans, hexose and fucose residues of glycoproteins, cholesterol, and phospholipids were reduced. The activity of all free-radical-scavenging enzymes and the concentrations of glutathione, iron-binding capacity, and ceruloplasmin decreased significantly, while the concentrations of lipid peroxidation products and nitric oxide increased. Hyperdigoxinemia-related altered intracellular Ca++/Mg++ ratios mediating oncogene activation, dolichol-induced altered glycoconjugate metabolism, and ubiquinone deficiency-related mitochondrial dysfunction can contribute to the pathogenesis of multiple myeloma. The biochemical patterns obtained in multiple myeloma are similar to those obtained in left-handed/right hemispheric chemically dominant individuals by the dichotic listening test.
But all the patients with multiple myeloma were right-handed/left hemispheric dominant by the dichotic listening test. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test. Multiple myeloma occurs in right hemispheric chemically dominant individuals and is a reflection of altered brain function.
Comparison of Psychophysiological and Dual-Task Measures of Listening Effort
ERIC Educational Resources Information Center
Seeman, Scott; Sims, Rebecca
2015-01-01
Purpose: We wished to make a comparison of psychophysiological measures of listening effort with subjective and dual-task measures of listening effort for a diotic-dichotic-digits and a sentences-in-noise task. Method: Three groups of young adults (18-38 years old) with normal hearing participated in three experiments: two psychophysiological…
Interaction of attention and acoustic factors in dichotic listening for fused words.
McCulloch, Katie; Lachner Bass, Natascha; Dial, Heather; Hiscock, Merrill; Jansen, Ben
2017-07-01
Two dichotic listening experiments examined the degree to which the right-ear advantage (REA) for linguistic stimuli is altered by a "top-down" variable (i.e., directed attention) in conjunction with selected "bottom-up" (acoustic) variables. Halwes fused dichotic words were administered to 99 right-handed adults with instructions to attend to the left or right ear, or to divide attention equally. Stimuli in Experiment 1 were presented without noise or mixed with noise that was high-pass or low-pass filtered, or unfiltered. The stimuli themselves in Experiment 2 were high-pass or low-pass filtered, or unfiltered. The initial consonants of each dichotic pair were categorized according to voice onset time (VOT) and place of articulation (PoA). White noise extinguished both the REA and selective attention, and filtered noise nullified selective attention without extinguishing the REA. Frequency filtering of the words themselves did not alter performance. VOT effects were inconsistent across experiments but PoA analyses indicated that paired velar consonants (/k/ and /g/) yield a left-ear advantage and paradoxical selective-attention results. The findings show that ear asymmetry and the effectiveness of directed attention can be altered by bottom-up variables.
Age- and Gender-Specific Normative Information from Children Assessed with a Dichotic Words Test.
Moncrieff, Deborah
2015-01-01
The most widely used assessment in the clinical auditory processing disorder (APD) battery is the dichotic listening test. New tests with normative information are helpful for assessment and for cross-checking results toward a reliable diagnosis. The Dichotic Words Test was developed for use in the clinical test battery for diagnosis of APD. The test stimuli were common single-syllable words matched for average root-mean-square amplitude, and each pair was temporally aligned at both onset and offset. The study was conducted to collect performance results from typically developing children to create normative information for the test. The study follows a cross-sectional design. Typically developing children (n = 416) between the ages of 5 and 12 yr were recruited from schools in the community. There were 217 males and 199 females in the study sample. Only children who passed a hearing screening were eligible to participate. Scores for each ear were recorded during administration of the first free-recall version of the test. Ear advantages based on results recorded for left and right ears were used to measure the prevalence of right, left, and no ear advantages. Results for each listener's dominant and non-dominant ears, and the absolute difference between them, were entered into the data analysis. Results were tested for normality; because no results were normally distributed, all further analyses were done with nonparametric statistical tests. Normative data for dominant and non-dominant ear scores and ear advantages were determined at the 95% confidence interval through bootstrapping methods with 1,000 samples. Children were divided into four age groups based on results in their dominant ears. Females generally performed better than males, and the prevalence of a right-ear advantage was ∼60% across all children tested. Normative lower-bound cut-off scores were established for males and females within each age group for dominant and non-dominant ear scores.
Normative upper-bound cut-off scores were established for males and females within each age group for ear advantage scores. Normative information specific to age group and gender will be useful in clinical assessment for APD. Prevalence of left-ear advantage results in the sample may have been partly due to uncontrolled influences of voice-onset time in arranging the dichotic pairs. American Academy of Audiology.
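The bootstrap procedure described above (cut-off scores estimated from 1,000 resamples) can be sketched as follows. This is a minimal illustration of percentile bootstrapping, not the study's actual analysis; the function name, percentile choice, and example scores are all hypothetical.

```python
import random

def bootstrap_cutoff(scores, n_boot=1000, percentile=5, seed=1):
    """Estimate a lower-bound normative cut-off: resample the observed
    scores with replacement n_boot times, take the requested percentile
    of each resample, and summarize the bootstrap distribution."""
    rng = random.Random(seed)
    n = len(scores)
    estimates = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(scores) for _ in range(n))
        # index of the requested percentile within the sorted resample
        k = max(0, min(n - 1, round(percentile / 100 * (n - 1))))
        estimates.append(resample[k])
    estimates.sort()
    return (estimates[n_boot // 2],                          # median estimate
            estimates[int(0.025 * n_boot)],                  # lower 95% bound
            estimates[min(n_boot - 1, int(0.975 * n_boot))]) # upper 95% bound

# illustrative dominant-ear scores for one age/gender group (not study data)
scores = [88, 92, 76, 95, 84, 90, 79, 93, 85, 91, 87, 82]
cutoff, lo, hi = bootstrap_cutoff(scores)
```

Because the study's score distributions were non-normal, a distribution-free percentile bootstrap like this avoids assuming Gaussian tails when setting cut-offs.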
ERIC Educational Resources Information Center
Turney, Michael T.; And Others
This report on speech research contains papers describing experiments involving both information processing and speech production. The papers concerned with information processing cover such topics as peripheral and central processes in vision, separate speech and nonspeech processing in dichotic listening, and dichotic fusion along an acoustic…
ERIC Educational Resources Information Center
Bedoin, Nathalie; Ferragne, Emmanuel; Marsico, Egidio
2010-01-01
Dichotic listening experiments show a right-ear advantage (REA), reflecting a left-hemisphere (LH) dominance. However, we found a decrease in REA when the initial stop consonants of two simultaneous French CVC words differed in voicing rather than place of articulation (Experiment 1). This result suggests that the right hemisphere (RH) is more…
Brungart, Douglas S; Simpson, Brian D
2007-09-01
Similarity between the target and masking voices is known to have a strong influence on performance in monaural and binaural selective attention tasks, but little is known about the role it might play in dichotic listening tasks with a target signal and one masking voice in one ear and a second, independent masking voice in the opposite ear. This experiment examined performance in a dichotic listening task with a target talker in one ear and same-talker, same-sex, or different-sex maskers in both the target and the unattended ears. The results indicate that listeners were most susceptible to across-ear interference with a different-sex within-ear masker and least susceptible with a same-talker within-ear masker, suggesting that the amount of across-ear interference cannot be predicted from the difficulty of selectively attending to the within-ear masking voice. The results also show that the amount of across-ear interference consistently increases when the across-ear masking voice is more similar to the target speech than the within-ear masking voice is, but that no corresponding decline in across-ear interference occurs when the across-ear voice is less similar to the target than the within-ear voice. These results are consistent with an "integrated strategy" model of speech perception where the listener chooses a segregation strategy based on the characteristics of the masker present in the target ear and the amount of across-ear interference is determined by the extent to which this strategy can also effectively be used to suppress the masker in the unattended ear.
Andrade, Adriana Neves de; Silva, Mariane Richetto da; Iorio, Maria Cecilia Martinelli; Gil, Daniela
2015-01-01
To compare performance on the Brazilian Portuguese version of the Dichotic Sentence Identification (DSI) test in normal-hearing individuals, considering the right and left ears and educational status. This investigation assessed 200 normal-hearing, right-handed individuals, divided into seven groups according to years of schooling. All participants underwent basic audiologic evaluation and behavioral auditory processing assessment (sound localization test, memory test for verbal and nonverbal sounds in sequence, dichotic digits test, and DSI). The evaluated individuals had an average of 13.1 years of schooling and results within normal limits on the selected audiologic and auditory processing tests. On the DSI test, educational status showed a dependent relationship with the percentage of correct answers in each stage of the test and in each ear. There was a statistically significant positive correlation between educational status and the percentage of correct answers for all stages of the DSI test in both ears. There was also an effect of educational level on the results obtained in each condition of the DSI test, with the exception of directed attention to the right ear. Comparing performance across the variables studied in the DSI test, we concluded that there is a right-ear advantage and that the higher the educational level, the better the individuals' performance.
[Dichotic perception of Mandarin third tone by Mexican Chinese learners].
Wang, Hongbin
2014-05-01
To investigate the relationship between the advantaged ear (and thus the dominant cerebral hemisphere) of Spanish-speaking Mexican learners and the Mandarin third tone. Third-tone Chinese vowel syllables were used as experimental materials in a dichotic listening paradigm to test Spanish-speaking Mexican learners of Chinese (20-32 years old) who had studied Chinese for about 20 h. In terms of error rates in identifying the third tone, the learners' responses suggested that the left ear (right cerebral hemisphere) was the advantaged ear (Z=-2.091, P=0.036). The verbal information carried by tones influenced the learners' perception of Mandarin tones. In the process of learning Mandarin tones, the learners gradually formed tonal categories.
Carlyon, Robert P.; Long, Christopher J.; Deeks, John M.
2008-01-01
Experiment 1 measured rate discrimination of electric pulse trains by bilateral cochlear implant (CI) users, for standard rates of 100, 200, and 300 pps. In the diotic condition the pulses were presented simultaneously to the two ears. Consistent with previous results with unilateral stimulation, performance deteriorated at higher standard rates. In the signal interval of each trial in the dichotic condition, the standard rate was presented to the left ear and the (higher) signal rate was presented to the right ear; the non-signal intervals were the same as in the diotic condition. Performance in the dichotic condition was better for some listeners than in the diotic condition for standard rates of 100 and 200 pps, but not at 300 pps. It is concluded that the deterioration in rate discrimination observed for CI users at high rates cannot be alleviated by the introduction of a binaural cue, and is unlikely to be limited solely by central pitch processes. Experiment 2 performed an analogous experiment in which 300-pps acoustic pulse trains were bandpass filtered (3900-5400 Hz) and presented in a noise background to normal-hearing listeners. Unlike the results of experiment 1, performance was superior in the dichotic than in the diotic condition. PMID:18397032
Hund-Georgiadis, Margret; Lex, Ulrike; Friederici, Angela D; von Cramon, D Yves
2002-07-01
Language lateralization was assessed by two independent functional techniques, fMRI and a dichotic listening test (DLT), in an attempt to establish a reliable and non-invasive protocol of dominance determination. This should particularly address the high intraindividual variability of language lateralization and allow decision-making in individual cases. Functional MRI of word classification tasks showed robust language lateralization in 17 right-handers and 17 left-handers in terms of activation in the inferior frontal gyrus. The DLT was introduced as a complementary tool to MR mapping for language dominance assessment, providing information on perceptual language processing located in superior temporal cortices. The overall agreement of lateralization assessment between the two techniques was 97.1%. Conflicting results were found in one subject, and diverging indices in ten further subjects. Increasing age, non-familial sinistrality, and a non-dominant writing hand were identified as the main factors explaining the observed mismatch between the two techniques. This finding stresses the concept of an intrahemispheric distribution of language function that is obviously associated with certain behavioral characteristics.
Bruder, Gerard E.; Stewart, Jonathan W.; Hellerstein, David; Alvarenga, Jorge E.; Alschuler, Daniel; McGrath, Patrick J.
2012-01-01
Prior studies have found abnormalities of functional brain asymmetry in patients having a major depressive disorder (MDD). This study aimed to replicate findings of reduced right hemisphere advantage for perceiving dichotic complex tones in depressed patients, and to determine whether patients having “pure” dysthymia show the same abnormality of perceptual asymmetry as MDD. It also examined gender differences in lateralization, and the extent to which abnormalities of perceptual asymmetry in depressed patients are dependent on gender. Unmedicated patients having either a MDD (n=96) or “pure” dysthymic disorder (n=42) and healthy controls (n=114) were tested on dichotic fused-words and complex-tone tests. Patient and control groups differed in right hemisphere advantage for complex tones, but not left hemisphere advantage for words. Reduced right hemisphere advantage for tones was equally present in MDD and dysthymia, but was more evident among depressed men than depressed women. Also, healthy men had greater hemispheric asymmetry than healthy women for both words and tones, whereas this gender difference was not seen for depressed patients. Dysthymia and MDD share a common abnormality of hemispheric asymmetry for dichotic listening. PMID:22397909
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
Standardization of Performance Tests: A Proposal for Further Steps.
1986-07-01
obviously demand substantial attention can sometimes be time-shared perfectly. Wickens describes cases in which skilled pianists can time-share sight-reading... effects of divided attention on information processing in tracking. Journal of Experimental Psychology, 1, 1-13. Wickens, C.D. (1984). Processing resources... attention he regards focused/divided attention tasks (e.g., dichotic listening, dual-task situations) as theoretically useful. From his point of view good
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with specific language impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording of the mismatch negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition and dichotic listening tasks, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in complex auditory tasks that involve repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
On the possibility of a place code for the low pitch of high-frequency complex tones
Santurette, Sébastien; Dau, Torsten; Oxenham, Andrew J.
2012-01-01
Harmonics are considered unresolved when they interact with neighboring harmonics and cannot be heard out separately. Several studies have suggested that the pitch derived from unresolved harmonics is coded via temporal fine-structure cues emerging from their peripheral interactions. Such conclusions rely on the assumption that the components of complex tones with harmonic ranks down to at least 9 were indeed unresolved. The present study tested this assumption via three different measures: (1) the effects of relative component phase on pitch matches, (2) the effects of dichotic presentation on pitch matches, and (3) listeners' ability to hear out the individual components. No effects of relative component phase or dichotic presentation on pitch matches were found in the tested conditions. Large individual differences were found in listeners' ability to hear out individual components. Overall, the results are consistent with the coding of individual harmonic frequencies, based on the tonotopic activity pattern or phase locking to individual harmonics, rather than with temporal coding of single-channel interactions. However, they are also consistent with more general temporal theories of pitch involving the across-channel summation of information from resolved and/or unresolved harmonics. Simulations of auditory-nerve responses to the stimuli suggest potential benefits to a spatiotemporal mechanism. PMID:23231119
Sex differences in left-right confusion depend on hemispheric asymmetry.
Hirnstein, Marco; Ocklenburg, Sebastian; Schneider, Daniel; Hausmann, Markus
2009-01-01
Numerous studies have reported that women believe they are more susceptible to left-right confusion than men. Indeed, some studies have also found sex differences in behavioural tasks. It has been suggested that women have more difficulties with left-right discrimination, because they are less lateralised than men and a lower degree of lateralisation might lead to more left-right confusion (LRC). However, those studies reporting more left-right confusion for women have been criticised because the tasks that have been used involved mental rotation, a spatial ability in which men typically excel. In the present study, 34 right-handed women and 31 right-handed men completed two behavioural left-right discrimination tasks, in which mental rotation was either experimentally controlled for or was not needed. To measure the degree of hemispheric asymmetry participants also completed a dichotic listening test. Although women were not less lateralised than men, both tasks consistently revealed that women were more susceptible to left-right confusion than men. However, only women with a significant right ear advantage in the dichotic listening test had more difficulties in LRC tasks than men. There was no sex difference in less lateralised participants. This finding suggests that the impact of functional verbal asymmetries on LRC is mediated by sex.
Demodulation Processes in Auditory Perception.
1992-08-15
not provide a fused image that the listener can process binaurally. ... A type of dichotic profile has been developed for this study in which the stimulus... the component frequencies between the two ears may allow the listener to develop a better fused image to be processed binaurally than in the... listener was seated facing a monitor and computer keyboard (Radio Shack Color Computer II). Signals were presented binaurally via Sennheiser HD414SL
Corpus callosum functioning in patients with normal pressure hydrocephalus before and after surgery.
Mataró, Maria; Poca, Maria Antonia; Matarín, Mar; Sahuquillo, Juan; Sebastián, Nuria; Junqué, Carme
2006-05-01
Our aim was to evaluate corpus callosum functioning in a group of patients with normal pressure hydrocephalus (NPH) before and after shunting. Left-ear extinction under a dichotic listening task was evaluated in 23 patients with NPH, 30 patients with Alzheimer's disease, and 30 aged controls. Patients with NPH had higher levels of left-ear extinction than the control and Alzheimer's groups. Sixty-one percent of NPH patients exhibited left-ear suppression, compared with 13% of Alzheimer's patients and 17% of controls. Following surgery, NPH patients showed a significant change in the degree of asymmetry in the dichotic listening task. Hydrocephalus was associated with left-ear extinction, which diminished after surgery. Our results may indicate reversible functional damage in the corpus callosum.
Divided listening in noise in a mock-up of a military command post.
Abel, Sharon M; Nakashima, Ann; Smith, Ingrid
2012-04-01
This study investigated divided listening in noise in a mock-up of a vehicular command post. The effects on speech understanding of background noise from the vehicle, unattended speech of co-workers, and a visual cue that directed attention to the message source were examined. Sixteen normal-hearing males participated in 16 listening conditions, defined by combinations of the absence/presence of vehicle and speech-babble noise, the availability of a visual cue, and the number of channels (2 or 3, diotic or dichotic, and loudspeakers) over which concurrent series of call sign, color, and number phrases were presented. All wore a communications headset with integrated hearing protection. A computer keyboard was used to encode phrases beginning with an assigned call sign. Subjects achieved close to 100% correct phrase identification when phrases were presented over the headset (with or without vehicle noise) or over the loudspeakers without vehicle noise. In contrast, correct phrase identification was significantly lower, by 30 to 35%, when phrases were presented over loudspeakers with vehicle noise. Vehicle noise combined with babble noise decreased accuracy by an additional 12% for dichotic listening. Visual cues increased phrase identification accuracy by 7% for diotic listening. Outcomes could be explained by the at-ear energy spectra of the speech and noise.
A right-ear bias of auditory selective attention is evident in alpha oscillations.
Payne, Lisa; Rogers, Chad S; Wingfield, Arthur; Sekuler, Robert
2017-04-01
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening. © 2016 Society for Psychophysiological Research.
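The interhemispheric alpha-power difference used above as a marker of attentional lateralization can be illustrated with a toy computation. This is a hypothetical sketch, not the study's pipeline: the naive single-epoch DFT, channel layout, and normalized index (right minus left over their sum) are all assumptions; a real EEG analysis would use Welch's method or a time-frequency decomposition over many epochs.

```python
import math

def bandpower(signal, fs, f_lo=8.0, f_hi=14.0):
    """Power within [f_lo, f_hi] Hz via a naive DFT over in-band bins.
    A real analysis would use, e.g., scipy.signal.welch instead."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n ** 2
    return power

def alpha_lateralization(left_chan, right_chan, fs):
    """Index in [-1, 1]: positive means more alpha power over the
    right-hemisphere channel than over the left."""
    pl = bandpower(left_chan, fs)
    pr = bandpower(right_chan, fs)
    return (pr - pl) / (pr + pl)

# illustrative check: a 10 Hz alpha rhythm twice as strong on the right
fs = 128
left = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
right = [2 * math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
idx = alpha_lateralization(left, right, fs)  # positive: right > left
```

Under the alpha-suppression account, attending right should raise alpha over task-irrelevant (right-hemisphere) sites, pushing such an index positive.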
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-08-01
The isoprenoid pathway produces three key metabolites: digoxin (membrane sodium-potassium ATPase inhibitor and regulator of neurotransmitter transport), dolichol (regulator of N-glycosylation of proteins), and ubiquinone (free radical scavenger). The isoprenoid pathway was assessed in patients with bronchial asthma. The pathway was also assessed in patients with right hemispheric, left hemispheric, and bihemispheric dominance to find out the role of hemispheric dominance in the pathogenesis of bronchial asthma. The pathway was upregulated, with an increase in digoxin synthesis, in bronchial asthma. There was an increase in tryptophan catabolites and a reduction in tyrosine catabolites in patients with bronchial asthma. Ubiquinone levels were low and lipid peroxidation was increased in these patients. There was an increase in dolichol and glycoconjugate levels and a reduction in lysosomal stability in these patients. The cholesterol:phospholipid ratio was increased and glycoconjugate levels were reduced in the membranes of these patients. The patterns noticed in bronchial asthma were similar to those in patients with right hemispheric chemical dominance. Bronchial asthma occurs in right hemispheric chemically dominant individuals. Ninety percent of the patients with bronchial asthma were right-handed and left hemispheric dominant by the dichotic listening test, but their biochemical patterns were similar to those obtained in right hemispheric chemical dominance. Hemispheric chemical dominance is a different entity and has no correlation with handedness or the dichotic listening test.
Bless, Josef J.; Westerhausen, René; Torkildsen, Janne von Koss; Gudmundsen, Magne; Kompus, Kristiina; Hugdahl, Kenneth
2015-01-01
Left-hemispheric language dominance has been suggested by observations in patients with brain damage as early as the 19th century, and has since been confirmed by modern behavioural and brain-imaging techniques. Nevertheless, most of these studies have been conducted in small samples with predominantly Anglo-American backgrounds, limiting generalization; possible differences between cultural and linguistic backgrounds may thus be obscured. To overcome this limitation, we conducted a global dichotic listening experiment using a smartphone application for remote data collection. The results from over 4,000 participants with more than 60 different language backgrounds showed that left-hemispheric language dominance is indeed a general phenomenon. However, the degree of lateralization appears to be modulated by linguistic background. These results suggest that more emphasis should be placed on cultural/linguistic specificities of psychological phenomena and on the need to collect more diverse samples. PMID:25588000
Symmetry and asymmetry in the human brain
NASA Astrophysics Data System (ADS)
Hugdahl, Kenneth
2005-10-01
Structural and functional asymmetry in the human brain and nervous system is reviewed in a historical perspective, focusing on the pioneering work of Broca, Wernicke, Sperry, and Geschwind. Structural and functional asymmetry is exemplified from work done in our laboratory on auditory laterality using an empirical procedure called dichotic listening. This also involves different ways of validating the dichotic listening procedure against both invasive and non-invasive techniques, including PET and fMRI blood flow recordings. A major argument is that the human brain shows a substantial interaction between structural, "bottom-up" asymmetry and cognitive, "top-down" modulation, through a focus of attention to the right or left side in auditory space. These results open up a more dynamic and interactive view of functional brain asymmetry than the traditional static view that the brain is lateralized, or asymmetric, only for specific stimuli and stimulus properties.
The influence of musical experience on lateralisation of auditory processing.
Spajdel, Marián; Jariabková, Katarína; Riecanský, Igor
2007-11-01
The influence of musical experience on free-recall dichotic listening to environmental sounds, two-tone sequences, and consonant-vowel (CV) syllables was investigated. A total of 60 healthy right-handed participants were divided into two groups according to their active musical competence ("musicians" and "non-musicians"). In both groups, we found a left ear advantage (LEA) for nonverbal stimuli (environmental sounds and two-tone sequences) and a right ear advantage (REA) for CV syllables. Dichotic listening to environmental sounds was uninfluenced by musical experience. The total accuracy of recall for two-tone sequences was higher in musicians than in non-musicians but the lateralisation was similar in both groups. For CV syllables a lower REA was found in male but not female musicians in comparison to non-musicians. The results indicate a specific sex-dependent effect of musical experience on lateralisation of phonological auditory processing.
Working memory predicts semantic comprehension in dichotic listening in older adults.
James, Philip J; Krishnan, Saloni; Aydelott, Jennifer
2014-10-01
Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging. Copyright © 2014 Elsevier B.V. All rights reserved.
Detection and localization of sounds: Virtual tones and virtual reality
NASA Astrophysics Data System (ADS)
Zhang, Peter Xinya
Modern physiologically based binaural models employ internal delay lines in the pathways from the left and right peripheries to central processing nuclei. Various models apply the delay lines differently, and give different predictions for the detection of dichotic pitches, wherein listeners hear a virtual tone in a noise background. Two dichotic pitch stimuli (Huggins pitch and binaural coherence edge pitch) with low boundary frequencies were used to test the predictions of two different models. The results from five experiments show that the relative dichotic pitch strengths support the equalization-cancellation model and disfavor the central activity pattern (CAP) model. The CAP model makes predictions for the lateralization of Huggins pitch based on interaural time differences (ITD). By measuring human lateralization for Huggins pitches with two different types of phase boundaries (linear-phase and stepped-phase), and by comparing with the lateralization of sine tones, it was shown that the lateralization of Huggins pitch stimuli is similar to that of the corresponding sine tones, and the lateralizations of Huggins pitch stimuli with the two different boundaries were even more similar to one another. The results agreed roughly with the CAP model predictions. Agreement was significantly improved by incorporating individualized scale factors and offsets into the model, and was further improved with a model including compression at large ITDs. Furthermore, ambiguous stimuli, with an interaural phase difference of 180 degrees, were consistently lateralized on the left or right based on individual asymmetries---which introduces the concept of "earedness". Interaural phase difference (IPD) and interaural time difference (ITD) are two different forms of temporal cues. With varying frequency, an auditory system based on IPD or ITD gives different quantitative predictions for lateralization.
A lateralization experiment with sine tones tested whether the human auditory system is an IPD-meter or an ITD-meter. Listeners estimated the lateral positions of 50 sine tones with IPDs ranging from -150° to +150° and with different frequencies, all in the range where signal fine structure supports lateralization. The estimates indicated that listeners lateralize sine tones on the basis of ITD and not IPD. In order to distinguish between sound sources in front and in back, listeners use spectral cues caused by diffraction by the pinna, head, neck, and torso. To study this effect, the VRX technique was developed based on transaural technology. The technique was successful in presenting the desired spectra at listeners' ears with high accuracy up to 16 kHz. When presented with a real source and a simulated virtual signal, listeners in an anechoic room could not distinguish between them. Eleven experiments on discrimination between front and back sources were carried out in an anechoic room. The results show several findings. First, the results support a multiple-band comparison model, and disfavor a necessary-band(s) model. Second, it was found that preserving the spectral dips was more important than preserving the spectral peaks for successful front/back discrimination. Moreover, it was confirmed that neither monaural cues nor interaural spectral level difference cues were adequate for front/back discrimination. Furthermore, listeners' performance did not deteriorate when presented with sharpened spectra. Finally, when presented with an interaural delay of less than 200 μs, listeners could successfully discriminate front from back, although the image was pulled to the side, which suggests that localization in the azimuthal plane and in the sagittal plane is independent within certain limits.
Fritz, Thomas Hans; Renders, Wiske; Müller, Karsten; Schmude, Paul; Leman, Marc; Turner, Robert; Villringer, Arno
2013-10-01
Helmholtz himself speculated about a role of the cochlea in the perception of musical dissonance. Here we indirectly investigated this issue, assessing the valence judgment of musical stimuli with variable consonance/dissonance presented either diotically (exactly the same dissonant signal was presented to both ears) or dichotically (a consonant signal was presented to each ear--both consonant signals were rhythmically identical but differed by a semitone in pitch). Differences in brain organisation underlying inter-subject differences in the percept of dichotically presented dissonance were determined with voxel-based morphometry. Behavioral results showed that diotic dissonant stimuli were perceived as more unpleasant than dichotically presented dissonance, indicating that interactions within the cochlea modulated the valence percept during dissonance. However, the behavioral data also suggested that the dissonance percept did not depend crucially on the cochlea, but also occurred as a result of binaural integration when listening to dichotic dissonance. These results also showed substantial between-participant variation in the valence response to dichotic dissonance. In a voxel-based morphometry analysis, these differences were related to differences in gray matter density in the inferior colliculus, strongly substantiating a key role of the inferior colliculus in consonance/dissonance representation in humans. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Differential neural contributions to native- and foreign-language talker identification
Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C.M.
2009-01-01
Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system’s ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies describing the language-familiarity effect implicate functionally integrated neural systems for speech and voice perception, yet specific neuroscientific evidence demonstrating the basis for such integration has not yet been shown. Listeners in the present study learned to identify voices speaking a familiar (native) or unfamiliar (foreign) language. The talker-identification performance of neural circuitry in each cerebral hemisphere was assessed using dichotic listening. To determine the relative contribution of circuitry in each hemisphere to ecological (binaural) talker identification abilities, we compared the predictive capacity of dichotic performance on binaural performance across languages. We found listeners’ right-ear (left hemisphere) performance to be a better predictor of overall accuracy in their native language than a foreign one. The enhanced predictive capacity of the classically language-dominant left-hemisphere on overall talker-identification accuracy demonstrates functionally integrated neural systems for speech and voice perception during natural talker identification. PMID:19968445
The auditory processing battery: survey of common practices.
Emanuel, Diana C
2002-02-01
A survey of auditory processing (AP) diagnostic practices was mailed to all licensed audiologists in the State of Maryland and sent as an electronic mail attachment to the American Speech-Language-Hearing Association and Educational Audiology Association Internet forums. Common AP protocols (25 from the Internet, 28 from audiologists in Maryland) included requiring basic audiologic testing, using questionnaires, and administering dichotic listening, monaural low-redundancy speech, temporal processing, and electrophysiologic tests. Some audiologists also administer binaural interaction, attention, memory, and speech-language/psychological/educational tests and incorporate a classroom observation. The various AP batteries presently administered appear to be based on the availability of AP tests with well-documented normative data. Resources for obtaining AP tests are listed.
Research and Studies Directory for Manpower, Personnel, and Training
1988-01-01
Directory entries include: Psychophysiological Mapping of Cognitive Processes (Suga, N., Washington Univ., St. Louis, MO); Control of Biosonar Behavior by the Auditory Cortex; Visual Perception; and Dichotic Listening to Complex Sounds: Effects of Stimulus Characteristics and…
Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
2015-08-01
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD), three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure-tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had initially been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, discriminating environmental sounds, and lateralizing sounds, as well as strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia and not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure-tone audiometry and/or ABR, the condition should not be immediately diagnosed as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Affective Priming with Auditory Speech Stimuli
ERIC Educational Resources Information Center
Degner, Juliane
2011-01-01
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…
Diagnosing Dyslexia: The Screening of Auditory Laterality.
ERIC Educational Resources Information Center
Johansen, Kjeld
A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…
Cognitive Factors in Sexual Arousal: The Role of Distraction
ERIC Educational Resources Information Center
Geer, James H.; Fuhr, Robert
1976-01-01
Four groups of male undergraduates were instructed to perform complex cognitive operations on randomly presented single digits in a dichotic listening paradigm. An erotic tape recording was played into the nonattended ear. Sexual arousal varied directly as a function of the complexity of the distracting cognitive operations. (Author)
Native and Nonnative Processing of Japanese Pitch Accent
ERIC Educational Resources Information Center
Wu, Xianghua; Tu, Jung-Yueh; Wang, Yue
2012-01-01
The theoretical framework of this study is based on the prevalent debate of whether prosodic processing is influenced by higher level linguistic-specific circuits or reflects lower level encoding of physical properties. Using the dichotic listening technique, the study investigates the hemispheric processing of Japanese pitch accent by native…
Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J
2017-08-01
Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.
Effects of stimulus characteristics and task demands on pilots' perception of dichotic messages
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1986-01-01
The experiment is an initial investigation of pilot performance when auditory advisory messages are presented dichotically, either with or without a concurrent pursuit task requiring visual/motor dexterity. The dependent measures were percent correct and correct reaction times for manual responses to the auditory messages. Two stimulus variables which show facilitatory effects in traditional dichotic-listening paradigms, differences in pitch and semantic content of the messages, were examined to determine their effectiveness during the functional simulation of helicopter pursuit. Subjects accumulated points for responses to the advisory messages, either for accuracy alone or for both accuracy and reaction times faster than their opponent's. In general, the combined effects of the stimulus and task variables are additive. When interactions do occur, they suggest that an increase in task demands can sometimes mitigate, but usually does not remove, any processing advantages accrued from stimulus characteristics. The implications of these results for cockpit displays are discussed.
Speech processing asymmetry revealed by dichotic listening and functional brain imaging.
Hugdahl, Kenneth; Westerhausen, René
2016-12-01
In this article, we review research in our laboratory from the last 25 to 30 years on the neuronal basis for laterality of speech perception, focusing on the upper, posterior parts of the temporal lobes and their functional and structural connections to other brain regions. We review both behavioral and brain imaging data, with a focus on dichotic listening experiments, using a variety of imaging modalities. The data have come for the most part from healthy individuals and from studies of the normally functioning brain, although we also review a few selected clinical examples. We first review and discuss the structural model for the explanation of the right-ear advantage (REA) and left-hemisphere asymmetry for auditory language processing. A common theme across many studies has been our interest in the interaction between bottom-up, stimulus-driven, and top-down, instruction-driven, aspects of hemispheric asymmetry, and how perceptual factors interact with cognitive factors to shape asymmetry of auditory language information processing. In summary, our research has shown laterality for the initial processing of consonant-vowel syllables, first observed as a behavioral REA when subjects are required to report which syllable of a dichotic syllable-pair they perceive. In subsequent work we have corroborated the REA with brain imaging, and have shown that the REA is modulated through both bottom-up manipulations of stimulus properties, like sound intensity, and top-down manipulations of cognitive properties, like attention focus. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth
2015-01-01
The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…
ERIC Educational Resources Information Center
Kershner, John R.
2016-01-01
Rapidly changing environments in day-to-day activities, enriched with stimuli competing for attention, require a cognitive control mechanism to select relevant stimuli, ignore irrelevant stimuli, and shift attention between alternative features of the environment. Such attentional orchestration is essential to the acquisition of reading skills. In…
ERIC Educational Resources Information Center
Hahn, Constanze; Neuhaus, Andres H.; Pogun, Sakire; Dettling, Michael; Kotz, Sonja A.; Hahn, Eric; Brune, Martin; Gunturkun, Onur
2011-01-01
Schizophrenia has been associated with deficits in functional brain lateralization. According to some authors, the reduction of asymmetry could even promote this psychosis. At the same time, schizophrenia is accompanied by a high prevalence of nicotine dependency compared to any other population. This association is very interesting, because…
Perception and Lateralization of Spoken Emotion by Youths with High-Functioning Forms of Autism
ERIC Educational Resources Information Center
Baker, Kimberly F.; Montgomery, Allen A.; Abramson, Ruth
2010-01-01
The perception and the cerebral lateralization of spoken emotions were investigated in children and adolescents with high-functioning forms of autism (HFFA), and age-matched typically developing controls (TDC). A dichotic listening task using nonsense passages was used to investigate the recognition of four emotions: happiness, sadness, anger, and…
Selective attention to emotional prosody in social anxiety: a dichotic listening study.
Peschard, Virginie; Gilboa-Schechtman, Eva; Philippot, Pierre
2017-12-01
The majority of evidence on social anxiety (SA)-linked attentional biases to threat comes from research using facial expressions. Emotions are, however, communicated through other channels, such as voice. Despite its importance in the interpretation of social cues, emotional prosody processing in SA has been barely explored. This study investigated whether SA is associated with enhanced processing of task-irrelevant angry prosody. Fifty-three participants with high and low SA performed a dichotic listening task in which pairs of male/female voices were presented, one to each ear, with either the same or different prosody (neutral or angry). Participants were instructed to focus on either the left or right ear and to identify the speaker's gender in the attended side. Our main results show that, once attended, task-irrelevant angry prosody elicits greater interference than does neutral prosody. Surprisingly, high socially anxious participants were less prone to distraction from attended-angry (compared to attended-neutral) prosody than were low socially anxious individuals. These findings emphasise the importance of examining SA-related biases across modalities.
The influence of memory and attention on the ear advantage in dichotic listening.
D'Anselmo, Anita; Marzoli, Daniele; Brancucci, Alfredo
2016-12-01
The role of memory retention and attentional control in hemispheric asymmetry was investigated using a verbal dichotic listening paradigm with the consonant-vowel syllables (/ba/, /da/, /ga/, /ka/, /pa/ and /ta/), while manipulating the focus of attention and the time interval between stimulus and response. Attention was manipulated using three conditions: non-forced (NF), forced left (FL) and forced right (FR) attention. Memory involvement was varied using four delays (0, 1, 3 and 4 s) between stimulus presentation and response. Results showed a significant right ear advantage (REA) in the NF condition and an increased REA in the FR condition. A left ear advantage (LEA) was found in the FL condition. The REA increased significantly in the NF condition at the 3-s compared to the 0-s delay, and in the FR condition at the 1-s compared to the 0-s delay. No modulation of the left ear advantage was observed in the FL condition. These results are discussed in terms of an interaction between attentional processes and memory retention. Copyright © 2016 Elsevier B.V. All rights reserved.
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W
2013-11-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in white noise. Relative to control stimuli that contain no inter-aural timing differences, dichotic pitch stimuli typically elicit an object-related negativity (ORN) response, associated with the perceptual segregation of the tone and the carrier noise into distinct auditory objects. Autistic children failed to demonstrate an ORN, suggesting a failure of segregation; however, the comparison with the ORNs of age-matched typically developing controls narrowly failed to attain significance. More strikingly, the autistic children demonstrated a significant differential response to the pitch stimulus, peaking at around 50 ms. This was not present in the control group, nor has it been found in other groups tested using similar stimuli. This response may be a neural signature of atypical processing of pitch in at least some autistic individuals.
Putter-Katz, Hanna; Adi-Bensaid, Limor; Feldman, Irit; Hildesheimer, Minka
2008-01-01
Twenty children with central auditory processing disorders [(C)APD] were subjected to a structured intervention program of listening skills in quiet and in noise. Their performance was compared to that of a control group of 10 children with (C)APD with no special treatment. Pretests were conducted in quiet and in degraded listening conditions (speech noise and competing speech). The (C)APD management approach was integrative and included top-down and bottom-up strategies. It focused on environmental modifications, remediation techniques, and compensatory strategies. Training was conducted with monosyllabic and polysyllabic words, sentences and phrases in quiet and in noise. Comparisons of pre- and post-management measures indicated increase in speech recognition performance in background noise and competing speech for the treatment group. This improvement was exhibited for both ears. A significant difference between ears was found with the left ear showing improvement in both the short and the long versions of competing sentence tests and the right ear performing better in the long competing sentences only following intervention. No changes were documented for the control group. These findings add to a growing body of literature suggesting that interactive auditory training can improve listening skills.
Tomlin, Danielle; Moore, David R.; Dillon, Harvey
2015-01-01
Objectives: In this study, the authors assessed the potential utility of a recently developed questionnaire (Evaluation of Children’s Listening and Processing Skills [ECLiPS]) for supporting the clinical assessment of children referred for auditory processing disorder (APD). Design: A total of 49 children (35 referred for APD assessment and 14 from mainstream schools) were assessed for auditory processing (AP) abilities, cognitive abilities, and symptoms of listening difficulty. Four questionnaires were used to capture the symptoms of listening difficulty from the perspective of parents (ECLiPS and Fisher’s auditory problem checklist), teachers (Teacher’s Evaluation of Auditory Performance), and children, that is, self-report (Listening Inventory for Education). Correlation analyses tested for convergence between the questionnaires and both cognitive and AP measures. Discriminant analyses were performed to determine the best combination of tests for discriminating between typically developing children and children referred for APD. Results: All questionnaires were sensitive to the presence of difficulty, that is, children referred for assessment had significantly more symptoms of listening difficulty than typically developing children. There was, however, no evidence of more listening difficulty in children meeting the diagnostic criteria for APD. Some AP tests were significantly correlated with ECLiPS factors measuring related abilities providing evidence for construct validity. All questionnaires correlated to a greater or lesser extent with the cognitive measures in the study. Discriminant analysis suggested that the best discrimination between groups was achieved using a combination of ECLiPS factors, together with nonverbal Intelligence Quotient (cognitive) and AP measures (i.e., dichotic digits test and frequency pattern test). 
Conclusions: The ECLiPS was particularly sensitive to cognitive difficulties, an important aspect of many children referred for APD, as well as correlating with some AP measures. It can potentially support the preliminary assessment of children referred for APD. PMID:26002277
Hypothalamic digoxin, hemispheric chemical dominance and sarcoidosis.
Ravi Kumar, A; Kurup, Parameswara Achutha
2004-06-01
The isoprenoid pathway produces three key metabolites: endogenous digoxin (membrane sodium-potassium ATPase inhibitor, immunomodulator and regulator of neurotransmitter/amino acid transport), dolichol (regulates N-glycosylation of proteins) and ubiquinone (free radical scavenger). The role of the isoprenoid pathway in the pathogenesis of sarcoidosis in relation to hemispheric dominance was studied. The isoprenoid pathway-related cascade was assessed in patients with systemic sarcoidosis with pulmonary involvement. The pathway was also assessed in patients with right hemispheric, left hemispheric and bihemispheric dominance for comparison to find out the role of hemispheric dominance in the pathogenesis of sarcoidosis. In patients with sarcoidosis there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of red blood cell (RBC) membrane in this group of patients. The same biochemical patterns were obtained in individuals with right hemispheric dominance. In individuals with left hemispheric dominance the patterns were reversed. Endogenous digoxin, by activating the calcineurin signal transduction pathway of T cells, can contribute to immune activation in sarcoidosis. An altered glycoconjugate metabolism can lead to the generation of endogenous self-glycoprotein antigens in the lung as well as other tissues. Increased free radical generation can also lead to immune activation. The role of a dysfunctional isoprenoid pathway and endogenous digoxin in the pathogenesis of sarcoidosis in relation to right hemispheric chemical dominance is discussed. 
All the patients with sarcoidosis were right-handed/left hemispheric dominant according to the dichotic listening test, but their biochemical patterns were suggestive of right hemispheric chemical dominance. Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-08-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin (modulates tryptophan/tyrosine transport), dolichol (important in N-glycosylation of proteins), and ubiquinone (free radical scavenger). It was considered pertinent to assess the pathway in alcoholic addiction, alcoholic cirrhosis, and acquired hepatocerebral degeneration. Since endogenous digoxin can regulate neurotransmitter transport, the pathway was also assessed in individuals with differing hemispheric dominance to find out the role of hemispheric dominance in its pathogenesis. In the patient group there was elevated digoxin synthesis, increased dolichol and glycoconjugate levels, and low ubiquinone and elevated free radical levels. There was also an increase in tryptophan catabolites and a reduction in tyrosine catabolites, as well as reduced endogenous morphine synthesis from tyrosine. There was an increase in the cholesterol:phospholipid ratio and a reduction in the glycoconjugate level of the RBC membrane in these groups of patients. The same patterns were obtained in individuals with right hemispheric chemical dominance. Alcoholic cirrhosis, alcoholic addiction, and acquired hepatocerebral degeneration are associated with an upregulated isoprenoid pathway and elevated digoxin secretion from the hypothalamus. This can contribute to NMDA excitotoxicity and altered connective tissue/lipid metabolism important in their pathogenesis. Endogenous morphine deficiency plays a role in alcoholic addiction. Alcoholic cirrhosis, addiction, and acquired hepatocerebral degeneration occur in right hemispheric chemically dominant individuals. Ninety percent of the patients with alcoholic addiction, alcoholic cirrhosis, and acquired hepatocerebral degeneration were right-handed and left hemispheric dominant by the dichotic listening test. However, their biochemical patterns were similar to those obtained in right hemispheric chemical dominance.
Hemispheric chemical dominance is a different entity and has no correlation with handedness or the dichotic listening test.
Switching in the Cocktail Party: Exploring Intentional Control of Auditory Selective Attention
ERIC Educational Resources Information Center
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
2011-01-01
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
ERIC Educational Resources Information Center
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were…
ERIC Educational Resources Information Center
Haskins Labs., New Haven, CT.
This collection on speech research presents a number of reports of experiments conducted on neurological, physiological, and phonological questions, using electronic equipment for analysis. The neurological experiments cover auditory and phonetic processes in speech perception, auditory storage, ear asymmetry in dichotic listening, auditory…
Attention and Cognitive Control Networks Assessed in a Dichotic Listening fMRI Study
ERIC Educational Resources Information Center
Falkenberg, Liv E.; Specht, Karsten; Westerhausen, Rene
2011-01-01
A meaningful interaction with our environment relies on the ability to focus on relevant sensory input and to ignore irrelevant information, i.e. top-down control and attention processes are employed to select from competing stimuli following internal goals. In this, the demands for the recruitment of top-down control processes depend on the…
ERIC Educational Resources Information Center
Brancucci, Alfredo; Tommasi, Luca
2011-01-01
For about two decades, neuroscientists have systematically addressed the problem of consciousness: the aim is to discover the neural activity specifically related to conscious perceptions, i.e., the biological properties of what philosophers call qualia. In this view, a neural correlate of consciousness (NCC) is a precise pattern of brain activity…
ERIC Educational Resources Information Center
Brock, Jon; Bzishvili, Samantha; Reid, Melanie; Hautus, Michael; Johnson, Blake W.
2013-01-01
Atypical auditory perception is a widely recognised but poorly understood feature of autism. In the current study, we used magnetoencephalography to measure the brain responses of 10 autistic children as they listened passively to dichotic pitch stimuli, in which an illusory tone is generated by sub-millisecond inter-aural timing differences in…
ERIC Educational Resources Information Center
Bomba, Marie D.; Singhal, Anthony
2010-01-01
Previous dual-task research pairing complex visual tasks involving non-spatial cognitive processes with dichotic listening has shown effects on the late component (Ndl) of the negative difference selective attention waveform but no effects on the early (Nde) response, suggesting that the Ndl, but not the Nde, is affected by non-spatial…
Reduced Capacity in a Dichotic Memory Test for Adult Patients with ADHD
ERIC Educational Resources Information Center
Dige, Niels; Maahr, Eija; Backenroth-Ohsako, Gunnel
2010-01-01
Objective: To evaluate whether a dichotic memory test would reveal deficits in short-term working-memory recall and long-term memory recall in a group of adult patients with ADHD. Methods: A dichotic memory test with ipsilateral backward-speech distraction was used in an adult ADHD group (n = 69) and a control group (n = 66) to compare performance…
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. The present study examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and of auditory organization in scenes with competing sounds. Concurrent auditory segregation was assessed with the competing sentence test (CST) and the dichotic digits test (DDT) in 30 male older adults (15 normal and 15 with right hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other; the task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, the auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing mean CST and DDT scores between CVA patients with right hemisphere impairment and normal participants showed a statistically significant difference (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems. PMID:25679009
Norrelgen, Fritjof; Lilja, Anders; Ingvar, Martin; Gisselgård, Jens; Fransson, Peter
2012-01-01
Objective: The aims of this study were to develop and assess a method for mapping language networks in children using two auditory fMRI protocols in combination with a dichotic listening (DL) task. The method is intended for pediatric patients prior to epilepsy surgery. To evaluate its potential clinical usefulness, we first assessed data from a group of healthy children. Methods: In a first step, language test materials were developed for subsequent implementation in fMRI protocols. This material was evaluated in 30 children with typical development, 10 each from the 1st, 4th, and 7th grades. The language test material was then adapted and implemented in two fMRI protocols targeting frontal and posterior language networks. In a second step, language lateralization was assessed in 17 typical 10–11-year-olds with fMRI and DL. To reach a conclusion about language lateralization, quantitative analyses of the index data from the two fMRI tasks and from the DL task were first performed separately; a set of criteria was then applied to these results. The steps of these analyses are described in detail. Results: The behavioral assessment of the language test material showed that it was well suited for typical children. The language lateralization assessments, based on fMRI and DL data, allowed a conclusion about hemispheric language dominance for 15 of the 17 subjects (88%); in 2 cases (12%), DL provided the critical data. Conclusions: Combining DL with fMRI language mapping to assess hemispheric language dominance is novel and was deemed valuable, since it provided additional information compared to the results gained from each method individually. PMID:23284796
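Studies like the one above summarize fMRI activation counts or dichotic-listening ear scores as a laterality index. The abstract does not state the exact formula or criteria used, but a common convention in this literature is LI = (R - L)/(R + L); the sketch below applies it to dichotic ear scores, with the 0.2 classification threshold purely illustrative.

```python
# Hypothetical laterality-index sketch (formula is the conventional
# LI = (R - L) / (R + L); the threshold and labels are illustrative).
# By dichotic-listening convention, a right-ear advantage (LI > 0)
# indicates left-hemisphere language dominance.

def laterality_index(right_score: float, left_score: float) -> float:
    """Return (R - L) / (R + L); 0.0 when both scores are zero."""
    total = right_score + left_score
    return 0.0 if total == 0 else (right_score - left_score) / total

def classify(li: float, threshold: float = 0.2) -> str:
    """Map an LI to a dominance label using an illustrative cutoff."""
    if li >= threshold:
        return "left-hemisphere dominant"
    if li <= -threshold:
        return "right-hemisphere dominant"
    return "bilateral/inconclusive"

# Example: 28 correct right-ear reports vs. 17 left-ear reports.
li = laterality_index(28, 17)
print(round(li, 3), classify(li))  # 0.244 left-hemisphere dominant
```

The same index can be applied to fMRI data by substituting activated-voxel counts in right- and left-hemisphere language regions for the ear scores.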
[fMRI study of the dominant hemisphere for language in patients with brain tumor].
Buklina, S B; Podoprigora, A E; Pronin, I N; Shishkina, L V; Boldyreva, G N; Bondarenko, A A; Fadeeva, L M; Kornienko, V N; Zhukov, V Iu
2013-01-01
This paper describes a study of language lateralization in patients with brain tumors, measured by preoperative functional magnetic resonance imaging (fMRI), comparing the results with tumor histology and the profile of functional asymmetry. Twenty-one patients underwent fMRI scanning; 15 had a tumor in the left hemisphere and 6 in the right. Tumors were localized mainly in the frontal, temporal, and fronto-temporal regions. The histological diagnosis was malignant Grade IV in 8 cases and Grade I-III in 13 cases. The fMRI study was performed on a "Signa Excite" scanner with a field strength of 1.5 T. Reciting the months of the year in reverse order was used as the speech task. fMRI results were compared with the profile of functional asymmetry obtained from the Annett handedness questionnaire and the dichotic listening test. Broca's area was found in the left hemisphere in 7 cases, 6 of whom had a Grade I-III tumor; one patient with glioblastoma had a right-hemisphere tumor. Broca's area was found in the right hemisphere in 3 patients (2 with a left-sided tumor and one with a right-sided tumor); one patient with a left-sided tumor had mild motor aphasia. Bilateral activation in both hemispheres was observed in 6 patients, all with Grade II-III tumors of the left hemisphere; signs of left-handedness were revealed in only half of these patients. Broca's area was not found in 4 cases, all with large malignant Grade IV tumors. One patient could not complete the research protocol. Notably, the results of the fMRI scans, the Annett questionnaire, and the dichotic listening test frequently disagreed. Bilateral activation during speech tasks may reflect brain plasticity in cases of slowly growing tumors. It is therefore important to consider the full range of clinical data when studying the problem of the dominant hemisphere for language.
Momen, Nausheen
2009-12-01
The use of computer-based, psychomotor testing systems for personnel selection and classification has gained popularity in the civilian and military worlds in recent years. However, several issues need to be resolved before adopting a computerized, psychomotor test. The purpose of this study was to compare the impact of alternative input devices used for the Test Of Basic Aviation Skills (TBAS) as well as to explore the practice effects of the TBAS. In study 1, participants were administered the TBAS tracking tests once with a throttle and once with foot pedals in a classic test-retest paradigm. The results confirmed that neither of the input devices provided a significant advantage on TBAS performance. In study 2, participants were administered the TBAS twice with a 24-hour interval between testing. The results demonstrated significant practice effects for all the TBAS subtests except for the dichotic listening tests.
Gender differences in lateralization of mismatch negativity in dichotic listening tasks.
Ikezawa, Satoru; Nakagome, Kazuyuki; Mimura, Masaru; Shinoda, Junko; Itoh, Kenji; Homma, Ikuo; Kamijima, Kunitoshi
2008-04-01
With the aim of investigating gender differences in the functional lateralization subserving preattentive processing of language stimuli, we compared auditory mismatch negativities (MMNs) using dichotic listening tasks. Forty-four healthy volunteers, including 23 males and 21 females, participated in the study. MMNs generated by pure-tone and phonetic stimuli were compared, to check for the existence of language-specific gender differences in lateralization. Both EEG amplitude and scalp current density (SCD) data were analyzed. With phonetic MMNs, EEG findings revealed significantly larger amplitude in females than males, especially in the right hemisphere, while SCD findings revealed left hemisphere dominance and contralateral dominance in males alone. With pure-tone MMNs, no significant gender differences in hemispheric lateralization appeared in either EEG or SCD findings. While males exhibited left-lateralized activation with phonetic MMNs, females exhibited more bilateral activity. Further, the contralateral dominance of the SCD distribution associated with the ear receiving deviant stimuli in males indicated that ipsilateral input as well as interhemispheric transfer across the corpus callosum to the ipsilateral side was more suppressed in males than in females. The findings of the present study suggest that functional lateralization subserving preattentive detection of phonetic change differs between the genders. These results underscore the significance of considering the gender differences in the study of MMN, especially when phonetic stimulus is adopted. Moreover, they support the view of Voyer and Flight [Voyer, D., Flight, J., 2001. Gender differences in laterality on a dichotic task: the influence of report strategies. Cortex 37, 345-362.] 
in that the gender difference in hemispheric lateralization of language function is observed in a well-managed-attention condition, which fits the condition adopted in the MMN measurement; subjects are required to focus attention to a distraction task and thereby ignore the phonetic stimuli that elicit MMN.
Persinger, M A; Moulden, J A; Richards, P M
1999-10-01
Analyses of the data from 212 boys and girls, aged 7-14 years, demonstrated a relatively abrupt and permanent decrease in the numbers of errors for dichotic (left ear) word listening and for toe gnosis after the ninth year. This pattern was not observed for right ear errors, finger gnosis, or indices of finger and foot agility. The results are compatible with the hypothesis that the final differentiation of the paracentral lobules and adjacent corpus callosum by the most distal portions of the Anterior Cerebral Artery occurs around 9 or 10 years of age. Implications for the development of the sense of self, enhanced apprehension, and "the sense of a presence" are discussed.
Lateralized goal framing: how selective presentation impacts message effectiveness.
McCormick, Michael; Seta, John J
2012-11-01
We tested whether framing a message as a gain or loss would alter its effectiveness by using a dichotic listening procedure to selectively present a health related message to the left or right hemisphere. A significant goal framing effect (losses > gains) was found when right, but not left, hemisphere processing was initially enhanced. The results support the position that the contextual processing style of the right hemisphere is especially sensitive to the associative implications of the frame. We discussed the implications of these findings for goal framing research, and the valence hypothesis. We also discussed how these findings converge with prior valence framing research and how they can be of potential use to health care providers.
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitive demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
A novel procedure for examining pre-lexical phonetic-level analysis
NASA Astrophysics Data System (ADS)
Bashford, James A.; Warren, Richard M.; Lenz, Peter W.
2005-09-01
A recorded word repeated over and over is heard to undergo a series of illusory changes (verbal transformations) to other syllables and words in the listener's lexicon. When a second image of the same repeating word is added through dichotic presentation (with an interaural delay preventing fusion), the two distinct lateralized images of the word undergo independent illusory transformations at the same rate observed for a single image [Lenz et al., J. Acoust. Soc. Am. 107, 2857 (2000)]. However, when the contralateral word differs by even one phoneme, the transformation rate decreases dramatically [Bashford et al., J. Acoust. Soc. Am. 110, 2658 (2001)]. This suppression of transformations did not occur when a nonspeech competitor was employed. The present study found that dichotic suppression of transformation rate is also independent of the top-down influence of a verbal competitor's word frequency, neighborhood density, and lexicality. However, suppression did increase with the extent of feature mismatch at a given phoneme position (e.g., transformations for "dark" were suppressed more by contralateral "hark" than by "bark"). These and additional findings indicate that dichotic verbal transformations can provide experimental access to a pre-lexical phonetic analysis normally obscured by subsequent processing. [Work supported by NIH.]
ERIC Educational Resources Information Center
Pimentel, Eduarda; Albuquerque, Pedro B.
2013-01-01
The Deese/Roediger-McDermott (DRM) paradigm comprises the study of lists in which words (e.g., bed, pillow, etc.) are all associates of a single nonstudied critical item (e.g., sleep). The probability of falsely recalling or recognising nonstudied critical items is often similar to (or sometimes higher than) the probability of correctly recalling…
Mölle, M; Albrecht, C; Marshall, L; Fehm, H L; Born, J
1997-01-01
This study examined the effects of ACTH 4-10, a fragment of adrenocorticotropin (ACTH) with known central nervous system (CNS) activity, on the dimensional complexity of the ongoing electroencephalographic (EEG) activity. Stressful stimuli cause ACTH to be released from the pituitary, and as a neuropeptide ACTH may concurrently exert adaptive influences on the brain's processing of these stimuli. Previous studies have indicated an impairing influence of ACTH on selective attention. Dimensional complexity of the EEG, which indexes the brain's way of stimulus processing, was evaluated while subjects performed tasks with different attention demands. Sixteen healthy men (23 to 33 years) were tested once after placebo and another time after administration of ACTH 4-10 (1.25 mg intravenously (i.v.), 30 minutes before testing). The EEG was recorded while subjects were presented with a dichotic listening task (consisting of the concurrent presentation of tone pips to the left and right ear). Subjects either a) listened to pips in both ears (divided attention), or b) listened selectively to pips in one ear (selective attention), or c) ignored all pips. Dimensional complexity of the EEG was higher during divided than selective attention. ACTH significantly increased the EEG complexity during selective attention, in particular over the midfrontal cortex (Fz, Cz). The effects support the view of a de-focusing action of ACTH during selective attention that could serve to improve the organism's adaptation to stress stimuli.
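The "dimensional complexity" measure used in the study above is typically estimated with a correlation-dimension algorithm in the Grassberger-Procaccia family. The sketch below is a minimal, illustrative version: the embedding dimension, delay, and radius range are assumptions for demonstration, not the study's actual parameters.

```python
import numpy as np

def correlation_dimension(x, m=5, tau=2, n_radii=12):
    """Illustrative Grassberger-Procaccia-style estimate: delay-embed
    the series, then take the slope of log C(r) vs. log r."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    # Delay embedding: each row is (x[i], x[i+tau], ..., x[i+(m-1)*tau]).
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.sqrt(((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1))
    d = d[np.triu_indices(n, k=1)]               # unique pairwise distances
    radii = np.logspace(np.log10(np.percentile(d, 5)),
                        np.log10(np.percentile(d, 50)), n_radii)
    c = np.array([(d < r).mean() for r in radii])  # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# Sanity check on a noisy sine: a limit cycle should yield a low estimate.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 600)
sig = np.sin(t) + 0.05 * rng.standard_normal(len(t))
print(round(correlation_dimension(sig), 2))
```

In EEG applications, higher estimates are read as more complex (less ordered) cortical dynamics, which is how the divided-attention versus selective-attention contrast above is interpreted.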
Ear Advantage for Musical Location and Relative Pitch: Effects of Musical Training and Attention.
Hutchison, Joanna L; Hubbard, Timothy L; Hubbard, Nicholas A; Rypma, Bart
2017-06-01
Trained musicians have been found to exhibit a right-ear advantage for high tones and a left-ear advantage for low tones. We investigated whether this right/high, left/low pattern of musical processing advantage exists in listeners who had varying levels of musical experience, and whether such a pattern might be modulated by attentional strategy. A dichotic listening paradigm was used in which different melodic sequences were presented to each ear, and listeners attended to (a) the left ear or the right ear or (b) the higher pitched tones or the lower pitched tones. Listeners judged whether tone-to-tone transitions within each melodic sequence moved upward or downward in pitch. Only musically experienced listeners could adequately judge the direction of successive pitch transitions when attending to a specific ear; however, all listeners could judge the direction of successive pitch transitions within a high-tone stream or a low-tone stream. Overall, listeners exhibited greater accuracy when attending to relatively higher pitches, but there was no evidence to support a right/high, left/low bias. Results were consistent with effects of attentional strategy rather than an ear advantage for high or low tones. Implications for a potential performer/audience paradox in listening space are considered.
Response procedure, memory, and dichotic emotion recognition.
Voyer, Daniel; Dempsey, Danielle; Harding, Jennifer A
2014-03-01
Three experiments investigated the role of memory and rehearsal in a dichotic emotion recognition task by manipulating the response procedure as well as the interval between encoding and retrieval while taking into account order of report. For all experiments, right-handed undergraduates were presented with dichotic pairs of the words bower, dower, power, and tower pronounced in a sad, angry, happy, or neutral tone of voice. Participants were asked to report the two emotions presented on each trial by clicking on the corresponding drawings or words on a computer screen, either following no delay or a five second delay. Experiment 1 applied the delay conditions as a between-subjects factor whereas it was a within-subject factor in Experiment 2. In Experiments 1 and 2, more correct responses occurred for the left than the right ear, reflecting a left ear advantage (LEA) that was slightly larger with a nonverbal than a verbal response. The LEA was also found to be larger with no delay than with the 5s delay. In addition, participants typically responded first to the left ear stimulus. In fact, the first response produced a LEA whereas the second response produced a right ear advantage. Experiment 3 involved a concurrent task during the delay to prevent rehearsal. In Experiment 3, the pattern of results supported the claim that rehearsal could account for the findings of the first two experiments. The findings are interpreted in the context of the role of rehearsal and memory in models of dichotic listening. Copyright © 2013 Elsevier Inc. All rights reserved.
Use of Disjunctive Response Requirements in Dual-Task Environments: Implications for Automation.
1986-05-01
could be momentarily held in a short-term sensory buffer for later processing. Broadbent, postulating an early filter model, assumed the physical nature...explicative power of the early filter model, further dichotic listening experiments began to support, as a minimum, a late filter model. Deutsch and Deutsch... filter model came from a study by Corteen and Wood (1972). Initially, they conditioned a list of city names to electrical shock until the
Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J
2015-06-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.
Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G
2016-02-01
Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expressed the synchrony between neural oscillations of both hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
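The stimulus described above is simple to construct: two pure tones whose frequencies differ by the desired beat rate, one per ear. The sketch below uses assumed parameters (a 400 Hz carrier and a 10 Hz alpha-band beat; the study's actual stimuli are not specified in the abstract) and verifies numerically that acoustically mixing the channels, as in the monaural control condition, produces amplitude modulation at the difference frequency.

```python
import numpy as np

# Assumed stimulus parameters, chosen for illustration only.
fs = 8000                                 # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)             # 2 s of signal
carrier, beat = 400.0, 10.0               # 10 Hz = alpha-band beat
left = np.sin(2 * np.pi * carrier * t)            # left-ear channel
right = np.sin(2 * np.pi * (carrier + beat) * t)  # right-ear channel

# Mixing the channels (the monaural control) yields amplitude modulation
# at the difference frequency: the squared mixture contains a spectral
# line at `beat` Hz via the cos(a - b) cross term.
mix = left + right
spectrum = np.abs(np.fft.rfft(mix ** 2))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
low = freqs < 50                                       # low band only
peak_hz = freqs[low][np.argmax(spectrum[low][1:]) + 1]  # skip DC bin
print(peak_hz)  # 10.0
```

For a true binaural presentation, `left` and `right` would instead be routed to separate earphone channels, so the 10 Hz beat exists only as a neural interaction, not in the acoustic signal at either ear.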
Assessment of central auditory processing in a group of workers exposed to solvents.
Fuente, Adrian; McPherson, Bradley; Muñoz, Verónica; Pablo Espina, Juan
2006-12-01
Despite having normal hearing thresholds and speech recognition thresholds, a group of workers exposed to solvents showed abnormal results on central auditory tests. Workers exposed to solvents may have difficulties in everyday listening situations that are not related to a decrement in hearing thresholds; a central auditory processing disorder may underlie these difficulties. The aim was to study central auditory processing abilities in a group of workers occupationally exposed to a mix of organic solvents. Ten workers exposed to a mix of organic solvents and 10 matched non-exposed workers were studied. The test battery comprised pure-tone audiometry, tympanometry, acoustic reflex measurement, acoustic reflex decay, dichotic digit, pitch pattern sequence, masking level difference, filtered speech, random gap detection, and hearing-in-noise tests. All the workers presented normal hearing thresholds and no signs of middle ear abnormalities. Workers exposed to solvents scored lower than the control group, and below previously reported normative data, on the majority of the tests.
Klichowski, Michal; Króliczak, Gregory
2017-06-01
Potential links between language and numbers, and the laterality of symbolic number representations in the brain, are still debated. Furthermore, reports on bilingual individuals indicate that the language-number interrelationships might be quite complex. Therefore, we carried out a visual half-field (VHF) and dichotic listening (DL) study with action words and different forms of symbolic numbers as stimuli, to test the laterality of word and number processing in single-language, dual-language, and mixed (task and language) contexts. Experiment 1 (VHF) showed a significant right visual field/left hemisphere advantage in response accuracy for action words as compared to any form of symbolic number processing. Experiment 2 (DL) revealed a substantially reversed effect: a significant right ear/left hemisphere advantage for arithmetic operations as compared to action word processing, and in response times in single- and dual-language contexts for numbers vs. action words. All these effects were language independent. Notably, for within-task response accuracy compared across modalities, significant differences were found in all studied contexts. Thus, our results run counter to findings showing that action-relevant concepts and words, as well as number words, are represented and processed primarily in the left hemisphere. Instead, we found that in the auditory context, following substantial engagement of working memory (here: by arithmetic operations), there is a subsequent functional reorganization of the processing of single stimuli, whether verbs or numbers. This reorganization, i.e. their weakened laterality, at least for response accuracy, is tied not to the processing of numbers as such but to the number of items to be processed. For response times, except for unpredictable tasks in mixed contexts, the "number problem" is more apparent.
These outcomes are highly relevant to difficulties that simultaneous translators encounter when dealing with lengthy auditory material in which single items such as number words (and possibly other types of key words) need to be emphasized. Our results may also shed a new light on the "mathematical savant problem". Copyright © 2017 Elsevier Ltd. All rights reserved.
What Can We Learn about Auditory Processing from Adult Hearing Questionnaires?
Bamiou, Doris-Eva; Iliadou, Vasiliki Vivian; Zanchetta, Sthella; Spyridakou, Chrysa
2015-01-01
Questionnaires addressing auditory disability may identify and quantify specific symptoms in adult patients with listening difficulties. (1) To assess the validity of the Speech, Spatial, and Qualities of Hearing Scale (SSQ), the (Modified) Amsterdam Inventory for Auditory Disability (mAIAD), and the Hyperacusis Questionnaire (HYP) in adult patients experiencing listening difficulties in the presence of a normal audiogram. (2) To examine which individual questionnaire items yield the worst scores in clinical participants with auditory processing disorder (APD). A prospective correlational analysis study. Clinical participants (N = 58) referred to audiology/ear, nose, and throat or audiovestibular medicine clinics for assessment because of listening difficulties in the presence of normal audiometric thresholds. Normal control participants (N = 30). The mAIAD, HYP, and SSQ were administered to a clinical population of nonneurological adults who were referred for auditory processing (AP) assessment because of hearing complaints, in the presence of a normal audiogram and normal cochlear function, and to a sample of age-matched normal-hearing controls, before the AP testing. Clinical participants with abnormal results in at least one ear and in at least two tests of AP (with at least one of these tests being nonspeech) were classified as clinical APD (N = 39), and the remaining (16 of whom had a single test abnormality) as clinical non-APD (N = 19). The SSQ correlated strongly with the mAIAD and the HYP, and the correlations were similar within the clinical group and the normal controls. All questionnaire total scores and subscores (except the sound distinction subscore of the mAIAD) were significantly worse in the clinical APD group versus the normal group, while questionnaire total scores and most subscores indicated greater listening difficulties for the clinical non-APD versus the normal subgroups.
Overall, the clinical non-APD group tended to give better scores than the APD group in all questionnaires administered. Correlation was strong for the worse-ear gaps-in-noise threshold with the SSQ, mAIAD, and HYP; strong to moderate for the speech-in-babble and left-ear dichotic digit test scores (at p < 0.01); and weak to moderate for the remaining AP tests, except for the frequency pattern test, which did not correlate. The worst-scored items in all three questionnaires concerned speech-in-noise questions. This is similar to the worst-scored items reported in the literature for hearing-impaired participants. Worst-scored items of the clinical group also included questions on quality aspects of listening from the SSQ, which most likely pertain to cognitive aspects of listening, such as the ability to ignore other sounds and listening effort. Hearing questionnaires may help assess the symptoms of adults with APD. The listening difficulties and needs of adults with APD overlap to some extent with those of hearing-impaired listeners, but there are significant differences. The correlation of the gaps-in-noise and duration pattern (but not frequency pattern) tests with the questionnaire scores indicates that temporal processing deficits may play an important role in clinical presentation. American Academy of Audiology.
[The role of the right hemisphere on recovery from Wernicke's aphasia].
Tabuchi, M; Fujii, T; Yamadori, A; Onodera, K; Endou, K
1998-04-01
We report a rare case of severe Wernicke's aphasia with a rapid and surprisingly good recovery despite a large infarct involving the left posterior language area. A 68-year-old right-handed woman without a family history of left-handedness developed severe comprehension difficulty and paraphasic output following a large infarct in the left posterior temporoparietal region. However, within 6 weeks, naming, comprehension, and repetition of words became almost normal. Spontaneous speech also became almost normal, although comprehension and repetition of sentences remained slightly impaired. The lesion size remained unchanged. A dichotic listening test 4 months after onset showed a clear left ear superiority. From these observations, we speculate that dormant language function in the right hemisphere may have contributed to the rapid and good recovery in this case.
[Changes in hemispheric asymmetry under intensive foreign language instruction].
Bykova, L G; Smirnova, T I
1991-01-01
Using a dichotic listening test, we studied the dynamics of speech perception in 20 subjects during intensive instruction in French and German. Before instruction began, the left hemisphere was dominant for verbal functions in all but three subjects. Over the course of instruction, activation of the opposite hemisphere was observed in 84% of subjects. Among subjects with the same initial level of language knowledge, expert evaluation showed that success in conversational practice was greater the larger the shift in the right-ear coefficient during learning. This coefficient therefore makes it possible to quantify the intellectual effort expended during learning. The authors suggest that such psychophysiological monitoring of perception could support the optimization of instruction through an individualized approach.
Jacquin-Courtois, S; Rode, G; Pavani, F; O'Shea, J; Giard, M H; Boisson, D; Rossetti, Y
2010-03-01
Unilateral neglect is a disabling syndrome frequently observed following right hemisphere brain damage. Symptoms range from visuo-motor impairments through to deficient visuo-spatial imagery, but impairment can also affect the auditory modality. A short period of adaptation to a rightward prismatic shift of the visual field is known to improve a wide range of hemispatial neglect symptoms, including visuo-manual tasks, mental imagery, postural imbalance, visuo-verbal measures and number bisection. The aim of the present study was to assess whether the beneficial effects of prism adaptation may generalize to auditory manifestations of neglect. Auditory extinction, whose clinical manifestations are independent of the sensory modalities engaged in visuo-manual adaptation, was examined in neglect patients before and after prism adaptation. Two separate groups of neglect patients (all of whom exhibited left auditory extinction) underwent prism adaptation: one group (n = 6) received a classical prism treatment ('Prism' group), the other group (n = 6) was submitted to the same procedure, but wore neutral glasses creating no optical shift (placebo 'Control' group). Auditory extinction was assessed by means of a dichotic listening task performed three times: prior to prism exposure (pre-test), upon prism removal (0 h post-test) and 2 h later (2 h post-test). The total number of correct responses, the lateralization index (detection asymmetry between the two ears) and the number of left-right fusion errors were analysed. Our results demonstrate that prism adaptation can improve left auditory extinction, thus revealing transfer of benefit to a sensory modality that is orthogonal to the visual, proprioceptive and motor modalities directly implicated in the visuo-motor adaptive process. The observed benefit was specific to the detection asymmetry between the two ears and did not affect the total number of responses. 
This indicates a specific effect of prism adaptation on lateralized processes rather than on general arousal. Our results suggest that the effects of prism adaptation can extend to unexposed sensory systems. The bottom-up approach of visuo-motor adaptation appears to interact with higher order brain functions related to multisensory integration and can have beneficial effects on sensory processing in different modalities. These findings should stimulate the development of therapeutic approaches aimed at bypassing the affected sensory processing modality by adapting other sensory modalities.
Lawo, Vera; Fels, Janina; Oberem, Josefa; Koch, Iring
2014-10-01
Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male, one for each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature for gender or ear, respectively, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches with blocked dimensions). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that a large part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
Sugar ingestion and dichotic listening: Increased perceptual capacity is more than motivation.
Scheel, Matthew H; Ambrose, Aimee L
2014-01-01
Participants ingested a sugar drink or a sugar-free drink and then engaged in a pair of dichotic listening tasks. Each task presented a category label and then played a series of word pairs, one word in the left ear and one in the right. Participants attempted to identify pairs containing a target category member. Target category words were homonyms. For example, arms appeared as a target in the "body parts" category. Nontargets that played along with targets were related to a category-appropriate version of the target (e.g., sleeves), a category-inappropriate version (e.g., weapons), or were unrelated to either version of the target (e.g., plant). Hence, an effect of nontarget type on the number of targets missed was evidence that participants processed nontargets for meaning. In the divided attention task, participants monitored both ears. In the focused attention task, participants monitored the left ear. Half the participants in each group completed the divided attention task before the focused attention task; the other half completed the tasks in the reverse order. We set task lengths to about 12 min so that working on the first task would allow sufficient time for metabolizing sugar from the drink before the start of the second task. Nontarget word type significantly affected the number of targets missed in both tasks. Drink type affected performance in the divided attention task only after sufficient time had passed for converting sugar into blood glucose. These results support an energy model, rather than a motivational model, for the effect of sugar ingestion on perceptual tasks.
How brain asymmetry relates to performance – a large-scale dichotic listening study
Hirnstein, Marco; Hugdahl, Kenneth; Hausmann, Markus
2014-01-01
All major mental functions including language, spatial and emotional processing are lateralized but how strongly and to which hemisphere is subject to inter- and intraindividual variation. Relatively little, however, is known about how the degree and direction of lateralization affect how well the functions are carried out, i.e., how lateralization and task performance are related. The present study therefore examined the relationship between lateralization and performance in a dichotic listening task for which we had data available from 1839 participants. In this task, consonant-vowel syllables are presented simultaneously to the left and right ear, such that each ear receives a different syllable. When asked which of the two they heard best, participants typically report more syllables from the right ear, which is a marker of left-hemispheric speech dominance. We calculated the degree of lateralization (based on the difference between correct left and right ear reports) and correlated it with overall response accuracy (left plus right ear reports). In addition, we used reference models to control for statistical interdependency between left and right ear reports. The results revealed a u-shaped relationship between degree of lateralization and overall accuracy: the stronger the left or right ear advantage, the better the overall accuracy. This u-shaped asymmetry-performance relationship consistently emerged in males, females, right-/non-right-handers, and different age groups. Taken together, the present study demonstrates that performance on lateralized language functions depends on how strongly these functions are lateralized. The present study further stresses the importance of controlling for statistical interdependency when examining asymmetry-performance relationships in general. PMID:24427151
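The degree-of-lateralization measure described in this abstract is conventionally expressed as a laterality index. As an illustrative sketch only (assuming the standard (R − L)/(R + L) formula, which the abstract itself does not spell out), the two quantities correlated in the study could be computed as:

```python
def laterality_index(right_correct, left_correct):
    """Laterality index from dichotic listening reports.

    Positive values indicate a right-ear advantage (a marker of
    left-hemisphere speech dominance); negative values indicate a
    left-ear advantage. Ranges from -1 to +1.
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return (right_correct - left_correct) / total


def overall_accuracy(right_correct, left_correct, n_trials):
    """Overall accuracy: left plus right ear correct reports over trials."""
    return (right_correct + left_correct) / n_trials


# Hypothetical example: 20 correct right-ear and 12 correct left-ear
# reports across 36 dichotic trials (not data from the study).
li = laterality_index(20, 12)        # 0.25 -> right-ear advantage
acc = overall_accuracy(20, 12, 36)   # overall accuracy ~0.89
```

The u-shaped asymmetry-performance relationship reported above would then appear as better `acc` at larger absolute values of `li`.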
Binaural pitch fusion: Comparison of normal-hearing and hearing-impaired listeners
Reiss, Lina A. J.; Shayman, Corey S.; Walker, Emily P.; Bennett, Keri O.; Fowler, Jennifer R.; Hartling, Curtis L.; Glickman, Bess; Lasarev, Michael R.; Oh, Yonghee
2017-01-01
Binaural pitch fusion is the fusion of dichotically presented tones that evoke different pitches between the ears. In normal-hearing (NH) listeners, the frequency range over which binaural pitch fusion occurs is usually <0.2 octaves. Recently, broad fusion ranges of 1–4 octaves were demonstrated in bimodal cochlear implant users. In the current study, it was hypothesized that hearing aid (HA) users would also exhibit broad fusion. Fusion ranges were measured in both NH and hearing-impaired (HI) listeners with hearing losses ranging from mild-moderate to severe-profound, and relationships of fusion range with demographic factors and with diplacusis were examined. Fusion ranges of NH and HI listeners averaged 0.17 ± 0.13 octaves and 1.7 ± 1.5 octaves, respectively. In HI listeners, fusion ranges were positively correlated with a principal component measure of the covarying factors of young age, early age of hearing loss onset, and long durations of hearing loss and HA use, but not with hearing threshold, amplification level, or diplacusis. In NH listeners, no correlations were observed with age, hearing threshold, or diplacusis. The association of broad fusion with early onset, long duration of hearing loss suggests a possible role of long-term experience with hearing loss and amplification in the development of broad fusion. PMID:28372056
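The fusion ranges above are expressed in octaves, i.e., the base-2 logarithm of the ratio between the two dichotic tone frequencies. A minimal sketch of that conversion (with illustrative frequency values, not the study's data):

```python
import math


def octave_distance(f1_hz, f2_hz):
    """Distance between two frequencies in octaves: |log2(f2 / f1)|."""
    if f1_hz <= 0 or f2_hz <= 0:
        raise ValueError("frequencies must be positive")
    return abs(math.log2(f2_hz / f1_hz))


# A 0.2-octave fusion range around a 1000 Hz reference tone covers only
# frequencies up to about 1149 Hz, whereas a 2-octave range extends to
# 4000 Hz -- the order-of-magnitude difference between the NH and HI
# groups reported above.
print(octave_distance(1000, 4000))  # 2.0
```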
Music-induced changes in functional cerebral asymmetries.
Hausmann, Markus; Hodgetts, Sophie; Eerola, Tuomas
2016-04-01
After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants listened to either one emotional excerpt, or sat in silence before completing an emotional chimeric faces task (Experiment 1), visual line bisection task (Experiment 2) and a dichotic listening task (Experiment 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4 in which tempo of the happy excerpt was manipulated by controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in the emotion state. Copyright © 2016 Elsevier Inc. All rights reserved.
Salivary testosterone levels are unrelated to handedness or cerebral lateralization for language.
Papadatou-Pastou, Marietta; Martin, Maryanne; Mohr, Christine
2017-03-01
Behavioural and cerebral lateralization are thought to be controlled, at least in part, by prenatal testosterone (T) levels, which would explain why sex differences are found in both laterality traits. The present study investigated hormonal effects on laterality using adult salivary T levels, to explore the adequacy of competing theories: the Geschwind, Behan and Galaburda hypothesis, the callosal hypothesis, and the sexual differentiation hypothesis. Sixty participants (15 right-handers and 15 left-handers of each sex) took part. Behavioural lateralization was assessed by means of hand preference tests (the Edinburgh Handedness Inventory and the Quantification of Hand Preference test) and a hand skill test (the Peg-Moving test), whereas cerebral lateralization for language was assessed using the Consonant-Vowel Dichotic Listening test and the Visual Half-Field Lexical Decision test. Salivary T and cortisol (C) concentrations were measured by luminescence immunoassay. Canonical correlations did not reveal significant relationships between T levels and measures of hand preference, hand skill, or language laterality. Thus, our findings add to the growing literature showing no relationship between T concentrations and behavioural or cerebral lateralization. It is argued that prenatal T is not a major determinant of individual variability in either behavioural or cerebral lateralization.
Norrelgen, Fritjof; Lilja, Anders; Ingvar, Martin; Åmark, Per; Fransson, Peter
2015-01-01
The aim of this study was to evaluate the clinical use of a method to assess hemispheric language dominance in pediatric candidates for epilepsy surgery. The method is designed for patients but has previously been evaluated with healthy children. Nineteen patients, 8-18 years old, with intractable epilepsy and candidates for epilepsy surgery were assessed. The assessment consisted of two functional MRI protocols (fMRI) intended to target frontal and posterior language networks respectively, and a behavioral dichotic listening task (DL). Regional left/right indices for each fMRI task from the frontal, temporal and parietal lobe were calculated, and left/right indices of the DL task were calculated from responses of consonants and vowels, separately. A quantitative analysis of each patient's data set was done in two steps based on clearly specified criteria. First, fMRI data and DL data were analyzed separately to determine whether the result from each of these assessments were conclusive or not. Thereafter, the results from the individual assessments were combined to reach a final conclusion regarding hemispheric language dominance. For 14 of the 19 subjects (74%) a conclusion was reached about their hemispheric language dominance. Nine subjects had a left-sided and five subjects had a right-sided hemispheric dominance. In three cases (16%) DL provided critical data to reach a conclusive result. The success rate of conclusive language lateralization assessments in this study is comparable to reported rates on similar challenged pediatric populations. The results are promising but data from more patients than in the present study will be required to conclude on the clinical applicability of the method.
Bruder, Gerard E; Stewart, Jonathan W; McGrath, Patrick J
2017-07-01
The right and left side of the brain are asymmetric in anatomy and function. We review electrophysiological (EEG and event-related potential), behavioral (dichotic and visual perceptual asymmetry), and neuroimaging (PET, MRI, NIRS) evidence of right-left asymmetry in depressive disorders. Recent electrophysiological and fMRI studies of emotional processing have provided new evidence of altered laterality in depressive disorders. EEG alpha asymmetry and neuroimaging findings at rest and during cognitive or emotional tasks are consistent with reduced left prefrontal activity in depressed patients, which may impair downregulation of amygdala response to negative emotional information. Dichotic listening and visual hemifield findings for non-verbal or emotional processing have revealed abnormal perceptual asymmetry in depressive disorders, and electrophysiological findings have shown reduced right-lateralized responsivity to emotional stimuli in occipitotemporal or parietotemporal cortex. We discuss models of neural networks underlying these alterations. Of clinical relevance, individual differences among depressed patients on measures of right-left brain function are related to diagnostic subtype of depression, comorbidity with anxiety disorders, and clinical response to antidepressants or cognitive behavioral therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Carey, Daniel; Mercure, Evelyne; Pizzioli, Fabrizio; Aydelott, Jennifer
2014-12-01
The effects of ear of presentation and competing speech on N400s to spoken words in context were examined in a dichotic sentence priming paradigm. Auditory sentence contexts with a strong or weak semantic bias were presented in isolation to the right or left ear, or with a competing signal presented in the other ear at an SNR of -12 dB. Target words were congruent or incongruent with the sentence meaning. Competing speech attenuated N400s to both congruent and incongruent targets, suggesting that the demand imposed by a competing signal disrupts the engagement of semantic comprehension processes. Bias strength affected N400 amplitudes differentially depending upon ear of presentation: weak contexts presented to the left ear/right hemisphere (le/RH) produced a more negative N400 response to targets than strong contexts, whereas no significant effect of bias strength was observed for sentences presented to the right ear/left hemisphere (re/LH). The results are consistent with a model of semantic processing in which the RH relies on integrative processing strategies in the interpretation of sentence-level meaning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan
2008-12-01
In this study we present the case of a 71-year-old female patient with sensorineural hearing loss fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting, with difficulty understanding speech in background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss, with thresholds of 51 and 50 dB HL in the right and left ears, respectively. The Dichotic Digit Test was administered both in a divided-attention mode and with attention focused on each ear. The results of this test are consistent with a central auditory processing disorder.
Central auditory processing effects induced by solvent exposure.
Fuente, Adrian; McPherson, Bradley
2007-01-01
Various studies have demonstrated that organic solvent exposure may induce auditory damage. Studies of workers occupationally exposed to solvents suggest, on the one hand, poorer hearing thresholds than in matched non-exposed workers and, on the other hand, central auditory damage due to solvent exposure. Given the potential auditory damage induced by the neurotoxic properties of these substances, the present research aimed to study the possible auditory processing disorder (APD), and the possible hearing difficulties in daily-life listening situations, that solvent-exposed workers may acquire. Fifty workers exposed to a mixture of organic solvents (xylene, toluene, methyl ethyl ketone) and 50 non-exposed workers matched by age, gender, and education were assessed. Only subjects with no history of ear infections, high blood pressure, kidney failure, metabolic and neurological diseases, or alcoholism were selected. The subjects had either normal hearing or sensorineural hearing loss, and normal tympanometric results. Hearing-in-noise (HINT), dichotic digit (DD), filtered speech (FS), pitch pattern sequence (PPS), and random gap detection (RGD) tests were carried out in the exposed and non-exposed groups. A self-report inventory of each subject's performance in daily-life listening situations, the Amsterdam Inventory for Auditory Disability and Handicap, was also administered. Significant threshold differences between exposed and non-exposed workers were found at some of the hearing test frequencies, for both ears. However, exposed workers as a group still presented normal hearing thresholds (equal to or better than 20 dB HL). Also, for the HINT, DD, PPS, FS, and RGD tests, non-exposed workers obtained better results than exposed workers. Finally, solvent-exposed workers reported significantly more hearing complaints in daily-life listening situations than non-exposed workers.
It is concluded that subjects exposed to solvents may acquire an APD, and thus pure-tone audiometry alone is insufficient to assess hearing in solvent-exposed populations.
Subjective and psychophysiological indices of listening effort in a competing-talker task
Mackersie, Carol L.; Cones, Heather
2010-01-01
Background The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech recognition task that requires inhibition of competing sounds. Purpose The purpose was a) to determine if psychophysiological indices of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand and c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. Research Design A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Study Sample Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24–54). Data Collection and Analysis Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic activity (EMG) were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of tasks demand were: single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the NASA-TLX questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. 
Results Mean performance was higher than 96% for all listening tasks. There was no significant change in performance across listening conditions for any listener. There was, however, a significant increase in mean skin conductance and EMG activity as task demand increased. Heart rate and skin temperature did not change significantly. There was no strong association between subjective and psychophysiological measures, but all participants with mean normalized effort ratings of greater than 4.5 (i.e. effort increased by a factor of at least 4.5) showed significant changes in skin conductance. Conclusions Even in the absence of substantial performance changes, listeners may experience changes in subjective and psychophysiological responses consistent with activation of a stress response. Skin conductance appears to be the most promising measure for evaluating individual changes in psychophysiological responses during listening tasks. PMID:21463566
Subjective and psychophysiological indexes of listening effort in a competing-talker task.
Mackersie, Carol L; Cones, Heather
2011-02-01
The effects of noise and other competing backgrounds on speech recognition performance are well documented. There is less information, however, on listening effort and stress experienced by listeners during a speech-recognition task that requires inhibition of competing sounds. The purpose was (a) to determine if psychophysiological indexes of listening effort were more sensitive than performance measures (percentage correct) obtained near ceiling level during a competing speech task, (b) to determine the relative sensitivity of four psychophysiological measures to changes in task demand, and (c) to determine the relationships between changes in psychophysiological measures and changes in subjective ratings of stress and workload. A repeated-measures experimental design was used to examine changes in performance, psychophysiological measures, and subjective ratings in response to increasing task demand. Fifteen adults with normal hearing participated in the study. The mean age of the participants was 27 (range: 24-54). Psychophysiological recordings of heart rate, skin conductance, skin temperature, and electromyographic (EMG) activity were obtained during listening tasks of varying demand. Materials from the Dichotic Digits Test were used to modulate task demand. The three levels of task demand were single digits presented to one ear (low-demand reference condition), single digits presented simultaneously to both ears (medium demand), and a series of two digits presented simultaneously to both ears (high demand). Participants were asked to repeat all the digits they heard, while psychophysiological activity was recorded simultaneously. Subjective ratings of task load were obtained after each condition using the National Aeronautics and Space Administration Task Load Index questionnaire. Repeated-measures analyses of variance were completed for each measure using task demand and session as factors. Mean performance was higher than 96% for all listening tasks. 
There was no significant change in performance across listening conditions for any listener. There was, however, a significant increase in mean skin conductance and EMG activity as task demand increased. Heart rate and skin temperature did not change significantly. There was no strong association between subjective and psychophysiological measures, but all participants with mean normalized effort ratings of greater than 4.5 (i.e., effort increased by a factor of at least 4.5) showed significant changes in skin conductance. Even in the absence of substantial performance changes, listeners may experience changes in subjective and psychophysiological responses consistent with the activation of a stress response. Skin conductance appears to be the most promising measure for evaluating individual changes in psychophysiological responses during listening tasks. American Academy of Audiology.
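The "normalized effort rating" above can be sketched as a simple ratio to the low-demand reference condition; the abstract's gloss ("effort increased by a factor of at least 4.5") implies such a ratio, but the exact normalization used in the study is not given, so the following is an assumption:

```python
def normalized_effort(condition_rating, reference_rating):
    """Express a subjective (e.g., NASA-TLX) effort rating as a
    multiple of the rating in the low-demand reference condition.

    Assumption: 'effort increased by a factor of at least 4.5'
    corresponds to a plain ratio against the reference condition."""
    if reference_rating <= 0:
        raise ValueError("reference rating must be positive")
    return condition_rating / reference_rating

# A high-demand effort rating of 9 against a reference rating of 2
# gives 4.5, the cutoff above which all participants in the study
# showed significant skin conductance changes.
```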
Asynchronous glimpsing of speech: Spread of masking and task set-size
Ozmeral, Erol J.; Buss, Emily; Hall, Joseph W.
2012-01-01
Howard-Jones and Rosen [(1993). J. Acoust. Soc. Am. 93, 2915–2922] investigated the ability to integrate glimpses of speech that are separated in time and frequency using a “checkerboard” masker, with asynchronous amplitude modulation (AM) across frequency. Asynchronous glimpsing was demonstrated only for spectrally wide frequency bands. It is possible that the reduced evidence of spectro-temporal integration with narrower bands was due to spread of masking at the periphery. The present study tested this hypothesis with a dichotic condition, in which the even- and odd-numbered bands of the target speech and asynchronous AM masker were presented to opposite ears, minimizing the deleterious effects of masking spread. For closed-set consonant recognition, thresholds were 5.1–8.5 dB better for dichotic than for monotic asynchronous AM conditions. Results were similar for closed-set word recognition, but for open-set word recognition the benefit of dichotic presentation was more modest and level dependent, consistent with the effects of spread of masking being level dependent. There was greater evidence of asynchronous glimpsing in the open-set than closed-set tasks. Presenting stimuli dichotically supported asynchronous glimpsing with narrower frequency bands than previously shown, though the magnitude of glimpsing was reduced for narrower bandwidths even in some dichotic conditions. PMID:22894234
Osisanya, Ayo; Adewunmi, Abiodun T
2018-02-01
This study was motivated by the need to develop a measure for managing children with a single profile of auditory processing disorder (APD) and to differentiate between true and artefactual improvements. It also sought to determine the efficacy of interventions, both single and combined, on APD against no treatment. A randomised controlled trial (RCT) of interventions was adopted. Participants were randomly allocated to each of the intervention groups or to the no-intervention group. The 10-week intervention comprised 45-minute therapeutic sessions three times a week on listening in noise and sound-localisation ability in the home and school environments. Eighty pupils (7-11 years) with a single profile of APD participated in the study. Treatments were effective on cocktail-party listening and sound localisation. The best result was achieved with the combined therapy (CT), and there was no significant difference in performance among the remaining treatment groups. The interventions were beneficial to pupils with APD and should be adopted by clinicians.
Examining age-related differences in auditory attention control using a task-switching procedure.
Lawo, Vera; Koch, Iring
2014-03-01
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas
2017-06-01
Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening ('cocktail party') scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. To investigate whether a listener's attentional focus can be detected from single-channel, hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal ('in-Ear-EEG') and additionally from 64 electrodes on the scalp. In two different concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were asked to attend either to one of two dichotically presented, concurrent tone streams or to one of two diotically presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Each individual participant's attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e., polarity and latency of components) across subjects. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener's focus of attention.
Evaluation of selective attention in patients with misophonia.
Silva, Fúlvia Eduarda da; Sanchez, Tanit Ganz
2018-03-21
Misophonia is characterized by aversion to highly selective sounds, which evoke a strong emotional reaction. It has been inferred that misophonia, like tinnitus, is associated with hyperconnectivity between the auditory and limbic systems. Individuals with bothersome tinnitus may have selective attention impairment, but this has not yet been demonstrated in misophonia. To characterize a sample of misophonic subjects and compare it, regarding selective attention, with two control groups: one of individuals with tinnitus (without misophonia) and one of asymptomatic individuals (without misophonia and without tinnitus). We evaluated 40 normal-hearing participants: 10 with misophonia, 10 with tinnitus (without misophonia), and 20 without tinnitus and without misophonia. To evaluate selective attention, the dichotic sentence identification test was applied in three situations: first, the Brazilian Portuguese test alone; then the same test combined with each of two competing sounds: a chewing sound (representing a sound that commonly triggers misophonia) and white noise (representing a common type of tinnitus that causes discomfort to patients). In the dichotic sentence identification test with the chewing sound, the average of correct responses differed between the misophonia group and the asymptomatic group (p = 0.027) and between the misophonia group and the tinnitus group (p = 0.002), being lower in the misophonia group in both cases. The dichotic sentence identification test, both alone and with white noise, showed no differences in the average of correct responses among the three groups (p ≥ 0.452). The misophonia participants presented a lower percentage of correct responses in the dichotic sentence identification test with the chewing sound, suggesting that individuals with misophonia may have selective attention impairment when exposed to sounds that trigger the condition.
Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Hannay, H. Julia; Walker, Amy; Dennis, Maureen; Kramer, Larry; Blaser, Susan; Fletcher, Jack M.
2009-01-01
Spina bifida meningomyelocele with hydrocephalus (SBM) is commonly associated with anomalies of the corpus callosum (CC). We describe MRI patterns of regional CC agenesis and relate CC anomalies to functional laterality based on a dichotic listening test in 90 children with SBM and 27 typically developing controls. Many children with SBM (n = 40) showed regional CC anomalies in the form of agenesis of the rostrum and/or splenium, and a smaller number (n = 20) showed hypoplasia (thinning) of all CC regions (rostrum, genu, body, and splenium). The expected right ear advantage (REA) was exhibited by normal controls and children with SBM having a normal or hypoplastic splenium. It was not shown by children with SBM who were left handed, were missing a splenium, or had a higher-level spinal cord lesion. Perhaps the right hemisphere of these children is more involved in processing some aspects of linguistic stimuli. PMID:18764972
Auditory orienting of attention: Effects of cues and verbal workload with children and adults.
Phélip, Marion; Donnot, Julien; Vauclair, Jacques
2016-01-01
Tone cues improve the auditory orienting of attention in children as in adults. Verbal cues, by contrast, do not seem to orient attention as efficiently before the age of 9 years. However, several studies have reported inconsistent effects of orienting attention on ear asymmetries. Multiple factors have been questioned, such as the role of verbal workload. Indeed, the semantic nature of the dichotic pairs and their processing load may explain orienting-of-attention performance. Thus, by controlling for the role of verbal workload, the present experiment aimed to evaluate the development of capacities for the auditory orienting of attention. Right-handed 6- to 12-year-olds and adults were recruited to complete either a tone-cue or a verbal-cue dichotic listening task involving the identification of familiar words or nonsense words. A factorial analysis of variance showed a significant right-ear advantage for all participants and for all types of stimuli. A major developmental effect was observed in which verbal cues played an important role: they allowed the 6- to 8-year-olds to improve their identification performance in the left ear. These effects were taken as evidence for the involvement of top-down processes in cognitive flexibility across development.
The effects of preceding lead-alone and lag-alone click trains on the buildup of echo suppression.
Bishop, Christopher W; Yadav, Deepak; London, Sam; Miller, Lee M
2014-08-01
Spatial perception in echoic environments is influenced by recent acoustic history. For instance, echo suppression becomes more effective or "builds up" with repeated exposure to echoes having a consistent acoustic relationship to a temporally leading sound. Four experiments were conducted to investigate how buildup is affected by prior exposure to unpaired lead-alone or lag-alone click trains. Unpaired trains preceded lead-lag click trains designed to evoke and assay buildup. Listeners reported how many sounds they heard from the echo hemifield during the lead-lag trains. Stimuli were presented in free field (experiments 1 and 4) or dichotically through earphones (experiments 2 and 3). In experiment 1, listeners reported more echoes following a lead-alone train compared to a period of silence. In contrast, listeners reported fewer echoes following a lag-alone train; similar results were observed with earphones. Interestingly, the effects of lag-alone click trains on buildup were qualitatively different when compared to a no-conditioner trial type in experiment 4. Finally, experiment 3 demonstrated that the effects of preceding click trains on buildup cannot be explained by a change in counting strategy or perceived click salience. Together, these findings demonstrate that echo suppression is affected by prior exposure to unpaired stimuli.
NASA Astrophysics Data System (ADS)
Lauter, Judith
2002-05-01
As Research Director of CID, Ira emphasized the importance of combining information from biology with rigorous studies of behavior, such as psychophysics, to better understand how the brain and body accomplish the goals of everyday life. In line with this philosophy, my doctoral dissertation sought to explain brain functional asymmetries (studied with dichotic listening) in terms of the physical dimensions of a library of test sounds designed to represent a speech-music continuum. Results highlighted individual differences plus similarities in terms of patterns of relative ear advantages, suggesting an organizational basis for brain asymmetries depending on physical dimensions of stimulus and gesture with analogs in auditory, visual, somatosensory, and motor systems. My subsequent work has employed a number of noninvasive methods (OAEs, EPs, qEEG, PET, MRI) to explore the neurobiological bases of individual differences in general and functional asymmetries in particular. This research has led to (1) the AXS test battery for assessing the neurobiology of human sensory-motor function; (2) the handshaking model of brain function, describing dynamic relations along all three body/brain axes; (3) the four-domain EPIC model of functional asymmetries; and (4) the trimodal brain, a new model of individual differences based on psychoimmunoneuroendocrinology.
Rominger, Christian; Fink, Andreas; Weiss, Elisabeth M; Bosch, Jannis; Papousek, Ilona
2017-03-01
Positive schizotypy and creativity seem to be linked. However, it remains unclear why they are related and what may make the difference. Because creative ideation is hypothesised to be a dual process (association and inhibition), the propensity for remote associations might be a shared mechanism, while positive schizotypy and creative thinking might be differentially linked to inhibition. This study therefore investigated a potentially overlapping feature of positive schizotypy and creativity (remote associations) as well as a potential dissociative factor (auditory inhibition). From a large screening sample, 46 participants covering a broad range of positive schizotypy were selected. Association proneness was assessed via two association tasks, auditory inhibition skill with the forced-left condition of the Dichotic Listening Test, and creative thinking by means of two creative ideation tests. Positive schizotypy and creative thinking were positively associated. Both traits were linked to lower rates of common associations. However, creative thinking was associated with higher, and positive schizotypy with lower, inhibitory control in the auditory domain. While creativity and positive schizotypy shared some variance (related to remote associations), profound inhibition skills may be vital for creative performance and may coincide with lower levels of positive schizotypy.
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1976-01-01
The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets (tones of a slightly higher pitch) were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above the detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in a multichannel listening task is greater when tones are faint and/or difficult to detect.
Tune, Sarah; Wöstmann, Malte; Obleser, Jonas
2018-02-11
In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in interindividual variability with age, we examined an extensive battery of behavioural, perceptual and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Priming motivation through unattended speech.
Radel, Rémi; Sarrazin, Philippe; Jehu, Marie; Pelletier, Luc
2013-12-01
This study examines whether motivation can be primed through unattended speech. Study 1 used a dichotic-listening paradigm and repeated strength measures. In comparison to the baseline condition, in which the unattended channel was only composed by neutral words, the presence of words related to high (low) intensity of motivation led participants to exert more (less) strength when squeezing a hand dynamometer. In a second study, a barely audible conversation was played while participants' attention was mobilized on a demanding task. Participants who were exposed to a conversation depicting intrinsic motivation performed better and persevered longer in a subsequent word-fragment completion task than those exposed to the same conversation made unintelligible. These findings suggest that motivation can be primed without attention. © 2013 The British Psychological Society.
Perceptual fluency and affect without recognition.
Anand, P; Sternthal, B
1991-05-01
A dichotic listening task was used to investigate the affect-without-recognition phenomenon. Subjects performed a distractor task by responding to the information presented in one ear while ignoring the target information presented in the other ear. The subjects' recognition of and affect toward the target information as well as toward foils was measured. The results offer evidence for the affect-without-recognition phenomenon. Furthermore, the data suggest that the subjects' affect toward the stimuli depended primarily on the extent to which the stimuli were perceived as familiar (i.e., subjective familiarity), and this perception was influenced by the ear in which the distractor or the target information was presented. These data are interpreted in terms of current models of recognition memory and hemispheric lateralization.
[Auditory and corporal laterality, logoaudiometry, and monaural hearing aid gain].
Benavides, Mariela; Peñaloza-López, Yolanda R; de la Sancha-Jiménez, Sabino; García Pedroza, Felipe; Gudiño, Paula K
2007-12-01
To identify the auditory or clinical test that best correlates with the ear in which a monaural hearing aid is fitted in symmetric bilateral hearing loss. A total of 37 adult patients with symmetric bilateral hearing loss were examined regarding the correlation between the best score on the speech discrimination test, corporal laterality, auditory laterality with dichotic digits in Spanish, and the score for filtered words with a monaural hearing aid. The best correlation was obtained between auditory laterality and gain with the hearing aid (0.940). The dichotic test for auditory laterality is a good tool for identifying the better ear in which to fit a monaural hearing aid. The results of this paper suggest the need to apply this test to patients before a hearing aid is indicated.
Alexander, G M; Sherwin, B B
1991-09-01
Twenty-six eugonadal men between the ages of 18 and 27 participated in this investigation of the relationship between sexual arousal, testosterone (T) levels, and the processing of sexual information. At each of the two test sessions, subjects gave a blood sample, listened to an erotic or neutral priming audiotape, and completed a dichotic listening task designed to assess selective attention for sexual stimuli. Subjective levels of sexual arousal to the audiotape and sexual attitudes and sexual experience were assessed by self-report measures. Contrary to our hypothesis, there was no relationship between levels of free T and the strength of the selective attention bias for sexual stimuli. However, men who were more distracted by the sexual material in the task reported higher levels of sexual arousal to erotic imagery than men who were less distracted by it (P < 0.01). Moreover, men who were more sexually aroused by the erotic audiotape made significantly fewer shadowing errors in the erotic prime condition than they did during the neutral prime condition (P < 0.05). There was a negative association between T and shadowing errors in the erotic prime condition (P < 0.05). These results suggest that lower thresholds for sexual arousal are associated with a greater bias to attend to sexual information and that T may have effects on cognitive-motivational aspects of sexual behavior by enhancing attention to relevant stimuli.
Aghamollaei, Maryam; Jafari, Zahra; Tahaei, Aliakbar; Toufan, Reyhane; Keyhani, Mohammadreza; Rahimzade, Shadi; Esmaeili, Mahdieh
2013-09-01
The Dichotic Verbal Memory Test (DVMT) is useful in detecting verbal memory deficits and differences in memory function between the brain hemispheres. The purpose of this study was to prepare the Persian version of the DVMT, to obtain its results in 18- to 25-yr-old Iranian individuals, and to examine ear, gender, and serial-position effects. The Persian version of the DVMT consisted of 18 10-word lists. After the 18 lists were prepared, content validity was assessed by a panel of eight experts and the equivalency of the lists was evaluated. The words were then recorded on CD in a dichotic mode such that 10 words were presented to one ear while the same words, reversed, were simultaneously presented to the other ear. Thereafter, the test was administered to a sample of young, normal Iranian individuals. Thirty normal individuals (no history of neurological, otological, or psychological disease) aged 18 to 25 yr were examined to evaluate the equivalency of the lists, and 110 subjects within the same age range participated in the final stage of the study to obtain normative data on the developed test. There was no significant difference between the mean scores of the 18 developed lists (p > 0.05). The mean content validity index (CVI) score was .96. A significant difference was found between the mean scores of the two ears (p < 0.05) and between female and male participants (p < 0.05). The Persian version of the DVMT has good content validity and can be used for verbal memory assessment in young Iranian adults. American Academy of Audiology.
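The content validity index reported above (.96 across eight experts) is conventionally computed with Lynn's method: the item-level CVI is the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, and the scale-level score is the mean of the item-level values. A minimal sketch under the assumption that this standard method was used (the abstract does not specify the formula):

```python
def item_cvi(ratings, relevant_min=3):
    """Item-level content validity index: proportion of expert
    ratings at or above `relevant_min` on a 4-point relevance scale."""
    return sum(r >= relevant_min for r in ratings) / len(ratings)

def scale_cvi(items):
    """Scale-level CVI (CVI/Ave): mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in items) / len(items)

# Hypothetical ratings from eight experts on two items; one expert
# rates the second item 2 (not relevant).
items = [
    [4, 4, 3, 4, 3, 4, 4, 3],  # item CVI = 8/8 = 1.0
    [4, 3, 2, 4, 4, 3, 4, 4],  # item CVI = 7/8 = 0.875
]
```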
Auditory memory function in expert chess players.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, can be influenced by the strengthening processes that follow long-term chess playing because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean score on the dichotic auditory-verbal memory test differed significantly between the two groups, expert chess players and non-chess players (p ≤ 0.001). The difference between the ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance due to playing chess for a long time.
Morton, L L; Allen, J D; Williams, N H
1994-04-01
Thirty-two male and female adolescents of native ancestry (Ojibwa) and 32 controls were tested using (1) four WISC-R subtests and (2) two dichotic listening tasks that employed a focused-attention paradigm for processing consonant-vowel combinations (CVs) and musical melodies. On the WISC-R, natives scored higher than controls on the Block Design and Picture Completion subtests but lower on the Vocabulary and Similarities subtests. On laterality measures, more native males showed a left ear advantage on the CV task and the melody task. For CVs, the left ear advantage was due to native males' lower right ear (i.e., left hemisphere) involvement. For melodies, the laterality index pointed to less left hemisphere involvement for native males; however, the raw scores showed that natives were performing lower overall. The findings are consistent with culturally based strategy differences, possibly linked to "hemisphericity," but additional clarifying research regarding the cause and extent of such differences is warranted. Thus, implications for education are premature, but a focus on teaching "left hemisphere type" strategies to all individuals not utilizing such skills, including many native males, may prove beneficial.
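A dichotic-listening laterality index is conventionally computed from the right- and left-ear correct scores as (R − L)/(R + L) × 100, with positive values indicating a right ear advantage. A minimal sketch of this convention (the exact index used in the study above is not specified in the abstract):

```python
def laterality_index(right_correct, left_correct):
    """Conventional laterality index: (R - L) / (R + L) * 100.

    Positive values indicate a right ear advantage (REA), negative
    values a left ear advantage. This is the standard formula, not
    necessarily the one used by Morton et al."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# 24 correct right-ear and 16 correct left-ear reports give an
# index of +20.0, i.e., a right ear advantage.
```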
Binaural Pitch Fusion in Bilateral Cochlear Implant Users.
Reiss, Lina A J; Fowler, Jennifer R; Hartling, Curtis L; Oh, Yonghee
Binaural pitch fusion is the fusion of stimuli that evoke different pitches between the ears into a single auditory image. Individuals who use hearing aids or bimodal cochlear implants (CIs) experience abnormally broad binaural pitch fusion, such that sounds differing in pitch by as much as 3-4 octaves are fused across ears, leading to spectral averaging and speech perception interference. The goal of this study was to determine if adult bilateral CI users also experience broad binaural pitch fusion. Stimuli were pulse trains delivered to individual electrodes. Fusion ranges were measured using simultaneous, dichotic presentation of reference and comparison stimuli in opposite ears, and varying the comparison stimulus to find the range that fused with the reference stimulus. Bilateral CI listeners had binaural pitch fusion ranges varying from 0 to 12 mm (average 6.1 ± 3.9 mm), where 12 mm indicates fusion over all electrodes in the array. No significant correlations of fusion range were observed with any subject factors related to age, hearing loss history, or hearing device history, or with any electrode factors including interaural electrode pitch mismatch, pitch match bandwidth, or within-ear electrode discrimination abilities. Bilateral CI listeners have abnormally broad fusion, similar to hearing aid and bimodal CI listeners. This broad fusion may explain the variability of binaural benefits for speech perception in quiet and in noise in bilateral CI users.
Central Presbycusis: A Review and Evaluation of the Evidence
Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur
2018-01-01
Background: The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. Purpose: This report summarizes the review process and presents its findings. Data Collection and Analysis: The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. Results: For the 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For the 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed.
For the 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most were confounded by declines in hearing, cognition, or both; (6) for the nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. Conclusions: Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738
Cortical Correlates of Binaural Temporal Processing Deficits in Older Adults.
Eddins, Ann Clock; Eddins, David A
This study was designed to evaluate binaural temporal processing in young and older adults using a binaural masking level difference (BMLD) paradigm. Using behavioral and electrophysiological measures within the same listeners, a series of stimulus manipulations was used to evaluate the relative contribution of binaural temporal fine-structure and temporal envelope cues. We evaluated the hypotheses that age-related declines in the BMLD task would be more strongly associated with temporal fine-structure than envelope cues and that age-related declines in behavioral measures would be correlated with cortical auditory evoked potential (CAEP) measures. Thirty adults participated in the study, including 10 young normal-hearing, 10 older normal-hearing, and 10 older hearing-impaired adults with bilaterally symmetric, mild-to-moderate sensorineural hearing loss. Behavioral and CAEP thresholds were measured for diotic (So) and dichotic (Sπ) tonal signals presented in continuous diotic (No) narrowband noise (50-Hz wide) maskers. Temporal envelope cues were manipulated by using two different narrowband maskers; Gaussian noise (GN) with robust envelope fluctuations and low-noise noise (LNN) with minimal envelope fluctuations. The potential to use temporal fine-structure cues was controlled by varying the signal frequency (500 or 4000 Hz), thereby relying on the natural decline in phase-locking with increasing frequency. Behavioral and CAEP thresholds were similar across groups for diotic conditions, while the masking release in dichotic conditions was larger for younger than for older participants. Across all participants, BMLDs were larger for GN than LNN and for 500-Hz than for 4000-Hz conditions, where envelope and fine-structure cues were most salient, respectively. Specific age-related differences were demonstrated for 500-Hz dichotic conditions in GN and LNN, reflecting reduced binaural temporal fine-structure coding. 
No significant age effects were observed for 4000-Hz dichotic conditions, consistent with similar use of binaural temporal envelope cues across age in these conditions. For all groups, thresholds and derived BMLD values obtained using the behavioral and CAEP methods were strongly correlated, supporting the notion that CAEP measures may be useful as an objective index of age-related changes in binaural temporal processing. These results demonstrate an age-related decline in the processing of binaural temporal fine-structure cues with preserved temporal envelope coding that was similar with and without mild-to-moderate peripheral hearing loss. Such age-related changes can be reliably indexed by both behavioral and CAEP measures in young and older adults.
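The masking release described above is simply the difference between the diotic (So) and dichotic (Sπ) detection thresholds. A minimal sketch, with hypothetical threshold values (the study's actual numbers are not given in the abstract):

```python
# Hedged sketch: deriving a binaural masking level difference (BMLD).
# Threshold values in dB are illustrative, not taken from the study.
def bmld(threshold_diotic_db, threshold_dichotic_db):
    """Masking release: diotic (So) threshold minus dichotic (S-pi) threshold."""
    return threshold_diotic_db - threshold_dichotic_db

# e.g. a 500-Hz tone in a Gaussian narrowband masker (illustrative numbers)
release_db = bmld(-8.0, -20.0)
print(release_db)  # → 12.0 (dB of masking release)
```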
Auditory memory function in expert chess players
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean dichotic auditory-verbal memory test scores of the expert chess players and non-chess players differed significantly (p ≤ 0.001). The difference between the ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that this enhanced auditory memory function is related to the strengthening of cognitive performance through long-term chess playing. PMID:26793666
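The group comparison above used an independent samples t-test (run in SPSS). The equivalent pooled-variance t statistic can be sketched in plain Python; the function name and the scores below are illustrative, not the study's data:

```python
import math
import statistics

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic (equal variances
    assumed, as in a standard SPSS independent samples t-test)."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled * (1 / n1 + 1 / n2))

# Illustrative memory scores for two groups (not the study's data):
print(independent_t([52, 55, 58, 60], [45, 47, 50, 52]))
```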
Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S
In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial location using the Subirana test. Both the dichotic test and the Subirana test for body laterality and spatial location were applied in a group of 40 children with dyslexia and in a control group made up of 40 children who were paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. It was seen that 42.5% of the children in the group of dyslexics had mixed auditory laterality, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children and in 2.5% of the control group (p ≤ 0.05), and there was 72.5% spatial disorientation in the group of dyslexics, whereas only 15% (p ≤ 0.05) was found in the control group. The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia. These effects were more marked than those found for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.
Hill, N J; Schölkopf, B
2012-01-01
We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
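The trial-by-trial classification described above contrasts ERPs to attended-stream versus unattended-stream stimuli. A hedged sketch of that idea, not the authors' actual pipeline; the array shapes, window indices, and toy data are all illustrative:

```python
import numpy as np

# Hedged sketch (not the authors' pipeline): classify a left/right attention
# trial from the contrast between ERPs to attended- and unattended-stream
# stimuli. epochs_left/epochs_right are (n_stimuli, n_samples) arrays
# time-locked to each stream's stimuli; the window indices are illustrative.
def classify_trial(epochs_left, epochs_right, window=(60, 90)):
    a, b = window
    # Mean amplitude in a late (P3-like) window; the attended stream is
    # assumed to elicit the larger response.
    amp_left = epochs_left[:, a:b].mean()
    amp_right = epochs_right[:, a:b].mean()
    return "left" if amp_left > amp_right else "right"

# Toy example: the left stream carries a larger late deflection.
rng = np.random.default_rng(1)
left = rng.normal(0, 0.1, (20, 100))
left[:, 60:90] += 1.0
right = rng.normal(0, 0.1, (20, 100))
print(classify_trial(left, right))  # → "left"
```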
Dichotic Word Recognition in Noise and the Right-Ear Advantage
ERIC Educational Resources Information Center
Roup, Christina M.
2011-01-01
Purpose: This study sought to compare dichotic right-ear advantages (REAs) of young adults to older adult data (C. M. Roup, T. L. Wiley, & R. H. Wilson, 2006) after matching for overall levels of recognition performance. Specifically, speech-spectrum noise was introduced in order to reduce dichotic recognition performance of young adults to a…
Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.
Bidelman, Gavin M; Grall, Jeremy
2014-11-01
Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes. Copyright © 2014 Elsevier Inc. All rights reserved.
Rominger, Christian; Bleier, Angelika; Fitz, Werner; Marksteiner, Josef; Fink, Andreas; Papousek, Ilona; Weiss, Elisabeth M
2016-07-01
Social cognitive impairments may represent a core feature of schizophrenia and above all are a strong predictor of positive psychotic symptoms. Previous studies could show that reduced inhibitory top-down control contributes to deficits in theory of mind abilities and is involved in the genesis of hallucinations. The current study aimed to investigate the relationship between auditory inhibition, affective theory of mind and the experience of hallucinations in patients with schizophrenia. In the present study, 20 in-patients with schizophrenia and 20 healthy controls completed a social cognition task (the Reading the Mind in the Eyes Test) and an inhibitory top-down Dichotic Listening Test. Schizophrenia patients with greater severity of hallucinations showed impaired affective theory of mind as well as impaired inhibitory top-down control. More dysfunctional top-down inhibition was associated with poorer affective theory of mind performance, and seemed to mediate the association between impairment to affective theory of mind and severity of hallucinations. The findings support the idea of impaired theory of mind as a trait marker of schizophrenia. In addition, dysfunctional top-down inhibition may give rise to hallucinations and may further impair affective theory of mind skills in schizophrenia. Copyright © 2016 Elsevier B.V. All rights reserved.
Signs of impaired selective attention in patients with amyotrophic lateral sclerosis.
Pinkhardt, Elmar H; Jürgens, Reinhart; Becker, Wolfgang; Mölle, Matthias; Born, Jan; Ludolph, Albert C; Schreiber, Herbert
2008-04-01
The evidence for involvement of extramotor cortical areas in non-demented patients with amyotrophic lateral sclerosis (ALS) has been provided by recent neuropsychological and functional brain imaging studies. The aim of this study was to investigate possible alterations in selective attention, an important constituent of frontal brain function, in ALS patients. A classical dichotic listening task paradigm was employed to assess event-related EEG potential (ERP) indicators of selective attention as well as preattentive processing of mismatch, without interference by motor impairment. A total of 20 patients with sporadic ALS according to the revised El Escorial criteria and 20 healthy controls were studied. Additionally, a neuropsychological test battery of frontotemporal functions was applied. Compared with the controls, the ALS patients showed a distinct decrease of the fronto-precentral negative difference wave (Nd), i.e., the main ERP indicator of selective attention. Analysis of the P3 component of the ERPs indicated increased processing of non-relevant stimuli in ALS patients, confirming a reduced focus of attention. We conclude that impaired selective attention reflects a subtle variant of frontotemporal dementia frequently observed in ALS patients at a relatively early stage of the disease.
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right-ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results than VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities in the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding of Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
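Ear and half-field advantages like those above are commonly quantified with a laterality index, (R − L)/(R + L) over correct responses. A minimal sketch with illustrative counts:

```python
# Sketch: standard laterality index for dichotic or visual half-field data.
# Positive values indicate a right-ear / right-field (left-hemisphere)
# advantage; negative values a left-side advantage. Counts are illustrative.
def laterality_index(right_correct, left_correct):
    return (right_correct - left_correct) / (right_correct + left_correct)

print(laterality_index(30, 20))  # → 0.2 (right-ear advantage)
print(laterality_index(18, 27))  # → -0.2 (left half-field advantage)
```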
Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A
2014-12-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.
ERIC Educational Resources Information Center
Wetherby, Amy Miller; And Others
1981-01-01
The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)
Perception of Complex Auditory Scenes
2014-07-02
Simpson, B. D., & Romigh, G. (2014). Ear dominance in a dichotic cocktail party. Journal of the Association for Research in Otolaryngology, Abstract 37, p. 518. Cherry, E. C. (1953). Some…
The influence of agility training on physiological and cognitive performance.
Lennemann, Lynette M; Sidrow, Kathryn M; Johnson, Erica M; Harrison, Catherine R; Vojta, Christopher N; Walker, Thomas B
2013-12-01
Agility training (AT) has recently been instituted in several military communities in hopes of improving combat performance and general fitness. The purpose of this study was to determine how substituting AT for traditional military physical training (PT) influences physical and cognitive performance. Forty-one subjects undergoing military technical training were divided randomly into 2 groups for 6 weeks of training. One group participated in standard military PT consisting of calisthenics and running. A second group duplicated the amount of exercise of the first group but used AT as their primary mode of training. Before and after training, subjects completed a physical and cognitive battery of tests including V̇O2max, reaction time, the Illinois Agility Test, body composition, visual vigilance, dichotic listening, and working memory tests. There were significant improvements within the AT group in V̇O2max, the Illinois Agility Test, visual vigilance, and continuous memory. There was a significant increase in time-to-exhaustion for the traditional group. We conclude that AT is as effective as, or more effective than, PT in enhancing physical fitness. Further, it is potentially more effective than PT in enhancing specific measures of physical and cognitive performance, such as physical agility, memory, and vigilance. Consequently, we suggest that AT be incorporated into existing military PT programs as a way to improve war-fighter performance, and it seems likely that the benefits of AT observed here would extend to various other populations.
Music-induced positive mood broadens the scope of auditory attention
Makkonen, Tommi; Eerola, Tuomas
2017-01-01
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. PMID:28460035
Nakamura, Miyoko; Kolinsky, Régine
2014-12-01
We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.
Stimulus Suffix Effects in Dichotic Memory
ERIC Educational Resources Information Center
Parkinson, Stanley R.; Hubbard, Lora L.
1974-01-01
In the present dichotic memory research, the addition of either a monaural stimulus suffix on the unattended ear or a binaural suffix was shown to selectively impair unattended-ear performance. (Editor)
Hemispheric differentiation and category width.
Huang, M S
1979-12-01
This study concerns the relationship between a cognitive style dimension, category width, and hemispheric differentiation. When lists of word pairs were presented simultaneously in a dichotic listening task to broad and narrow categorisers (all female, right-handed), both groups of subjects recalled more words presented to the right ear than to the left ear, indicating the left hemisphere's superiority in verbal processing. Both broad and narrow categorisers recalled a similar number of words from the right ear (left hemisphere), but the former recalled significantly more words from the left ear than did the latter. This finding is interpreted as meaning that narrow categorisers rely predominantly on the left hemisphere in verbal processing, and that, compared with narrow categorisers, broad categorisers show greater right-hemisphere involvement in processing. The implication of this finding in terms of the differential processing strategies adopted by the two groups is discussed.
McConnell, Patrick A; Froeliger, Brett; Garland, Eric L; Ives, Jeffrey C; Sforzo, Gary A
2014-01-01
Binaural beats are an auditory illusion perceived when two or more pure tones of similar frequencies are presented dichotically through stereo headphones. Although this phenomenon is thought to facilitate state changes (e.g., relaxation), few empirical studies have reported on whether binaural beats produce changes in autonomic arousal. Therefore, the present study investigated the effects of binaural beating on autonomic dynamics [heart rate variability (HRV)] during post-exercise relaxation. Subjects (n = 21; 18-29 years old) participated in a double-blind, placebo-controlled study during which binaural beats and placebo were administered over two randomized and counterbalanced sessions (within-subjects repeated-measures design). At the onset of each visit, subjects exercised for 20-min; post-exercise, subjects listened to either binaural beats ('wide-band' theta-frequency binaural beats) or placebo (carrier tones) for 20-min while relaxing alone in a quiet, low-light environment. Dependent variables consisted of high-frequency (HF, reflecting parasympathetic activity), low-frequency (LF, reflecting sympathetic and parasympathetic activity), and LF/HF normalized powers, as well as self-reported relaxation. As compared to the placebo visit, the binaural-beat visit resulted in greater self-reported relaxation, increased parasympathetic activation and increased sympathetic withdrawal. By the end of the 20-min relaxation period there were no observable differences in HRV between binaural-beat and placebo visits, although binaural-beat associated HRV significantly predicted subsequent reported relaxation. Findings suggest that listening to binaural beats may exert an acute influence on both LF and HF components of HRV and may increase subjective feelings of relaxation.
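As the abstract notes, a binaural beat arises when slightly detuned carriers are presented one to each ear. A minimal synthesis sketch; the carrier and beat frequencies are illustrative (the study's "wide-band" theta stimuli were more complex):

```python
import numpy as np

# Sketch: a 6-Hz (theta-range) binaural beat from detuned sine carriers.
fs = 44100                               # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)            # 2 s of audio
left = np.sin(2 * np.pi * 250 * t)       # 250-Hz tone to the left ear
right = np.sin(2 * np.pi * 256 * t)      # 256-Hz tone to the right ear
stereo = np.column_stack([left, right])  # perceived beat rate = |256 - 250| = 6 Hz
print(stereo.shape)  # → (88200, 2)
```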
Bykova, L G; Bazylev, V N
1994-01-01
A dichotic test was used to compare brain activity over time in 84 adult students learning foreign languages by traditional (36 students) or intensive (48 students) methods. No reliable difference in hemispheric asymmetry was detected between the two methods. With both methods, a reliable majority of students showed activation of the hemisphere opposite to the one initially dominant. Given a comparable initial language level and memory capacity, the maximum quantitative shift in the right-ear coefficient correlated with success in colloquial practice. The authors discuss the possibility of composing an individual profile for each student from repeated dichotic testing as an aid to the language teacher.
Peñaloza López, Yolanda Rebeca; Orozco Peña, Xóchitl Daisy; Pérez Ruiz, Santiago Jesús
2018-04-03
To evaluate central auditory processing disorders (CAPD) in patients with multiple sclerosis, with emphasis on auditory laterality, by applying psychoacoustic tests, and to identify their relationship with functions measured by the Expanded Disability Status Scale (EDSS). Depression scales (HADS), the EDSS, and 9 psychoacoustic tests for CAPD were administered to 26 individuals with multiple sclerosis and 26 controls. Correlations were computed between the EDSS and the psychoacoustic tests. Seven of the 9 psychoacoustic tests differed significantly from controls (P<.05) in the right or left ear (14/19 comparisons). In dichotic digits there was a left-ear advantage, in contrast to the usual right-ear predominance. Five psychoacoustic tests correlated significantly with specific EDSS functions. The left-ear advantage, detected and interpreted here as an expression of deficient corpus callosum and attentional influences in multiple sclerosis, should be investigated further. There was a correlation between psychoacoustic tests and specific EDSS functions. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
Dichotic beats of mistuned consonances.
Feeney, M P
1997-10-01
The beats of mistuned consonances (BMCs) result from the presentation of two sinusoids at frequencies slightly mistuned from a ratio of small integers. Several studies have suggested that the source of dichotic BMCs is an interaction within a binaural critical band. In one case the mechanism has been explained as an aural harmonic of the low-frequency tone (f1) creating binaural beats with the high-frequency tone (f2). The other explanation involves a binaural cross correlation between the excitation pattern of f1 and the contralateral f2, occurring within the binaural critical band centered at f2. This study examined the detection of dichotic BMCs for the octave and fifth. In one experiment with the octave, narrow-band noise centered at f2 was presented to one ear along with f1. The other ear was presented with f2. The noise was used to prevent interactions in the binaural critical band centered at f2. Dichotic BMCs were still detected under these conditions, suggesting that binaural interaction within a critical band does not explain the effect. Localization effects were also observed under this masking condition for phase reversals of tuned dichotic octave stimuli. These findings suggest a new theory of dichotic BMCs as a between-channel phase effect. The modified weighted-image model of localization [Stern and Trahiotis, in Auditory Physiology and Perception, edited by Y. Cazals, L. Demany, and K. Horner (Pergamon, Oxford, 1992), pp. 547-554] was used to provide an explanation of the between-channel mechanism.
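Under the aural-harmonic account mentioned in the abstract, the n-th harmonic of f1 beats against f2 at the difference frequency. A small sketch of that arithmetic; the frequencies are illustrative:

```python
# Sketch: beat rate of a mistuned consonance under the aural-harmonic
# account -- the n-th harmonic of f1 beats against f2 at |f2 - n*f1| Hz.
def bmc_rate(f1, f2, n):
    return abs(f2 - n * f1)

# A mistuned octave: f2 is 4 Hz above 2*f1, giving 4-Hz beats.
print(bmc_rate(500.0, 1004.0, 2))  # → 4.0
```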
Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart
2004-12-01
To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, ages 6-14 yr (32 of whom were referred for susAPD, with the rest age-matched control children), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP), and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated. 
Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children of this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
Giuliano, Ryan J.; Karns, Christina M.; Neville, Helen J.; Hillyard, Steven A.
2015-01-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual’s capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70–90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals. PMID:25000526
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels, and discrimination of consonants for the right ear, and showed a greater left-ear advantage for vowels, indicating undeveloped language laterality.
Gravitoinertial force magnitude and direction influence head-centric auditory localization
NASA Technical Reports Server (NTRS)
DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.
2001-01-01
We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1-2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.
Moore, David R; Sieswerda, Stephanie L; Grainger, Maureen M; Bowling, Alexandra; Smith, Nicholette; Perdew, Audrey; Eichert, Susan; Alston, Sandra; Hilbert, Lisa W; Summers, Lynn; Lin, Li; Hunter, Lisa L
2018-05-01
Children referred to audiology services with otherwise unexplained academic, listening, attention, language, or other difficulties are often found to be audiometrically normal. Some of these children receive further evaluation for auditory processing disorder (APD), a controversial construct that assumes neural processing problems within the central auditory nervous system. This study focuses on the evaluation of APD and how it relates to diagnosis in one large pediatric audiology facility. To analyze electronic records of children receiving a central auditory processing evaluation (CAPE) at Cincinnati Children's Hospital, with a broad goal of understanding current practice in APD diagnosis and the test information that impacts that practice. A descriptive, cross-sectional analysis of APD test outcomes in relation to final audiologist diagnosis for 1,113 children aged 5-19 yr receiving a CAPE between 2009 and 2014. Children had a generally high level of performance on the tests used, resulting in marked ceiling effects on about half the tests. Audiologists developed the diagnostic category "Weakness" because of the large number of referred children who clearly had problems, but who did not fulfill the AAA/ASHA criteria for diagnosis of a "Disorder." A "right-ear advantage" was found in all tests for which each ear was tested, irrespective of whether the tests were delivered monaurally or dichotically. However, neither the side nor size of the ear advantage predicted the ultimate diagnosis well. Co-occurrence of CAPE with other learning problems was nearly universal, but neither the number nor the pattern of co-occurring problems was a predictor of APD diagnosis. The diagnostic patterns of individual audiologists were quite consistent. The number of annual assessments decreased dramatically during the study period.
A simple diagnosis of APD based on current guidelines is neither realistic, given the current tests used, nor appropriate, as judged by the audiologists providing the service. Methods used to test for APD must recognize that any form of hearing assessment probes both sensory and cognitive processing. Testing must embrace modern methods, including digital test delivery, adaptive testing, referral to normative data, appropriate testing for young children, validated screening questionnaires, and relevant objective (physiological) methods, as appropriate. Audiologists need to collaborate with other specialists to understand more fully the behaviors displayed by children presenting with listening difficulties. To achieve progress, it is essential for clinicians and researchers to work together. As new understanding and methods become available, it will be necessary to sort out together what works and what doesn't work in the clinic, both from a theoretical and a practical perspective. American Academy of Audiology.
McConnell, Patrick A.; Froeliger, Brett; Garland, Eric L.; Ives, Jeffrey C.; Sforzo, Gary A.
2014-01-01
Binaural beats are an auditory illusion perceived when two or more pure tones of similar frequencies are presented dichotically through stereo headphones. Although this phenomenon is thought to facilitate state changes (e.g., relaxation), few empirical studies have reported on whether binaural beats produce changes in autonomic arousal. Therefore, the present study investigated the effects of binaural beating on autonomic dynamics [heart rate variability (HRV)] during post-exercise relaxation. Subjects (n = 21; 18–29 years old) participated in a double-blind, placebo-controlled study during which binaural beats and placebo were administered over two randomized and counterbalanced sessions (within-subjects repeated-measures design). At the onset of each visit, subjects exercised for 20-min; post-exercise, subjects listened to either binaural beats (‘wide-band’ theta-frequency binaural beats) or placebo (carrier tones) for 20-min while relaxing alone in a quiet, low-light environment. Dependent variables consisted of high-frequency (HF, reflecting parasympathetic activity), low-frequency (LF, reflecting sympathetic and parasympathetic activity), and LF/HF normalized powers, as well as self-reported relaxation. As compared to the placebo visit, the binaural-beat visit resulted in greater self-reported relaxation, increased parasympathetic activation and increased sympathetic withdrawal. By the end of the 20-min relaxation period there were no observable differences in HRV between binaural-beat and placebo visits, although binaural-beat associated HRV significantly predicted subsequent reported relaxation. Findings suggest that listening to binaural beats may exert an acute influence on both LF and HF components of HRV and may increase subjective feelings of relaxation. PMID:25452734
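The dichotic stimulus described above can be sketched as two sine tones whose frequency offset sets the perceived beat rate. The carrier and beat values below are hypothetical illustrations, not the study's actual stimuli.

```python
import numpy as np

def binaural_beat_pair(carrier_hz=220.0, beat_hz=6.0, dur_s=1.0, sr=44100):
    """Left/right sine tones whose frequency offset equals the desired beat rate."""
    t = np.arange(int(dur_s * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)               # one ear
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)  # other ear, offset by beat_hz
    return left, right

# A 6 Hz offset falls in the theta range; presented dichotically, the
# listener perceives a beating sensation at roughly that rate.
left, right = binaural_beat_pair()
```

Presented diotically (mixed into one channel) the same pair would produce acoustic beats; the binaural-beat illusion specifically requires the dichotic presentation noted in the abstract.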
Dole, Marjorie; Hoen, Michel; Meunier, Fanny
2012-06-01
Developmental dyslexia is associated with impaired speech-in-noise perception. The goal of the present research was to further characterize this deficit in dyslexic adults. In order to specify the mechanisms and processing strategies used by adults with dyslexia during speech-in-noise perception, we explored the influence of background type, presenting single target-words against backgrounds made of cocktail party sounds, modulated speech-derived noise or stationary noise. We also evaluated the effect of three listening configurations differing in terms of the amount of spatial processing required. In a monaural condition, signal and noise were presented to the same ear while in a dichotic situation, target and concurrent sound were presented to two different ears, finally in a spatialised configuration, target and competing signals were presented as if they originated from slightly differing positions in the auditory scene. Our results confirm the presence of a speech-in-noise perception deficit in dyslexic adults, in particular when the competing signal is also speech, and when both signals are presented to the same ear, an observation potentially relating to phonological accounts of dyslexia. However, adult dyslexics demonstrated better levels of spatial release of masking than normal reading controls when the background was speech, suggesting that they are well able to rely on denoising strategies based on spatial auditory scene analysis strategies. Copyright © 2012 Elsevier Ltd. All rights reserved.
Music-induced positive mood broadens the scope of auditory attention.
Putkinen, Vesa; Makkonen, Tommi; Eerola, Tuomas
2017-07-01
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. © The Author (2017). Published by Oxford University Press.
Florida Journal of Communication Disorders, 1998.
ERIC Educational Resources Information Center
Victor, Shelley J., Ed.; Lundy, Donna S., Ed.
1998-01-01
This annual volume is a compilation of research, clinical, and professional articles addressing innovative technology, new diagnostic tests, physiological basis for treatment, and therapeutic ideas in the fields of speech-language pathology and audiology. Featured articles include: (1) "Development of Local Child Norms for the Dichotic Digits…
Xu, Jian
2017-01-01
The present study investigated test-taking motivation in L2 listening testing context by applying Expectancy-Value Theory as the framework. Specifically, this study was intended to examine the complex relationships among expectancy, importance, interest, listening anxiety, listening metacognitive awareness, and listening test score using data from a large-scale and high-stakes language test among Chinese first-year undergraduates. Structural equation modeling was used to examine the mediating effect of listening metacognitive awareness on the relationship between expectancy, importance, interest, listening anxiety, and listening test score. According to the results, test takers’ listening scores can be predicted by expectancy, interest, and listening anxiety significantly. The relationship between expectancy, interest, listening anxiety, and listening test score was mediated by listening metacognitive awareness. The findings have implications for test takers to improve their test taking motivation and listening metacognitive awareness, as well as for L2 teachers to intervene in L2 listening classrooms. PMID:29312063
Simultaneous Masking in a Dichotic Emotion Detection Task
ERIC Educational Resources Information Center
Voyer, Daniel; Soraggi, Mariana; Brake, Brandy; Wood, Heather-Dawn
2006-01-01
The present study investigated the possible role of ceiling effects in producing laterality effects of small magnitude in dichotic emotion detection. Twenty-two right-handed undergraduate students participated in the present experiment. They were required to detect the presence of a target emotion in tones expressing happiness, sadness,…
Auditory system dysfunction in Alzheimer disease and its prodromal states: A review.
Swords, Gabriel M; Nguyen, Lydia T; Mudar, Raksha A; Llano, Daniel A
2018-07-01
Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients. Copyright © 2018 Elsevier B.V. All rights reserved.
Sleep quality and communication aspects in children.
de Castro Corrêa, Camila; José, Maria Renata; Andrade, Eduardo Carvalho; Feniman, Mariza Ribeiro; Fukushiro, Ana Paula; Berretin-Felix, Giédre; Maximino, Luciana Paula
2017-09-01
To correlate children's sleep-related quality of life with their oral language skills, auditory processing, and orofacial myofunctional aspects. Nineteen children (12 males and seven females; mean age 9.26 years) undergoing otorhinolaryngological and speech evaluations participated in this study. The OSA-18 questionnaire was applied, followed by verbal and nonverbal sequential memory tests, the dichotic digit test, the nonverbal dichotic test, and the Sustained Auditory Attention Ability Test, related to auditory processing. The Phonological Awareness Profile test, Rapid Automatized Naming, and Phonological Working Memory were used for assessment of the phonological processing. Language was assessed by the ABFW Child Language Test, analyzing the phonological and lexical levels. Orofacial myofunctional aspects were evaluated through the MBGR Protocol. Statistical tests used were the Mann-Whitney test, Fisher's exact test, and the Spearman correlation. Relating the performance of children in all evaluations to the results obtained in the OSA-18, there was a statistically significant correlation in the phonological working memory for backward digits (p = 0.04), as well as in the breathing item (p = 0.03), posture of the mandible (p = 0.03), and mobility of the lips (p = 0.04). A correlation was seen between sleep-related quality of life and the skills related to phonological processing, specifically phonological working memory for backward digits, and orofacial myofunctional aspects. Copyright © 2017 Elsevier B.V. All rights reserved.
Morton, L L
1994-08-01
Identifying disabilities in word-attack, word-recognition, or reading comprehension allowed for four categories of reading disability: (1) reading comprehension only (RC), (2) word-attack plus comprehension (WA+RC), (3) word-attack, word-recognition, and comprehension (WA+WR+RC), and (4) word-attack but not comprehension (WA-RC). Along with age-matched controls (AMC) and developmental-delay controls (DDC), the disabled groups were tested on a directed-attention dichotic task using consonant-vowel combinations. Laterality results for each place of articulation (i.e., bilabial, alveolar, and velar) selectively attested to greater left hemisphere involvement or engagement for the RC group and greater right hemisphere involvement or engagement for the WA+RC group. Performance of the other two disabled groups was consistent with less efficient right hemisphere involvement or callosal transfer. Implications for theory, research, and remediation are discussed.
Loudness enhancement - Monaural, binaural, and dichotic
NASA Technical Reports Server (NTRS)
Elmasian, R.; Galambos, R.
1975-01-01
When one tone burst (T) precedes another (S) by 100 msec, variations in the intensity of T systematically influence the loudness of S. When T is more intense than S, S loudness is increased; when T is less intense, S loudness is decreased. This occurs in monaural, binaural, and dichotic paradigms of signal presentation. When T and S are presented to the same ear (monaural or binaural), there is more enhancement with less intersubject variability than when they are presented to different ears (dichotic paradigm). Monaural enhancements as large as 30 dB can readily be demonstrated, but decrements rarely exceed 5 dB. Possible physiological mechanisms are discussed for this loudness enhancement, which apparently shares certain characteristics with time-order error, assimilation, and temporal partial masking experiments.
Q-Type Factor Analysis of Healthy Aged Men.
ERIC Educational Resources Information Center
Kleban, Morton H.
Q-type factor analysis was used to re-analyze baseline data collected in 1957 on 47 men aged 65-91. Q-type analysis is the use of factor methods to study persons rather than tests. Although 550 variables were originally studied involving psychiatry, medicine, cerebral metabolism and chemistry, personality, audiometry, dichotic and diotic memory,…
NASA Astrophysics Data System (ADS)
Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas
2017-06-01
Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids and to enable a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participant’s attentional focus could be detected from single-channel EEG responses recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.
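A forward encoding model of this kind can be understood as a lagged regression from the stimulus to a single EEG channel, yielding one weight per time lag (a temporal response function). The ridge-regression sketch below is a generic illustration of that idea under stated assumptions, not the authors' implementation.

```python
import numpy as np

def fit_forward_model(stimulus, eeg, n_lags, alpha=1.0):
    """Ridge regression predicting one EEG channel from lagged stimulus samples.

    Returns one weight per lag: a temporal response function (TRF).
    """
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]   # stimulus delayed by `lag` samples
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)
    return w

# Synthetic sanity check: an "EEG" that simply doubles the stimulus
# should be recovered as a weight of ~2 at lag 0 and ~0 elsewhere.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
trf = fit_forward_model(stim, 2.0 * stim, n_lags=3, alpha=1e-6)
```

In attention-decoding work, separate models are typically fit for the attended and ignored streams, and the better-predicting model indicates the attentional focus.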
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Linguistic Lateralization in Adolescents with Down Syndrome Revealed by a Dichotic Monitoring Test
ERIC Educational Resources Information Center
Shoji, Hiroaki; Koizumi, Natsuko; Ozaki, Hisaki
2009-01-01
Linguistic lateralization in 10 adolescents with Down syndrome (average age: 15.7 years), 15 adolescents with intellectual disabilities of unknown etiology (average age: 17.8 years), 2 groups of children without disabilities (11 children, average age: 4.7 years; 10 children, average age: 8.5 years), and 14 adolescents without disabilities (average…
2015-01-01
An important aspect of speech perception is the ability to group or select formants using cues in the acoustic source characteristics—for example, fundamental frequency (F0) differences between formants promote their segregation. This study explored the role of more radical differences in source characteristics. Three-formant (F1+F2+F3) synthetic speech analogues were derived from natural sentences. In Experiment 1, F1+F3 were generated by passing a harmonic glottal source (F0 = 140 Hz) through second-order resonators (H1+H3); in Experiment 2, F1+F3 were tonal (sine-wave) analogues (T1+T3). F2 could take either form (H2 or T2). In some conditions, the target formants were presented alone, either monaurally or dichotically (left ear = F1+F3; right ear = F2). In others, they were accompanied by a competitor for F2 (F1+F2C+F3; F2), which listeners must reject to optimize recognition. Competitors (H2C or T2C) were created using the time-reversed frequency and amplitude contours of F2. Dichotic presentation of F2 and F2C ensured that the impact of the competitor arose primarily through informational masking. In the absence of F2C, the effect of a source mismatch between F1+F3 and F2 was relatively modest. When F2C was present, intelligibility was lowest when F2 was tonal and F2C was harmonic, irrespective of which type matched F1+F3. This finding suggests that source type and context, rather than similarity, govern the phonetic contribution of a formant. It is proposed that wideband harmonic analogues are more effective informational maskers than narrowband tonal analogues, and so become dominant in across-frequency integration of phonetic information when placed in competition. PMID:25751040
Liu, Danzheng; Shi, Lu-Feng
2013-06-01
This study established the performance-intensity function for Beijing and Taiwan Mandarin bisyllabic word recognition tests in noise in native speakers of Wu Chinese. Effects of the test dialect and listeners' first language on psychometric variables (i.e., slope and 50%-correct threshold) were analyzed. Thirty-two normal-hearing Wu-speaking adults who used Mandarin since early childhood were compared to 16 native Mandarin-speaking adults. Both Beijing and Taiwan bisyllabic word recognition tests were presented at 8 signal-to-noise ratios (SNRs) in 4-dB steps (-12 dB to +16 dB). At each SNR, a half list (25 words) was presented in speech-spectrum noise to listeners' right ear. The order of the test, SNR, and half list was randomized across listeners. Listeners responded orally and in writing. Overall, the Wu-speaking listeners performed comparably to the Mandarin-speaking listeners on both tests. Compared to the Taiwan test, the Beijing test yielded a significantly lower threshold for both the Mandarin- and Wu-speaking listeners, as well as a significantly steeper slope for the Wu-speaking listeners. Both Mandarin tests can be used to evaluate Wu-speaking listeners. Of the 2, the Taiwan Mandarin test results in more comparable functions across listener groups. Differences in the performance-intensity function between listener groups and between tests indicate a first language and dialectal effect, respectively.
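The slope and 50%-correct threshold analyzed above are the two parameters of a performance-intensity (psychometric) function. A common logistic form is sketched below; the specific parameter values are illustrative assumptions, not results from the study.

```python
import math

def performance_intensity(snr_db, threshold_db, slope_per_db):
    """Logistic psychometric function: proportion correct as a function of SNR.

    threshold_db is the SNR at 50% correct; slope_per_db controls how quickly
    performance rises around that threshold.
    """
    return 1.0 / (1.0 + math.exp(-slope_per_db * (snr_db - threshold_db)))

# At the 50%-correct threshold the function returns 0.5 by construction.
p_at_threshold = performance_intensity(snr_db=-2.0, threshold_db=-2.0, slope_per_db=0.3)

# A steeper slope means performance rises faster above the threshold.
p_steep = performance_intensity(0.0, -2.0, 0.6)
p_shallow = performance_intensity(0.0, -2.0, 0.2)
```

In this framing, the abstract's findings correspond to the Beijing test having a lower threshold parameter, and (for the Wu-speaking listeners) a larger slope parameter, than the Taiwan test.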
An auditory brain-computer interface evoked by natural speech
NASA Astrophysics Data System (ADS)
Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto
2012-06-01
Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, visual BCIs limit their own ultimate use because they require subjects to gaze at, explore, and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or with unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal length of trials, recorded with only one channel and without formal training. This novel technique opens the door to a more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
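Bits-per-minute figures like those reported are often computed with the Wolpaw information-transfer-rate formula; whether this study used exactly that formula is an assumption, so the sketch below is a generic illustration only.

```python
import math

def wolpaw_bits_per_trial(n_classes, p_correct):
    """Wolpaw ITR: bits conveyed per trial for an N-class selection with accuracy P."""
    if p_correct <= 1.0 / n_classes:
        return 0.0  # at or below chance, no information is conveyed
    if p_correct >= 1.0:
        return math.log2(n_classes)
    p = p_correct
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def bits_per_minute(n_classes, p_correct, trials_per_minute):
    """Scale per-trial information by the trial rate."""
    return wolpaw_bits_per_trial(n_classes, p_correct) * trials_per_minute
```

For a binary dichotic choice, perfect accuracy yields 1 bit per trial, so a rate of 1.5 bits/min would correspond to roughly 1.5 error-free binary selections per minute.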
The effects of sad prosody on hemispheric specialization for words processing.
Leshem, Rotem; Arzouan, Yossi; Armony-Sivan, Rinat
2015-06-01
This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150-170 ms) and late (240-260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
From dichoptic to dichotic: historical contrasts between binocular vision and binaural hearing.
Wade, Nicholas J; Ono, Hiroshi
2005-01-01
Phenomena involving vision with two eyes have been commented upon for several thousand years whereas those concerned with hearing with two ears have a much more recent history. Studies of binocular vision and binaural hearing are contrasted with respect to the singleness of the percept, experimental manipulations of dichoptic and dichotic stimuli, eye and ear dominance, spatial localisation, and the instruments used to stimulate the paired organs. One of the principal phenomena that led to studies of dichotic hearing was dichoptic colour mixing. There was similar disagreement regarding whether colours or sounds could be combined when presented to different paired organs. Direction and distance in visual localisation were analysed before those for auditory localisation, partly due to difficulties in controlling the stimuli. Instruments for investigating binocular vision, like the stereoscope and pseudoscope, were invented before those for binaural hearing, like the stethophone and pseudophone.
Oh, Yonghee; Reiss, Lina A J
2017-08-01
Both bimodal cochlear implant and bilateral hearing aid users can exhibit broad binaural pitch fusion, the fusion of dichotically presented tones over a broad range of pitch differences between ears [Reiss, Ito, Eggleston, and Wozny. (2014). J. Assoc. Res. Otolaryngol. 15(2), 235-248; Reiss, Eggleston, Walker, and Oh. (2016). J. Assoc. Res. Otolaryngol. 17(4), 341-356; Reiss, Shayman, Walker, Bennett, Fowler, Hartling, Glickman, Lasarev, and Oh. (2017). J. Acoust. Soc. Am. 143(3), 1909-1920]. Further, the fused binaural pitch is often a weighted average of the different pitches perceived in the two ears. The current study was designed to systematically measure these pitch averaging phenomena in bilateral hearing aid users with broad fusion. The fused binaural pitch of the reference-pair tone combination was initially measured by pitch-matching to monaural comparison tones presented to the pair tone ear. The averaged results for all subjects showed two distinct trends: (1) The fused binaural pitch was dominated by the lower-pitch component when the pair tone was either 0.14 octaves below or 0.78 octaves above the reference tone; (2) pitch averaging occurred when the pair tone was between the two boundaries above, with the most equal weighting at 0.38 octaves above the reference tone. Findings from two subjects suggest that randomization or alternation of the comparison ear can eliminate this asymmetry in the pitch averaging range. Overall, these pitch averaging phenomena suggest that spectral distortions and thus binaural interference may arise during binaural stimulation in hearing-impaired listeners with broad fusion.
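The octave offsets reported above are log-ratios of frequency (one octave is a doubling). A minimal sketch of the conversion; the 1 kHz reference frequency is a hypothetical example, not a stimulus from the study:

```python
import math

def octaves_re(f_ref_hz, f_hz):
    """Offset of f_hz relative to f_ref_hz, in octaves (log2 frequency ratio)."""
    return math.log2(f_hz / f_ref_hz)

def tone_at_offset(f_ref_hz, octaves):
    """Frequency lying `octaves` above (or below, if negative) f_ref_hz."""
    return f_ref_hz * 2.0 ** octaves

# Hypothetical 1 kHz reference tone; a pair tone 0.38 octaves above it
f_ref = 1000.0
f_pair = tone_at_offset(f_ref, 0.38)
print(round(f_pair, 1))  # prints 1301.3
```

On this scale the reported pitch-averaging region spans pair tones from 0.14 octaves below to 0.78 octaves above the reference, i.e. a frequency ratio range of about 0.91 to 1.72 times the reference frequency.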
The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants.
Todd, Ann E; Goupell, Matthew J; Litovsky, Ruth Y
2016-01-01
Many bilateral cochlear implant users show sensitivity to binaural information when stimulation is provided using a pair of synchronized electrodes. However, there is large variability in binaural sensitivity between and within participants across stimulation sites in the cochlea. It was hypothesized that within-participant variability in binaural sensitivity is in part affected by limitations and characteristics of the auditory periphery which may be reflected by monaural hearing performance. The objective of this study was to examine the relationship between monaural and binaural hearing performance within participants with bilateral cochlear implants. Binaural measures included dichotic signal detection and interaural time difference discrimination thresholds. Diotic signal detection thresholds were also measured. Monaural measures included dynamic range and amplitude modulation detection. In addition, loudness growth was compared between ears. Measures were made at three stimulation sites per listener. Greater binaural sensitivity was found with larger dynamic ranges. Poorer interaural time difference discrimination was found with larger difference between comfortable levels of the two ears. In addition, poorer diotic signal detection thresholds were found with larger differences between the dynamic ranges of the two ears. No relationship was found between amplitude modulation detection thresholds or symmetry of loudness growth and the binaural measures. The results suggest that some of the variability in binaural hearing performance within listeners across stimulation sites can be explained by factors nonspecific to binaural processing. The results are consistent with the idea that dynamic range and comfortable levels relate to peripheral neural survival and the width of the excitation pattern which could affect the fidelity with which central binaural nuclei process bilateral inputs.
Computer-Detected Attention Affects Foreign Language Listening but Not Reading Performance.
Lee, Shu-Ping
2016-08-01
No quantitative study has explored the influence of attention on learning English as a foreign language (EFL). This study investigated whether computer-detected attention is associated with EFL reading and listening performance and with reading and listening anxiety. Traditional paper-based English tests used as entrance examinations, together with tests of general trait anxiety, reading, listening, reading test state anxiety, and listening test state anxiety, were administered to 252 Taiwanese EFL college students, who were divided into High Attention (Conners' Continuous Performance Test, CPT < 50) and Low Attention (CPT ≥ 50) groups. No differences were found between the two groups on traditional paper-based English tests, trait anxieties, general English reading anxiety scales, or general English listening anxiety scales. The Low Attention group had higher test state anxiety and lower test scores than the High Attention group in listening, but not in reading. State anxiety during listening tests was elevated in EFL students with a computer-detected low-attention tendency, and their EFL listening performance was affected, but these differences were not found in reading. © The Author(s) 2016.
Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva
2018-01-01
Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD, and to assess whether a proposed diagnostic protocol for APD that includes measures of attention could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated Test of Everyday Attention for Children, with visual and auditory attention tasks, the Listening in Spatialized Noise-Sentences test, the Children's Communication Checklist questionnaire, and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationships between these tests and Cochran's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention correlated strongly with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with an APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than the attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion from the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would otherwise have been considered not to have ADs.
The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality, but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
ERIC Educational Resources Information Center
Di Stefano, Marirosa; Marano, Elena; Viti, Marzia
2004-01-01
The assessment of language laterality by the dichotic fused-words test may be impaired by interference effects revealed by the dominant report of one member of the stimulus pair. Stimulus dominance and ear asymmetry were evaluated in a normal population (48 subjects of both sexes and handedness) and in 2 patients with a single functional hemisphere.…
ERIC Educational Resources Information Center
Chi, Youngshin
2011-01-01
This study investigated the breakdown effect in a listening comprehension test: whether test takers are impeded in comprehending lectures, and how they perceived these impediments, as reflected in their cognitive awareness of test tasks containing listening breakdown factors. In the context of this study, a "Breakdown" is a test…
The Relationships between Social Class, Listening Test Anxiety and Test Scores
ERIC Educational Resources Information Center
Rezaabadi, Omid Talebi
2016-01-01
This study investigated the relationships between the social anxiety, social class and listening-test anxiety of students learning English as a foreign language. The aims of the study were to examine the relationship between listening-test anxiety and listening-test performance. The data were collected using an adapted Foreign Language Listening…
Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da
2012-06-01
This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English major college students at a Taiwanese University. Foreign language listening comprehension of computer-based tests designed by MOODLE, a dynamic e-learning environment, with or without immediate feedback together with the state-trait anxiety inventory (STAI) were tested and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect the test anxiety and listening scores. Computer-based immediate feedback did not lower debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during listening tests. Computer-based tests with immediate feedback might help foreign language learners to increase attention in foreign language listening comprehension.
ERIC Educational Resources Information Center
Aryadoust, Vahid
2012-01-01
This article investigates a version of the International English Language Testing System (IELTS) listening test for evidence of differential item functioning (DIF) based on gender, nationality, age, and degree of previous exposure to the test. Overall, the listening construct was found to be underrepresented, which is probably an important cause…
ERIC Educational Resources Information Center
Worthington, Debra L.; Keaton, Shaughan; Cook, John; Fitch-Hauser, Margaret; Powers, William G.
2014-01-01
The Watson-Barker Listening Test (WBLT) is one of the most popular measures of listening comprehension. However, participants in studies utilizing this scale have been almost exclusively Anglo-American. At the same time, previous research questions the psychometric properties of the test. This study addressed both of these issues by testing the…
ERIC Educational Resources Information Center
Yanagawa, Kozo; Green, Anthony
2008-01-01
The purpose of this study is to examine whether the choice between three multiple-choice listening comprehension test formats results in any difference in listening comprehension test performance. The three formats entail (a) allowing test takers to preview both the question stem and answer options prior to listening; (b) allowing test takers to…
Neural effects of cognitive control load on auditory selective attention
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali
2014-01-01
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314
Music listening and cognitive abilities in 10- and 11-year-olds: the blur effect.
Schellenberg, E Glenn; Hallam, Susan
2005-12-01
The spatial abilities of a large sample of 10- and 11-year-olds were tested after they listened to contemporary pop music, music composed by Mozart, or a discussion about the present experiment. After being assigned at random to one of the three listening experiences, each child completed two tests of spatial abilities. Performance on one of the tests (square completion) did not differ as a function of the listening experience, but performance on the other test (paper folding) was superior for children who listened to popular music compared to the other two groups. These findings are consistent with the view that positive benefits of music listening on cognitive abilities are most likely to be evident when the music is enjoyed by the listener.
A frontal but not parietal neural correlate of auditory consciousness.
Brancucci, Alfredo; Lugli, Victor; Perrucci, Mauro Gianni; Del Gratta, Cosimo; Tommasi, Luca
2016-01-01
Hemodynamic correlates of consciousness were investigated in humans during the presentation of a dichotic sequence inducing illusory auditory percepts with features analogous to visual multistability. The sequence consisted of a variation of the original stimulation eliciting Deutsch's octave illusion, created to maintain a stable illusory percept long enough to allow the detection of the underlying hemodynamic activity using functional magnetic resonance imaging (fMRI). Two specular 500 ms dichotic stimuli (400 and 800 Hz) presented in alternation by means of earphones cause an illusory segregation of pitch and ear of origin, which can yield up to four different auditory percepts per dichotic stimulus. Such percepts are maintained stable when one of the two dichotic stimuli is presented repeatedly for 6 s, immediately after the alternation. We observed hemodynamic activity specifically accompanying conscious experience of pitch in a bilateral network including the superior frontal gyrus (SFG, BA9 and BA10), medial frontal gyrus (BA6 and BA9), insula (BA13), and posterior lateral nucleus of the thalamus. Conscious experience of side (ear of origin) was instead specifically accompanied by bilateral activity in the MFG (BA6), STG (BA41), parahippocampal gyrus (BA28), and insula (BA13). These results suggest that the neural substrate of auditory consciousness, differently from that of visual consciousness, may rest upon a fronto-temporal rather than upon a fronto-parietal network. Moreover, they indicate that the neural correlates of consciousness depend on the specific features of the stimulus and suggest the SFG-MFG and the insula as important cortical nodes for auditory conscious experience.
ERIC Educational Resources Information Center
Zarrabi, Fatemeh
2016-01-01
The present study investigated the effectiveness of listening strategy instruction on the metacognitive listening strategies awareness of different EFL learner types (LTs). To achieve this goal, 150 EFL students took part in the study and were taught based on a guided lesson plan regarding listening strategies and a pre-test/post-test design was…
The Effects of Test Preparation on Second-Language Listening Test Performance
ERIC Educational Resources Information Center
Winke, Paula; Lim, Hyojung
2017-01-01
To examine the effects of listening test preparation, we had three groups, two experimental and one control (63 learners total), partake in three types of instruction sandwiched between two equally difficult listening tests (pretests and posttests). The first experimental group took four practice tests and received "explicit"…
Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi
2015-04-01
This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.
ERIC Educational Resources Information Center
Aryadoust, Vahid
2015-01-01
The present study uses a mixture Rasch model to examine latent differential item functioning in English as a foreign language listening tests. Participants (n = 250) took a listening and lexico-grammatical test and completed the metacognitive awareness listening questionnaire comprising problem solving (PS), planning and evaluation (PE), mental…
Does Listening to Mozart Affect Listening Ability?
ERIC Educational Resources Information Center
Bowman, Becki J.; Punyanunt-Carter, Narissra; Cheah, Tsui Yi; Watson, W. Joe; Rubin, Rebecca B.
2007-01-01
Considerable research has been conducted testing Rauscher, Shaw, and Ky's (1993) Mozart Effect (ME). This study attempts to replicate, in part, research that tested the ME on listening comprehension abilities. Also included in this study is an examination of control group issues in current day research. We hypothesized that students who listen to…
ERIC Educational Resources Information Center
Wagner, Elvis
2013-01-01
The use of video technology has become widespread in the teaching and testing of second-language (L2) listening, yet research into how this technology affects the learning and testing process has lagged. The current study investigated how the channel of input (audiovisual vs. audio-only) used on an L2 listening test affected test-taker…
Listening: The Second Speaker.
ERIC Educational Resources Information Center
Erway, Ella Anderson
1972-01-01
Scholars agree that listening is an active rather than a passive process. The listening which makes people achieve higher scores on current listening tests is "second speaker" listening or active participation in the encoding of the message. Most of the instructional suggestions in listening curriculum guides are based on this concept. In terms of…
ERIC Educational Resources Information Center
Chambers, Gary N.
1996-01-01
Focuses on listening activities in the second-language classroom. After discussing problems related to listening and listening as a test versus listening as a learning experience, the article suggests that the teacher exploit what is known about the language and the world to make listening part of a learning whole as opposed to a distinct…
ERIC Educational Resources Information Center
Lin, Sheau-Wen; Liu, Yu
2017-01-01
The purpose of this study was to explore elementary students' listening comprehension changes using a Web-based teaching system that can diagnose and remediate students' science listening comprehension problems during scientific inquiry. The 3-component system consisted of a 9-item science listening comprehension test, a 37-item diagnostic test,…
Student's Second-Language Grade May Depend on Classroom Listening Position.
Hurtig, Anders; Sörqvist, Patrik; Ljung, Robert; Hygge, Staffan; Rönnberg, Jerker
2016-01-01
The purpose of this experiment was to explore whether listening positions (close or distant location from the sound source) in the classroom, and classroom reverberation, influence students' score on a test for second-language (L2) listening comprehension (i.e., comprehension of English in Swedish speaking participants). The listening comprehension test administered was part of a standardized national test of English used in the Swedish school system. A total of 125 high school pupils, 15 years old, participated. Listening position was manipulated within subjects, classroom reverberation between subjects. The results showed that L2 listening comprehension decreased as distance from the sound source increased. The effect of reverberation was qualified by the participants' baseline L2 proficiency. A shorter reverberation was beneficial to participants with high L2 proficiency, while the opposite pattern was found among the participants with low L2 proficiency. The results indicate that listening comprehension scores, and hence students' grades in English, may depend on students' classroom listening position.
Figure-ground in dichotic tasks and its relation to untrained skills.
Cibian, Aline Priscila; Pereira, Liliane Desgualdo
2015-01-01
To evaluate the effectiveness of auditory training in a dichotic task and to compare the responses of trained skills with those of untrained skills after 4-8 weeks. Nineteen subjects, aged 12-15 years, underwent auditory training based on the dichotic interaural intensity difference (DIID), organized in eight sessions, each lasting 50 min. The assessment of auditory processing was conducted in three stages: before the intervention, in the middle of the training, and at the end of the training. Data from this evaluation were analyzed by disorder group, according to changes in the auditory processes evaluated: selective attention and temporal processing. The groups were named the selective attention group (SAG) and the temporal processing group (TPG), and, when both processes were affected, the selective attention and temporal processing group (SATPG). The training improved both the trained and the untrained closing skill, normalizing all individuals. Untrained solving and temporal ordering skills did not reach normality for the SATPG and TPG. Individuals reached normality for the trained figure-ground skill and for the untrained closing skill. The untrained solving and temporal ordering skills improved in some individuals but failed to reach normality.
ERIC Educational Resources Information Center
Atasheneh, Naser; Izadi, Ahmad
2012-01-01
Three components have been introduced for foreign language learning anxiety in the literature: Test anxiety, fear of negative evaluation and communication apprehension. This study teases out the first of the three components with special focus on listening comprehension test to investigate the correlation between listening test results and foreign…
Use of Video and Audio Texts in EFL Listening Test
ERIC Educational Resources Information Center
Basal, Ahmet; Gülözer, Kaine; Demir, Ibrahim
2015-01-01
The study aims to discover whether audio or video modality in a listening test is more beneficial to test takers. In this study, the posttest-only control group design was utilized and quantitative data were collected in order to measure participant performances concerning two types of modality (audio or video) in a listening test. The…
ERIC Educational Resources Information Center
Wei, Wei; Zheng, Ying
2017-01-01
This research provided a comprehensive evaluation and validation of the listening section of a newly introduced computerised test, Pearson Test of English Academic (PTE Academic). PTE Academic contains 11 item types assessing academic listening skills either alone or in combination with other skills. First, task analysis helped identify skills…
An Evaluation of a Testing Model for Listening Comprehension.
ERIC Educational Resources Information Center
Kangli, Ji
A model for testing listening comprehension in English as a Second Language is discussed and compared with the Test for English Majors (TEM). The model in question incorporates listening for: (1) understanding factual information; (2) comprehension and interpretation; (3) detailed and selective information; (4) global ideas; (5) on-line tasks…
Listening Strategies of L2 Learners with Varied Test Tasks
ERIC Educational Resources Information Center
Chang, Anna Ching-Shyang
2008-01-01
This article investigates the strategies that EFL students used and how they adjusted these strategies in response to various listening test tasks. The test tasks involved four forms of listening support: previewing questions, repeated input, background information preparation, and vocabulary instruction. Twenty-two participants were enlisted and…
Binaural Interference and the Effects of Age and Hearing Loss.
Mussoi, Bruna S S; Bentler, Ruth A
2017-01-01
The existence of binaural interference, defined here as poorer speech recognition with both ears than with the better ear alone, is well documented. Studies have suggested that its prevalence may be higher in the elderly population. However, no study to date has explored binaural interference in groups of younger and older adults in conditions that favor binaural processing (i.e., in spatially separated noise). Also, the effects of hearing loss have not been studied. To examine binaural interference through speech perception tests in groups of younger adults with normal hearing, older adults with normal hearing for their age, and older adults with hearing loss. A cross-sectional study. Thirty-three participants with symmetric thresholds were recruited from the University of Iowa community. Participants were grouped as follows: younger with normal hearing (18-28 yr, n = 12), older with normal hearing for their age (73-87 yr, n = 9), and older with hearing loss (78-94 yr, n = 12). Prior noise exposure was ruled out. The Connected Speech Test (CST) and Hearing in Noise Test (HINT) were administered to all participants bilaterally, and to each ear separately. Test materials were presented in the sound field with speech at 0° azimuth and the noise at 180°. The Dichotic Digits Test (DDT) was administered to all participants through earphones. Hearing aids were not used during testing. Group results were compared with repeated-measures and one-way analyses of variance, as appropriate. Within-subject analyses using pre-established critical differences for each test were also performed. The HINT revealed no effect of condition (individual ear versus bilateral presentation) using group analysis, although within-subject analysis showed that 27% of the participants had binaural interference (18% had binaural advantage). 
On the CST, there was significant binaural advantage across all groups with group data analysis, as well as for 12% of the participants at each of the two signal-to-babble ratios (SBRs) tested. One participant had binaural interference at each SBR. Finally, on the DDT, a significant right-ear advantage was found with group data, and for at least some participants. Regarding age effects, more participants in the pooled elderly groups had binaural interference (33.3%) than in the younger group (16.7%), on the HINT. The presence of hearing loss yielded overall lower scores, but none of the comparisons between bilateral and unilateral performance were affected by hearing loss. Results of within-subject analyses on the HINT agree with previous findings of binaural interference in ≥17% of listeners. Across all groups, a significant right-ear advantage was also seen on the DDT. HINT results support the notion that the prevalence of binaural interference is likely higher in the elderly population. Hearing loss, however, did not affect the differences between bilateral and better unilateral scores. The possibility of binaural interference should be considered when fitting hearing aids to listeners with symmetric hearing loss. Comparing bilateral to unilateral (unaided) performance on tests such as the HINT may provide the clinician with objective data to support subjective preference for one hearing aid as opposed to two. American Academy of Audiology
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-06-01
The isoprenoid pathway produces three key metabolites--endogenous digoxin, dolichol, and ubiquinone. Since endogenous digoxin can regulate neurotransmitter transport and dolichols can modulate glycoconjugate synthesis important in synaptic connectivity, the pathway was assessed in patients with dyslexia, delayed recovery from global aphasia consequent to a dominant hemispheric thrombotic infarct, and developmental delay of speech milestone. The pathway was also studied in right hemispheric, left hemispheric, and bihemispheric dominance to determine the role of hemispheric dominance in the pathogenesis of speech disorders. The plasma/serum levels of HMG CoA reductase activity, magnesium, digoxin, dolichol, and ubiquinone, as well as tryptophan/tyrosine catabolic patterns and RBC (Na+)-K+ ATPase activity, were measured in the above-mentioned groups. Glycoconjugate metabolism and membrane composition were also studied. The study showed that in dyslexia, developmental delay of speech milestone, and delayed recovery from global aphasia there was an upregulated isoprenoid pathway with increased digoxin and dolichol levels. Membrane (Na+)-K+ ATPase activity and serum magnesium and ubiquinone levels were low. The tryptophan catabolites were increased and the tyrosine catabolites, including dopamine, decreased in the serum, contributing to speech dysfunction. There was an increase in carbohydrate residues of glycoproteins, glycosaminoglycans, and glycolipids, as well as increased activity of GAG-degrading enzymes and glycohydrolases in the serum. The cholesterol:phospholipid ratio of the RBC membrane increased and membrane glycoconjugates showed a decrease. All of these could contribute to altered synaptic activity in these disorders. The patterns correlated with those obtained in right hemispheric chemical dominance. Right hemispheric chemical dominance may play a role in the genesis of these disorders. 
Hemispheric chemical dominance has no correlation with handedness or the dichotic listening test.
Sources of listening anxiety in learning English as a foreign language.
Chang, Anna Ching-Shyang
2008-02-01
In this study of college students' listening anxiety in learning English in a classroom context, participants were 160 students (47 men and 113 women) ages 18 to 19 years. Participants were drawn from students enrolled in a required listening course. A listening questionnaire was used to assess learners' anxiety about spoken English, its intensity, and the main sources of listening anxiety. Overall, participants showed moderately high intensity of anxiety in listening to spoken English, but were more anxious in testing than in general situations. In contrast to previous research identifying the nature of spoken English as the main source of listening anxiety, this study found that low confidence in comprehending spoken English, taking English listening courses as a requirement, and worrying about test difficulty were the three main factors contributing to participants' listening anxiety in a classroom context. Participants' learning profiles, both in and outside the classroom, yielded data that suggest ways to reduce anxiety.
Speech Understanding in Complex Listening Environments by Listeners Fit with Cochlear Implants
ERIC Educational Resources Information Center
Dorman, Michael F.; Gifford, Rene H.
2017-01-01
Purpose: The aim of this article is to summarize recent published and unpublished research from our 2 laboratories on improving speech understanding in complex listening environments by listeners fit with cochlear implants (CIs). Method: CI listeners were tested in 2 listening environments. One was a simulation of a restaurant with multiple,…
Isbell, Elif; Stevens, Courtney; Hampton Wray, Amanda; Bell, Theodore; Neville, Helen J
2016-12-01
While a growing body of research has identified experiential factors associated with differences in selective attention, relatively little is known about the contribution of genetic factors to the skill of sustained selective attention, especially in early childhood. Here, we assessed the association between the serotonin transporter linked polymorphic region (5-HTTLPR) genotypes and the neural mechanisms of selective attention in young children from lower socioeconomic status (SES) backgrounds. Event-related potentials (ERPs) were recorded during a dichotic listening task from 121 children (76 females, aged 40-67 months), who were also genotyped for the short and long allele of 5-HTTLPR. The effect of selective attention was measured as the difference in ERP mean amplitudes elicited by identical probe stimuli embedded in stories when they were attended versus unattended. Compared to children homozygous for the long allele, children who carried at least one copy of the short allele showed larger effects of selective attention on neural processing. These findings link the short allele of the 5-HTTLPR to enhanced neural mechanisms of selective attention and lay the groundwork for future studies of gene-by-environment interactions in the context of key cognitive skills. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Hampton Wray, Amanda; Stevens, Courtney; Pakulak, Eric; Isbell, Elif; Bell, Theodore; Neville, Helen
2017-08-01
Although differences in selective attention skills have been identified in children from lower compared to higher socioeconomic status (SES) backgrounds, little is known about these differences in early childhood, a time of rapid attention development. The current study evaluated the development of neural systems for selective attention in children from lower SES backgrounds. Event-related potentials (ERPs) were acquired from 33 children from lower SES and 14 children from higher SES backgrounds during a dichotic listening task. The lower SES group was followed longitudinally for one year. At age four, the higher SES group exhibited a significant attention effect (larger ERP response to attended compared to unattended condition), an effect not observed in the lower SES group. At age five, the lower SES group exhibited a significant attention effect comparable in overall magnitude to that observed in the 4-year-old higher SES group, but with poorer distractor suppression (larger response to the unattended condition). Together, these findings suggest both a maturational delay and divergent developmental pattern in neural mechanisms for selective attention in young children from lower compared to higher SES backgrounds. Furthermore, these findings highlight the importance of studying neurodevelopment within narrow age ranges and in children from diverse backgrounds. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Neural effects of cognitive control load on auditory selective attention.
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R; Mangalathu, Jain; Desai, Anjali
2014-08-01
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to the presence of irrelevant information under the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. Copyright © 2014 Elsevier Ltd. All rights reserved.
Changes in otoacoustic emissions during selective auditory and visual attention
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2015-01-01
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703
Sex hormones affect language lateralisation but not cognitive control in normally cycling women.
Hodgetts, Sophie; Weis, Susanne; Hausmann, Markus
2015-08-01
This article is part of a Special Issue "Estradiol and Cognition". Natural fluctuations of sex hormones during the menstrual cycle have been shown to modulate language lateralisation. Using the dichotic listening (DL) paradigm, a well-established measurement of language lateralisation, several studies revealed that the left hemispheric language dominance was stronger when levels of estradiol were high. A recent study (Hjelmervik et al., 2012) showed, however, that high levels of follicular estradiol increased lateralisation only in a condition that required participants to cognitively control (top-down) the stimulus-driven (bottom-up) response. This finding suggested that sex hormones modulate lateralisation only if cognitive control demands are high. The present study investigated language lateralisation in 73 normally cycling women under three attention conditions that differed in cognitive control demands. Saliva estradiol and progesterone levels were determined by luminescence immunoassays. Women were allocated to a high or low estradiol group. The results showed a reduced language lateralisation when estradiol and progesterone levels were high. The effect was independent of the attention condition indicating that estradiol marginally affected cognitive control. The findings might suggest that high levels of estradiol especially reduce the stimulus-driven (bottom-up) aspect of lateralisation rather than top-down cognitive control. Copyright © 2015 Elsevier Inc. All rights reserved.
Effect of efferent activation on binaural frequency selectivity.
Verhey, Jesko L; Kordus, Monika; Drga, Vit; Yasin, Ifat
2017-07-01
Binaural notched-noise experiments indicate a reduced frequency selectivity of the binaural system compared to monaural processing. The present study investigates how auditory efferent activation (via the medial olivocochlear system) affects binaural frequency selectivity in normal-hearing listeners. Thresholds were measured for a 1-kHz signal embedded in a diotic notched-noise masker for various notch widths. The signal was either presented in phase (diotic) or in antiphase (dichotic), gated with the noise. Stimulus duration was 25 ms, in order to avoid efferent activation due to the masker or the signal. A bandpass-filtered noise precursor was presented prior to the masker and signal stimuli to activate the efferent system. The silent interval between the precursor and the masker-signal complex was 50 ms. For comparison, thresholds for detectability of the masked signal were also measured in a baseline condition without the precursor and, in addition, without the masker. On average, the results of the baseline condition indicate an effectively wider binaural filter, as expected. For both signal phases, the addition of the precursor results in effectively wider filters, which is in agreement with the hypothesis that cochlear gain is reduced due to the presence of the precursor. Copyright © 2017 Elsevier B.V. All rights reserved.
Todd, Ann E.; Goupell, Matthew J.; Litovsky, Ruth Y.
2016-01-01
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs. PMID:27475132
The Effect of the Use of Video Texts on ESL Listening Test-Taker Performance
ERIC Educational Resources Information Center
Wagner, Elvis
2010-01-01
Video is widely used in the teaching of L2 listening, and SLA researchers have argued that the visual components of spoken texts are useful for the listener in comprehending aural information. Yet video texts are rarely used on tests of L2 listening ability, perhaps in part due to the belief that including the visual channel involves assessing…
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
From One to Multiple Accents on a Test of L2 Listening Comprehension
ERIC Educational Resources Information Center
Ockey, Gary J.; French, Robert
2016-01-01
Concerns about the need for assessing multidialectal listening skills for global contexts are becoming increasingly prevalent. However, the inclusion of multiple accents on listening assessments may threaten test fairness because it is not practical to include every accent that may be encountered in the language use domain on these tests. Given…
ERIC Educational Resources Information Center
Zhang, Xian
2013-01-01
This study used structural equation modeling to explore the possible causal relations between foreign language (English) listening anxiety and English listening performance. Three hundred participants learning English as a foreign language (FL) completed the foreign language listening anxiety scale (FLLAS) and IELTS test twice with an interval of…
Effects of Strength of Accent on an L2 Interactive Lecture Listening Comprehension Test
ERIC Educational Resources Information Center
Ockey, Gary J.; Papageorgiou, Spiros; French, Robert
2016-01-01
This article reports on a study which aimed to determine the effect of strength of accent on listening comprehension of interactive lectures. Test takers (N = 21,726) listened to an interactive lecture given by one of nine speakers and responded to six comprehension items. The test taker responses were analyzed with the Rasch computer program…
Strategies to combat auditory overload during vehicular command and control.
Abel, Sharon M; Ho, Geoffrey; Nakashima, Ann; Smith, Ingrid
2014-09-01
Strategies to combat auditory overload were studied. Normal-hearing males were tested in a sound isolated room in a mock-up of a military land vehicle. Two tasks were presented concurrently, in quiet and vehicle noise. For Task 1 dichotic phrases were delivered over a communications headset. Participants encoded only those beginning with a preassigned call sign (Baron or Charlie). For Task 2, they agreed or disagreed with simple equations presented either over loudspeakers, as text on the laptop monitor, in both the audio and the visual modalities, or not at all. Accuracy was significantly better by 20% on Task 2 when the equations were presented visually or audiovisually. Scores were at least 78% correct for dichotic phrases presented over the headset, with a right ear advantage of 7%, given the 5 dB speech-to-noise ratio. The left ear disadvantage was particularly apparent in noise, where the interaural difference was 12%. Relatively lower scores in the left ear, in noise, were observed for phrases beginning with Charlie. These findings underscore the benefit of delivering higher priority communications to the dominant ear, the importance of selecting speech sounds that are resilient to noise masking, and the advantage of using text in cases of degraded audio. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
Video in the Evaluation Process.
ERIC Educational Resources Information Center
Pelletier, Raymond J.
The rationale and methodology for using videotape recordings to test foreign language listening comprehension are discussed. First, the advantages of using video in teaching and testing listening comprehension are examined and the specific listening skills to be developed at the beginning level are outlined. Issues in the selection of video…
Intercultural Listening: Measuring Listening Concepts with the LCI-R
ERIC Educational Resources Information Center
Janusik, Laura; Imhof, Margarete
2017-01-01
Listening is an integral part of communication, yet more research is conducted on the speaker as opposed to the listener. Previous research established a general schema of listening as a concept-driven behavior with four factors (Imhof & Janusik, 2006). Further testing by Bodie (2010) confirmed the factor structure and reduced the number of…
The effect of unimodal affective priming on dichotic emotion recognition.
Voyer, Daniel; Myles, Daniel
2017-11-15
The present report concerns two experiments extending to unimodal priming the cross-modal priming effects observed with auditory emotions by Harding and Voyer [(2016). Laterality effects in cross-modal affective priming. Laterality: Asymmetries of Body, Brain and Cognition, 21, 585-605]. Experiment 1 used binaural targets to establish the presence of the priming effect and Experiment 2 used dichotically presented targets to examine auditory asymmetries. In Experiment 1, 82 university students completed a task in which binaural targets consisting of one of 4 English words inflected in one of 4 emotional tones were preceded by binaural primes consisting of one of 4 Mandarin words pronounced in the same (congruent) or different (incongruent) emotional tones. Trials where the prime emotion was congruent with the target emotion showed faster responses and higher accuracy in identifying the target emotion. In Experiment 2, 60 undergraduate students participated and the target was presented dichotically instead of binaurally. Primes congruent with the left ear produced a large left ear advantage, whereas right congruent primes produced a right ear advantage. These results indicate that unimodal priming produces stronger effects than those observed under cross-modal priming. The findings suggest that priming should likely be considered a strong top-down influence on laterality effects.
Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve
The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). 
In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between WIN thresholds and recall scores for each of the older groups in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
A Comparison of Television and Audio Presentations of the MLA French Listening Examination
ERIC Educational Resources Information Center
Stallings, William M.
1972-01-01
Although nonverbal cues are often available in real-life communication, listening is usually tested by aural stimuli broadcast from an audio-tape. It would seem that testing listening comprehension might be improved by using television to offer nonverbal cues in addition to aural stimuli. (Author)
Choice of Reading Comprehension Test Influences the Outcomes of Genetic Analyses
Betjemann, Rebecca S.; Keenan, Janice M.; Olson, Richard K.; DeFries, John C.
2010-01-01
Does the choice of test for assessing reading comprehension influence the outcome of genetic analyses? A twin design compared two types of reading comprehension tests classified as primarily associated with word decoding (RC-D) or listening comprehension (RC-LC). For both types of tests, the overall genetic influence is high and nearly identical. However, the tests differed significantly in how they covary with the genes associated with decoding and listening comprehension. Although Cholesky decomposition showed that both types of comprehension tests shared significant genetic influence with both decoding and listening comprehension, RC-D tests shared most genetic variance with decoding, and RC-LC tests shared most with listening comprehension. Thus, different tests used to measure the same construct may manifest very different patterns of genetic covariation. These results suggest that the apparent discrepancies among the findings of previous twin studies of reading comprehension could be due at least in part to test differences. PMID:21804757
Some factors underlying individual differences in speech recognition on PRESTO: a first report.
Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B
2013-01-01
Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). 
The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. American Academy of Audiology.
The Effect of Training in Listening to Speeded Discourse on Listening Comprehension.
ERIC Educational Resources Information Center
Krall, W. Richard
A study to investigate the effect of training in listening to speeded discourse on listening comprehension was conducted. Specifically, the study was designed to test the following hypothesis: There is no significant difference in the amount of gain in listening achievement of the sixth-grade pupils who received speeded discourse speech training…
ERIC Educational Resources Information Center
Goh, Christine C. M.; Aryadoust, Vahid
2015-01-01
The testing and teaching of listening has been partially guided by the notion of subskills, or a set of listening abilities that are needed for achieving successful comprehension and utilization of the information from listening texts. Although this notion came about mainly through applications of theoretical perspectives from psychology and…
Development of a test battery for evaluating speech perception in complex listening environments.
Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R
2014-08-01
In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
Aided and Unaided Speech Perception by Older Hearing Impaired Listeners
Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William
2015-01-01
The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. 
Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423
ERIC Educational Resources Information Center
Peterson, Robin T.
2007-01-01
This study investigates the combined impact of a memory test and subsequent listening practice in enhancing student listening abilities in collegiate business administration courses. The article reviews relevant literature and describes an exploratory study that was undertaken to compare the effectiveness of this technique with traditional…
Oyama, Yoshinori
2011-06-01
The present study examined Japanese university students' processing time for English subject and object relative clauses in relation to their English listening proficiency. In Analysis 1, the relation between English listening proficiency and reading span test scores was analyzed. The results showed that the high and low listening comprehension groups' reading span test scores did not differ. Analysis 2 investigated English listening proficiency and processing time for sentences with subject and object relative clauses. The results showed that reading the relative clause ending and the main verb section of a sentence with an object relative clause (such as "attacked" and "admitted" in the sentence "The reporter that the senator attacked admitted the error") takes less time for learners with high English listening scores than for learners with low English listening scores. In Analysis 3, English listening proficiency and comprehension accuracy for sentences with subject and object relative clauses were examined. The results showed no significant difference in comprehension accuracy between the high and low listening comprehension groups. These results indicate that processing time for English relative clauses is related to the cognitive processes involved in listening comprehension, which requires immediate processing of syntactically complex audio information.
Continuous multiword recognition performance of young and elderly listeners in ambient noise
NASA Astrophysics Data System (ADS)
Sato, Hiroshi
2005-09-01
Hearing threshold shift due to aging is known to be a dominant factor degrading speech recognition performance in noisy conditions. On the other hand, the cognitive factors of aging that relate to speech recognition performance in various speech-to-noise conditions are not well established. In this study, two kinds of speech tests were performed to examine how working memory load relates to speech recognition performance. One was a word recognition test with high-familiarity, four-syllable Japanese words (single-word test). In this test, each word was presented to listeners, who were asked to write the word down on paper with enough time to answer. In the other test, five words were presented in succession, and listeners were asked to write the words down only after all five had been presented (multiword test). Both tests were conducted at various speech-to-noise ratios under 50-dBA Hoth-spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to scores on both tests; (2) scores on the single-word test are well correlated with those on the multiword test; and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in conditions where single-word test scores reach their ceiling.
Vowel perception by noise masked normal-hearing young adults
NASA Astrophysics Data System (ADS)
Richie, Carolyn; Kewley-Port, Diane; Coughlin, Maureen
2005-08-01
This study examined vowel perception by young normal-hearing (YNH) adults in various listening conditions designed to simulate mild-to-moderate sloping sensorineural hearing loss. YNH listeners were individually age- and gender-matched to young hearing-impaired (YHI) listeners tested in a previous study [Richie et al., J. Acoust. Soc. Am. 114, 2923-2933 (2003)]. YNH listeners were tested in three conditions designed to create audibility equal to that of the YHI listeners: a low signal level with and without a simulated hearing loss, and a high signal level with a simulated hearing loss. Listeners discriminated changes in synthetic vowel tokens /ɪ e ɛ ʌ æ/ when F1 or F2 varied in frequency. Comparison of YNH with YHI results failed to reveal significant group differences in vowel discrimination under conditions of similar audibility, achieved by using noise masking to elevate the hearing thresholds of the YNH listeners and frequency-specific gain for the YHI listeners. Further, analysis of learning curves suggests that while the YHI listeners completed an average of 46% more test blocks than YNH listeners, the YHI listeners achieved a level of discrimination similar to that of the YNH listeners within the same number of blocks. Apparently, when age and gender are closely matched between young hearing-impaired and normal-hearing adults, performance on vowel tasks may be explained by audibility alone.
Gordon-Salant, Sandra; Cole, Stacey Samuels
2016-01-01
This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. 
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.
Cloze Listening Test (Form Lisbon and Form Waco).
ERIC Educational Resources Information Center
Bowdidge, John S.
Designed to measure recall of specific information, ability to grasp the thought of a passage as a whole, and ability to apply various contextual clues while listening to a passage of aural communication, each of the alternate forms of the cloze listening test consists of an audio tape recording of approximately twenty minutes duration and a…
ERIC Educational Resources Information Center
Kang, Okim; Thomson, Ron I.; Moran, Meghan
2018-01-01
This study compared five research-based intelligibility measures as they were applied to six varieties of English. The objective was to determine which approach to measuring intelligibility would be most reliable for predicting listener comprehension, as measured through a listening comprehension test similar to the Test of English as a Foreign…
ERIC Educational Resources Information Center
Ferrari-Bridgers, Franca; Stroumbakis, Kostas; Drini, Merlinda; Lynch, Barbara; Vogel, Rosanne
2017-01-01
In this article, the researchers discuss the implementation of the Ferrari, Lynch, and Vogel Listening Test (FLVLT) to two STEM areas: Mathematics and Computer Science. The goal of the present study was to assess the improvement in students' mastery of critical listening skills and how listening can help students to retain information. After…
Gains to L2 Listeners from Reading while Listening vs. Listening Only in Comprehending Short Stories
ERIC Educational Resources Information Center
Chang, Anna C.-S.
2009-01-01
This study builds on the concept that aural-written verification helps L2 learners develop auditory discrimination skills, refine word recognition and gain awareness of form-meaning relationships, by comparing two modes of aural input: reading while listening (R/L) vs. listening only (L/O). Two test tasks (sequencing and gap filling) of 95 items,…
Web-Based Assessment Tool for Communication and Active Listening Skill Development
ERIC Educational Resources Information Center
Cheon, Jongpil; Grant, Michael
2009-01-01
The website "Active Listening" was developed within a larger project--"Interactive Web-based training in the subtleties of communication and active listening skill development." The Active Listening site aims to provide beginning counseling psychology students with didactic and experiential learning activities and interactive tests so that…
Second and foreign language listening: unraveling the construct.
Tafaghodtari, Marzieh H; Vandergrift, Larry
2008-08-01
Identifying the variables which contribute to second and foreign language (L2) listening ability can provide a better understanding of the listening construct. This study explored the degree to which first language (L1) listening ability, L2 proficiency, motivation and metacognition contribute to L2 listening comprehension. 115 Persian-speaking English as a Foreign Language (EFL) university students completed a motivation questionnaire, the Language Learning Motivation Orientation Scale, a listening questionnaire, the Metacognitive Awareness Listening Questionnaire, and an English-language proficiency measure, as well as listening tests in English and Persian. Scores from all measures were subjected to descriptive, inferential, and correlational analyses. The results support the hypothesis that variability in L2 listening cannot be explained by either L2 proficiency or L1 listening ability; rather, a cluster of variables including L2 proficiency, L1 listening ability, metacognitive knowledge and motivation orientations can better explain variability in L2 listening ability.
Spring, C; French, L
1990-01-01
A method of identifying children with specific reading disabilities by identifying discrepancies between their reading and listening comprehension scores was validated with disabled and nondisabled readers in Grades 4, 5, and 6. The method is based on a modification of the reading comprehension subtest of the Peabody Individual Achievement Test (Dunn & Markwardt, 1970). In this modification, even-numbered sentences are read by subjects, and odd-numbered sentences are read by the test administrator as subjects listen. The features of this test that reduce demands on working memory, thereby making it suitable for the detection of a discrepancy between reading and listening comprehension in readers with disabilities, are discussed. A significant group-by-modality interaction was obtained. Children with reading disabilities scored significantly lower on reading than on listening comprehension, while nondisabled readers scored slightly higher, but not significantly so, on reading than on listening comprehension. The appropriateness of this method as a substitute for the traditional method, which is based on the detection of a discrepancy between intelligence and reading and which has recently been proscribed in certain school districts, is discussed. Issues concerning the listening comprehension skills of disabled readers are also discussed.
Using speech sounds to test functional spectral resolution in listeners with cochlear implants
Winn, Matthew B.; Litovsky, Ruth Y.
2015-01-01
In this study, spectral properties of speech sounds were used to test functional spectral resolution in people who use cochlear implants (CIs). Specifically, perception of the /ba/-/da/ contrast was tested using two spectral cues: Formant transitions (a fine-resolution cue) and spectral tilt (a coarse-resolution cue). Higher weighting of the formant cues was used as an index of better spectral cue perception. Participants included 19 CI listeners and 10 listeners with normal hearing (NH), for whom spectral resolution was explicitly controlled using a noise vocoder with variable carrier filter widths to simulate electrical current spread. Perceptual weighting of the two cues was modeled with mixed-effects logistic regression, and was found to systematically vary with spectral resolution. The use of formant cues was greatest for NH listeners for unprocessed speech, and declined in the two vocoded conditions. Compared to NH listeners, CI listeners relied less on formant transitions, and more on spectral tilt. Cue-weighting results showed moderately good correspondence with word recognition scores. The current approach to testing functional spectral resolution uses auditory cues that are known to be important for speech categorization, and can thus potentially serve as the basis upon which CI processing strategies and innovations are tested. PMID:25786954
Lee, Annemarie L; Dolmage, Thomas E; Rhim, Matthew; Goldstein, Roger S; Brooks, Dina
2018-05-01
In people with COPD, dyspnea is the primary symptom limiting exercise tolerance. One approach to reducing dyspnea during exercise is music listening. A constant-speed endurance test reflects a high-intensity aerobic exercise training session, but whether listening to music affects endurance time is unknown. This study aimed to determine the effects of listening to music during a constant-speed endurance test in COPD. Participants with COPD completed two endurance walk tests, one with and one without listening to self-selected music throughout the test. The primary outcome was the difference in endurance time between the two conditions. Heart rate, percutaneous oxygen saturation, dyspnea, and rate of perceived exertion were measured before and after each test. Nineteen participants (mean [SD]: age, 71 [8] years; FEV1, 47 [19]% predicted) completed the study. Endurance time was greater (by 1.10 [95% CI, 0.41-1.78] min) while listening to music (7.0 [3.1] min) than without (5.9 [2.6] min), and end-test dyspnea was reduced (1.0 [95% CI, -2.80 to -1.80] units; with music, 4.6 [1.7] units vs without music, 5.6 [1.4] units). There was no significant difference in heart rate, percutaneous oxygen saturation, or leg fatigue. There were no adverse events under either condition. In COPD, dyspnea was lower while listening to music and was accompanied by increased tolerance of high-intensity exercise, demonstrated by greater endurance time. Practically, the effect was modest but may represent an aid for exercise training of these patients. Australian New Zealand Clinical Trials Registry; No. ACTRN12617001217392. Copyright © 2017 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Perception of coarticulated tones by non-native listeners
NASA Astrophysics Data System (ADS)
Bent, Tessa
2005-04-01
Mandarin lexical tones vary in their acoustic realization depending on the surrounding context. Native listeners compensate for this tonal coarticulation when identifying tones in context. This study investigated how native English listeners handle tonal coarticulation by testing native English and Mandarin listeners' discrimination of the four Mandarin lexical tones in tri-syllabic sequences in which the middle tone varied while the first and last tones were held constant. Three such frames were tested. As expected, Mandarin listeners discriminated all pairs in all contexts with a high degree of accuracy. English listeners exhibited poorer discrimination than Mandarin listeners, and their discrimination accuracy showed a high degree of context dependency. In addition to assessing accuracy, reaction times to correctly discriminated "different" trials were entered into a multidimensional scaling analysis. For both listener groups, the arrangement of tones in perceptual space varied depending on the surrounding context, suggesting that listeners attend to different acoustic attributes of the target tone depending on the surrounding tones. These results demonstrate the importance, for models of cross-language speech perception, of including contextual variation when characterizing the perception of non-native prosodic categories. [Work supported by NIH/NIDCD.]
Listening Strategies in the L2 Classroom: More Practice, Less Testing
ERIC Educational Resources Information Center
Aponte-de-Hanna, Cecilia
2012-01-01
This paper looks at the history of listening strategies development from the first studies on strategies used by L2 learners to the most current studies specific to L2 listening, and how this theory can be incorporated into classroom teaching that fosters practice, not testing. This paper also examines the type of needs analysis and diagnostic…
ERIC Educational Resources Information Center
East, Martin; King, Chris
2012-01-01
In the listening component of the IELTS examination candidates hear the input once, delivered at "normal" speed. This format for listening can be problematic for test takers who often perceive normal speed input to be too fast for effective comprehension. The study reported here investigated whether using computer software to slow down…
Criterion-Related Validity of the TOEFL iBT Listening Section. TOEFL iBT Research Report. RR-09-02
ERIC Educational Resources Information Center
Sawaki, Yasuyo; Nissan, Susan
2009-01-01
The study investigated the criterion-related validity of the "Test of English as a Foreign Language"[TM] Internet-based test (TOEFL[R] iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
Loudness enhancement: Monaural, binaural and dichotic
NASA Technical Reports Server (NTRS)
Elmasian, R. O.; Galambos, R.
1975-01-01
It is shown that when one tone burst precedes another by 100 msec, variations in the intensity of the first systematically influence the loudness of the second. When the first burst is more intense than the second, the loudness of the second is increased; when the first burst is less intense, the loudness of the second is decreased. This occurs in monaural, binaural, and dichotic paradigms of signal presentation. When both bursts are presented to the same ear, there is more enhancement, with less intersubject variability, than when they are presented to different ears. Monaural enhancements as large as 30 dB can readily be demonstrated, but decrements rarely exceed 5 dB. Possible physiological mechanisms are discussed for this loudness enhancement, which apparently shares certain characteristics with time-order error, assimilation, and temporal partial masking experiments.
Nittrouer, Susan; Tarr, Eric; Bolster, Virginia; Caldwell-Tarr, Amanda; Moberly, Aaron C.; Lowenstein, Joanna H.
2014-01-01
Objective: Using signals processed to simulate speech received through cochlear implants and low-frequency extended hearing aids, this study examined the proposal that low-frequency signals facilitate the perceptual organization of broader, spectrally degraded signals. Design: In two experiments, words and sentences were presented in diotic and dichotic configurations as four-channel noise-vocoded signals (VOC-only), and as those signals combined with the acoustic signal below 250 Hz (LOW-plus). Dependent measures were percent correct recognition scores, and the difference between scores for the two processing conditions given as proportions of recognition scores for VOC-only. The influence of linguistic context was also examined. Study Sample: Participants had normal hearing. In all, 40 adults, 40 7-year-olds, and 20 5-year-olds participated. Results: Participants of all ages showed benefits of adding the low-frequency signal. The effect was greater for sentences than words, but no effect of configuration was found. The influence of linguistic context was similar across age groups, and did not contribute to the low-frequency effect. Listeners who scored more poorly with VOC-only stimuli showed greater low-frequency effects. Conclusion: The benefit of adding a very low-frequency signal to a broader, spectrally degraded signal seems to derive from its facilitative influence on perceptual organization of the sensory input. PMID:24456179
Smoking modulates language lateralization in a sex-specific way.
Hahn, Constanze; Pogun, Sakire; Güntürkün, Onur
2010-12-01
Smoking affects a widespread network of neuronal functions by altering the properties of acetylcholinergic transmission. Recent studies show that nicotine consumption affects ascending auditory pathways and alters auditory attention, particularly in men. Here we show that smoking affects language lateralization in a sex-specific way. We assessed brain asymmetries of 90 healthy, right-handed participants using a classic consonant-vowel syllable dichotic listening paradigm in a 2×3 experimental design with sex (male, female) and smoking status (non-smoker, light smoker, heavy smoker) as between-subject factors. Our results revealed that male smokers had a significantly less lateralized response pattern compared to the other groups due to a decreased response rate of their right ear. This finding suggests a group-specific impairment of the speech dominant left hemisphere. In addition, decreased overall response accuracy was observed in male smokers compared to the other experimental groups. Similar adverse effects of smoking were not detected in women. Further, a significant negative correlation was detected between the severity of nicotine dependency and response accuracy in male but not in female smokers. Taken together, these results show that smoking modulates functional brain lateralization significantly and in a sexually dimorphic manner. Given that some psychiatric disorders have been associated with altered brain asymmetries and increased smoking prevalence, nicotinergic effects need to be specifically investigated in this context in future studies. Copyright © 2010 Elsevier Ltd. All rights reserved.
Effects of smoking marijuana on focal attention and brain blood flow.
O'Leary, Daniel S; Block, Robert I; Koeppel, Julie A; Schultz, Susan K; Magnotta, Vincent A; Ponto, Laura Boles; Watkins, G Leonard; Hichwa, Richard D
2007-04-01
Using an attention task to control cognitive state, we previously found that smoking marijuana changes regional cerebral blood flow (rCBF). The present study measured rCBF during tasks requiring attention to left and right ears in different conditions. Twelve occasional marijuana users (mean age 23.5 years) were imaged with PET using [15O]water after smoking marijuana or placebo cigarettes as they performed a reaction time (RT) baseline task, and a dichotic listening task with attend-right- and attend-left-ear instructions. Smoking marijuana, but not placebo, resulted in increased normalized rCBF in orbital frontal cortex, anterior cingulate, temporal pole, insula, and cerebellum. rCBF was reduced in visual and auditory cortices. These changes occurred in all three tasks and replicated our earlier studies. They appear to reflect the direct effects of marijuana on the brain. Smoking marijuana lowered rCBF in auditory cortices compared to placebo but did not alter the normal pattern of attention-related rCBF asymmetry (i.e., greater rCBF in the temporal lobe contralateral to the direction of attention) that was also observed after placebo. These data indicate that marijuana has dramatic direct effects on rCBF, but causes relatively little change in the normal pattern of task-related rCBF on this auditory focused attention task. Copyright 2007 John Wiley & Sons, Ltd.
Mental training enhances attentional stability: Neural and behavioral evidence
Lutz, Antoine; Slagter, Heleen A.; Rawlings, Nancy B.; Francis, Andrew D.; Greischar, Lawrence L.; Davidson, Richard J.
2009-01-01
The capacity to stabilize the content of attention over time varies among individuals and its impairment is a hallmark of several mental illnesses. Impairments in sustained attention in patients with attention disorders have been associated with increased trial-to-trial variability in reaction time and event-related potential (ERP) deficits during attention tasks. At present, it is unclear whether the ability to sustain attention and its underlying brain circuitry are transformable through training. Here, we show, with dichotic listening task performance and electroencephalography (EEG), that training attention, as cultivated by meditation, can improve the ability to sustain attention. Three months of intensive meditation training reduced variability in attentional processing of target tones, as indicated by both enhanced theta-band phase consistency of oscillatory neural responses over anterior brain areas and reduced reaction time variability. Furthermore, those individuals who showed the greatest increase in neural response consistency showed the largest decrease in behavioral response variability. Notably, we also observed reduced variability in neural processing, in particular in low-frequency bands, regardless of whether the deviant tone was attended or unattended. Focused attention meditation may thus affect both distracter and target processing, perhaps by enhancing entrainment of neuronal oscillations to sensory input rhythms; a mechanism important for controlling the content of attention. These novel findings highlight the mechanisms underlying focused attention meditation, and support the notion that mental training can significantly affect attention and brain function. PMID:19846729
Aberrant interference of auditory negative words on attention in patients with schizophrenia.
Iwashiro, Norichika; Yahata, Noriaki; Kawamuro, Yu; Kasai, Kiyoto; Yamasue, Hidenori
2013-01-01
Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. This interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in production of positive symptoms in schizophrenia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey
Google Test MPI Listener is a plugin for the Google Test C++ unit testing library that organizes the test output of software that uses both the MPI parallel programming model and Google Test. Typically, such output is ordered arbitrarily and disorganized, making test results difficult to interpret. This plugin organizes output in MPI rank order, enabling easy interpretation of test results.
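As a sketch of the core idea behind such a listener: messages from different MPI ranks arrive interleaved in arbitrary order, and the listener reorders them by rank before emitting the combined log. The snippet below is illustrative Python only, not the plugin's actual C++ TestEventListener code, and all names in it are made up for the sketch:

```python
def order_by_rank(messages):
    """Sort (rank, text) pairs into MPI rank order.

    Python's sorted() is stable, so messages from the same rank
    keep their original relative order, as a real listener would
    preserve each rank's own test sequence.
    """
    return sorted(messages, key=lambda m: m[0])

# Test output as it might arrive, interleaved across three MPI ranks:
arrived = [
    (2, "[rank 2] OK   SuiteA.Test1"),
    (0, "[rank 0] OK   SuiteA.Test1"),
    (1, "[rank 1] FAIL SuiteA.Test1"),
    (0, "[rank 0] OK   SuiteA.Test2"),
]
for rank, line in order_by_rank(arrived):
    print(line)
```

In the real plugin this reordering would happen after a collective gather of per-rank output, but the rank-ordered presentation is the same.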
Bellis, Teri James; Billiet, Cassie; Ross, Jody
2011-09-01
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al., 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]).
Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA, HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD groups performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates. Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process. American Academy of Audiology.
Enhancing Listening Comprehension through a Group Work Guessing Game
ERIC Educational Resources Information Center
Baleghizadeh, Sasan; Arabtabar, Fatemeh
2010-01-01
The present paper is an attempt to introduce an innovative technique for a more effective teaching of L2 listening comprehension through a process-oriented approach. Much of what is traditionally known as listening practice is in fact testing material in which students are required to listen to a recording and answer a number of comprehension…
Effects of Audiovisual Media on L2 Listening Comprehension: A Preliminary Study in French
ERIC Educational Resources Information Center
Becker, Shannon R.; Sturm, Jessica L.
2017-01-01
The purpose of the present study was to determine whether integrating online audiovisual materials into the listening instruction of L2 French learners would have a measurable impact on their listening comprehension development. Students from two intact sections of second-semester French were tested on their listening comprehension before and…
Cooper, William B; Tobey, Emily; Loizou, Philipos C
2008-08-01
The purpose of this study was to explore the utility of using the Montreal Battery for Evaluation of Amusia (MBEA) test (Peretz et al., Ann N Y Acad Sci, 999, 58-75) to assess the music perception abilities of cochlear implant (CI) users. The MBEA was used to measure six different aspects of music perception (Scale, Contour, Interval, Rhythm, Meter, and Melody Memory) by CI users and normal-hearing (NH) listeners presented with stimuli processed via CI simulations. The spectral resolution (number of channels) was varied in the CI simulations to determine: (a) the number of channels (4, 6, 8, 12, and 16) needed to achieve the highest levels of music perception and (b) the number of channels needed to produce levels of music perception performance comparable with that of CI users. CI users and NH listeners scored higher on temporal-based tests (Rhythm and Meter) than on pitch-based tests (Scale, Contour, and Interval), a finding that is consistent with previous research studies. The CI users' scores on pitch-based tests were near chance. The CI users' (but not NH listeners') scores for the Memory test, a test that incorporates an integration of both temporal-based and pitch-based aspects of music, were significantly higher than the scores obtained for the pitch-based Scale test and significantly lower than the temporal-based Rhythm and Meter tests. The data from NH listeners indicated that 16 channels of stimulation did not provide the highest music perception scores and performance was as good as that obtained with 12 channels. This outcome is consistent with other studies showing that NH listeners listening to vocoded speech are not able to effectively use F0 cues present in the envelopes, even when the stimuli are processed with a large number (16) of channels. The CI user data seem to most closely match the 4- and 6-channel NH listener conditions for the pitch-based tasks.
Consistent with previous studies, both CI users and NH listeners showed the typical pattern of music perception in which scores are higher on tests measuring the perception of temporal aspects of music (Rhythm and Meter) than spectral (pitch) aspects of music (Scale, Contour, and Interval). In that regard, the pattern of results from this study indicates that the MBEA is a suitable test for measuring various aspects of music perception by CI users.
ERIC Educational Resources Information Center
Weaver, Phyllis A.; Rosner, Jerome
1979-01-01
Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…
Kondaurova, Maria V; Francis, Alexander L
2008-12-01
Two studies explored the role of native language use of an acoustic cue, vowel duration, in both native and non-native contexts in order to test the hypothesis that non-native listeners' reliance on vowel duration instead of vowel quality to distinguish the English tense/lax vowel contrast could be explained by the role of duration as a cue in native phonological contrasts. In the first experiment, native Russian, Spanish, and American English listeners identified stimuli from a beat/bit continuum varying in nine perceptually equal spectral and duration steps. English listeners relied predominantly on spectrum, but showed some reliance on duration. Russian and Spanish speakers relied entirely on duration. In the second experiment, three tests examined listeners' use of vowel duration in native contrasts. Duration was equally important for the perception of lexical stress for all three groups. However, English listeners relied more on duration as a cue to postvocalic consonant voicing than did native Spanish or Russian listeners, and Spanish listeners relied on duration more than did Russian listeners. Results suggest that, although allophonic experience may contribute to cross-language perceptual patterns, other factors such as the application of statistical learning mechanisms and the influence of language-independent psychoacoustic proclivities cannot be ruled out.
Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger
2015-01-01
The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.
Malinina, E S
2014-01-01
The spatial specificity of the auditory aftereffect was studied after short-term adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in the amplitude of impulses within a sequence was perceived by listeners as an approaching sound source, while a decrease was perceived as a withdrawing one. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic features that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, an asymmetry of the psychophysical curves was observed at both distances: the listeners more often judged the test stimuli to be approaching. This overestimation of test stimuli as approaching was more pronounced when they were presented from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only when the adapting and test stimuli coincided spatially and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: the listeners more often judged the test stimuli to be withdrawing compared with the control condition. 
This aftereffect restored the symmetry of the psychometric curves and the equiprobable estimation of the direction of movement of the test signals.
ERIC Educational Resources Information Center
Bidabadi, Farinaz Shirani; Yamat, Hamidah
2011-01-01
The purpose of the current study was to identify Iranian EFL freshman university students' listening proficiency levels and the listening strategies they employed to investigate the relationship between these two variables. A total of 92 freshmen were involved in this study. The Oxford Placement Test was employed to identify the learners'…
Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas
2014-03-01
Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.
Listening to classical music ameliorates unilateral neglect after stroke.
Tsai, Pei-Luen; Chen, Mei-Ching; Huang, Yu-Ting; Lin, Keh-Chung; Chen, Kuan-Lin; Hsu, Yung-Wen
2013-01-01
OBJECTIVE. We determined whether listening to excerpts of classical music ameliorates unilateral neglect (UN) in stroke patients. METHOD. In this within-subject study, we recruited and separately tested 16 UN patients with a right-hemisphere stroke under three conditions within 1 wk. In each condition, participants were asked to complete three subtests of the Behavioral Inattention Test while listening to classical music, white noise, or nothing. All conditions and the presentation of the tests were counterbalanced across participants. Visual analog scales were used to provide self-reported ratings of arousal and mood. RESULTS. Participants generally had the highest scores under the classical music condition and the lowest scores under the silence condition. In addition, most participants rated their arousal as highest after listening to classical music. CONCLUSION. Listening to classical music may improve visual attention in stroke patients with UN. Future research with larger study populations is necessary to validate these findings. Copyright © 2013 by the American Occupational Therapy Association, Inc.
ERIC Educational Resources Information Center
Mihara, Kei
2015-01-01
The purpose of the present study is twofold. The first goal is to examine the effects of phonological input on students' vocabulary learning. The second is to discuss how different pre-listening activities affect students' second language listening comprehension. The participants were first-year students at a Japanese university. There were two…
Physical and perceptual estimation of differences between loudspeakers
NASA Astrophysics Data System (ADS)
Lavandier, Mathieu; Herzog, Philippe; Meunier, Sabine
2006-12-01
Differences in the reproduction of timbre by several loudspeakers can be assessed through standard measurements or through listening tests. This work proposes a protocol that keeps a close relationship between the objective and perceptual evaluations: the stimuli are musical excerpts, and the measuring environment is a standard listening room. The protocol involves recordings made at a listener position, and objective dissimilarities are computed using an auditory model simulating masking effects. The resulting data correlate very well with listening tests using the same recordings, and show similar dependencies on the major parameters identified from the dissimilarity matrices. To cite this article: M. Lavandier et al., C. R. Mecanique 334 (2006).
Benefiting from Listening in Vocabulary Development
ERIC Educational Resources Information Center
Bulut, Berker; Karasakaloglu, Nuri
2017-01-01
In this research, the effect of active listening training given to fourth grade students on their vocabulary was examined. Pre-test--post-test control group trial model, which is one of the semi-experimental trial models, was used. Besides, "Vocabulary Test" developed by the researcher was applied to experimental and control groups…
ERIC Educational Resources Information Center
Lu, Zhihong; Wang, Yanfei
2014-01-01
The effective design of test items within a computer-based language test (CBLT) for developing English as a foreign language (EFL) learners' listening and speaking skills has become an increasingly challenging task for both test users and test designers compared with that of pencil-and-paper tests in the past. It needs to fit integrated oral…
ERIC Educational Resources Information Center
Filipi, Anna
2012-01-01
The Assessment of Language Competence (ALC) certificates is an annual, international testing program developed by the Australian Council for Educational Research to test the listening and reading comprehension skills of lower to middle year levels of secondary school. The tests are developed for three levels in French, German, Italian and…
Weinberg, W A; McLean, A; Snider, R L; Rintelmann, J W; Brumback, R A
1989-12-01
Eight groups of learning disabled children (N = 100), categorized by the clinical Lexical Paradigm as good readers or poor readers, were individually administered the Gilmore Oral Reading Test, Form D, by one of four input/retrieval methods: (1) the standardized method of administration in which the child reads each paragraph aloud and then answers five questions relating to the paragraph [read/recall method]; (2) the child reads each paragraph aloud and then for each question selects the correct answer from among three choices read by the examiner [read/choice method]; (3) the examiner reads each paragraph aloud and reads each of the five questions to the child to answer [listen/recall method]; and (4) the examiner reads each paragraph aloud and then for each question reads three multiple-choice answers from which the child selects the correct answer [listen/choice method]. The major difference in scores was between the groups tested by the recall versus the orally read multiple-choice methods. This study indicated that poor readers who listened to the material and were tested by orally read multiple-choice format could perform as well as good readers. The performance of good readers was not affected by listening or by the method of testing. The multiple-choice testing improved the performance of poor readers independent of the input method. This supports the arguments made previously that a "bypass approach" to education of poor readers in which testing is accomplished using an orally read multiple-choice format can enhance the child's school performance on reading-related tasks. Using a listening while reading input method may further enhance performance.
Weihing, Jeffrey; Guenette, Linda; Chermak, Gail; Brown, Mallory; Ceruti, Julianne; Fitzgerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Brenneman, Lauren; Musiek, Frank
2015-01-01
Although central auditory processing disorder (CAPD) test battery performance has been examined in adults with neurologic lesions of the central auditory nervous system (CANS), similar data on children being referred for CAPD evaluations are sparse. This study characterizes CAPD test battery performance in children using tests commonly administered to diagnose the disorder. Specifically, this study describes failure rates for various test combinations, relationships between CAPD tests used in the battery, and the influence of cognitive function on CAPD test performance and CAPD diagnosis. A comparison is also made between the performance of children with CAPD and data from patients with neurologic lesions of the CANS. A retrospective study. Fifty-six pediatric patients were referred for CAPD testing. Participants were administered four CAPD tests, including frequency patterns (FP), low-pass filtered speech (LPFS), dichotic digits (DD), and competing sentences (CS). In addition, they were given the Wechsler Intelligence Scale for Children (WISC). Descriptive analyses examined the failure rates of various test combinations, as well as how often children with CAPD failed certain combinations when compared with adults with CANS lesions. A principal components analysis was performed to examine interrelationships between tests. Correlations and regressions were conducted to determine the relationship between CAPD test performance and the WISC. Results showed that the FP and LPFS tests were most commonly failed by children with CAPD. Two-test combinations that included one or both of these two tests and excluded DD tended to be failed more often. Including the DD and CS test in a battery benefited specificity. Tests thought to measure interhemispheric transfer tended to be correlated. Compared with adult patients with neurologic lesions, children with CAPD tended to fail LPFS more frequently and DD less frequently. Both groups failed FP with relatively equal frequency. 
The two-test combination that showed the highest failure rate for children with CAPD was LPFS-FP. Comparison with adults with CANS lesions, however, suggests that the mechanisms underlying LPFS performance in children need to be better understood. The two-test combination that showed the next highest failure rates among children with CAPD and did not include LPFS was CS-FP. If it is desirable to use a dichotic measure that has a lower linguistic load than CS then DD can be substituted for CS despite the slightly lower failure rate of the DD-FP battery. American Academy of Audiology.
Listeners' Comprehension of Uptalk in Spontaneous Speech
ERIC Educational Resources Information Center
Tomlinson, John M., Jr.; Tree, Jean E. Fox
2011-01-01
Listeners' comprehension of phrase final rising pitch on declarative utterances, or "uptalk", was examined to test the hypothesis that prolongations might differentiate conflicting functions of rising pitch. In Experiment 1 we found that listeners rated prolongations as indicating more speaker uncertainty, but that rising pitch was unrelated to…
Inservice Training Packet: Auditory Discrimination Listening Skills.
ERIC Educational Resources Information Center
Florida Learning Resources System/CROWN, Jacksonville.
Intended to be used as the basis for a brief inservice workshop, the auditory discrimination/listening skills packet provides information on ideas, materials, and resources for remediating auditory discrimination and listening skill deficits. Included are a sample prescription form, tests of auditory discrimination, and a list of auditory…
Santos, Sandra; Viana, Fernanda Leopoldina; Ribeiro, Iolanda; Prieto, Gerardo; Brandão, Sara; Cadime, Irene
2015-03-03
This investigation aimed to develop and collect psychometric data for two tests assessing the listening comprehension of Portuguese students in primary school: the Test of Listening Comprehension of Narrative Texts (TLC-n) and the Test of Listening Comprehension of Expository Texts (TLC-e). Two studies were conducted. The purpose of study 1 was to construct four test forms for each of the two tests, to assess first-, second-, third- and fourth-grade students in primary school. The TLC-n was administered to 1042 students, and the TLC-e was administered to 848 students. The purpose of study 2 was to test the psychometric properties of new items for the fourth-grade TLC-n form, given that the results of study 1 indicated a severe lack of difficult items. The participants were 260 fourth graders. The data were analysed using the Rasch model. Thirty items were selected for each test form. The results provided support for the model assumptions: unidimensionality and local independence of the items. The reliability coefficients were higher than .70 for all test forms. The TLC-n and the TLC-e present good psychometric properties and represent an important contribution to the learning disabilities assessment field.
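The dichotomous Rasch model used in the analysis above has a simple closed form; a minimal sketch (function name illustrative) makes clear why a "lack of difficult items" matters:

```python
import math

def rasch_p_correct(ability, difficulty):
    """Dichotomous Rasch model: probability that a student of the given
    ability (theta) answers an item of the given difficulty (b) correctly,
    P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability matches item difficulty, expected success is 50%. When most
# items sit far below the abler students' ability, P approaches 1 and the
# items stop discriminating -- the problem study 2 addressed with new,
# harder items for the fourth-grade form.
p = rasch_p_correct(ability=2.0, difficulty=0.0)  # ~0.88
```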
Development of the adaptive music perception test.
Kirchberger, Martin J; Russo, Frank A
2015-01-01
Despite vast amounts of research examining the influence of hearing loss on speech perception, comparatively little is known about its influence on music perception. No standardized test exists to quantify the music perception of hearing-impaired (HI) persons in a clinically practical manner. This study presents the Adaptive Music Perception (AMP) test as a tool to assess important aspects of music perception with hearing loss. A computer-driven test was developed to determine the discrimination thresholds of 10 low-level physical dimensions (e.g., duration, level) in the context of perceptual judgments about musical dimensions: meter, harmony, melody, and timbre. In the meter test, the listener is asked to judge whether a tone sequence is duple or triple in meter. The harmony test requires that the listener make judgments about the stability of chord sequences. In the melody test, the listener must judge whether a comparison melody is the same as a standard melody when presented in transposition and in the context of a chordal accompaniment that serves as a mask. The timbre test requires that the listener determine which of two comparison tones differs in timbre from a standard tone (ABX design). Twenty-one HI participants and 19 normal-hearing (NH) participants were recruited to carry out the music tests. Participants were tested twice on separate occasions to evaluate test-retest reliability. The HI group had significantly higher discrimination thresholds than the NH group in 7 of the 10 low-level physical dimensions: frequency discrimination in the meter test, dissonance and intonation perception in the harmony test, melody-to-chord ratio for both melody types in the melody test, and the perception of brightness and spectral irregularity in the timbre test. Small but significant improvement between test and retest was observed in three dimensions: frequency discrimination (meter test), dissonance (harmony test), and attack length (timbre test).
All other dimensions did not show a session effect. Test-retest reliability was poor (<0.6) for spectral irregularity (timbre test); acceptable (>0.6) for pitch and duration (meter test), dissonance and intonation (harmony test), and melody-to-chord ratio I and II (melody test); and excellent (>0.8) for level (meter test) and attack (timbre test). The AMP test revealed differences in a wide range of music perceptual abilities between NH and HI listeners. The recognition of meter was more difficult for HI listeners when the listening task was based on frequency discrimination. The HI group was less sensitive to changes in harmony and had more difficulties with distinguishing melodies in a background of music. In addition, the thresholds to discriminate timbre were significantly higher for the HI group in brightness and spectral irregularity dimensions. The AMP test can be used as a research tool to further investigate music perception with hearing aids and compare the benefit of different music processing strategies for the HI listener. Future testing will involve larger samples with the inclusion of hearing aided conditions allowing for the establishment of norms so that the test might be appropriate for use in clinical practice.
Elling, Ludger; Steinberg, Christian; Bröckelmann, Ann-Kathrin; Dobel, Christan; Bölte, Jens; Junghofer, Markus
2011-01-01
Background Acute stress is a stereotypical but multimodal response to a present or imminent challenge overtaxing an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such “paracorticoidal” stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. Methodology/Principal Findings The stressor consisted of a single cold pressor test. Auditory negative difference (Nd) and mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4–7 minutes after onset of the stressor, as reflected by an attenuated Nd. Salivary cortisol began to rise 8–11 minutes after onset, when no further modulations in the event-related potentials (ERP) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERP by different stress-related ligands were used for further disambiguation of causality. The conjunction of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this.
Conclusions/Significance Prior studies have deliberately tracked the adrenocortical influence on cognition, as it has proven most influential with respect to LTM. However, current cortisol-optimized study designs would have failed to detect the present findings regarding attention. PMID:21483666
Binaural Interaction Effects of 30-50 Hz Auditory Steady State Responses.
Gransier, Robin; van Wieringen, Astrid; Wouters, Jan
Auditory stimuli modulated by modulation frequencies within the 30 to 50 Hz region evoke auditory steady state responses (ASSRs) with high signal-to-noise ratios in adults, and can be used to determine the frequency-specific hearing thresholds of adults who are unable to give behavioral feedback reliably. To measure ASSRs as efficiently as possible, a multiple-stimulus paradigm can be used, stimulating both ears simultaneously. The response strength of 30 to 50 Hz ASSRs is, however, affected when both ears are stimulated simultaneously. The aim of the present study is to gain insight into the measurement efficiency of 30 to 50 Hz ASSRs evoked with a 2-ear stimulation paradigm, by systematically investigating the binaural interaction effects of 30 to 50 Hz ASSRs in normal-hearing adults. ASSRs were obtained with a 64-channel EEG system in 23 normal-hearing adults. Each participant completed one diotic, multiple dichotic, and multiple monaural conditions. Stimuli consisted of a modulated one-octave noise band, centered at 1 kHz, and presented at 70 dB SPL. The diotic condition contained 40 Hz modulated stimuli presented to both ears. In the dichotic conditions, the modulation frequency of the left-ear stimulus was kept constant at 40 Hz, while the stimulus at the right ear was either the unmodulated or the modulated carrier. In the case of the modulated carrier, the modulation frequency varied between 30 and 50 Hz in steps of 2 Hz across conditions. The monaural conditions consisted of all stimuli included in the diotic and dichotic conditions. Modulation frequencies ≥36 Hz resulted in prominent ASSRs in all participants for the monaural conditions. A significant enhancement effect (average: ~3 dB) was observed in the diotic condition, whereas a significant reduction effect was observed in the dichotic conditions. There was no distinct effect of the temporal characteristics of the stimuli on the amount of reduction.
The attenuation exceeded 3 dB in 33% of cases for ASSRs evoked with modulation frequencies ≥40 Hz and in 50% of cases for modulation frequencies ≤36 Hz. Binaural interaction effects as observed in the diotic condition are similar to the binaural interaction effects of middle latency responses reported in the literature, suggesting that these responses share the same underlying mechanism. Our data also indicated that 30 to 50 Hz ASSRs are attenuated when presented dichotically and that this attenuation is independent of the stimulus characteristics used in the present study. These findings are important because they give insight into how binaural interaction affects measurement efficiency. The 2-ear stimulation paradigm of the present study was, for the most optimal modulation frequencies (i.e., ≥40 Hz), more efficient than a 1-ear sequential stimulation paradigm in 66% of the cases.
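The ~3 dB enhancement and >3 dB attenuations above are ratios of response amplitudes on a log scale. Assuming the usual 20·log10 convention for amplitude ratios (the abstract does not state the formula), the conversion is a one-liner:

```python
import math

def amplitude_change_db(amp_condition, amp_reference):
    """Express a change in response amplitude in dB relative to a reference
    condition (e.g. diotic vs. monaural). Positive values are enhancement,
    negative values attenuation."""
    return 20.0 * math.log10(amp_condition / amp_reference)

# A diotic response ~1.41x the monaural amplitude is a ~+3 dB enhancement;
# a dichotic response at half the monaural amplitude is about -6 dB.
```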
Note-Taking Quality and Performance on an L2 Academic Listening Test
ERIC Educational Resources Information Center
Song, Min-Young
2012-01-01
This study investigated the relationships among the quality of L2 test takers' notes evaluated in terms of different levels of information and test takers' performance on open-ended listening tasks tapping into different comprehension subskills. In addition, this study examined the invariance of the structural relationships among the variables…
Syntactic Comprehension in Reading and Listening: A Study with French Children with Dyslexia
ERIC Educational Resources Information Center
Casalis, Severine; Leuwers, Christel; Hilton, Heather
2013-01-01
This study examined syntactic comprehension in French children with dyslexia in both listening and reading. In the first syntactic comprehension task, a partial version of the Epreuve de Comprehension syntaxico-semantique (ECOSSE test; French adaptation of Bishop's test for receptive grammar test) children with dyslexia performed at a lower level…
A User's Response to the Use of Listening Assessment Instruments.
ERIC Educational Resources Information Center
Roberts, Charles V.
Noting that the attention of the speech communication discipline to listening skills does not mirror the apparent importance of such skills, this paper examines five listening assessment tests--focusing on the strengths, weaknesses, procedural problems, and conceptualizations of each--that potential users should be aware of before selecting any…
Promoting Process-Oriented Listening Instruction in the ESL Classroom
ERIC Educational Resources Information Center
Nguyen, Huong; Abbott, Marilyn L.
2016-01-01
When teaching listening, second language instructors tend to rely on product-oriented approaches that test learners' abilities to identify words and answer comprehension questions, but this does little to help learners improve upon their listening skills (e.g., Vandergrift & Goh, 2012). To address this issue, alternative approaches that guide…
The Aural Music Project: An Exploration of the Usefulness of An Experimental Listening Test.
ERIC Educational Resources Information Center
Humphry, Betty J.; Pitcher, Barbara
The GRE Advanced Music Test and an experimental Aural Supplement (a listening test designed to measure music students'"hearing ability") were taken by 334 senior music students as part of a project conducted in 1964. The Advanced Music Test consists of 200 5-choice questions on the fundamentals of music, history and literature, theory,…
Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K
2015-01-01
Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.
Kawamichi, Hiroaki; Yoshihara, Kazufumi; Sasaki, Akihiro T; Sugawara, Sho K; Tanabe, Hiroki C; Shinohara, Ryoji; Sugisawa, Yuka; Tokutake, Kentaro; Mochizuki, Yukiko; Anme, Tokie; Sadato, Norihiro
2015-01-01
Although active listening is an influential behavior, which can affect the social responses of others, the neural correlates underlying its perception have remained unclear. Sensing active listening in social interactions is accompanied by an improvement in the recollected impressions of relevant experiences and is thought to arouse positive feelings. We therefore hypothesized that the recognition of active listening activates the reward system, and that the emotional appraisal of experiences that had been subject to active listening would be improved. To test these hypotheses, we conducted functional magnetic resonance imaging (fMRI) on participants viewing assessments of their own personal experiences made by evaluators with or without active listening attitude. Subjects rated evaluators who showed active listening more positively. Furthermore, they rated episodes more positively when they were evaluated by individuals showing active listening. Neural activation in the ventral striatum was enhanced by perceiving active listening, suggesting that this was processed as rewarding. It also activated the right anterior insula, representing positive emotional reappraisal processes. Furthermore, the mentalizing network was activated when participants were being evaluated, irrespective of active listening behavior. Therefore, perceiving active listening appeared to result in positive emotional appraisal and to invoke mental state attribution to the active listener.
Effects of a training model on active listening skills of post-RN students.
Olson, J K; Iwasiw, C L
1987-03-01
This study investigated the effects of a training module on the active listening skills of non-degree registered nurses. Active listening skills were defined as understanding what another person is saying and feeling and then communicating this understanding of his thoughts and feelings back to him. The sample consisted of 26 post-diploma RNs registered in the first year of a baccalaureate degree nursing program. Pretraining and posttraining data were collected when subjects verbally responded to two portions of the Behavioral Test of Interpersonal Skills for Health Professionals (BTIS). Subjects were audiotaped while responding and tapes were scored using BTIS guidelines. Paired t-tests were used to determine differences between pretraining and posttraining scores. Active listening scores increased significantly (p less than .0005) while attempts to suppress or discount speakers' feelings decreased significantly (p less than .005). A six-hour training session significantly increased active listening skills.
A Spanish matrix sentence test for assessing speech reception thresholds in noise.
Hochmuth, Sabine; Brand, Thomas; Zokoll, Melanie A; Castro, Franz Zenker; Wardenga, Nina; Kollmeier, Birger
2012-07-01
To develop, optimize, and evaluate a new Spanish sentence test in noise. The test comprises a basic matrix of ten names, verbs, numerals, nouns, and adjectives. From this matrix, test lists of ten sentences with an equal syntactical structure can be formed at random, with each list containing the whole speech material. The speech material represents the phoneme distribution of the Spanish language. The test was optimized for measuring speech reception thresholds (SRTs) in noise by adjusting the presentation levels of the individual words. Subsequently, the test was evaluated by independent measurements investigating training effects, the comparability of test lists, open-set vs. closed-set test formats, and the performance of listeners of different Spanish varieties. Participants were 68 normal-hearing native Spanish-speaking listeners. SRTs measured using an adaptive procedure were 6.2 ± 0.8 dB SNR for the open-set and 7.2 ± 0.7 dB SNR for the closed-set test format. The residual training effect was less than 1 dB after two double-lists were used before data collection. No significant differences were found for listeners of different Spanish varieties, indicating that the test is applicable to both Spanish and Latin American listeners. Test lists can be used interchangeably.
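The abstract mentions an adaptive SRT procedure without spelling it out. As a hedged illustration of the general idea only (matrix tests actually adapt the SNR from word-level scores, not the simple sentence-level rule sketched here), a toy one-down/one-up track with the SRT estimated from reversals might look like:

```python
def adaptive_srt(responses, start_snr=0.0, step=2.0):
    """Toy adaptive track: lower the SNR after a correct sentence, raise it
    after an incorrect one; estimate the SRT as the mean SNR at reversals.
    `responses` is a sequence of booleans (sentence scored correct or not).
    All parameter names and the step size are illustrative assumptions."""
    snr = start_snr
    reversals = []
    last_dir = 0
    for correct in responses:
        direction = -1 if correct else +1      # harder after success
        if last_dir and direction != last_dir:  # direction changed: reversal
            reversals.append(snr)
        last_dir = direction
        snr += direction * step
    return sum(reversals) / len(reversals) if reversals else snr
```

Tracking reversals rather than the final SNR averages out the staircase's oscillation around the listener's true threshold.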
Walker, Matthew A.; Short, Ciara E.; Skinner, Kimberly G.
2017-01-01
Purpose This study evaluated the American Speech-Language-Hearing Association's recommendation that audiometric testing for patients with tinnitus should use pulsed or warble tones. Using listeners with varied audiometric configurations and tinnitus statuses, we asked whether steady, pulsed, and warble tones yielded similar audiometric thresholds, and which tone type was preferred. Method Audiometric thresholds (octave frequencies from 0.25–16 kHz) were measured using steady, pulsed, and warble tones in 61 listeners, who were divided into 4 groups on the basis of hearing and tinnitus status. Participants rated the appeal and difficulty of each tone type on a 1–5 scale and selected a preferred type. Results For all groups, thresholds were lower for warble than for pulsed and steady tones, with the largest effects above 4 kHz. Appeal ratings did not differ across tone type, but the steady tone was rated as more difficult than the warble and pulsed tones. Participants generally preferred pulsed and warble tones. Conclusions Pulsed tones provide advantages over steady and warble tones for patients regardless of hearing or tinnitus status. Although listeners preferred pulsed and warble tones to steady tones, pulsed tones are not susceptible to the effects of off-frequency listening, a consideration when testing listeners with sloping audiograms. PMID:28892822
Mazor, Kathleen M; Rubin, Donald L; Roblin, Douglas W; Williams, Andrew E; Han, Paul K J; Gaglio, Bridget; Cutrona, Sarah L; Costanza, Mary E; Wagner, Joann L
2016-08-01
Patient question-asking is essential to shared decision making. We sought to describe patients' questions when faced with cancer prevention and screening decisions, and to explore differences in question-asking as a function of health literacy with respect to spoken information (health literacy-listening). Four-hundred and thirty-three (433) adults listened to simulated physician-patient interactions discussing (i) prophylactic tamoxifen for breast cancer prevention, (ii) PSA testing for prostate cancer and (iii) colorectal cancer screening, and identified questions they would have. Health literacy-listening was assessed using the Cancer Message Literacy Test-Listening (CMLT-Listening). Two authors developed a coding scheme, which was applied to all questions. Analyses examined whether participants scoring above or below the median on the CMLT-Listening asked a similar variety of questions. Questions were coded into six major function categories: risks/benefits, procedure details, personalizing information, additional information, decision making and credibility. Participants who scored higher on the CMLT-Listening asked a greater variety of risks/benefits questions; those who scored lower asked a greater variety of questions seeking to personalize information. This difference persisted after adjusting for education. Patients' health literacy-listening is associated with distinctive patterns of question utilization following cancer screening and prevention counselling. Providers should not only be responsive to the question functions the patient favours, but also seek to ensure that the patient is exposed to the full range of information needed for shared decision making. © 2015 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
Ohlenforst, Barbara; Zekveld, Adriana A; Jansma, Elise P; Wang, Yang; Naylor, Graham; Lorens, Artur; Lunner, Thomas; Kramer, Sophia E
To undertake a systematic review of available evidence on the effect of hearing impairment and hearing aid amplification on listening effort. Two research questions were addressed: Q1) does hearing impairment affect listening effort? and Q2) can hearing aid amplification affect listening effort during speech comprehension? English language articles were identified through systematic searches in PubMed, EMBASE, Cinahl, the Cochrane Library, and PsycINFO from inception to August 2014. References of eligible studies were checked. The Population, Intervention, Control, Outcomes, and Study design strategy was used to create inclusion criteria for relevance. It was not feasible to apply a meta-analysis of the results from comparable studies. For the articles identified as relevant, a quality rating, based on the 2011 Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines, was carried out to judge the reliability and confidence of the estimated effects. The primary search produced 7017 unique hits using the keywords: hearing aids OR hearing impairment AND listening effort OR perceptual effort OR ease of listening. Of these, 41 articles fulfilled the Population, Intervention, Control, Outcomes, and Study design selection criteria of: experimental work on hearing impairment OR hearing aid technologies AND listening effort OR fatigue during speech perception. The methods applied in those articles were categorized into subjective, behavioral, and physiological assessment of listening effort. For each study, the statistical analysis addressing research question Q1 and/or Q2 was extracted. In seven articles more than one measure of listening effort was provided. Evidence relating to Q1 was provided by 21 articles that reported 41 relevant findings. Evidence relating to Q2 was provided by 27 articles that reported 56 relevant findings. 
The quality of evidence on both research questions (Q1 and Q2) was very low, according to the Grading of Recommendations Assessment, Development, and Evaluation Working Group guidelines. We tested the statistical evidence across studies with nonparametric tests. The testing revealed only one consistent effect across studies, namely that listening effort was higher for hearing-impaired listeners compared with normal-hearing listeners (Q1), as measured by electroencephalographic measures. For all other studies, the evidence across studies failed to reveal consistent effects on listening effort. In summary, we could only identify scientific evidence from physiological measurement methods suggesting that hearing impairment increases listening effort during speech perception (Q1). There was no scientific finding across studies indicating that hearing aid amplification decreases listening effort (Q2). In general, there were large differences in the study populations, the control groups and conditions, and the outcome measures applied between the studies included in this review. The results of this review indicate that published listening effort studies lack consistency, lack standardization across studies, and have insufficient statistical power. The findings underline the need for a common conceptual framework for listening effort to address the current shortcomings.
Lentz, Jennifer J; He, Yuan; Townsend, James T
2014-01-01
This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)-most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy.
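The capacity coefficient referred to above follows Townsend's cumulative-hazard formulation, C(t) = H_both(t) / (H_left(t) + H_right(t)) with H(t) = -log S(t), where C(t) = 1 is the unlimited-capacity (probability-summation) benchmark. A minimal empirical sketch, with illustrative function names and no smoothing of the survivor estimates:

```python
import numpy as np

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return (np.asarray(rts) > t).mean()

def capacity_coefficient(rt_both, rt_left, rt_right, t):
    """Workload capacity at time t from cumulative hazards H(t) = -log S(t):
    C(t) = H_both(t) / (H_left(t) + H_right(t)).
    C(t) = 1 matches unlimited capacity (the probability-summation
    benchmark); C(t) < 1 is limited capacity, C(t) > 1 super capacity."""
    h_both = -np.log(survivor(rt_both, t))
    h_single = -np.log(survivor(rt_left, t)) - np.log(survivor(rt_right, t))
    return h_both / h_single
```

In the study's terms, diotic RTs in the "both ears" role yielding C(t) mostly below 1 is the limited-to-unlimited capacity result, while the dichotic condition pushing C(t) upward is the increased workload efficiency.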
Lentz, Jennifer J.; He, Yuan; Townsend, James T.
2014-01-01
This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)—most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy. PMID:25202254
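The capacity coefficient named in this abstract has a standard form in the reaction-time literature: a ratio of cumulative hazard functions for dual-channel versus single-channel processing. As a hedged illustration (the function names and toy RT distributions below are ours, not the study's), it can be estimated from empirical survivor functions:

```python
import numpy as np

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -ln S(t), where S(t) is the
    proportion of reaction times exceeding t."""
    surv = max(np.mean(np.asarray(rts, dtype=float) > t), 1e-12)
    return -np.log(surv)

def capacity_coefficient(rts_dual, rts_a, rts_b, t):
    """Workload capacity C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C(t) < 1: limited capacity; C(t) ~ 1: unlimited; C(t) > 1: super capacity."""
    denom = cumulative_hazard(rts_a, t) + cumulative_hazard(rts_b, t)
    return cumulative_hazard(rts_dual, t) / denom

# Toy data: an independent-race baseline, which should give C(t) near 1.
rng = np.random.default_rng(0)
rt_left = rng.exponential(0.5, 5000) + 0.2    # monaural left (s)
rt_right = rng.exponential(0.5, 5000) + 0.2   # monaural right (s)
rt_both = np.minimum(rng.exponential(0.5, 5000),
                     rng.exponential(0.5, 5000)) + 0.2  # binaural race

c = capacity_coefficient(rt_both, rt_left, rt_right, t=0.6)
print(round(c, 2))  # close to 1.0 for an unlimited-capacity race model
```

Efficiency below 1 at a given t, as the diotic results above show, means the two-ear condition extracts less benefit than an independent race between the ears would predict.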
Xie, Xin; Fowler, Carol A.
2013-01-01
This study examined the intelligibility of native and Mandarin-accented English speech for native English and native Mandarin listeners. In the latter group, it also examined the role of the language environment and English proficiency. Three groups of listeners were tested: native English listeners (NE), Mandarin-speaking Chinese listeners in the US (M-US) and Mandarin listeners in Beijing, China (M-BJ). As a group, M-US and M-BJ listeners were matched on English proficiency and age of acquisition. A nonword transcription task was used. Identification accuracy for word-final stops in the nonwords established two independent interlanguage intelligibility effects. An interlanguage speech intelligibility benefit for listeners (ISIB-L) was manifest by both groups of Mandarin listeners outperforming native English listeners in identification of Mandarin-accented speech. In the benefit for talkers (ISIB-T), only M-BJ listeners were more accurate identifying Mandarin-accented speech than native English speech. Thus, both Mandarin groups demonstrated an ISIB-L while only the M-BJ group overall demonstrated an ISIB-T. The English proficiency of listeners was found to modulate the magnitude of the ISIB-T in both groups. Regression analyses also suggested that the listener groups differ in their use of acoustic information to identify voicing in stop consonants. PMID:24293741
Ribeiro, Daniela M; Elias, Nassim C; Goyos, Celso; Miguel, Caio F
2010-01-01
The purpose of the current study was to assess whether individuals with intellectual disabilities would emit untrained speaker responses (i.e., signed tacts and mands) after being taught listener behaviors. Listener relations were trained via an automated matching-to-sample (MTS) procedure. Following mastery, the emergence of signed tacts, generalized tacts, and mands was tested. All participants met criterion in listener relations training and showed the emergence of almost all relations. Results suggest that teaching listener relations first, through MTS tasks, is a viable way to produce emergence of speaker relations. PMID:22477464
Temporal relation between top-down and bottom-up processing in lexical tone perception
Shuai, Lan; Gong, Tao
2013-01-01
Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we discovered that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right hemisphere activation; in contrast, linguistic processing of lexical tones, as a top-down effect, elicited greater left hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supported a parallel model of lexical tone perception. Contrary to the previous view that language processing is special and performed by dedicated neural circuitry, our study shows that language processing can be decomposed into general cognitive functions (e.g., sensory and memory) and shares neural resources with these functions. PMID:24723863
Cerebral specialization for speech production in persons with Down syndrome.
Heath, M; Elliott, D
1999-09-01
The study of cerebral specialization in persons with Down syndrome (DS) has revealed an anomalous pattern of organization. Specifically, dichotic listening studies (e.g., Elliott & Weeks, 1993) have suggested a left ear/right hemisphere dominance for speech perception for persons with DS. In the current investigation, the cerebral dominance for speech production was examined using the mouth asymmetry technique. In right-handed, nonhandicapped subjects, mouth asymmetry methodology has shown that during speech, the right side of the mouth opens sooner and to a larger degree than the left side (Graves, Goodglass, & Landis, 1982). The phenomenon of right mouth asymmetry (RMA) is believed to reflect the direct access that the musculature on the right side of the face has to the left hemisphere's speech production systems. This direct access may facilitate the transfer of innervatory patterns to the muscles on the right side of the face. In the present study, the lateralization for speech production was investigated in 10 right-handed participants with DS and 10 nonhandicapped subjects. An RMA at the initiation and end of speech production occurred for subjects in both groups. Surprisingly, the degree of asymmetry between groups did not differ, suggesting that the lateralization of speech production is similar for persons with and persons without DS. These results support the biological dissociation model (Elliott, Weeks, & Elliott, 1987), which holds that persons with DS display a unique dissociation between speech perception (right hemisphere) and speech production (left hemisphere). Copyright 1999 Academic Press.
Why Not Non-Native Varieties of English as Listening Comprehension Test Input?
ERIC Educational Resources Information Center
Abeywickrama, Priyanvada
2013-01-01
The existence of different varieties of English in target language use (TLU) domains calls into question the usefulness of listening comprehension tests whose input is limited only to a native speaker variety. This study investigated the impact of non-native varieties or accented English speech on test takers from three different English use…
Mental Load in Listening, Speech Shadowing and Simultaneous Interpreting: A Pupillometric Study.
ERIC Educational Resources Information Center
Tommola, Jorma; Hyona, Jukka
This study investigated the sensitivity of the pupillary response as an indicator of average mental load during three language processing tasks of varying complexity. The tasks included: (1) listening (without any subsequent comprehension testing); (2) speech shadowing (repeating a message in the same language while listening to it); and (3)…
Visual Cues and Listening Effort: Individual Variability
ERIC Educational Resources Information Center
Picou, Erin M.; Ricketts, Todd A; Hornsby, Benjamin W. Y.
2011-01-01
Purpose: To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Method: Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and…
Lessons in Listening and Learning.
ERIC Educational Resources Information Center
Phibbs, Mary E.
1991-01-01
Presents a teacher's search for solutions to the problem of students not listening in science class. The author discovered that the sequential nature of aural origami instructions makes them an excellent method for getting students to listen, and used cassette tape recordings of the paper-folding directions twice a week for a month. Students' test scores improved as well as…
Notetaking, Verbal Aptitude, & Listening Span: Factors Involved in Learning from Lectures.
ERIC Educational Resources Information Center
Walbaum, Sharlene D.
Three variables (verbal aptitude, listening ability, and notetaking) that may mediate how much college students learn from a lecture were studied. Verbal aptitude was operationalized as a Verbal Scholastic Aptitude Test (VSAT) score. Listening ability was measured as the score on an auditory short-term memory task, using the serial running memory…
A Content-Based Approach to Teaching and Testing Listening Skills to Grade 5 EFL Learners
ERIC Educational Resources Information Center
Chou, Mu-hsuan
2013-01-01
English education has been officially incorporated into elementary-level education in Taiwan since 2001, with the key objective of reinforcing pupils' oral communication in class. Although oral interaction involves a degree of listening input from interlocutors, listening has unfortunately remained a marginalized area in Taiwanese elementary…
The Role of Receptive Vocabulary Knowledge in Advanced EFL Listening Comprehension
ERIC Educational Resources Information Center
Atas, Ufuk
2018-01-01
This paper presents an empirical study that investigates the role of vocabulary knowledge in listening comprehension with 33 advanced Turkish learners of English as a foreign language. The Vocabulary Levels Test (Schmitt, Schmitt & Clapham, 2001) is used to measure the vocabulary knowledge of the participants and a standardized listening test…
Hamdan, Jihad M; Al-Hawamdeh, Rose Fowler
2018-04-10
This empirical study examines the extent to which 'face', i.e. audiovisual dialogue, affects the listening comprehension of advanced Jordanian EFL learners in a TOEFL-like test, as opposed to its absence (i.e. a purely audio test), which is the current norm in many English language proficiency tests, including but not limited to TOEFL iBT, TOEIC and academic IELTS. Through an online experiment, 60 Jordanian postgraduate linguistics and English literature students (advanced EFL learners) at the University of Jordan sat for two listening tests (simulating English proficiency tests): one purely audio [i.e. without any face, including any visuals such as motion or still pictures], and one audiovisual/video. The results clearly show that the inclusion of visuals enhances subjects' performance in listening tests. It is concluded that since the aim of English proficiency tests such as TOEFL iBT is to qualify or disqualify subjects to work and study in western English-speaking countries, the exclusion of visuals is unfounded. In actuality, most natural interaction includes visibility of the interlocutors involved, and hence test takers who sit purely audio proficiency tests in English or any other language are placed at a disadvantage.
Keidser, Gitte; Best, Virginia; Freeston, Katrina; Boyce, Alexandra
2015-01-01
It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble- and cafeteria-noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing. PMID:25999904
Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian
2016-01-01
Objective: Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method: The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results: FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions: Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791
[Diagnosis of psychogenic hearing disorders in childhood].
Kothe, C; Fleischer, S; Breitfuss, A; Hess, M
2003-11-01
In comparison with organic hearing loss, which is commonly reported, non-organic hearing loss is under-represented in the literature. The audiological results for 20 patients, aged between 6 and 17 years (mean 11.3), with psychogenic hearing disturbances were analysed prospectively. In 17 cases, the disturbance was bilateral and in three cases unilateral. In no case was the result of an objective hearing test exceptional, while a hearing threshold of between 30 and 100 dB was reported in single ear, pure-tone audiograms. In 12 cases, single ear speech audiograms were unexceptional. Suprathreshold tests, such as the dichotic discrimination test or the speech audiogram with noise disturbance, could lead to a clearer diagnosis in cases of severe psychogenic auditory impairment. In half of the patients, a conflict situation in the school or family was evident. After treatment for this conflict, hearing ability returned to normal. There was no improvement for six patients.
Lawless, Martin S; Vigeant, Michelle C
2017-10-01
Selecting an appropriate listening test design for concert hall research depends on several factors, including listening test method and participant critical-listening experience. Although expert listeners afford more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0-7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded similar results and reliability as the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.
Optimizing acoustical conditions for speech intelligibility in classrooms
NASA Astrophysics Data System (ADS)
Yang, Wonyoung
High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. 
In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with SNS = 4 dB, increasing to 0.8 and 1.2 s as SNS decreased to 0 dB, for both normal-hearing and hearing-impaired listeners. Hearing-impaired listeners required more early energy than normal-hearing listeners. Reflective ceiling barriers and ceiling reflectors, in particular parallel front-back rows of semi-circular reflectors, achieved the goal of decreasing reverberation with the least speech-level reduction.
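The reverberation times discussed above are design targets; a first-pass check of whether a classroom meets such a target is Sabine's formula, RT60 = 0.161 V / A. The room dimensions and absorption coefficients below are hypothetical, chosen only to illustrate the calculation:

```python
def sabine_rt60(volume_m3, surface_absorptions):
    """Sabine reverberation time RT60 = 0.161 * V / A, where A is the total
    absorption in metric sabins: sum of (surface area * absorption coeff)."""
    total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 7 m x 9 m x 3 m classroom with an absorptive tile ceiling.
surfaces = [
    (7 * 9, 0.70),            # ceiling: acoustic tile
    (7 * 9, 0.05),            # floor: vinyl on concrete
    (2 * (7 + 9) * 3, 0.10),  # walls: painted gypsum
]
rt = sabine_rt60(7 * 9 * 3, surfaces)
print(round(rt, 2))  # roughly 0.5 s, within the range the study examined
```

Swapping surface treatments in such a sketch shows how a designer could steer a room toward the 0.4 to 0.8 s optima reported above.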
Kohlberg, Gavriel D.; Mancuso, Dean M.; Chari, Divya A.; Lalwani, Anil K.
2015-01-01
Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience. PMID:26543322
Kawamichi, Hiroaki; Yoshihara, Kazufumi; Sasaki, Akihiro T.; Sugawara, Sho K.; Tanabe, Hiroki C.; Shinohara, Ryoji; Sugisawa, Yuka; Tokutake, Kentaro; Mochizuki, Yukiko; Anme, Tokie; Sadato, Norihiro
2015-01-01
Although active listening is an influential behavior, which can affect the social responses of others, the neural correlates underlying its perception have remained unclear. Sensing active listening in social interactions is accompanied by an improvement in the recollected impressions of relevant experiences and is thought to arouse positive feelings. We therefore hypothesized that the recognition of active listening activates the reward system, and that the emotional appraisal of experiences that had been subject to active listening would be improved. To test these hypotheses, we conducted functional magnetic resonance imaging (fMRI) on participants viewing assessments of their own personal experiences made by evaluators with or without active listening attitude. Subjects rated evaluators who showed active listening more positively. Furthermore, they rated episodes more positively when they were evaluated by individuals showing active listening. Neural activation in the ventral striatum was enhanced by perceiving active listening, suggesting that this was processed as rewarding. It also activated the right anterior insula, representing positive emotional reappraisal processes. Furthermore, the mentalizing network was activated when participants were being evaluated, irrespective of active listening behavior. Therefore, perceiving active listening appeared to result in positive emotional appraisal and to invoke mental state attribution to the active listener. PMID:25188354
Dincer D'Alessandro, Hilal; Ballantyne, Deborah; Boyle, Patrick J; De Seta, Elio; DeVincentiis, Marco; Mancini, Patrizia
2017-11-30
The aim of the study was to investigate the link between temporal fine structure (TFS) processing, pitch, and speech perception performance in adult cochlear implant (CI) recipients, including bimodal listeners who may benefit from better low-frequency (LF) temporal coding in the contralateral ear. The study participants were 43 adult CI recipients (23 unilateral, 6 bilateral, and 14 bimodal listeners). Two new LF pitch perception tests, harmonic intonation (HI) and disharmonic intonation (DI), were used to evaluate TFS sensitivity. HI and DI were designed to estimate a difference limen for discrimination of tone changes based on harmonic or inharmonic pitch glides. Speech perception was assessed using the newly developed Italian Sentence Test with Adaptive Randomized Roving level (STARR), in which sentences relevant to everyday contexts were presented at low, medium, and high levels in a fluctuating background noise to estimate a speech reception threshold (SRT). Although TFS and STARR performances in the majority of CI recipients were much poorer than those of hearing people reported in the literature, a considerable intersubject variability was observed. For CI listeners, median just noticeable differences were 27.0 and 147.0 Hz for HI and DI, respectively. HI outcomes were significantly better than those for DI. Median STARR score was 14.8 dB. Better performers with speech reception thresholds less than 20 dB had a median score of 8.6 dB. A significant effect of age was observed for both HI/DI tests, suggesting that TFS sensitivity tended to worsen with increasing age. CI pure-tone thresholds and duration of profound deafness were significantly correlated with STARR performance. Bimodal users showed significantly better TFS and STARR performance for bimodal listening than for their CI-only condition. Median bimodal gains were 33.0 Hz for the HI test and 95.0 Hz for the DI test. 
DI outcomes in bimodal users revealed a significant correlation with unaided hearing thresholds for octave frequencies lower than 1000 Hz. Median STARR scores were 17.3 versus 8.1 dB for CI only and bimodal listening, respectively. STARR performance was significantly correlated with HI findings for CI listeners and with those of DI for bimodal listeners. LF pitch perception was found to be abnormal in the majority of adult CI recipients, confirming poor TFS processing of CIs. Similarly, the STARR findings reflected a common performance deterioration with the HI/DI tests, suggesting the cause probably being a lack of access to TFS information. Contralateral hearing aid users obtained a remarkable bimodal benefit for all tests. Such results highlighted the importance of TFS cues for challenging speech perception and the relevance to everyday listening conditions. HI/DI and STARR tests show promise for gaining insights into how TFS and speech perception are being limited and may guide the customization of CI program parameters and support the fine tuning of bimodal listening.
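Adaptive speech-reception-threshold tests such as STARR converge on a threshold by adjusting presentation level trial by trial. A minimal sketch of the general idea, using a simple 1-up/1-down track and a simulated listener (the staircase parameters and psychometric function here are our assumptions, not the STARR procedure itself):

```python
import math
import random

def staircase_srt(psychometric, start_snr=20.0, step=2.0, trials=60, seed=3):
    """1-up/1-down adaptive track: SNR falls after a correct response and
    rises after an error, so it oscillates near the 50%-correct point.
    Returns the mean of the last six reversal SNRs as the SRT estimate."""
    random.seed(seed)
    snr, reversals, last_dir = start_snr, [], None
    for _ in range(trials):
        correct = random.random() < psychometric(snr)
        direction = -1 if correct else 1
        if last_dir is not None and direction != last_dir:
            reversals.append(snr)  # track changed direction: a reversal
        last_dir = direction
        snr += direction * step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Simulated listener: logistic psychometric function with a true SRT of 8 dB.
pf = lambda snr: 1.0 / (1.0 + math.exp(-(snr - 8.0)))
srt = staircase_srt(pf)
print(round(srt, 1))  # should land near the true 8 dB SRT
```

The roving levels and fluctuating noise of the actual STARR test add realism on top of this basic adaptive logic.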
The Effect of Holy Quran Voice on Mental Health.
Mahjoob, Monireh; Nejati, Jalil; Hosseini, Alireaza; Bakhshani, Noor Mohammad
2016-02-01
This study was designed to determine the effect of Quran listening without its musical tone (Tartil) on the mental health of personnel in Zahedan University of Medical Sciences, southeast of Iran. The results showed significant differences between the test and control groups in their mean mental health scores after Quran listening (P = 0.037). No significant gender differences in the test group before and after intervention were found (P = 0.806). These results suggest that Quran listening could be recommended by psychologists for improving mental health and achieving greater calm.
Assessing Auditory Processing Abilities in Typically Developing School-Aged Children.
McDermott, Erin E; Smart, Jennifer L; Boiano, Julie A; Bragg, Lisa E; Colon, Tiffany N; Hanson, Elizabeth M; Emanuel, Diana C; Kelly, Andrea S
2016-02-01
Large discrepancies exist in the literature regarding definition, diagnostic criteria, and appropriate assessment for auditory processing disorder (APD). Therefore, a battery of tests with normative data is needed. The purpose of this study is to collect normative data on a variety of tests for APD on children aged 7-12 yr, and to examine effects of outside factors on test performance. Children aged 7-12 yr with normal hearing, speech and language abilities, cognition, and attention were recruited for participation in this normative data collection. One hundred and forty-seven children were recruited using flyers and word of mouth. Of the participants recruited, 137 children qualified for the study. Participants attended schools located in areas that varied in terms of socioeconomic status, and resided in six different states. Audiological testing included a hearing screening (15 dB HL from 250 to 8000 Hz), word recognition testing, tympanometry, ipsilateral and contralateral reflexes, and transient-evoked otoacoustic emissions. The language, nonverbal IQ, phonological processing, and attention skills of each participant were screened using the Clinical Evaluation of Language Fundamentals-4 Screener, Test of Nonverbal Intelligence, Comprehensive Test of Phonological Processing, and Integrated Visual and Auditory-Continuous Performance Test, respectively. The behavioral APD battery included the following tests: Dichotic Digits Test, Frequency Pattern Test, Duration Pattern Test, Random Gap Detection Test, Compressed and Reverberated Words Test, Auditory Figure Ground (signal-to-noise ratio of +8 and +0), and Listening in Spatialized Noise-Sentences Test. Mean scores and standard deviations of each test were calculated, and analysis of variance tests were used to determine effects of factors such as gender, handedness, and birth history on each test. 
Normative data tables for the test battery were created for the following age groups: 7- and 8-yr-olds (n = 49), 9- and 10-yr-olds (n = 40), and 11- and 12-yr-olds (n = 48). No significant effects were seen for gender or handedness on any of the measures. The data collected in this study are appropriate for use in clinical diagnosis of APD. Use of a low-linguistically loaded core battery with the addition of more language-based tests, when language abilities are known, can provide a well-rounded picture of a child's auditory processing abilities. Screening for language, phonological processing, attention, and cognitive level can provide more information regarding a diagnosis of APD, determine appropriateness of the test battery for the individual child, and may assist with making recommendations or referrals. It is important to use a multidisciplinary approach in the diagnosis and treatment of APD due to the high likelihood of comorbidity with other language, learning, or attention deficits. Although children with other diagnoses may be tested for APD, it is important to establish previously made diagnoses before testing to aid in appropriate test selection and recommendations. American Academy of Audiology.
Kondaurova, Maria V.; Francis, Alexander L.
2008-01-01
Two studies explored the role of native language use of an acoustic cue, vowel duration, in both native and non-native contexts in order to test the hypothesis that non-native listeners’ reliance on vowel duration instead of vowel quality to distinguish the English tense/lax vowel contrast could be explained by the role of duration as a cue in native phonological contrasts. In the first experiment, native Russian, Spanish, and American English listeners identified stimuli from a beat/bit continuum varying in nine perceptually equal spectral and duration steps. English listeners relied predominantly on spectrum, but showed some reliance on duration. Russian and Spanish speakers relied entirely on duration. In the second experiment, three tests examined listeners’ use of vowel duration in native contrasts. Duration was equally important for the perception of lexical stress for all three groups. However, English listeners relied more on duration as a cue to postvocalic consonant voicing than did native Spanish or Russian listeners, and Spanish listeners relied on duration more than did Russian listeners. Results suggest that, although allophonic experience may contribute to cross-language perceptual patterns, other factors such as the application of statistical learning mechanisms and the influence of language-independent psychoacoustic proclivities cannot be ruled out. PMID:19206820
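Relative cue weighting of the kind measured in the first experiment is often quantified by regressing identification responses on the two cue dimensions; the normalized coefficient magnitudes then index each cue's weight. A hedged sketch with a simulated duration-dominant listener (the analysis choice and all parameters are illustrative, not the study's):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression; returns [bias, w1, w2]."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

# Simulated listener whose responses are driven mostly by the duration cue.
rng = np.random.default_rng(1)
spec = rng.integers(1, 10, 500).astype(float)  # spectral step, 1-9
dur = rng.integers(1, 10, 500).astype(float)   # duration step, 1-9
logit = 0.2 * (spec - 5) + 1.0 * (dur - 5)     # duration dominates
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([spec - 5, dur - 5]), y)
w_spec, w_dur = abs(w[1]), abs(w[2])
rel_dur = w_dur / (w_spec + w_dur)  # normalized duration weight
print(round(rel_dur, 2))  # well above 0.5: duration outweighs spectrum
```

Under this scheme, the Russian and Spanish listeners described above would show a normalized duration weight near 1, and the English listeners a weight nearer 0.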
Using listening difficulty ratings of conditions for speech communication in rooms
NASA Astrophysics Data System (ADS)
Sato, Hiroshi; Bradley, John S.; Morimoto, Masayuki
2005-03-01
The use of listening difficulty ratings of speech communication in rooms is explored because, in common situations, word recognition scores do not discriminate well among conditions that are near to acceptable. In particular, the benefits of early reflections of speech sounds on listening difficulty were investigated and compared to the known benefits to word intelligibility scores. Listening tests were used to assess word intelligibility and perceived listening difficulty of speech in simulated sound fields. The experiments were conducted in three types of sound fields with constant levels of ambient noise: direct sound only, direct sound with early reflections, and direct sound with early reflections and reverberation. The results demonstrate that (1) listening difficulty can better discriminate among these conditions than can word recognition scores; (2) added early reflections increase the effective signal-to-noise ratio equivalent to the added energy in the conditions without reverberation; (3) the benefit of early reflections on difficulty scores is greater than expected from the simple increase in early arriving speech energy with reverberation; (4) word intelligibility tests are most appropriate for conditions with signal-to-noise (S/N) ratios less than 0 dBA, and where S/N is between 0 and 15 dBA, listening difficulty is a more appropriate evaluation tool.
Physical load handling and listening comprehension effects on balance control.
Qu, Xingda
2010-12-01
The purpose of this study was to determine the physical load handling and listening comprehension effects on balance control. A total of 16 young and 16 elderly participants were recruited in this study. The physical load handling task required holding a 5-kg load in each hand with arms at sides. The listening comprehension task involved attentive listening to a short conversation. Three short questions were asked regarding the conversation right after the testing trial to test the participants' attentiveness during the experiment. Balance control was assessed by centre of pressure-based measures, which were calculated from the force platform data when the participants were quietly standing upright on a force platform. Results from this study showed that both physical load handling and listening comprehension adversely affected balance control. Physical load handling had a more deleterious effect on balance control under the listening comprehension condition vs. no-listening comprehension condition. Based on the findings from this study, interventions for the improvement of balance could be focused on avoiding exposures to physically demanding tasks and cognitively demanding tasks simultaneously. STATEMENT OF RELEVANCE: Findings from this study can aid in better understanding how humans maintain balance, especially when physical and cognitive loads are applied. Such information is useful for developing interventions to prevent fall incidents and injuries in occupational settings and daily activities.
Experiential Learning and Learning Environments: The Case of Active Listening Skills
ERIC Educational Resources Information Center
Huerta-Wong, Juan Enrique; Schoech, Richard
2010-01-01
Social work education research frequently has suggested an interaction between teaching techniques and learning environments. However, this interaction has never been tested. This study compared virtual and face-to-face learning environments and included active listening concepts to test whether the effectiveness of learning environments depends…
ERIC Educational Resources Information Center
Batty, Aaron Olaf
2015-01-01
The rise in the affordability of quality video production equipment has resulted in increased interest in video-mediated tests of foreign language listening comprehension. Although research on such tests has continued fairly steadily since the early 1980s, studies have relied on analyses of raw scores, despite the growing prevalence of item…
Learning to Comprehend Foreign-Accented Speech by Means of Production and Listening Training
ERIC Educational Resources Information Center
Grohe, Ann-Kathrin; Weber, Andrea
2016-01-01
The effects of production and listening training on the subsequent comprehension of foreign-accented speech were investigated in a training-test paradigm. During training, German nonnative (L2) and English native (L1) participants listened to a story spoken by a German speaker who replaced all English /θ/s with /t/ (e.g., *"teft" for…
Listening Strategy Use and Influential Factors in Web-Based Computer Assisted Language Learning
ERIC Educational Resources Information Center
Chen, L.; Zhang, R.; Liu, C.
2014-01-01
This study investigates second and foreign language (L2) learners' listening strategy use and factors that influence their strategy use in a Web-based computer assisted language learning (CALL) system. A strategy inventory, a factor questionnaire and a standardized listening test were used to collect data from a group of 82 Chinese students…
The Impact of Vocabulary Preparation on L2 Listening Comprehension, Confidence and Strategy Use
ERIC Educational Resources Information Center
Chang, Anna Ching-Shyang
2007-01-01
Building on previous studies of the effects of planning on second language learners' (L2) oral narratives and writing, this research reports an investigation of the effects of vocabulary preparation prior to a listening comprehension test on L2 learners' vocabulary performance, listening comprehension, confidence levels and strategy use. The…
The Effectiveness of Multimedia Application on Students Listening Comprehension
ERIC Educational Resources Information Center
Pangaribuan, Tagor; Sinaga, Andromeda; Sipayung, Kammer Tuahman
2017-01-01
Listening comprehension is a complex skill, particularly for non-native speakers to master. This research aimed at finding out the effect of a multimedia application on students' listening. The research design is experimental, with a t-test. The population is sixth-semester students of HKBP Nommensen University in the 2016/2017 academic year,…
ERIC Educational Resources Information Center
Andringa, Sible; Olsthoorn, Nomi; van Beuningen, Catherine; Schoonen, Rob; Hulstijn, Jan
2012-01-01
The goal of this study was to explain individual differences in both native and non-native listening comprehension; 121 native and 113 non-native speakers of Dutch were tested on various linguistic and nonlinguistic cognitive skills thought to underlie listening comprehension. Structural equation modeling was used to identify the predictors of…
Characteristics of stereo reproduction with parametric loudspeakers
NASA Astrophysics Data System (ADS)
Aoki, Shigeaki; Toba, Masayoshi; Tsujita, Norihisa
2012-05-01
A parametric loudspeaker utilizes the nonlinearity of a medium and is known as a super-directivity loudspeaker; it is one of the prominent applications of nonlinear ultrasonics. So far, applications have been limited to monaural sound reproduction for public address in museums, stations, streets, etc. In this paper, we discuss the characteristics of stereo reproduction with two parametric loudspeakers by comparing them with two ordinary dynamic loudspeakers. In subjective tests, three typical listening positions were selected to investigate the possibility of correct sound localization over a wide listening area. The binaural information was ILD (interaural level difference) or ITD (interaural time delay). The parametric loudspeaker was an equilateral hexagon; the inner and outer diameters were 99 and 112 mm, respectively. Signals were 500 Hz, 1 kHz, 2 kHz, and 4 kHz pure tones and pink noise. Three young males listened to the test signals 10 times in each listening condition. The subjective test results showed that listeners at the three typical listening positions perceived correct sound localization for all signals using the parametric loudspeakers, almost the same as with the ordinary dynamic loudspeakers, except for sinusoidal waves with ITD. We conclude that the parametric loudspeaker can avoid the conflict between the binaural cues ILD and ITD that occurs in stereo reproduction with ordinary dynamic loudspeakers, because the super directivity of the parametric loudspeaker suppresses the crosstalk components.
Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners
Pham, Carol Q.; Bremen, Peter; Shen, Weidong; Yang, Shi-Ming; Middlebrooks, John C.; Zeng, Fan-Gang; Mc Laughlin, Myles
2015-01-01
Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners. PMID:26176553
The stress-reducing effect of music listening varies depending on the social context.
Linnemann, Alexandra; Strahler, Jana; Nater, Urs M
2016-10-01
Given that music listening often occurs in a social context, and given that social support can be associated with a stress-reducing effect, it was tested whether the mere presence of others while listening to music enhances the stress-reducing effect of listening to music. A total of 53 participants responded to questions on stress, presence of others, and music listening five times per day (30 min after awakening, 1100 h, 1400 h, 1800 h, 2100 h) for seven consecutive days. After each assessment, participants were asked to collect a saliva sample for the later analysis of salivary cortisol (as a marker for the hypothalamic-pituitary-adrenal axis) and salivary alpha-amylase (as a marker for the autonomic nervous system). Hierarchical linear modeling revealed that music listening per se was not associated with a stress-reducing effect. However, listening to music in the presence of others led to decreased subjective stress levels, attenuated secretion of salivary cortisol, and higher activity of salivary alpha-amylase. When listening to music alone, music that was listened to for the reason of relaxation predicted lower subjective stress. The stress-reducing effect of music listening in daily life varies depending on the presence of others. Music listening in the presence of others enhanced the stress-reducing effect of music listening independently of reasons for music listening. Solitary music listening was stress-reducing when relaxation was stated as the reason for music listening. Thus, in daily life, music listening can be used for stress reduction purposes, with the greatest success when it occurs in the presence of others or when it is deliberately listened to for the reason of relaxation.
Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.
Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G
2009-03-01
This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.
Voice gender identification by cochlear implant users: The role of spectral and temporal resolution
NASA Astrophysics Data System (ADS)
Fu, Qian-Jie; Chinchilla, Sherol; Nogaki, Geraldine; Galvin, John J.
2005-09-01
The present study explored the relative contributions of spectral and temporal information to voice gender identification by cochlear implant users and normal-hearing subjects. Cochlear implant listeners were tested using their everyday speech processors, while normal-hearing subjects were tested under speech processing conditions that simulated various degrees of spectral resolution, temporal resolution, and spectral mismatch. Voice gender identification was tested for two talker sets. In Talker Set 1, the mean fundamental frequency values of the male and female talkers differed by 100 Hz while in Talker Set 2, the mean values differed by 10 Hz. Cochlear implant listeners achieved higher levels of performance with Talker Set 1, while performance was significantly reduced for Talker Set 2. For normal-hearing listeners, performance was significantly affected by the spectral resolution, for both Talker Sets. With matched speech, temporal cues contributed to voice gender identification only for Talker Set 1 while spectral mismatch significantly reduced performance for both Talker Sets. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to 4-8 spectral channels. The results suggest that, because of the reduced spectral resolution, cochlear implant patients may attend strongly to periodicity cues to distinguish voice gender.
Bimodal distribution of performance in discriminating major/minor modes.
Chubb, Charles; Dickson, Christopher A; Dean, Tyler; Fagan, Christopher; Mann, Daniel S; Wright, Charles E; Guan, Maime; Silva, Andrew E; Gregersen, Peter K; Kowalsky, Elena
2013-10-01
This study investigated the abilities of listeners to classify various sorts of musical stimuli as major vs minor. All stimuli combined four pure tones: low and high tonics (G5 and G6), dominant (D), and either a major third (B) or a minor third (B♭). Especially interesting results were obtained using tone-scrambles, randomly ordered sequences of pure tones presented at ≈15 per second. All tone-scrambles tested comprised 16 G's (G5's + G6's), 8 D's, and either 8 B's or 8 B♭'s. The distribution of proportion correct across 275 listeners tested over the course of three experiments was strikingly bimodal, with one mode very close to chance performance, and the other very close to perfect performance. Testing with tone-scrambles thus sorts listeners fairly cleanly into two subpopulations. Listeners in subpopulation 1 are sufficiently sensitive to major vs minor to classify tone-scrambles nearly perfectly; listeners in subpopulation 2 (comprising roughly 70% of the population) have very little sensitivity to major vs minor. Skill in classifying major vs minor tone-scrambles shows a modest correlation of around 0.5 with years of musical training.
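The tone-scramble stimulus described above is straightforward to synthesize. The sketch below follows the abstract's recipe (16 G's split over two octaves, 8 D's, 8 major or minor thirds, shuffled, at roughly 15 tones per second); the sample rate, exact equal-tempered frequencies, and the onset/offset ramps are our assumptions, not details from the study.

```python
import numpy as np

RATE = 44100          # samples per second (assumed)
TONE_DUR = 1 / 15     # ~15 tones per second, as described in the abstract

# Equal-tempered frequencies (Hz) for G5, G6, D6, and B5 (major) or Bb5 (minor).
FREQS = {"G5": 783.99, "G6": 1567.98, "D6": 1174.66,
         "B5": 987.77, "Bb5": 932.33}

def tone(freq, dur=TONE_DUR, rate=RATE):
    """One pure tone with a brief raised-cosine on/off ramp to avoid clicks."""
    t = np.arange(int(dur * rate)) / rate
    y = np.sin(2 * np.pi * freq * t)
    ramp = min(len(y) // 10, 100)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def tone_scramble(mode="major", rng=None):
    """16 G's (8 G5 + 8 G6), 8 D's, and 8 major or minor thirds, shuffled."""
    rng = np.random.default_rng() if rng is None else rng
    third = "B5" if mode == "major" else "Bb5"
    names = ["G5"] * 8 + ["G6"] * 8 + ["D6"] * 8 + [third] * 8
    rng.shuffle(names)
    return np.concatenate([tone(FREQS[n]) for n in names])

scramble = tone_scramble("minor", rng=np.random.default_rng(0))
print(len(scramble) / RATE)  # total duration in seconds (32 tones of ~1/15 s)
```

A listener's task would then be a two-alternative "major or minor?" judgment on each such sequence, which is what produces the bimodal accuracy distribution reported above.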
NASA Astrophysics Data System (ADS)
Nelson, Peggy B.; Jin, Su-Hyun
2004-05-01
Previous work [Nelson, Jin, Carney, and Nelson (2003), J. Acoust. Soc. Am. 113, 961-968] suggested that cochlear implant users do not benefit from masking release when listening in modulated noise. The previous findings indicated that implant users experience little to no release from masking when identifying sentences in speech-shaped noise, regardless of the modulation frequency applied to the noise. The lack of masking release occurred for all implant subjects, who were using three different devices and speech processing strategies. In the present study, possible causes of this reduced masking release in implant listeners were investigated. Normal-hearing listeners, implant users, and normal-hearing listeners presented with a four-band simulation of a cochlear implant were tested for their understanding of sentences in gated noise (1-32 Hz gate frequencies) when the duty cycle of the noise was varied from 25% to 75%. No systematic effect of noise duty cycle on implant and simulation listeners' performance was noted, indicating that the masking caused by gated noise is not only energetic masking. Masking release significantly increased when the number of spectral channels was increased from 4 to 12 for simulation listeners, suggesting that spectral resolution is important for masking release. Listeners were also tested for their understanding of gated sentences (sentences in quiet interrupted by periods of silence ranging from 1 to 32 Hz) as a measure of auditory fusion, or the ability to integrate speech across temporal gaps. Implant and simulation listeners had significant difficulty understanding gated sentences at every gate frequency. When the number of spectral channels was increased for simulation listeners, their ability to understand gated sentences improved significantly.
Findings suggest that implant listeners' difficulty understanding speech in modulated conditions is related to at least two (possibly related) factors: degraded spectral information and limitations in auditory fusion across temporal gaps.
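The gated-noise masker manipulated in this study is a simple signal operation: noise switched on and off at a gate frequency, with the duty cycle controlling the on fraction of each cycle. A minimal sketch, using white noise as a stand-in for the study's speech-shaped noise (an assumption for simplicity):

```python
import numpy as np

def gated_noise(dur_s, gate_hz, duty, rate=16000, rng=None):
    """White noise gated on/off at gate_hz with the given duty cycle (0-1)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(dur_s * rate)
    noise = rng.standard_normal(n)
    t = np.arange(n) / rate
    phase = (t * gate_hz) % 1.0          # position within each gate cycle
    gate = (phase < duty).astype(float)  # on for the first `duty` fraction
    return noise * gate

# 1 s of noise gated at 8 Hz with a 50% duty cycle.
x = gated_noise(1.0, gate_hz=8, duty=0.5, rng=np.random.default_rng(0))
print(np.mean(x != 0))  # fraction of samples in the "on" portion
```

Raising the duty cycle from 25% to 75%, as in the experiment, simply widens the on portion of each cycle, increasing masker energy while shortening the silent glimpses available for auditory fusion.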
King, Gillian; Servais, Michelle; Shepherd, Tracy A; Willoughby, Colleen; Bolack, Linda; Moodie, Sheila; Baldwin, Patricia; Strachan, Deborah; Knickle, Kerry; Pinto, Madhu; Parker, Kathryn; McNaughton, Nancy
2017-01-01
To prepare for an RCT by examining the effects of an educational intervention on the listening skills of pediatric rehabilitation clinicians, piloting study procedures, and investigating participants' learning experiences. Six experienced clinicians received the intervention, consisting of video simulations and solution-focused coaching regarding personal listening goals. Self- and observer-rated measures of listening skill were completed and qualitative information was gathered in interviews and a member checking session. Significant change on self-reported listening skills was found from pre- to post-test and/or follow-up. The pilot provided useful information to improve the study protocol, including the addition of an initial orientation to listening skills. Participants found the intervention to be a highly valuable and intense learning experience, and reported immediate changes to their clinical and interprofessional practice. The educational intervention has the potential to be an effective means to enhance the listening skills of practicing pediatric rehabilitation clinicians.
Bavelas, J B; Coates, L; Johnson, T
2000-12-01
A collaborative theory of narrative story-telling was tested in two experiments that examined what listeners do and their effect on the narrator. In 63 unacquainted dyads (81 women and 45 men), a narrator told his or her own close-call story. The listeners made 2 different kinds of listener responses: Generic responses included nodding and vocalizations such as "mhm." Specific responses, such as wincing or exclaiming, were tightly connected to (and served to illustrate) what the narrator was saying at the moment. In experimental conditions that distracted listeners from the narrative content, listeners made fewer responses, especially specific ones, and the narrators also told their stories significantly less well, particularly at what should have been the dramatic ending. Thus, listeners were co-narrators both through their own specific responses, which helped illustrate the story, and in their apparent effect on the narrator's performance. The results demonstrate the importance of moment-by-moment collaboration in face-to-face dialogue.
An algorithm to improve speech recognition in noise for hearing-impaired listeners
Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang
2013-01-01
Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
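For context on the technique named above: the *ideal* binary mask, against which the trained estimator is contrasted, uses the premixed speech and noise to keep each time-frequency cell where the local SNR exceeds a criterion and zero out the rest. The sketch below illustrates that idea only; the STFT parameters, 0 dB criterion, and toy signals are our assumptions, not the paper's trained-algorithm configuration.

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude STFT with a Hann window (simple framing, no padding)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def ideal_binary_mask(speech, noise, criterion_db=0.0):
    """1 where local speech-to-noise ratio exceeds criterion_db, else 0."""
    s, n = stft_mag(speech), stft_mag(noise)
    snr_db = 20 * np.log10((s + 1e-12) / (n + 1e-12))
    return (snr_db > criterion_db).astype(float)

rng = np.random.default_rng(0)
rate = 8000
t = np.arange(rate) / rate
speech = np.sin(2 * np.pi * 440 * t)      # stand-in for a speech signal
noise = 0.1 * rng.standard_normal(rate)   # stand-in for background noise
mask = ideal_binary_mask(speech, noise)
print(mask.shape, mask.mean())            # fraction of T-F cells retained
```

The paper's contribution is precisely that its masks were *estimated* from the noisy mixture by a trained classifier, rather than computed from premixed signals as above, which is what makes the approach usable in practice.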
Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
2016-01-01
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects.
The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
Carroll, Rebecca; Meis, Markus; Schulte, Michael; Vormann, Matthias; Kießling, Jürgen; Meister, Hartmut
2015-02-01
To report the development of a standardized German version of a reading span test (RST) with a dual task design. Special attention was paid to psycholinguistic control of the test items and time-sensitive scoring. We aim to establish our RST version for use in determining an individual's working memory in the framework of hearing research in German contexts. RST stimuli were controlled and pretested for psycholinguistic factors. The RST task was to read sentences, quickly determine their plausibility, and later recall certain words to determine a listener's individual reading span. RST results were correlated with outcomes of additional sentence-in-noise tests measured in an aided and an unaided listening condition, each at two reception thresholds. Item plausibility was pre-determined by 28 native German participants. An additional 62 listeners (45-86 years, M = 69.8) with mild-to-moderate hearing loss were tested for speech intelligibility and reading span in a multicenter study. The reading span test significantly correlated with speech intelligibility at both speech reception thresholds in the aided listening condition. Our German RST is standardized with respect to psycholinguistic construction principles of the stimuli, and is a cognitive correlate of intelligibility in a German matrix speech-in-noise test.
Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO
Tamati, Terrin N.; Pisoni, David B.
2015-01-01
Background: Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample: Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task.
Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function - Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners' keyword recognition scores were also lower than native listeners' scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842
Naming and Categorization in Young Children: IV: Listener Behavior Training and Transfer of Function
Horne, Pauline J; Hughes, J. Carl; Lowe, C. Fergus
2006-01-01
Following pretraining with everyday objects, 14 children aged from 1 to 4 years were trained, for each of three pairs of different arbitrary wooden shapes (Set 1), to select one stimulus in response to the spoken word /zog/, and the other to /vek/. When given a test for the corresponding tacts (“zog” and “vek”), 10 children passed, showing that they had learned common names for the stimuli, and 4 failed. All children were trained to clap to one stimulus of Pair 1 and wave to the other. All those who named showed either transfer of the novel functions to the remaining two pairs of stimuli in Test 1, or novel function comprehension for all three pairs in Test 2, or both. Three of these children next participated in, and passed, category match-to-sample tests. In contrast, all 4 children who had learned only listener behavior failed both the category transfer and category match-to-sample tests. When 3 of them were next trained to name the stimuli, they passed the category transfer and (for the 2 subjects tested) category match-to-sample tests. Three children were next trained on the common listener relations with another set of arbitrary stimuli (Set 2); all succeeded on the tact and category tests with the Set 2 stimuli. Taken together with the findings from the other studies in the series, the present experiment shows that (a) common listener training also establishes the corresponding names in some but not all children, and (b) only children who learn common names categorize; all those who learn only listener behavior fail. This is good evidence in support of the naming account of categorization. PMID:16673828
Assessor Decision Making While Marking a Note-Taking Listening Test: The Case of the OET
ERIC Educational Resources Information Center
Harding, Luke; Pill, John; Ryan, Kerry
2011-01-01
This article investigates assessor decision making when using and applying a marking guide for a note-taking task in a specific purpose English language listening test. In contexts where note-taking items are used, a marking guide is intended to stipulate what kind of response should be accepted as evidence of the ability under test. However,…
Communications in Task Analysis. Training Manual 4
1975-10-01
Training Research Programs, Psychological Sciences Division, Office of Naval Research, Contract No. N00014-74-A-0436-0001, NR 151-370. Approved for… This is the fourth in a series of… recommendations for achieving active listening, testing for understanding, and problems in active listening. The third section is the most detailed.
ERIC Educational Resources Information Center
McNeil, Malcolm R.; Pratt, Sheila R.; Szuminsky, Neil; Sung, Jee Eun; Fossett, Tepanta R. D.; Fassbinder, Wiltrud; Lim, Kyoung Yuel
2015-01-01
Purpose: This study assessed the reliability and validity of intermodality associations and differences in persons with aphasia (PWA) and healthy controls (HC) on a computerized listening and 3 reading versions of the Revised Token Test (RTT; McNeil & Prescott, 1978). Method: Thirty PWA and 30 HC completed the test versions, including a…
ERIC Educational Resources Information Center
Lonchamp, F.
This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…
ERIC Educational Resources Information Center
Kuswoyo, Heri
2013-01-01
Among the three sections of the Paper-Based TOEFL (PBT), many test takers find the listening comprehension section the most difficult. This research therefore explores how students can learn the PBT listening comprehension section effectively through a song technique. This sounds like a more interesting and engaging way to learn…
Fostering and Assessing Critical Listening Skills in the Speech Course
ERIC Educational Resources Information Center
Ferrari-Bridgers, Franca; Vogel, Rosanne; Lynch, Barbara
2017-01-01
In this article we present the results of two listening assessments conducted in spring 2013 and fall 2013. Our primary goal is pedagogical: to design and test a tool that can measure students' critical listening skill improvement over the span of a semester. A total of N = 370 students participated…
Non-English Majors' Listening Teaching Based on Lexical Chunks Theory and Schema Theory
ERIC Educational Resources Information Center
He, Xiaoyu
2016-01-01
English listening is seen as a vital means of linguistic input for Chinese EFL (English as a Foreign Language) learners, laying a solid foundation for English learning and communication with English speakers. Besides, with the increased weighting of scores for the listening part in the newly reformed CET-4 and CET-6 (CET refers to the College English Test in…
Issues Related to Assessing Listening Ability.
ERIC Educational Resources Information Center
Mead, Nancy A.
The National Assessment of Educational Progress (NAEP) and the Speech Communication Association (SCA) initiated a pilot study to test the feasibility of assessing speaking and listening skills. A pool of 56 items was developed and then field tested at four sites which represented a variety of national regions, of size and type of cities, and of…
Playing the Recording Once or Twice: Effects on Listening Test Performances
ERIC Educational Resources Information Center
Ruhm, Richard; Leitner-Jones, Claire; Kulmhofer, Andrea; Kiefer, Thomas; Mlakar, Heike; Itzlinger-Bruneforth, Ursula
2016-01-01
Much debate surrounds the issue of whether allowing candidates to listen to recordings twice is more desirable in language tests than offering just one opportunity. Using regression models, this study investigates, analyses and interconnects both item difficulty and stimulus length in relation to the frequency of stimulus presentation and its…
Headphone screening to facilitate web-based auditory experiments
Woods, Kevin J.P.; Siegel, Max; Traer, James; McDermott, Josh H.
2017-01-01
Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants, but sacrifice control over sound presentation, and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining if online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase-cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing. PMID:28695541
Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H
2016-08-01
To assess the role of interaural time differences and interaural level differences in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to tax binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time differences or interaural level differences, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
Bembich, Stefano; Demarini, Sergio; Clarici, Andrea; Massaccesi, Stefano; Grasso, Domenico Loenardo
2011-12-01
The Wada test is usually used for pre-surgical assessment of language lateralization. Considering its invasiveness and risk of complications, alternative methods have been proposed, but they are not always applicable to non-cooperative patients. In this study we explored the possibility of using optical topography (OT), a multichannel near-infrared system, for non-invasive assessment of hemispheric language dominance during passive listening. Cortical activity was monitored in a sample of healthy, adult Italian native speakers, all right-handed. We assessed changes in oxy-haemoglobin concentration in the temporal, parietal and posterior frontal lobes during passive listening to bi-syllabic words and vowel-consonant-vowel syllables lasting less than 3 minutes. Activated channels were identified by t tests. The left hemisphere showed significant activity only during passive listening to bi-syllabic words. Specifically, the superior temporal gyrus, the supramarginal gyrus and the posterior inferior parietal lobe were activated. During passive listening to bi-syllabic words, right-handed healthy adults showed significant activation in areas already known to be involved in speech comprehension. Although more research is needed, OT proved to be a promising alternative to the Wada test for non-invasive assessment of hemispheric language lateralization, even with a particularly brief trial, which has been designed for future applications with non-cooperative subjects.
Does listening to music with an audio ski helmet impair reaction time to peripheral stimuli?
Ruedl, G; Pocecco, E; Wolf, M; Schöpf, S; Burtscher, M; Kopp, M
2012-12-01
With the recent worldwide increase in ski helmet use, new market trends are developing, including audio helmets for listening to music while skiing or snowboarding. The aim of this study was to evaluate whether listening to music with an audio ski helmet impairs reaction time to peripheral stimuli. A within-subjects design study using the Compensatory-Tracking-Test was performed on 65 subjects (36 males and 29 females) who had a mean age of 23.3 ± 3.9 years. Using repeated measures analysis of variance, we found significant differences in reaction times between the 4 test conditions (p=0.039). The lowest mean reaction time (± SE) was measured for helmet use while listening to music (507.9 ± 13.2 ms), which was not different from helmet use alone (514.6 ± 12.5 ms) (p=0.528). However, compared to helmet use while listening to music, reaction time was significantly longer for helmet and ski goggles used together (535.8 ± 14.2 ms, p=0.005), with a similar trend for helmet and ski goggles used together while listening to music (526.9 ± 13.8 ms) (p=0.094). In conclusion, listening to music with an audio ski helmet did not increase mean reaction time to peripheral stimuli in a laboratory setting. © Georg Thieme Verlag KG Stuttgart · New York.
Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan
2014-01-01
Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise. PMID:25566159
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in monaural and dichotic modalities. A total of 10 young subjects participated in the study; they were instructed to ignore the stimuli, and in a second repetition they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitude values for the condition using frequency-modulated low-frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities was exhibited in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.
Vol'f, N V
1998-01-01
Sex differences in the hemispheric organization of verbal functions were shown in experiments with dichotic presentation of word lists, in Sternberg's memory scanning task, and in studies of EEG power and coherence while memorizing lists of dichotically presented words. The efficiency of word retrieval and the speed of memory scanning for stimuli presented to the right hemisphere were higher in women. EEG activation while memorizing words was more pronounced in men. There were negative correlations between left-ear word retrieval and EEG activation in women. The findings showed sexual dimorphism in functional connections within the cortical regions of the brain while memorizing verbal information. The changes in coherence correlated positively with the efficiency of word retrieval in women and inversely in men, suggesting that the physiological significance of changes in coherence differs in men and women.
Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.
2015-01-01
Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699
ERIC Educational Resources Information Center
Masapollo, Matthew; Polka, Linda; Ménard, Lucie
2016-01-01
To learn to produce speech, infants must effectively monitor and assess their own speech output. Yet very little is known about how infants perceive speech produced by an infant, which has higher voice pitch and formant frequencies compared to adult or child speech. Here, we tested whether pre-babbling infants (at 4-6 months) prefer listening to…
Evaluation of Extended-wear Hearing Aid Technology for Operational Military Use
2017-07-01
for a transparent hearing protection device that could protect the hearing of normal-hearing listeners without degrading auditory situational… method, suggest that continuous noise protection is also comparable to conventional earplug devices. Behavioral testing on listeners with normal… associated with the extended-wear hearing aid could be adapted to provide long-term hearing protection for listeners with normal hearing with minimal…
ERIC Educational Resources Information Center
Yoon, Su-Youn; Lee, Chong Min; Houghton, Patrick; Lopez, Melissa; Sakano, Jennifer; Loukina, Anastasia; Krovetz, Bob; Lu, Chi; Madani, Nitin
2017-01-01
In this study, we developed assistive tools and resources to support TOEIC® Listening test item generation. There has recently been an increased need for a large pool of items for these tests. This need has, in turn, inspired efforts to increase the efficiency of item generation while maintaining the quality of the created items. We aimed to…
ERIC Educational Resources Information Center
Nissan, Susan; And Others
One of the item types in the Listening Comprehension section of the Test of English as a Foreign Language (TOEFL) test is the dialogue. Because the dialogue item pool needs to have an appropriate balance of items at a range of difficulty levels, test developers have examined items at various difficulty levels in an attempt to identify their…
ERIC Educational Resources Information Center
de Bruijn, Gert-Jan; Spaans, Pieter; Jansen, Bastiaan; van't Riet, Jonathan
2016-01-01
Adolescent hearing loss is a public health problem that has eluded effective intervention. A persuasive message strategy was tested for its effectiveness on adolescents' intention to listen to music at a reduced volume. The messages manipulated both type of message frame [positive consequences of listening to music at a reduced volume…
The Effect of Repetition Types on Listening Tests in an EFL Setting
ERIC Educational Resources Information Center
Horness, Paul
2013-01-01
This study was an investigation into the effects of repetition on a listening comprehension test for second language learners. Repetition has been previously examined in a cursory way, usually as a secondary question to a primary treatment. Additionally, the method of repetition was limited to one way and to one treatment condition; therefore, it…
ERIC Educational Resources Information Center
Teng, Feng
2016-01-01
The present study was conducted in the context of learning English as a Foreign Language (EFL) with the purpose of assessing the roles of breadth and depth of vocabulary knowledge in academic listening comprehension. The Vocabulary Size Test (VST, Nation & Beglar, 2007) and the Word Associates Test (WAT, Read, 2004) were administered to…
Pleasurable music affects reinforcement learning according to the listener
Gold, Benjamin P.; Frank, Michael J.; Bogert, Brigitte; Brattico, Elvira
2013-01-01
Mounting evidence links the enjoyment of music to brain areas implicated in emotion and the dopaminergic reward system. In particular, dopamine release in the ventral striatum seems to play a major role in the rewarding aspect of music listening. Striatal dopamine also influences reinforcement learning, such that subjects with greater dopamine efficacy learn better to approach rewards while those with lesser dopamine efficacy learn better to avoid punishments. In this study, we explored the practical implications of musical pleasure through its ability to facilitate reinforcement learning via non-pharmacological dopamine elicitation. Subjects from a wide variety of musical backgrounds chose a pleasurable and a neutral piece of music from an experimenter-compiled database, and then listened to one or both of these pieces (according to pseudo-random group assignment) as they performed a reinforcement learning task dependent on dopamine transmission. We assessed musical backgrounds as well as typical listening patterns with the new Helsinki Inventory of Music and Affective Behaviors (HIMAB), and separately investigated behavior for the training and test phases of the learning task. Subjects with more musical experience trained better with neutral music and tested better with pleasurable music, while those with less musical experience exhibited the opposite effect. HIMAB results regarding listening behaviors and subjective music ratings indicate that these effects arose from different listening styles: namely, more affective listening in non-musicians and more analytical listening in musicians. In conclusion, musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors. These findings have implications in affective neuroscience, neuroaesthetics, learning, and music therapy. PMID:23970875
The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise.
Shen, Jing; Souza, Pamela E
2017-09-18
This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss.
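Speech reception thresholds of the kind measured here are typically obtained with an adaptive up-down procedure. The sketch below is a generic 1-up/1-down staircase run against a simulated listener; the logistic psychometric function, step size, starting SNR, and trial count are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def simulate_srt(true_srt_db, start_db=10.0, step_db=2.0, n_trials=80, seed=0):
    """1-up/1-down staircase: SNR drops after each correct trial and rises after
    each error, converging on the 50%-correct point of the psychometric function."""
    rng = np.random.default_rng(seed)
    snr = start_db
    reversals = []
    prev_dir = 0
    for _ in range(n_trials):
        # Simulated listener: logistic psychometric function centered at true_srt_db,
        # with an assumed slope of 1 per dB.
        p_correct = 1.0 / (1.0 + np.exp(-(snr - true_srt_db)))
        correct = rng.random() < p_correct
        direction = -1 if correct else +1
        if prev_dir and direction != prev_dir:
            reversals.append(snr)        # record SNR at each direction change
        prev_dir = direction
        snr += direction * step_db
    # Estimate the SRT as the mean SNR over the last several reversals.
    return float(np.mean(reversals[-6:]))

estimate = simulate_srt(true_srt_db=-3.0)
```

A dynamic-pitch benefit of the kind reported would then appear as a lower (better) SRT estimate for sentences carrying the pitch cue than for flattened-pitch sentences under the same noise.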
The Effect of Dynamic Pitch on Speech Recognition in Temporally Modulated Noise
Souza, Pamela E.
2017-01-01
Purpose: This study investigated the effect of dynamic pitch in target speech on older and younger listeners' speech recognition in temporally modulated noise. First, we examined whether the benefit from dynamic-pitch cues depends on the temporal modulation of noise. Second, we tested whether older listeners can benefit from dynamic-pitch cues for speech recognition in noise. Last, we explored the individual factors that predict the amount of dynamic-pitch benefit for speech recognition in noise. Method: Younger listeners with normal hearing and older listeners with varying levels of hearing sensitivity participated in the study, in which speech reception thresholds were measured with sentences in nonspeech noise. Results: The younger listeners benefited more from dynamic pitch for speech recognition in temporally modulated noise than unmodulated noise. Older listeners were able to benefit from the dynamic-pitch cues but received less benefit from noise modulation than the younger listeners. For those older listeners with hearing loss, the amount of hearing loss strongly predicted the dynamic-pitch benefit for speech recognition in noise. Conclusions: Dynamic-pitch cues aid speech recognition in noise, particularly when noise has temporal modulation. Hearing loss negatively affects the dynamic-pitch benefit to older listeners with significant hearing loss. PMID:28800370
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
Dikker, Suzanne; Silbert, Lauren J; Hasson, Uri; Zevin, Jason D
2014-04-30
Recent research has shown that the degree to which speakers and listeners exhibit similar brain activity patterns during human linguistic interaction is correlated with communicative success. Here, we used an intersubject correlation approach in fMRI to test the hypothesis that a listener's ability to predict a speaker's utterance increases such neural coupling between speakers and listeners. Nine subjects listened to recordings of a speaker describing visual scenes that varied in the degree to which they permitted specific linguistic predictions. In line with our hypothesis, the temporal profile of listeners' brain activity was significantly more synchronous with the speaker's brain activity for highly predictive contexts in left posterior superior temporal gyrus (pSTG), an area previously associated with predictive auditory language processing. In this region, predictability differentially affected the temporal profiles of brain responses in the speaker and listeners respectively, in turn affecting correlated activity between the two: whereas pSTG activation increased with predictability in the speaker, listeners' pSTG activity instead decreased for more predictable sentences. Listeners additionally showed stronger BOLD responses for predictive images before sentence onset, suggesting that highly predictable contexts lead comprehenders to preactivate predicted words.
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Koeda, Michihiko; Belin, Pascal; Hama, Tomoko; Masuda, Tadashi; Matsuura, Masato; Okubo, Yoshiro
2013-01-01
The Montreal Affective Voices (MAVs) consist of a database of non-verbal affect bursts portrayed by Canadian actors, and high recognition accuracies were observed in Canadian listeners. Whether listeners from other cultures would be as accurate is unclear. We tested for cross-cultural differences in perception of the MAVs: Japanese listeners were asked to rate the MAVs on several affective dimensions, and ratings were compared to those obtained by Canadian listeners. Significant Group × Emotion interactions were observed for ratings of Intensity, Valence, and Arousal. Whereas Intensity and Valence ratings did not differ across cultural groups for sad and happy vocalizations, they were significantly less intense and less negative in Japanese listeners for angry, disgusted, and fearful vocalizations. Similarly, pleased vocalizations were rated as less intense and less positive by Japanese listeners. These results demonstrate important cross-cultural differences in affective perception not just of non-verbal vocalizations expressing positive affect (Sauter et al., 2010), but also of vocalizations expressing basic negative emotions. PMID:23516137
Health literacy and pap testing in insured women.
Mazor, K M; Williams, A E; Roblin, D W; Gaglio, B; Cutrona, S L; Costanza, M E; Han, P K J; Wagner, J L; Fouayzi, H; Field, T S
2014-12-01
Several studies have found a link between health literacy and participation in cancer screening. Most, however, have relied on self-report to determine screening status. Further, until now, health literacy measures have assessed print literacy only. The purpose of this study was to examine the relationship between participation in cervical cancer screening (Papanicolaou [Pap] testing) and two forms of health literacy: reading and listening. A demographically diverse sample was recruited from a pool of insured women in Georgia, Massachusetts, Hawaii, and Colorado between June 2009 and April 2010. Health literacy was assessed using the Cancer Message Literacy Test-Listening and the Cancer Message Literacy Test-Reading. Adherence to cervical cancer screening was ascertained through electronic administrative data on Pap test utilization. The relationship between health literacy and adherence to evidence-based recommendations for Pap testing was examined using multivariate logistic regression models. Data from 527 women aged 40 to 65 were analyzed and are reported here. Of these 527 women, 397 (75 %) were up to date with Pap testing. Higher health literacy scores for listening but not reading predicted being up to date. The fact that health literacy listening was associated with screening behavior even in this insured population suggests that it has independent effects beyond those of access to care. Patients who have difficulty understanding spoken recommendations about cancer screening may be at risk for underutilizing screening as a result.
The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children
ERIC Educational Resources Information Center
Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin
2012-01-01
Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…
ERIC Educational Resources Information Center
Kim, Youjin; Tracy-Ventura, Nicole; Jung, Yeonjoo
2016-01-01
Elicited imitation requires listeners to listen and repeat sentences as accurately as possible. In second language acquisition (SLA) research it has been used for a variety of purposes. Recently, versions of the same elicited imitation test (EIT) have been created in 6 languages with the purpose of measuring second language proficiency (Ortega…
How Much Videos Win over Audios in Listening Instruction for EFL Learners
ERIC Educational Resources Information Center
Yasin, Burhanuddin; Mustafa, Faisal; Permatasari, Rizki
2017-01-01
This study aims at comparing the benefits of using videos instead of audios for improving students' listening skills. This experimental study used a pre-test and post-test control group design. The sample, selected by cluster random sampling, consisted of 32 second-year high school students in each group. The instruments used were…
The MOC Reflex during Active Listening to Speech
ERIC Educational Resources Information Center
Garinis, Angela C.; Glattke, Theodore; Cone, Barbara K.
2011-01-01
Purpose: The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Method: Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in…
Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo
2009-04-01
The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. 
Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
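The audiovisual benefit defined in this abstract is simply the auditory-only SRT minus the audiovisual SRT, so a positive value means subtitles let the listener tolerate a worse signal-to-noise ratio. A minimal sketch of that arithmetic (function and variable names are illustrative, not from the study):

```python
def audiovisual_benefit(srt_auditory_db, srt_audiovisual_db):
    """Audiovisual benefit (dB SNR): how much lower (better) the
    Speech Reception Threshold is when subtitles accompany the
    speech. Positive values mean the subtitles helped."""
    return srt_auditory_db - srt_audiovisual_db

# An auditory-only SRT of -3 dB SNR and an audiovisual SRT of
# -5 dB SNR give a 2 dB benefit, comparable to the ~2 dB the
# study reports for subtitles at ~74% ASR accuracy.
print(audiovisual_benefit(-3.0, -5.0))  # → 2.0
```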
Aided speech recognition in single-talker competition by elderly hearing-impaired listeners
NASA Astrophysics Data System (ADS)
Coughlin, Maureen; Humes, Larry
2004-05-01
This study examined the speech-identification performance in one-talker interference conditions that increased in complexity while audibility was ensured over a wide bandwidth (200-4000 Hz). Factorial combinations of three independent variables were used to vary the amount of informational masking. These variables were: (1) competition playback direction (forward or reverse); (2) gender match between target and competition talkers (same or different); and (3) target talker uncertainty (one of three possible talkers from trial to trial). Four groups of listeners, two elderly hearing-impaired groups differing in age (65-74 and 75-84 years) and two young normal-hearing groups, were tested. One of the groups of young normal-hearing listeners was tested under acoustically equivalent test conditions and one was tested under perceptually equivalent test conditions. The effect of each independent variable on speech-identification performance and informational masking was generally consistent with expectations. Group differences in the observed informational masking were most pronounced for the oldest group of hearing-impaired listeners. The eight measures of speech-identification performance were found to be strongly correlated with one another, and individual differences in speech understanding performance among the elderly were found to be associated with age and level of education. [Work supported, in part, by NIA.]
Leshem, Rotem
2016-02-01
This study examined the relationship between trait impulsivity and cognitive control, as measured by the Barratt Impulsiveness Scale (BIS) and a focused attention dichotic listening to words task, respectively. In the task, attention was manipulated in two attention conditions differing in their cognitive control demands: one in which attention was directed to one ear at a time for a whole block of trials (blocked condition) and another in which attention was switched pseudo-randomly between the two ears from trial to trial (mixed condition). Results showed that high impulsivity participants exhibited more false alarm and intrusion errors as well as a lesser ability to distinguish between stimuli in the mixed condition, as compared to low impulsivity participants. In the blocked condition, the performance levels of the two groups were comparable with respect to these measures. In addition, total BIS scores were correlated with intrusions and laterality index in the mixed but not the blocked condition. The findings suggest that high impulsivity individuals may be less prone to attentional difficulties when cognitive load is relatively low. In contrast, when attention switching is involved, high impulsivity is associated with greater difficulty in inhibiting responses and resolving cognitive conflict than is low impulsivity, as reflected in error-prone information processing. The conclusion is that trait impulsivity in a non-clinical population is manifested more strongly when attention switching is required than during maintained attention. This may have important implications for the conceptualization and treatment of impulsivity in both non-clinical and clinical populations.
Isbell, Elif; Wray, Amanda Hampton; Neville, Helen J
2016-11-01
Selective attention, the ability to enhance the processing of particular input while suppressing the information from other concurrent sources, has been postulated to be a foundational skill for learning and academic achievement. The neural mechanisms of this foundational ability are both vulnerable and enhanceable in children from lower socioeconomic status (SES) families. In the current study, we assessed individual differences in neural mechanisms of this malleable brain function in children from lower SES families. Specifically, we investigated the extent to which individual differences in neural mechanisms of selective auditory attention accounted for variability in nonverbal cognitive abilities in lower SES preschoolers. We recorded event-related potentials (ERPs) during a dichotic listening task and administered nonverbal IQ tasks to 124 lower SES children (77 females) between the ages of 40 and 67 months. The attention effect, i.e., the difference in ERP mean amplitudes elicited by identical probes embedded in stories when attended versus unattended, was significantly correlated with nonverbal IQ scores. Larger, more positive attention effects over the anterior and central electrode locations were associated with superior nonverbal IQ performance. Our findings provide initial evidence for prominent individual differences in neural indices of selective attention in lower SES children. Furthermore, our results indicate a noteworthy relationship between neural mechanisms of selective attention and nonverbal IQ performance in lower SES preschoolers. These findings provide the basis for future research to identify the factors that contribute to such individual differences in neural mechanisms of selective attention. © 2015 John Wiley & Sons Ltd.
Sander, David; Grandjean, Didier; Pourtois, Gilles; Schwartz, Sophie; Seghier, Mohamed L; Scherer, Klaus R; Vuilleumier, Patrik
2005-12-01
Multiple levels of processing are thought to be involved in the appraisal of emotionally relevant events, with some processes being engaged relatively independently of attention, whereas other processes may depend on attention and current task goals or context. We conducted an event-related fMRI experiment to examine how processing angry voice prosody, an affectively and socially salient signal, is modulated by voluntary attention. To manipulate attention orthogonally to emotional prosody, we used a dichotic listening paradigm in which meaningless utterances, pronounced with either angry or neutral prosody, were presented simultaneously to both ears on each trial. In two successive blocks, participants selectively attended to either the left or right ear and performed a gender-decision on the voice heard on the target side. Our results revealed a functional dissociation between different brain areas. Whereas the right amygdala and bilateral superior temporal sulcus responded to anger prosody irrespective of whether it was heard from a to-be-attended or to-be-ignored voice, the orbitofrontal cortex and the cuneus in medial occipital cortex showed greater activation to the same emotional stimuli when the angry voice was to-be-attended rather than to-be-ignored. Furthermore, regression analyses revealed a strong correlation between orbitofrontal regions and sensitivity on a behavioral inhibition scale measuring proneness to anxiety reactions. Our results underscore the importance of emotion and attention interactions in social cognition by demonstrating that multiple levels of processing are involved in the appraisal of emotionally relevant cues in voices, and by showing a modulation of some emotional responses by both the current task-demands and individual differences.
Effects of aging and gender on interhemispheric function.
Bellis, T J; Wilber, L A
2001-04-01
The ability of the two hemispheres of the brain to communicate with one another via the corpus callosum is important for a wide variety of sensory, motor, and cognitive functions, many of them communication related. Anatomical evidence suggests that aging results in structural changes in the corpus callosum and that the course over time of age-related changes in corpus callosum structure may depend on the gender of the individual. Further, it has been hypothesized that age- and gender-related changes in corpus callosum structure may result in concomitant decreased performance on tasks that are reliant on interhemispheric integrity. The purpose of this study was to investigate the effects of age and gender on auditory behavioral and visuomotor temporal indices of interhemispheric function across the life span of the normal adult. Results from 120 consistently right-handed adults from age 20 to 75 years revealed that interhemispheric integrity, as measured by dichotic listening, auditory temporal patterning, and visuomotor interhemispheric transfer time tasks, decreases relatively early in the adult life span (i.e., between the ages of 40 and 55 years) and shows no further decrease thereafter. In addition, the course over time of interhemispheric decline is different for men compared to women for some tasks. These findings suggest that decreased interhemispheric function may be a possible factor contributing to auditory and communication difficulties experienced by aging adults. In addition, results of this study hold implications for the clinical assessment of interhemispheric function in aging adults and for future research into the functional ramifications of decreased multimodality interhemispheric transfer.
Tomita, Nozomi; Imai, Shoji; Kanayama, Yusuke; Kawashima, Issaku; Kumano, Hiroaki
2017-06-01
While dichotic listening (DL) was originally intended to measure bottom-up selective attention, it has also become a tool for measuring top-down selective attention. This study investigated the brain regions related to top-down selective and divided attention DL tasks and a 2-back task using alphanumeric and Japanese numeric sounds. Thirty-six healthy participants underwent near-infrared spectroscopy scanning while performing a top-down selective attentional DL task, a top-down divided attentional DL task, and a 2-back task. Pearson's correlations were calculated to show relationships between oxy-Hb concentration in each brain region and the score of each cognitive task. Different brain regions were activated during the DL and 2-back tasks. Brain regions activated in the top-down selective attention DL task were the left inferior prefrontal gyrus and left pars opercularis. The left temporopolar area was activated in the top-down divided attention DL task, and the left frontopolar area and left dorsolateral prefrontal cortex were activated in the 2-back task. As further evidence for the finding that each task measured different cognitive and brain area functions, neither the percentages of correct answers for the three tasks nor the response times for the selective attentional task and the divided attentional task were correlated to one another. Thus, the DL and 2-back tasks used in this study can assess multiple areas of cognitive, brain-related dysfunction to explore their relationship to different psychiatric and neurodevelopmental disorders.
Speech processing: from peripheral to hemispheric asymmetry of the auditory system.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
2012-01-01
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas into two distinct temporal integration windows (a fast one on the left and a slower one on the right), modeled through the asymmetric sampling in time theory; or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging, which mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Is children's listening effort in background noise influenced by the speaker's voice quality?
Sahlén, Birgitta; Haake, Magnus; von Lochow, Heike; Holm, Lucas; Kastberg, Tobias; Brännström, K Jonas; Lyberg-Åhlander, Viveka
2018-07-01
The present study aims at exploring the influence of voice quality on listening effort in children performing a language comprehension test with sentences of increasing difficulty. Listening effort is explored in relation to gender (cisgender). The study has a between-groups design. Ninety-three mainstreamed children aged 8;2 to 9;3 with typical language development participated. The children were randomly assigned to two groups (n = 46/47) with equal allocation of boys and girls, and for the analysis to four groups depending on gender and voice condition. Working memory capacity and executive functions were tested in quiet. A digital version of a language comprehension test (the TROG-2) was used to measure the effect of voice quality on listening effort, measured as response time in a forced-choice paradigm. The groups listened to the sentences in recordings of the same female voice, one group with a typical voice and one with a dysphonic voice, both in competing multi-talker babble noise. Response times were logged after a time buffer between the sentence ending and the indication of a response. There was a significant increase in response times with increased task difficulty, and response times differed significantly between the two voice conditions. The girls in the dysphonic condition were slower with increasing task difficulty. A dysphonic voice clearly adds to the noise burden, and listening effort is greater in girls than in boys when the teacher speaks with a dysphonic voice against a noisy background. These findings might mirror gender differences in coping strategies in challenging contexts and have important implications for education.
Effects of hearing aid settings for electric-acoustic stimulation.
Dillon, Margaret T; Buss, Emily; Pillsbury, Harold C; Adunka, Oliver F; Buchman, Craig A; Adunka, Marcia C
2014-02-01
Cochlear implant (CI) recipients with postoperative hearing preservation may utilize an ipsilateral bimodal listening condition known as electric-acoustic stimulation (EAS). Studies on EAS have reported significant improvements in speech perception abilities over CI-alone listening conditions. Adjustments to the hearing aid (HA) settings to match prescription targets routinely used in the programming of conventional amplification may provide additional gains in speech perception abilities. Investigate the difference in users' speech perception scores when listening with the recommended HA settings for EAS patients versus HA settings adjusted to match National Acoustic Laboratories' nonlinear fitting procedure version 1 (NAL-NL1) targets. Prospective analysis of the influence of HA settings. Nine EAS recipients with greater than 12 mo of listening experience with the DUET speech processor. Subjects were tested in the EAS listening condition with two different HA setting configurations. Speech perception materials included consonant-nucleus-consonant (CNC) words in quiet, AzBio sentences in 10-talker speech babble at a signal-to-noise ratio (SNR) of +10, and the Bamford-Kowal-Bench sentences in noise (BKB-SIN) test. The speech perception performance on each test measure was compared between the two HA configurations. Subjects experienced a significant improvement in speech perception abilities with the HA settings adjusted to match NAL-NL1 targets over the recommended HA settings. EAS subjects have been shown to experience improvements in speech perception abilities when listening to ipsilateral combined stimulation. This population's abilities may be underestimated with current HA settings. Tailoring the HA output to the patient's individual hearing loss offers improved outcomes on speech perception measures. American Academy of Audiology.
Speaker and Accent Variation Are Handled Differently: Evidence in Native and Non-Native Listeners
Kriengwatana, Buddhamas; Terry, Josephine; Chládková, Kateřina; Escudero, Paola
2016-01-01
Listeners are able to cope with between-speaker variability in speech that stems from anatomical sources (i.e. individual and sex differences in vocal tract size) and sociolinguistic sources (i.e. accents). We hypothesized that listeners adapt to these two types of variation differently because prior work indicates that adapting to speaker/sex variability may occur pre-lexically while adapting to accent variability may require learning from attention to explicit cues (i.e. feedback). In Experiment 1, we tested our hypothesis by training native Dutch listeners and Australian-English (AusE) listeners without any experience with Dutch or Flemish to discriminate between the Dutch vowels /I/ and /ε/ from a single speaker. We then tested their ability to classify /I/ and /ε/ vowels of a novel Dutch speaker (i.e. speaker or sex change only), or vowels of a novel Flemish speaker (i.e. speaker or sex change plus accent change). We found that both Dutch and AusE listeners could successfully categorize vowels if the change involved a speaker/sex change, but not if the change involved an accent change. When AusE listeners were given feedback on their categorization responses to the novel speaker in Experiment 2, they were able to successfully categorize vowels involving an accent change. These results suggest that adapting to accents may be a two-step process, whereby the first step involves adapting to speaker differences at a pre-lexical level, and the second step involves adapting to accent differences at a contextual level, where listeners have access to word meaning or are given feedback that allows them to appropriately adjust their perceptual category boundaries. PMID:27309889
Advantages of binaural amplification to acceptable noise level of directional hearing aid users.
Kim, Ja-Hee; Lee, Jae Hee; Lee, Ho-Ki
2014-06-01
The goal of the present study was to examine whether Acceptable Noise Levels (ANLs) would be lower (indicating greater acceptance of noise) in the binaural than in the monaural listening condition, and whether the meaningfulness of background speech noise would affect ANLs for directional microphone hearing aid users. In addition, any relationships between individual binaural benefits on ANLs and the individuals' demographic information were investigated. Fourteen hearing aid users (mean age, 64 years) participated in experimental testing. For the ANL calculation, listeners' most comfortable listening levels and background noise levels were measured. Using Korean ANL material, ANLs of all participants were evaluated under monaural and binaural amplification in a counterbalanced order. The ANLs were also compared across five types of competing speech noise, consisting of 1- through 8-talker background speech maskers. Seven young normal-hearing listeners (mean age, 27 years) completed the same measurements as a pilot test. The results demonstrated that directional hearing aid users accepted more noise (lower ANLs) with binaural amplification than with monaural amplification, regardless of the type of competing speech. When the background speech noise became more meaningful, hearing-impaired listeners accepted less noise (higher ANLs), revealing that the ANL is dependent on the intelligibility of the competing speech. The individuals' binaural advantages in ANLs were significantly greater for listeners with longer experience of hearing aids, yet were not related to their age or hearing thresholds. Binaural directional microphone processing allowed hearing aid users to accept a greater amount of background noise, which may in turn improve listeners' hearing aid success. Informational masking substantially influenced background noise acceptance.
Given a significant association between ANLs and duration of hearing aid usage, ANL measurement can be useful for clinical counseling of binaural hearing aid candidates or unsuccessful users.
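The ANL referred to in this record is conventionally computed as the most comfortable listening level (MCL) minus the highest acceptable background noise level (BNL), so lower values mean more noise is accepted. A small sketch under that standard definition (the function name and example levels are ours, not the study's):

```python
def acceptable_noise_level(mcl_db, bnl_db):
    """ANL (dB) = most comfortable listening level (MCL) minus the
    maximum acceptable background noise level (BNL). Lower ANLs
    indicate greater acceptance of background noise."""
    return mcl_db - bnl_db

# A listener comfortable at 65 dB who tolerates babble up to 58 dB
# has an ANL of 7 dB; if binaural amplification raises the tolerated
# noise to 61 dB, the ANL drops to 4 dB (greater noise acceptance).
print(acceptable_noise_level(65, 58))  # → 7
print(acceptable_noise_level(65, 61))  # → 4
```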
Free Field Word recognition test in the presence of noise in normal hearing adults.
Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge
In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. To present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure tone audiometry, a speech recognition test was applied in a free field condition with monosyllables and disyllables, using standardized material, in three listening situations: an optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free field environment was arranged in which speech was presented to the subject being tested from two speakers located at 45°, and noise from a third speaker located at 180°. All participants had speech audiometry results in the free field between 88% and 100% in the three listening situations. The Word Recognition Test in a Free Field in the Presence of Noise proved to be easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring the interference of noise with speech perception abilities. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Sulaiman, A H; Seluakumaran, K; Husain, R
2013-08-01
To investigate listening habits and hearing risks associated with the use of personal listening devices among urban high school students in Malaysia. Cross-sectional, descriptive study. In total, 177 personal listening device users (13-16 years old) were interviewed to elicit their listening habits (e.g. listening duration, volume setting) and symptoms of hearing loss. Their listening levels were also determined by asking them to set their usual listening volume on an Apple iPod™ playing a pre-selected song. The iPod's sound output was measured with an artificial ear connected to a sound level meter. Subjects also underwent pure tone audiometry to ascertain their hearing thresholds at standard frequencies (0.5-8 kHz) and extended high frequencies (9-16 kHz). The mean measured listening level and listening duration for all subjects were 72.2 dBA and 1.2 h/day, respectively. Their self-reported listening levels were highly correlated with the measured levels (P < 0.001). Subjects who listened at higher volumes also tended to listen for longer durations (P = 0.012). Male subjects listened at a significantly higher volume than female subjects (P = 0.008). When sound exposure levels were compared with the recommended occupational noise exposure limit, 4.5% of subjects were found to be listening at levels which would require mandatory hearing protection in an occupational setting. Hearing loss (≥25 dB hearing level at one or more standard test frequencies) was detected in 7.3% of subjects. Subjects' sound exposure levels from the devices were positively correlated with their hearing thresholds at two of the extended high frequencies (11.2 and 14 kHz), which could indicate an early stage of noise-induced hearing loss. Although the average high school student listened at safe levels, a small percentage of listeners were exposed to harmful sound levels. Preventive measures are needed to avoid permanent hearing damage in high-risk listeners.
Copyright © 2013 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
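The comparison against the occupational limit in the abstract above rests on the equal-energy principle: a measured A-weighted level and daily listening time are converted to an 8-hour equivalent exposure level and compared with the 85 dBA action level. A minimal sketch (function name and rounding are illustrative, not from the study):

```python
import math

def eight_hour_equivalent(leq_dba: float, hours_per_day: float) -> float:
    """Convert a measured A-weighted level and daily listening time to an
    8-hour equivalent exposure level (equal-energy principle)."""
    return leq_dba + 10 * math.log10(hours_per_day / 8)

# Mean values reported in the study: 72.2 dBA for 1.2 h/day.
lex_8h = eight_hour_equivalent(72.2, 1.2)
print(round(lex_8h, 1))  # well below the 85 dBA occupational action level
```

Under this convention, halving the listening time permits a 3 dB higher level for the same daily sound energy dose.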
Laterality, spatial abilities, and accident proneness.
Voyer, Susan D; Voyer, Daniel
2015-01-01
Although handedness as a measure of cerebral specialization has been linked to accident proneness, more direct measures of laterality are rarely considered. The present study aimed to fill that gap in the existing research. In addition, individual difference factors in accident proneness were further examined with the inclusion of mental rotation and navigation abilities measures. One hundred and forty participants were asked to complete the Mental Rotations Test, the Santa Barbara Sense of Direction scale, the Greyscales task, the Fused Dichotic Word Test, the Waterloo Handedness Questionnaire, and a grip strength task before answering questions related to number of accidents in five areas. Results indicated that handedness scores, absolute visual laterality score, absolute response time on the auditory laterality index, and navigation ability were significant predictors of the total number of accidents. Results are discussed with respect to cerebral hemispheric specialization and risk-taking attitudes and behavior.
Fassaert, Thijs; van Dulmen, Sandra; Schellevis, François; Bensing, Jozien
2007-11-01
Active listening is a prerequisite for a successful healthcare encounter, bearing potential therapeutic value especially in clinical situations that require no specific medical intervention. Although generally acknowledged as such, active listening has not been studied in depth. This paper describes the development of the Active Listening Observation Scale (ALOS-global), an observation instrument measuring active listening and its validation in a sample of general practice consultations for minor ailments. Five hundred and twenty-four videotaped general practice consultations involving minor ailments were observed with the ALOS-global. Hypotheses were tested to determine validity, incorporating patients' perception of GPs' affective performance, GPs' verbal attention, patients' self-reported anxiety level and gender differences. The final 7-item ALOS-global had acceptable inter- and intra-observer agreement. Factor analysis revealed one homogeneous dimension. The scale score was positively related to verbal attention measured by RIAS, to patients' perception of GPs' performance and to their pre-visit anxiety level. Female GPs received higher active listening scores. The results of this study are promising concerning the psychometric properties of the ALOS-global. More research is needed to confirm these preliminary findings. After establishing how active listening differentiates between health professionals, the ALOS-global may become a valuable tool in feedback and training aimed at increasing listening skills.
Brand, Sophie; Ernestus, Mirjam
2018-05-01
In casual conversations, words often lack segments. This study investigates whether listeners rely on their experience with reduced word pronunciation variants during the processing of single segment reduction. We tested three groups of listeners in a lexical decision experiment with French words produced either with or without word-medial schwa (e.g., /ʀəvy/ and /ʀvy/ for revue). Participants also rated the relative frequencies of the two pronunciation variants of the words. If the recognition accuracy and reaction times (RTs) for a given listener group correlate best with the frequencies of occurrence holding for that given listener group, recognition is influenced by listeners' exposure to these variants. Native listeners' relative frequency ratings correlated well with their accuracy scores and RTs. Dutch advanced learners' accuracy scores and RTs were best predicted by their own ratings. In contrast, the accuracy and RTs from Dutch beginner learners of French could not be predicted by any relative frequency rating; the rating task was probably too difficult for them. The participant groups showed behaviour reflecting their difference in experience with the pronunciation variants. Our results strongly suggest that listeners store the frequencies of occurrence of pronunciation variants, and consequently the variants themselves.
The relationship between speech recognition, behavioural listening effort, and subjective ratings.
Picou, Erin M; Ricketts, Todd A
2018-06-01
The purpose of this study was to evaluate the reliability and validity of four subjective questions related to listening effort. A secondary purpose of this study was to evaluate the effects of hearing aid beamforming microphone arrays on word recognition and listening effort. Participants answered subjective questions immediately following testing in a dual-task paradigm with three microphone settings in a moderately reverberant laboratory environment in two noise configurations. Participants rated their: (1) mental work, (2) desire to improve the situation, (3) tiredness, and (4) desire to give up. Data were analysed using repeated measures and reliability analyses. Eighteen adults with symmetrical sensorineural hearing loss participated. Beamforming differentially affected word recognition and listening effort. Analysis revealed the same pattern of results for behavioural listening effort and subjective ratings of desire to improve the situation. Conversely, ratings of work revealed the same pattern of results as word recognition performance. Ratings of tiredness and desire to give up were unaffected by hearing aid microphone or noise configuration. Participant ratings of their desire to control the listening situation appear to be reliable subjective indicators of listening effort that align with results from a behavioural measure of listening effort.
Listeners' comprehension of uptalk in spontaneous speech.
Tomlinson, John M; Fox Tree, Jean E
2011-04-01
Listeners' comprehension of phrase final rising pitch on declarative utterances, or uptalk, was examined to test the hypothesis that prolongations might differentiate conflicting functions of rising pitch. In Experiment 1 we found that listeners rated prolongations as indicating more speaker uncertainty, but that rising pitch was unrelated to ratings. In Experiment 2 we found that prolongations interacted with rising pitch when listeners monitored for words in the subsequent utterance. Words preceded by prolonged uptalk were monitored faster than words preceded by non-prolonged uptalk. In Experiment 3 we found that the interaction between rising pitch and prolongations depended on listeners' beliefs about speakers' mental states. Results support the theory that temporal and situational context are important in determining intonational meaning. Copyright © 2010 Elsevier B.V. All rights reserved.
Sörqvist, Patrik; Hurtig, Anders; Ljung, Robert; Rönnberg, Jerker
2014-01-01
The purpose of this experiment was to investigate whether classroom reverberation influences second-language (L2) listening comprehension. Moreover, we investigated whether individual differences in baseline L2 proficiency and in working memory capacity (WMC) modulate the effect of reverberation time on L2 listening comprehension. The results showed that L2 listening comprehension decreased as reverberation time increased. Participants with higher baseline L2 proficiency were less susceptible to this effect. WMC was also related to the effect of reverberation (although just barely significant), but the effect of WMC was eliminated when baseline L2 proficiency was statistically controlled. Taken together, the results suggest that top-down cognitive capabilities support listening in adverse conditions. Potential implications for the Swedish national tests in English are discussed. PMID:24646043
ERIC Educational Resources Information Center
Lee, HyeSun; Winke, Paula
2013-01-01
We adapted three practice College Scholastic Ability Tests (CSAT) of English listening, each with five-option items, to create four- and three-option versions by asking 73 Korean speakers or learners of English to eliminate the least plausible options in two rounds. Two hundred and sixty-four Korean high school English-language learners formed…
ERIC Educational Resources Information Center
Couper, Graeme
2011-01-01
While there is growing evidence that pronunciation teaching can work, there is a need to establish what it is that makes it work. The study reported here tested for the effect of two particular factors: socially constructed metalanguage (SCM) and critical listening (CL). SCM is a term proposed for metalanguage developed by students working…
ERIC Educational Resources Information Center
Zhu, Xinhua; Li, Xueyan; Yu, Guoxing; Cheong, Choo Mui; Liao, Xian
2016-01-01
Integrated assessment tasks have been increasingly used in language tests, but the underlying constructs of integrated tasks remain elusive. This study aimed to improve understanding of the construct of integrated writing tasks in Chinese Language examinations in Hong Kong by looking at the language competences measured in the…
Factors contributing to speech perception scores in long-term pediatric cochlear implant users.
Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A
2011-02-01
The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. 
Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.
Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H
2015-09-01
To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
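The root mean square error used to quantify localization above is simply the RMS deviation between judged and actual loudspeaker azimuths. A hypothetical sketch (trial data invented for illustration):

```python
import math

def rms_error(responses_deg, targets_deg):
    """Root-mean-square error between judged and actual source azimuths."""
    diffs = [r - t for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical trial: five target azimuths from a loudspeaker arc and the
# listener's pointing responses, all in degrees.
targets = [-90, -45, 0, 45, 90]
responses = [-75, -50, 10, 40, 60]
print(round(rms_error(responses, targets), 1))
```

A perfect localizer scores 0; chance performance over a 180-degree arc yields a much larger RMS error, which is why the metric separates NH, SSD-CI, and BCI groups well.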
NASA Astrophysics Data System (ADS)
Samardzic, Nikolina
The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method.
Lastly, as an example of the significance of speech intelligibility evaluation in the context of an applicable listening environment, as indicated in this research, it was found that the jury test participants required on average an approximate 3 dB increase in sound pressure level of speech material while driving and listening compared to when just listening, for an equivalent speech intelligibility performance and the same listening task.
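The SII itself is defined in ANSI S3.5 with band-importance functions and several corrections; its core idea, an importance-weighted sum of per-band audibility, can be sketched as follows (the band SNRs and equal weights here are hypothetical simplifications, not values from this research):

```python
def sii_sketch(snr_db_per_band, importance):
    """Minimal SII-style computation: clamp each band's SNR into an
    audibility factor in [0, 1] over a 30-dB range, then take the
    importance-weighted sum (simplified from ANSI S3.5)."""
    sii = 0.0
    for snr, w in zip(snr_db_per_band, importance):
        audibility = min(1.0, max(0.0, (snr + 15.0) / 30.0))
        sii += w * audibility
    return sii

# Four hypothetical frequency bands with equal importance weights.
weights = [0.25, 0.25, 0.25, 0.25]
print(round(sii_sketch([15, 9, 0, -15], weights), 3))
```

An SII near 1.0 means essentially all speech cues are audible; values below roughly 0.45 are commonly associated with degraded sentence intelligibility, which is what the sSRT jury testing probes.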
The MOC reflex during active listening to speech.
Garinis, Angela C; Glattke, Theodore; Cone, Barbara K
2011-10-01
The purpose of this study was to test the hypothesis that active listening to speech would increase medial olivocochlear (MOC) efferent activity for the right vs. the left ear. Click-evoked otoacoustic emissions (CEOAEs) were evoked by 60-dB p.e. SPL clicks in 13 normally hearing adults in 4 test conditions for each ear: (a) in quiet; (b) with 60-dB SPL contralateral broadband noise; (c) with words embedded (at -3-dB signal-to-noise ratio [SNR]) in 60-dB SPL contralateral noise during which listeners directed attention to the words; and (d) for the same SNR as in the 3rd condition, with words played backwards. There was greater suppression during active listening compared with passive listening that was apparent in the latency range of 6- to 18-ms poststimulus onset. Ear differences in CEOAE amplitude were observed in all conditions, with right-ear amplitudes larger than those for the left. The absolute difference between CEOAE amplitude in quiet and with contralateral noise, a metric of suppression, was equivalent for right and left ears. When the amplitude differences were normalized, suppression was greater for noise presented to the right and the effect measured for a probe in the left ear. The findings support the theory that cortical mechanisms involved in listening to speech affect cochlear function through the MOC efferent system.
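The abstract's distinction between absolute and normalized suppression can be made concrete: with hypothetical CEOAE amplitudes, two ears can show identical absolute suppression yet different normalized suppression once the baseline amplitude difference between ears is factored out.

```python
def absolute_suppression(amp_quiet, amp_noise):
    """Raw amplitude difference between quiet and contralateral-noise conditions."""
    return amp_quiet - amp_noise

def normalized_suppression(amp_quiet, amp_noise):
    """Suppression as a proportion of the quiet-condition amplitude, which
    removes baseline ear differences in CEOAE amplitude."""
    return (amp_quiet - amp_noise) / amp_quiet

# Hypothetical amplitudes: right ear larger at baseline, equal absolute suppression.
print(absolute_suppression(12.0, 10.5), absolute_suppression(10.0, 8.5))
print(round(normalized_suppression(12.0, 10.5), 3),
      round(normalized_suppression(10.0, 8.5), 3))
```

This is why the study's conclusions differ depending on the metric: equal absolute differences become unequal once normalized to each ear's baseline.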
Lopez Valdes, Alejandro; Mc Laughlin, Myles; Viani, Laura; Walshe, Peter; Smith, Jaclyn; Zeng, Fan-Gang; Reilly, Richard B.
2014-01-01
Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listener's speech perception, resulting in large performance differences. Non-speech based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and active response that can be difficult or even impossible in special patient populations. Here, a completely objective cortical evoked potential based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential response to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p<0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users. PMID:24599314
The development of a modified spectral ripple test.
Aronoff, Justin M; Landsberger, David M
2013-08-01
Poor spectral resolution can be a limiting factor for hearing impaired listeners, particularly for complex listening tasks such as speech understanding in noise. Spectral ripple tests are commonly used to measure spectral resolution, but these tests contain a number of potential confounds that can make interpretation of the results difficult. To measure spectral resolution while avoiding those confounds, a modified spectral ripple test with dynamically changing ripples was created, referred to as the spectral-temporally modulated ripple test (SMRT). This paper describes the SMRT and provides evidence that it is sensitive to changes in spectral resolution.
Joshi, Anurag; Kiran, Ravi; Sah, Ash Narayan
2017-01-01
This paper studies the impact of musical religious songs (hymns) on managing stress in Indian engineering students through Galvanic Skin Response (GSR). The objective is to find out whether listening to hymns can reduce the value of GSR. Sample students were selected through initial screening, and those who reported high mental stress during the interview were selected for the main drills. All readings were taken using a GSR meter, and a statistical t-test was used for hypothesis testing. The study examines the relation between GSR and stress. The results indicated that listening to hymns had a significant effect on the value of GSR: GSR decreased at t = 300 seconds for the experimental group, who listened to hymns, compared with the control group (not exposed to the same). It is recommended that this simple yet efficient traditional technique of listening to hymns be made part of students' routine curriculum. The paper aims at spreading awareness of listening to hymns as one mode of stress management among Indian engineering students.
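The t-test applied to GSR readings before and after listening reduces to the paired-samples t statistic, the mean difference divided by its standard error. A sketch with invented readings (not data from the paper):

```python
import math
import statistics

def paired_t(before, after):
    """Paired-samples t statistic: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical GSR readings (arbitrary units) before and after listening to hymns.
before = [12.0, 10.5, 14.0, 11.0, 13.5]
after = [10.0, 9.5, 12.0, 10.5, 11.0]
print(round(paired_t(before, after), 2))
```

The resulting t value is then compared against the critical value for n - 1 degrees of freedom to decide whether the drop in GSR is statistically significant.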
Fatehi, Zahra; Baradaran, Hamid Reza; Asadpour, Mohamad; Rezaeian, Mohsen
2017-01-01
Background: Individuals' listening styles differ based on their characters, professions and situations. This study aimed to assess the validity and reliability of the Listening Styles Profile-Revised (LSP-R) in Iranian students. Methods: After translation into Persian, the LSP-R was administered to a sample of 240 medical and nursing Persian-speaking students in Iran. Statistical analysis was performed to test the reliability and validity of the LSP-R. Results: The study revealed high internal consistency and good test-retest reliability for the Persian version of the questionnaire. The Cronbach's alpha coefficient was 0.72 and the intra-class correlation coefficient 0.87. The means for the content validity index and the content validity ratio (CVR) were 0.90 and 0.83, respectively. Exploratory factor analysis (EFA) yielded a four-factor solution that accounted for 60.8% of the observed variance. The majority of medical students (73%), as well as the majority of nursing students (70%), stated that their listening styles were task-oriented. Conclusion: In general, the study findings suggest that the Persian version of the LSP-R is a valid and reliable instrument for assessing listening styles in the studied sample.
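The internal-consistency figure reported above (Cronbach's alpha = 0.72) is computed from the item variances and the variance of respondents' total scores; a minimal sketch with a hypothetical score matrix:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a respondents-by-items score matrix:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)."""
    k = len(item_scores[0])
    item_vars = [statistics.variance(col) for col in zip(*item_scores)]
    total_var = statistics.variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical matrix: 4 respondents x 3 items, perfectly consistent answers.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 3))
```

Perfectly consistent items give alpha of 1.0; values of 0.7 and above, like the 0.72 reported here, are conventionally taken as acceptable internal consistency.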
Identification and discrimination of bilingual talkers across languages
Winters, Stephen J.; Levi, Susannah V.; Pisoni, David B.
2008-01-01
This study investigated the extent to which language familiarity affects the perception of the indexical properties of speech by testing listeners’ identification and discrimination of bilingual talkers across two different languages. In one experiment, listeners were trained to identify bilingual talkers speaking in only one language and were then tested on their ability to identify the same talkers speaking in another language. In the second experiment, listeners discriminated between bilingual talkers across languages in an AX discrimination paradigm. The results of these experiments indicate that there is sufficient language-independent indexical information in speech for listeners to generalize knowledge of talkers’ voices across languages and to successfully discriminate between bilingual talkers regardless of the language they are speaking. However, the results of these studies also revealed that listeners do not solely rely on language-independent information when performing these tasks. Listeners use language-dependent indexical cues to identify talkers who are speaking a familiar language. Moreover, the tendency to perceive two talkers as the “same” or “different” depends on whether the talkers are speaking in the same language. The combined results of these experiments thus suggest that indexical processing relies on both language-dependent and language-independent information in the speech signal. PMID:18537401
A new IRT-based standard setting method: application to eCat-listening.
García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David
2013-01-01
Criterion-referenced interpretation of tests is highly necessary but usually involves the difficult task of establishing cut scores. Contrasting with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based test interpretation.
Filipino, Indonesian and Thai Listening Test Errors
ERIC Educational Resources Information Center
Castro, C. S.; And Others
1975-01-01
This article reports on a study to identify listening and aural comprehension difficulties experienced by students of English, specifically RELC (Regional English Language Centre in Singapore) course members. The most critical errors are discussed and conclusions about foreign language learning are drawn. (CLK)
Behavioural Indices of Central Auditory Processing
2009-06-01
materials were presented monaurally or binaurally over a headset at a comfortable listening level. All but the Gaps-in-Noise Test were presented twice...Milford, New Hampshire). Stimulus materials were presented monaurally or binaurally at a comfortable listening level of approximately 50 decibels above
Tsukada, Kimiko; Hirata, Yukari; Roengpitya, Rungpat
2014-06-01
The purpose of this research was to compare the perception of Japanese vowel length contrasts by 4 groups of listeners who differed in their familiarity with length contrasts in their first language (L1; i.e., American English, Italian, Japanese, and Thai). Of the 3 nonnative groups, native Thai listeners were expected to outperform American English and Italian listeners, because vowel length is contrastive in their L1. Native Italian listeners were expected to demonstrate a higher level of accuracy for length contrasts than American English listeners, because the former are familiar with consonant (but not vowel) length contrasts (i.e., singleton vs. geminate) in their L1. A 2-alternative forced-choice AXB discrimination test that included 125 trials was administered to all the participants, and the listeners' discrimination accuracy (d') was reported. As expected, Japanese listeners were more accurate than all 3 nonnative groups in their discrimination of Japanese vowel length contrasts. The 3 nonnative groups did not differ from one another in their discrimination accuracy despite varying experience with length contrasts in their L1. Only Thai listeners were more accurate in their length discrimination when the target vowel was long than when it was short. Being familiar with vowel length contrasts in L1 may affect the listeners' cross-language perception, but it does not guarantee that their L1 experience automatically results in efficient processing of length contrasts in unfamiliar languages. The extent of success may be related to how length contrasts are phonetically implemented in listeners' L1.
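The discrimination accuracy measure d' reported above combines hit and false-alarm rates through the inverse normal CDF; a sketch with hypothetical rates (not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d': z-transformed hit rate minus z-transformed
    false-alarm rate."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical listener: 84% hits and 16% false alarms on the AXB task.
print(round(d_prime(0.84, 0.16), 2))
```

Unlike percent correct, d' separates true sensitivity from response bias, which is why it is preferred for comparing listener groups on forced-choice discrimination tests.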
ERIC Educational Resources Information Center
Blood, Gordon W.
1985-01-01
Results of a study involving 76 stutterers and 76 nonstutterers (seven to 15 years old) included (1) a right-ear preference for both groups; (2) differences in dichotic listening between stuttering and nonstuttering Ss; and (3) a relationship between stuttering severity and hemispheric dominance that depended on the manner of data analysis. (Author/CL)
Selective attention and the auditory vertex potential. 1: Effects of stimulus delivery rate
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1975-01-01
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the short-latency component observed for stimuli delivered to the attended ear.
Desjardins, Jamie L
2016-01-01
Older listeners with hearing loss may exert more cognitive resources to maintain a level of listening performance similar to that of younger listeners with normal hearing. Unfortunately, this increase in cognitive load, which is often conceptualized as increased listening effort, may come at the cost of cognitive processing resources that might otherwise be available for other tasks. The purpose of this study was to evaluate the independent and combined effects of a hearing aid directional microphone and a noise reduction (NR) algorithm on reducing the listening effort older listeners with hearing loss expend on a speech-in-noise task. Participants were fitted with study-worn, commercially available behind-the-ear hearing aids. Listening effort on a sentence recognition in noise task was measured using an objective auditory-visual dual-task paradigm. The primary task required participants to repeat sentences presented in quiet and in a four-talker babble. The secondary task was a digital visual pursuit rotor-tracking test, for which participants were instructed to use a computer mouse to track a moving target around an ellipse that was displayed on a computer screen. Each of the two tasks was presented separately and concurrently at a fixed overall speech recognition performance level of 50% correct with and without the directional microphone and/or the NR algorithm activated in the hearing aids. In addition, participants reported how effortful it was to listen to the sentences in quiet and in background noise in the different hearing aid listening conditions. Fifteen older listeners with mild sloping to severe sensorineural hearing loss participated in this study. Listening effort in background noise was significantly reduced with the directional microphones activated in the hearing aids. However, there was no significant change in listening effort with the hearing aid NR algorithm compared to no noise processing.
Correlation analysis between objective and self-reported ratings of listening effort showed no significant relation. Directional microphone processing effectively reduced the cognitive load of listening to speech in background noise. This is significant because it is likely that listeners with hearing impairment will frequently encounter noisy speech in their everyday communications. American Academy of Audiology.
Evaluation of Extended-Wear Hearing Aid Technology for Operational Military Use
2016-07-01
listeners without degrading auditory situational awareness. To this point, significant progress has been made in this evaluation process. The devices...provide long-term hearing protection for listeners with normal hearing with minimal impact on auditory situational awareness and minimal annoyance due to...Test Plan: A comprehensive test plan is complete for the measurements at AFRL, which will incorporate goals 1-2 and 4-5 above using a normal
Wegner, M L; Brookshire, R H; Nicholas, L E
1984-01-01
Aphasic and nonaphasic listeners' comprehension of main ideas and details within coherent and noncoherent narrative discourse was examined. Coherent paragraphs contained one topic to which all sentences in the paragraph related. Noncoherent paragraphs contained a change in topic with every third or fourth sentence. Each paragraph contained four main ideas and one or more details that related to each main idea. Listeners' responses to yes/no questions following each paragraph yielded the following results: (1) Nonaphasic listeners comprehended the paragraphs better than aphasic listeners. (2) Both aphasic and nonaphasic listeners comprehended main ideas better than they comprehended details. (3) Coherence did not affect comprehension of main ideas for either group. (4) Coherence did not affect comprehension of details by nonaphasic subjects. (5) Coherence affected comprehension of details by aphasic subjects, and their comprehension of details in coherent paragraphs was worse than their comprehension of details in noncoherent paragraphs. There was no significant correlation between Token Test scores and measures of paragraph comprehension.
Speech Perception in Older Hearing Impaired Listeners: Benefits of Perceptual Training
Woods, David L.; Doss, Zoe; Herron, Timothy J.; Arbogast, Tanya; Younus, Masood; Ettlinger, Marc; Yund, E. William
2015-01-01
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, for syllables containing different vowels, and for syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar rather than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds.
We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material. PMID:25730330
Neurodynamic evaluation of hearing aid features using EEG correlates of listening effort.
Bernarding, Corinna; Strauss, Daniel J; Hannemann, Ronny; Seidler, Harald; Corona-Strauss, Farah I
2017-06-01
In this study, we propose a novel estimate of listening effort using electroencephalographic data. This method is a translation of our past findings, gained from the evoked electroencephalographic activity, to the oscillatory EEG activity. To test this technique, electroencephalographic data were recorded from experienced hearing aid users with moderate hearing loss while they wore hearing aids. The investigated hearing aid settings were: a directional microphone combined with a noise reduction algorithm in a medium and a strong setting, the noise reduction setting turned off, and a setting using omnidirectional microphones without any noise reduction. The results suggest that the electroencephalographic estimate of listening effort is a useful tool for mapping the effort exerted by the participants. In addition, the results indicate that a directional processing mode can reduce listening effort in multitalker listening situations.
Luo, Xin; Ashmore, Krista B
2014-06-01
Context-dependent pitch perception helps listeners recognize tones produced by speakers with different fundamental frequencies (f0s). The role of language experience in tone normalization remains unclear. In this cross-language study of tone normalization, native Mandarin and English listeners were asked to recognize Mandarin Tone 1 (high-flat) and Tone 2 (mid-rising) with a preceding Mandarin sentence. To further test whether context-dependent pitch perception is speech-specific or domain-general, both language groups were asked to identify non-speech flat and rising pitch contours with a preceding non-speech flat pitch contour. Results showed that both Mandarin and English listeners made more rising responses with non-speech than with speech stimuli, due to differences in spectral complexity and listening task between the two stimulus types. English listeners made more rising responses than Mandarin listeners with both speech and non-speech stimuli. Contrastive context effects (more rising responses in the high-f0 context than in the low-f0 context) were found with both speech and non-speech stimuli for Mandarin listeners, but not for English listeners. English listeners' lack of tone experience may have caused more rising responses and limited use of context f0 cues. These results suggest that context-dependent pitch perception in tone normalization is domain-general, but influenced by long-term language experience.
Hypothalamic digoxin, hemispheric chemical dominance, and the tridosha theory.
Kurup, Ravi Kumar; Kurup, Parameswara Achutha
2003-05-01
Ayurveda, the traditional Indian system of medicine, deals with the theory of the three tridosha states (both physical and psychological): Vata, Pitta, and Kapha. These are the three major human constitutional types, defined by both psychological and physical characteristics. The Pitta state is described as a critical, discriminative, and rational psychological state of mind, while the Kapha state is described as dominated by emotional stimuli. The Vata state is an intermediate, unstable, shifting state. Pitta types are of average height and build, with well-developed musculature. Vata types are thin individuals with low body mass index. Kapha types are short, stocky individuals who tend toward obesity and are sedentary. The study assessed the biochemical differences between right hemispheric dominant, bihemispheric dominant, and left hemispheric dominant individuals, and then compared these with the patterns obtained in the Vata, Pitta, and Kapha states. The isoprenoid metabolites (digoxin, dolichol, and ubiquinone), glycoconjugate metabolism, free radical metabolism, and RBC membrane composition were studied. Hemispheric chemical dominance in various systemic diseases and psychological states was also investigated. The results showed that the right hemispheric chemically dominant/Kapha state had elevated digoxin levels, increased free radical production and reduced scavenging, increased tryptophan catabolites and reduced tyrosine catabolites, increased glycoconjugate levels, and an increased cholesterol:phospholipid ratio of RBC membranes. Left hemispheric chemically dominant/Pitta states showed the opposite biochemical patterns. The patterns were normal or intermediate in the bihemispheric chemically dominant/Vata state. This pattern could be correlated with various systemic and neuropsychiatric diseases and personality traits.
Right hemispheric chemical dominance/the Kapha state represents a hyperdigoxinemic state with membrane sodium-potassium ATPase inhibition. Left hemispheric chemical dominance/the Pitta state represents the reverse pattern, with hypodigoxinemia and membrane sodium-potassium ATPase stimulation. The Vata state is the intermediate, bihemispheric chemically dominant state. Ninety-five percent of the patients/individuals in the tridosha, pathological, and psychological groups were right-handed/left hemispheric dominant; however, their biochemical patterns differed, being either left hemispheric chemically dominant or right hemispheric chemically dominant. Hemispheric chemical dominance/tridosha state showed no correlation with cerebral dominance as detected by handedness or the dichotic listening test.