Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D
2015-12-01
Literacy acquisition is strongly associated with auditory processing abilities, such as auditory discrimination. The event-related potential mismatch response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and in infants at risk for these impairments. The goal of the present study was to analyze, within subjects, the relationship between auditory speech discrimination in infancy and writing abilities at school age, and to determine when auditory speech discrimination differences relevant for later writing abilities start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems that develop during the first months of life.
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement of perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested on frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested on the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, but no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment tested a possible influence of the vowel density of the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested on a formant discrimination task, the linguistic equivalent of a DLS task.
Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects of auditory linguistic experience as well. Overall, the results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318
ERIC Educational Resources Information Center
Beauchamp, Chris M.; Stelmack, Robert M.
2006-01-01
The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…
Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
2009-01-01
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
ERIC Educational Resources Information Center
Kargas, Niko; López, Beatriz; Reddy, Vasudevi; Morris, Paul
2015-01-01
Current views suggest that autism spectrum disorders (ASDs) are characterised by enhanced low-level auditory discrimination abilities. Little is known, however, about whether enhanced abilities are universal in ASD and how they relate to symptomatology. We tested auditory discrimination for intensity, frequency and duration in 21 adults with ASD…
Relationship between Auditory and Cognitive Abilities in Older Adults
Sheft, Stanley
2015-01-01
Objective: The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods: Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer's Disease Center, participants were a community-dwelling cohort of older adults (range 63-98 years) without a diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects, with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results: Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariate race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest relationship with spectral-pattern discrimination performance. Conclusions: For a cohort of older adults without a diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed a significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability contributed significantly to a regression model prediction of cognitive performance, demonstrating an association of central auditory ability with cognitive status using auditory metrics that avoid the confounding effect of speech materials. PMID:26237423
ERIC Educational Resources Information Center
Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.
2011-01-01
Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…
Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.
Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine
2017-04-01
Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering the often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprising musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not differ significantly from typical musicians, and performed better than nonmusicians with dyslexia, on auditory sequencing as well as on discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia on discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia.
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.
Ting, H; Yunus, J; Mohd Nordin, M Z
2005-01-01
The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by a speech-language pathologist. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole assessment process and to customize the test according to the client's specific speech error sounds. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds, except in differentiating the /g/-/k/ contrast. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.
Behavioral Indications of Auditory Processing Disorders.
ERIC Educational Resources Information Center
Hartman, Kerry McGoldrick
1988-01-01
Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.
Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen
2014-10-01
Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used the mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds for consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple-deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility, as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable in response to both perceptible and non-perceptible auditory changes. Perceptibility was distinguished by MMN amplitude only in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimulus change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect: both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of the MMN response and CVC discrimination accuracy: the greater the bilateral involvement, the better the discrimination accuracy.
The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of a stimulus are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination ability). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than in the incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
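As a methodological aside, the sensitivity index d' used in the abstract above is conventionally computed as the difference between the z-transformed hit rate and false-alarm rate. Below is a minimal sketch with hypothetical trial counts and a standard log-linear correction to avoid infinite z-scores at rates of 0 or 1; it is not taken from the study itself.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction: add 0.5 to each count so rates never hit 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical oddball-task counts: 45 hits / 5 misses, 10 false alarms / 40 correct rejections.
print(d_prime(45, 5, 10, 40))
```

With this correction, a listener at chance (equal hit and false-alarm rates) scores d' = 0, and larger values indicate better discrimination regardless of response bias.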
Schneider, David M; Woolley, Sarah M N
2010-06-01
Many social animals, including songbirds, use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs, and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons, and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with accuracies ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than both the average of the two inputs and the more discriminating input alone. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains.
These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
Spatial localization deficits and auditory cortical dysfunction in schizophrenia
Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.
2014-01-01
Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608
The effect of superior auditory skills on vocal accuracy
NASA Astrophysics Data System (ADS)
Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat
2003-02-01
The relationship between auditory perception and vocal production has typically been investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in normal-hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones and asked to reproduce the pitch of each using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explained 43% of the variance in the production data; and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production, although the reverse relationship does not hold. In this study we provide empirical evidence for the importance of auditory feedback in vocal production in listeners with superior auditory skills.
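The abstract above mentions an autocorrelation pitch detection algorithm. The generic idea can be sketched as follows; this is a textbook autocorrelation estimator, not the authors' implementation, and the sampling rate and frequency search bounds are illustrative assumptions.

```python
import numpy as np

def estimate_f0(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency (Hz) from the autocorrelation peak."""
    sig = signal - np.mean(signal)
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)  # shortest period considered
    lag_max = int(sr / fmin)  # longest period considered
    # The lag of the strongest peak in range corresponds to one period.
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

# Sanity check on a synthetic 220 Hz tone (0.1 s at 16 kHz).
sr = 16000
t = np.arange(0, 0.1, 1 / sr)
tone = np.sin(2 * np.pi * 220.0 * t)
print(estimate_f0(tone, sr))
```

Real voice recordings would additionally need framing, windowing, and voicing decisions, but the peak-picking step above is the core of the method.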
Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia
ERIC Educational Resources Information Center
Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.
2003-01-01
The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…
Development of a Pitch Discrimination Screening Test for Preschool Children.
Abramson, Maria Kulick; Lloyd, Peter J
2016-04-01
There is a critical need for tests of auditory discrimination for young children, as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) for screening the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief tones at speech frequencies to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional design was used to gather data on the pitch discrimination abilities of a sample of typically developing preschool children between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effect of age, as a continuous variable, on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age for performance on the PDT. Spearman rank correlation was used to determine the correlation between age and performance on the PDT. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the combined 4- and 5-yr age group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning at age 4 yrs.
The PDT proved to be a time-efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population, before the age at which other diagnostic tests of auditory processing disorders can be used.
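The nonparametric statistics named in the abstract above are standard; as one illustration, Spearman rank correlation is simply the Pearson correlation computed on the ranks of the data. A minimal hand-rolled sketch follows, with made-up age/score pairs (not the study's data).

```python
def rank(values):
    """Assign 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson r of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: age in years vs. PDT-style score out of 10.
ages = [3.1, 3.5, 4.0, 4.2, 4.8, 5.0]
scores = [4, 5, 7, 6, 9, 10]
print(spearman_rho(ages, scores))
```

Because it operates on ranks, the statistic captures any monotonic association between age and score, which is why it suits ordinal screening data like this.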
Cortical activity patterns predict speech discrimination ability
Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P
2010-01-01
Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123
'Sorry, I meant the patient's left side': impact of distraction on left-right discrimination.
McKinley, John; Dempster, Martin; Gormley, Gerard J
2015-04-01
Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance. Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left-right (LR) discrimination ability. Medical students were recruited to a study with four arms: (i) control arm (no distraction); (ii) auditory distraction arm (continuous ambient ward noise); (iii) cognitive distraction arm (interruptions with clinical cognitive tasks), and (iv) auditory and cognitive distraction arm. Participants' LR discrimination ability was measured using the validated Bergen Left-Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants' performance on the BLRDT. Additional analyses looked at effects of demographics on performance and correlated participants' self-perceived LR discrimination ability and their actual performance. A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance. Distraction has a significant impact on performance and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on the linking of theory and clinical application is required to support patient safety and human factor training in medical school curricula. 
Distraction has the potential to impair an individual's ability to make accurate LR decisions, and students should be trained from undergraduate level to be mindful of this.
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied: 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination, and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Reading strategies of Chinese students with severe to profound hearing loss.
Cheung, Ka Yan; Leung, Man Tak; McPherson, Bradley
2013-01-01
The present study investigated the significance of auditory discrimination and the use of phonological and orthographic codes during the course of reading development in Chinese students who are deaf or hard of hearing (D/HH). In this study, the reading behaviors of D/HH students in two tasks (a task on auditory perception of onset rime and a synonym decision task) were compared with those of their chronological age-matched and reading level (RL)-matched controls. Cross-group comparison of participants' performance in the auditory perception task suggests that poor auditory discrimination ability may be a possible cause of reading problems for D/HH students. In addition, results of the synonym decision task reveal that D/HH students with poor reading ability demonstrate a significantly greater preference for orthographic rather than phonological information when compared with D/HH students with good reading ability and their RL-matched controls. Implications for future studies and educational planning are discussed.
[Auditory training in workshops: group therapy option].
Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa
2006-01-01
AIM: to verify the efficacy of auditory training in a workshop environment in a group of individuals with mental retardation. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol covering auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination, and auditory comprehension) at the beginning and at the end of the project. Data entry, processing, and analysis were performed with the Epi Info 6.04 software. RESULTS: the groups did not differ in age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation, an improvement in auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by the two groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006), and auditory discrimination (p=0.03). CONCLUSION: group auditory training proved effective in individuals with mental retardation, with an observed improvement in auditory abilities. Further studies, with larger numbers of participants, are needed to confirm the present findings. These results will help public health professionals reanalyze the therapy models they use, so that specific methods, such as auditory training workshops, can be applied according to individual needs.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé
2017-03-01
Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets); for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or a difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "same" (as opposed to "different") responses in the audiovisual than in the auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "Baz" (as opposed to "az") responses in the audiovisual than in the auditory mode.
Performance in the audiovisual mode showed more "same" responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact-onset responses in nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of hearing loss worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in the CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.
2016-01-01
Background: Frequency discrimination is often impaired in children developing language atypically. However, findings on the detection of small frequency changes in these children are conflicting. Previous studies of children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose: This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design: An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample: Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis: Behavioral data collected using headphone delivery were analyzed using the sensitivity index d′, calculated for the three Δf magnitudes of 2%, 5%, and 15% of the base frequency (i.e., 20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed.
Results: TD children and children with APD and/or SLI differed in the detection of small-tone Δf. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation depending on the magnitude of the Δf. Auditory processing scores correlated more strongly with d′ for the small Δf, while language scores correlated more strongly with d′ for the large Δf. Conclusion: Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; frequency discrimination performance seemed to be affected by the labeling demands of the same-versus-different frequency discrimination task for the children with SLI. PMID:27310407
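The sensitivity index d′ used in this study is standard signal detection theory: z-transform the hit and false-alarm rates and take the difference. A minimal stdlib-only sketch follows; the trial counts and the log-linear correction are illustrative assumptions, not taken from the study.

```python
from statistics import NormalDist

def d_prime(hits, false_alarms, n_signal, n_noise):
    """Sensitivity index d' = z(H) - z(F), with a log-linear
    correction so that rates of 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (n_signal + 1)          # corrected hit rate
    f = (false_alarms + 0.5) / (n_noise + 1)   # corrected false-alarm rate
    return z(h) - z(f)

# Illustrative counts: 45 hits on 50 change trials,
# 10 false alarms on 50 no-change trials.
print(round(d_prime(45, 10, 50, 50), 2))
```

With perfect performance d′ grows large but stays finite thanks to the correction, and chance performance yields d′ = 0.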
Summary statistics in auditory perception.
McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P
2013-04-01
Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
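The core claim above, that time-averaged statistics of different excerpts of the same texture converge as duration grows, can be illustrated with a toy simulation. Plain Gaussian noise stands in for a texture here (an assumption for brevity; the actual stimuli were defined by a much richer statistic set):

```python
import random

def time_avg_stats(x):
    """Time-averaged summary statistics of a signal: mean and variance."""
    m = sum(x) / len(x)
    return m, sum((s - m) ** 2 for s in x) / len(x)

rng = random.Random(1)
for n in (100, 10_000):
    # Two different excerpts of the *same* texture (same generative process):
    # as duration n grows, their time-averaged statistics converge.
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(0.0, 1.0) for _ in range(n)]
    print(n, abs(time_avg_stats(a)[0] - time_avg_stats(b)[0]))
```

A representation limited to such statistics therefore cannot distinguish two long excerpts of one texture, matching the paradoxical decline in exemplar discrimination with duration.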
ERIC Educational Resources Information Center
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
2017-01-01
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4-week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained at follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart
2017-09-01
There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 8- to 16-year-old children with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs. /dɑ/) were neither necessary nor sufficient for language difficulties, consistent with an association model.
Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.
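Child-friendly psychophysical threshold estimation of the kind used above is typically implemented as an adaptive staircase. As an illustration only (the abstract does not specify the exact tracking rule), here is a minimal 2-down-1-up staircase with multiplicative steps, which converges on the ~70.7%-correct point, run against a simulated deterministic listener:

```python
def staircase(respond, start, n_reversals=8):
    """Minimal 2-down-1-up adaptive staircase. `respond(delta)` returns
    True on a correct trial; the mean of the last few reversal points
    estimates the discrimination threshold."""
    delta, run, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):
            run += 1
            if run == 2:                  # two correct in a row: harder
                run = 0
                if direction == +1:       # track turned downward: reversal
                    reversals.append(delta)
                direction = -1
                delta /= 2
        else:                             # one error: easier
            run = 0
            if direction == -1:           # track turned upward: reversal
                reversals.append(delta)
            direction = +1
            delta *= 2
    return sum(reversals[-4:]) / 4

# Simulated listener who is correct whenever delta >= 10 (arbitrary units):
print(staircase(lambda d: d >= 10, start=100))
```

With this idealized listener the track oscillates around the true threshold, and the reversal average lands close to 10; real listeners respond probabilistically, so many reversals are averaged.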
Auditory Discrimination Learning: Role of Working Memory.
Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal
2016-01-01
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.
Prediction of cognitive outcome based on the progression of auditory discrimination during coma.
Juan, Elsa; De Lucia, Marzia; Tzovara, Athina; Beaud, Valérie; Oddo, Mauro; Clarke, Stephanie; Rossetti, Andrea O
2016-09-01
To date, no clinical test is able to predict the cognitive and functional outcome of cardiac arrest survivors. Improvement of auditory discrimination in acute coma indicates survival with high specificity. Whether the degree of this improvement is indicative of recovery remains unknown. Here we investigated whether the progression of auditory discrimination can predict cognitive and functional outcome. We prospectively recorded electroencephalography responses to auditory stimuli of post-anoxic comatose patients on the first and second day after admission. For each recording, auditory discrimination was quantified, and its evolution over the two recordings was used to classify survivors as "predicted" when it increased vs. "other" if not. Cognitive functions were tested on awakening, and functional outcome was assessed at 3 months using the Cerebral Performance Categories (CPC) scale. Thirty-two patients were included: 14 "predicted survivors" and 18 "other survivors". "Predicted survivors" were more likely to recover basic cognitive functions shortly after awakening (ability to follow a standardized neuropsychological battery: 86% vs. 44%; p=0.03, Fisher) and to show a very good functional outcome at 3 months (CPC 1: 86% vs. 33%; p=0.004, Fisher). Moreover, progression of auditory discrimination during coma was strongly correlated with cognitive performance on awakening (phonemic verbal fluency: rs=0.48; p=0.009, Spearman). Progression of auditory discrimination during coma thus provides an early indication of future recovery of cognitive functions, and the degree of improvement is informative of the degree of functional impairment. If confirmed in a larger cohort, this test would be the first to predict detailed outcome at the single-patient level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
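The group contrast above (CPC 1 in 86% vs. 33%, p = 0.004, Fisher) can be approximately reproduced from the abstract's own numbers. The 2x2 cell counts (12 of 14 vs. 6 of 18) are reconstructed from the reported percentages, so this is a sketch, not the authors' analysis; the test itself is a plain stdlib implementation of Fisher's exact test:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    def p_table(x):               # P(top-left cell = x | fixed margins)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# 12 of 14 'predicted' vs. 6 of 18 'other' survivors reaching CPC 1
# (counts reconstructed from the 86% and 33% in the abstract):
print(fisher_exact_p(12, 2, 6, 12))
```

This yields a p-value close to the reported 0.004; small discrepancies are expected since the exact cell counts and sidedness are not stated in the abstract.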
Speech sound discrimination training improves auditory cortex responses in a rat model of autism
Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.
2014-01-01
Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
2016-10-01
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to both tone sensitivity and intonation production. Auditory frequency discrimination may therefore indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.
Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.
Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J
2016-01-01
Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.
Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities. PMID:27932941
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. We conclude that these underlying perceptual and memory deficits may be related to students' academic problems.
Debruyne, Joke A; Francart, Tom; Janssen, A Miranda L; Douma, Kim; Brokx, Jan P L
2017-03-01
This study investigated the hypotheses that (1) prelingually deafened CI users do not have perfect electrode discrimination ability and (2) the deactivation of non-discriminable electrodes can improve auditory performance. Electrode discrimination difference limens were determined for all electrodes of the array. The subjects' basic map was subsequently compared to an experimental map, which contained only discriminable electrodes, with respect to speech understanding in quiet and in noise, listening effort, spectral ripple discrimination and subjective appreciation. Subjects were six prelingually deafened, late implanted adults using the Nucleus cochlear implant. Electrode discrimination difference limens across all subjects and electrodes ranged from 0.5 to 7.125, with significantly larger limens for basal electrodes. No significant differences were found between the basic map and the experimental map on auditory tests. Subjective appreciation was found to be significantly poorer for the experimental map. Prelingually deafened CI users were unable to discriminate between all adjacent electrodes. There was no difference in auditory performance between the basic and experimental map. Potential factors contributing to the absence of improvement with the experimental map include the reduced number of maxima, incomplete adaptation to the new frequency allocation, and the mainly basal location of deactivated electrodes.
The music perception abilities of prelingually deaf children with cochlear implants.
Stabej, Katja Kladnik; Smid, Lojze; Gros, Anton; Zargi, Miha; Kosir, Andrej; Vatovec, Jagoda
2012-10-01
To investigate the music perception abilities of prelingually deaf children with cochlear implants, in comparison to a group of normal-hearing children, and to consider the factors that contribute to music perception. The music perception abilities of 39 prelingually deaf children with unilateral cochlear implants were compared to the abilities of 39 normal hearing children. To assess the music listening abilities, the MuSIC perception test was adopted. The influence of the child's age, age at implantation, device experience and type of sound-processing strategy on the music perception was evaluated. The effects of auditory performance, nonverbal intellectual abilities, as well as the child's additional musical education on music perception were also considered. Children with cochlear implants and normal hearing children performed significantly differently with respect to rhythm discrimination (55% vs. 82%, p<0.001), instrument identification (57% vs. 88%, p<0.001) and emotion rating (p=0.022). However, we found no significant difference in terms of melody discrimination and dissonance rating between the two groups. There was a positive correlation between auditory performance and melody discrimination (r=0.27; p=0.031), between auditory performance and instrument identification (r=0.20; p=0.059) and between the child's grade (mark) in school music classes and melody discrimination (r=0.34; p=0.030). In children with cochlear implants only, the music perception ability assessed by the emotion rating test was negatively correlated with the child's age (r(S)=-0.38; p=0.001), age at implantation (r(S)=-0.34; p=0.032), and device experience (r(S)=-0.38; p=0.019). The child's grade in school music classes showed a positive correlation to music perception abilities assessed by the rhythm discrimination test (r(S)=0.46; p<0.001), melody discrimination test (r(S)=0.28; p=0.018), and instrument identification test (r(S)=0.23; p=0.05). 
As expected, there was a marked difference in the music perception abilities of prelingually deaf children with cochlear implants in comparison to the group of normal hearing children, but not on all the tests of music perception. Additional multi-centre studies with a larger number of participants and a broader spectrum of music subtests, considering as many of the factors that may contribute to music perception as possible, seem warranted. Copyright © 2012. Published by Elsevier Ireland Ltd.
Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M
2010-11-01
Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.
Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome
Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.
2015-01-01
Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, slower, and less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.
Auditory training improves auditory performance in cochlear implanted children.
Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel
2016-07-01
While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity and possible strategies to minimize it is of utmost importance. Our scope here is to test the effects of an auditory training strategy, "Sound in Hands", using playful tasks grounded on the theoretical and empirical findings of cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should imply general benefit in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance in trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually deafened children with unilateral cochlear implants and no additional handicap (4-10 years old) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA, and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). EG showed a significant improvement in the identification, discrimination, and auditory memory tasks. The improvement in the ASA task did not reach significance. CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for EG only. 
Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits in CI children and for gathering a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.
Houlihan, Michael; Stelmack, Robert M.
2012-01-01
The relation between mental ability and the ability to detect violations of an abstract, third-order conjunction rule was examined using event-related potential measures, specifically mismatch negativity (MMN). The primary objective was to determine whether the extraction of invariant relations based on abstract conjunctions between two…
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Auditory experience controls the maturation of song discrimination and sexual response in Drosophila
Li, Xiaodong; Ishimoto, Hiroshi
2018-01-01
In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects. PMID:29555017
Tomlin, Danielle; Moore, David R.; Dillon, Harvey
2015-01-01
Objectives: In this study, the authors assessed the potential utility of a recently developed questionnaire (Evaluation of Children’s Listening and Processing Skills [ECLiPS]) for supporting the clinical assessment of children referred for auditory processing disorder (APD). Design: A total of 49 children (35 referred for APD assessment and 14 from mainstream schools) were assessed for auditory processing (AP) abilities, cognitive abilities, and symptoms of listening difficulty. Four questionnaires were used to capture the symptoms of listening difficulty from the perspective of parents (ECLiPS and Fisher’s auditory problem checklist), teachers (Teacher’s Evaluation of Auditory Performance), and children, that is, self-report (Listening Inventory for Education). Correlation analyses tested for convergence between the questionnaires and both cognitive and AP measures. Discriminant analyses were performed to determine the best combination of tests for discriminating between typically developing children and children referred for APD. Results: All questionnaires were sensitive to the presence of difficulty, that is, children referred for assessment had significantly more symptoms of listening difficulty than typically developing children. There was, however, no evidence of more listening difficulty in children meeting the diagnostic criteria for APD. Some AP tests were significantly correlated with ECLiPS factors measuring related abilities providing evidence for construct validity. All questionnaires correlated to a greater or lesser extent with the cognitive measures in the study. Discriminant analysis suggested that the best discrimination between groups was achieved using a combination of ECLiPS factors, together with nonverbal Intelligence Quotient (cognitive) and AP measures (i.e., dichotic digits test and frequency pattern test). 
Conclusions: The ECLiPS was particularly sensitive to cognitive difficulties, an important aspect of many children referred for APD, as well as correlating with some AP measures. It can potentially support the preliminary assessment of children referred for APD. PMID:26002277
Moore, Brian C. J.
Psychoacoustics
Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.
Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza
2012-02-01
Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions
Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.
2011-01-01
Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one-month post lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211
Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling
Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...
2017-06-30
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.
Effect of signal to noise ratio on the speech perception ability of older adults
Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh
2016-01-01
Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. These participants were selected by the available sampling method. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA, and the Pearson correlation index were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal low–mid frequency hearing thresholds. These results support the critical role of SNRs in speech perception ability in the elderly. Furthermore, our results revealed that normal hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
Kujala, Teija; Leminen, Miika
2017-12-01
In specific language impairment (SLI), there is a delay in the child's oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies which have used mismatch negativity (MMN) in investigating low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, it was found that language intervention can influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception. Copyright © 2017. Published by Elsevier Ltd.
Hay-McCutcheon, Marcia J; Peterson, Nathaniel R; Pisoni, David B; Kirk, Karen Iler; Yang, Xin; Parton, Jason
The purpose of this study was to evaluate performance on two challenging listening tasks, talker and regional accent discrimination, and to assess variables that could have affected the outcomes. A prospective study using 35 adults with one cochlear implant (CI) or a CI and a contralateral hearing aid (HA; bimodal hearing) was conducted. Adults completed talker and regional accent discrimination tasks. Two-alternative forced-choice tasks were used to assess talker and accent discrimination in a group of adults who ranged in age from 30 to 81 years old. A large amount of performance variability was observed across listeners for both discrimination tasks. Three listeners successfully discriminated between talkers for both listening tasks, 14 participants successfully completed one discrimination task, and 18 participants were not able to discriminate between talkers for either listening task. Some adults who used bimodal hearing benefitted from the addition of acoustic cues provided through an HA, but for others the HA did not help with discrimination abilities. Acoustic speech feature analysis of the test signals indicated that both the talker's speaking rate and the fundamental frequency (F0) helped with talker discrimination. For accent discrimination, findings suggested that access to more salient spectral cues was important for better discrimination performance. The ability to perform challenging discrimination tasks successfully likely involves a number of complex interactions between auditory and non-auditory pre- and post-implant factors. To understand why some adults with CIs perform similarly to adults with normal hearing and others experience difficulty discriminating between talkers, further research will be required with larger populations of adults who use unilateral CIs, bilateral CIs, and bimodal hearing. Copyright © 2018 Elsevier Inc. All rights reserved.
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in the metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in auditory episodic recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
Effects of Long-Term Musical Training on Cortical Auditory Evoked Potentials.
Brown, Carolyn J; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul J
Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians.
Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based auditory training program, perhaps geared toward pediatric or hearing-impaired listeners.
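The normalization step described above (ACC amplitude expressed relative to the onset-response amplitude) is a simple ratio of two response amplitudes. A minimal sketch, assuming peak-to-peak amplitudes measured over sample windows of an averaged evoked response (the function names and windowing scheme are illustrative, not taken from the study):

```python
import numpy as np

def peak_to_peak(epoch, window):
    """Peak-to-peak amplitude of an averaged evoked response within a sample window."""
    start, stop = window
    segment = np.asarray(epoch, dtype=float)[start:stop]
    return segment.max() - segment.min()

def normalized_acc(epoch, onset_window, acc_window):
    """Express the ACC amplitude as a fraction of the onset-response amplitude."""
    return peak_to_peak(epoch, acc_window) / peak_to_peak(epoch, onset_window)
```

Normalizing in this way helps control for overall response-size differences between listeners, so that group comparisons reflect discrimination rather than general response amplitude.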
Inservice Training Packet: Auditory Discrimination Listening Skills.
ERIC Educational Resources Information Center
Florida Learning Resources System/CROWN, Jacksonville.
Intended to be used as the basis for a brief inservice workshop, the auditory discrimination/listening skills packet provides information on ideas, materials, and resources for remediating auditory discrimination and listening skill deficits. Included are a sample prescription form, tests of auditory discrimination, and a list of auditory…
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?
ERIC Educational Resources Information Center
Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke
2013-01-01
These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…
Hashemi, Nassim; Ghorbani, Ali; Soleymani, Zahra; Kamali, Mohmmad; Ahmadi, Zohreh Ziatabar; Mahmoudian, Saeid
2018-07-01
Auditory discrimination of speech sounds is an important perceptual ability and a precursor to the acquisition of language. Auditory information is at least partially necessary for the acquisition and organization of phonological rules. Few standardized behavioral tests exist to evaluate phonemic distinctive features in children with or without speech and language disorders. The main objective of the present study was the development and the assessment of the validity and reliability of the Persian version of the auditory word discrimination test (P-AWDT) for 4-8-year-old children. A total of 120 typical children and 40 children with speech sound disorder (SSD) participated in the present study. The test comprised 160 monosyllabic word pairs, distributed across Forms A-1 and A-2 for the initial consonants (80 words) and Forms B-1 and B-2 for the final consonants (80 words). Moreover, the discrimination of vowels was randomly included in all forms. Content validity was calculated, and 50 children repeated the test twice with a two-week interval (test-retest reliability). Further analyses included validity, the intraclass correlation coefficient (ICC), Cronbach's alpha (internal consistency), age groups, and gender. The content validity index (CVI) and the test-retest reliability of the P-AWDT were 63%-86% and 81%-96%, respectively. Moreover, the total Cronbach's alpha for internal consistency was high (0.93). Comparison of the mean P-AWDT scores of the typical children and the children with SSD revealed a significant difference: the SSD group showed a more severe deficit in auditory word discrimination than the typical group. In addition, the difference between the age groups was statistically significant, especially for 4-4.11-year-old children. The performance of the two gender groups was similar.
The comparison of the P-AWDT scores between the typical children and the children with SSD demonstrated differences in auditory phonological discrimination in both initial and final positions. These findings suggest that the P-AWDT meets appropriate validity and reliability criteria. The P-AWDT can be used to measure the distinctive features of phonemes and the auditory discrimination of initial and final consonants and middle vowels of words in 4-8-year-old typical children and children with SSD. Copyright © 2018. Published by Elsevier B.V.
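The internal-consistency figure quoted above (Cronbach's alpha = 0.93) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of that computation (illustrative only; this is not the study's analysis code, and the function name is an assumption):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; scores is 2-D, rows = respondents, columns = test items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Values near 1 indicate that the items vary together across respondents, i.e. the test items measure a common underlying ability.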
Ikeda, Yumiko; Yahata, Noriaki; Takahashi, Hidehiko; Koeda, Michihiko; Asai, Kunihiko; Okubo, Yoshiro; Suzuki, Hidenori
2010-05-01
Comprehending conversation in a crowd requires appropriate orienting and sustainment of auditory attention to, and discrimination of, the target speaker. While a multitude of cognitive functions such as voice perception and language processing work in concert to subserve this ability, it is still unclear which cognitive components critically determine successful discrimination of speech sounds under constantly changing auditory conditions. To investigate this, we present a functional magnetic resonance imaging (fMRI) study of changes in cerebral activities associated with varying challenge levels of speech discrimination. Subjects participated in a diotic listening paradigm that presented them with two news stories read simultaneously but independently by a target speaker and a distracting speaker of incongruent or congruent sex. We found that a distracting voice of congruent rather than incongruent sex made listening more challenging, resulting in enhanced activities mainly in the left temporal and frontal gyri. Further, the activities at the left inferior, left anterior superior, and right superior loci in the temporal gyrus were shown to be significantly correlated with the accuracy of discrimination performance. The present results suggest that the subregions of the bilateral temporal gyri play a key role in the successful discrimination of speech under constantly changing auditory conditions as encountered in daily life. Copyright © 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Residual neural processing of musical sound features in adult cochlear implant users.
Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias
2014-01-01
Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood.
Highlights: Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult CI users. The brains of adult CI users automatically process sound feature changes even when they are inserted in a musical context. CI users show disrupted automatic discriminatory abilities for rhythm in the brain. Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation.
Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh
2016-01-01
Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA) by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but may be part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal, and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection, and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may make a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor of potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes of age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.
Yoder, Kathleen M.; Vicario, David S.
2012-01-01
Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. Keywords: Estrogens, Songbird, Social Context, Auditory Perception PMID:22201281
Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C
2013-11-01
Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
de Sousa, Paulo; Sellwood, William; Spray, Amy; Bentall, Richard P
2016-04-01
Thought disorder (TD) has been shown to vary in relation to negative affect. Here we examine the role of internal source monitoring (iSM; i.e., the ability to discriminate between inner speech and verbalized speech) in TD and whether changes in iSM performance are implicated in the affective reactivity effect (deterioration of TD when participants are asked to talk about emotionally laden topics). Eighty patients diagnosed with a schizophrenia-spectrum disorder and thirty healthy controls received interviews that promoted personal disclosure (emotionally salient) and interviews on everyday topics (non-salient) on separate days. During the interviews, participants were tested on iSM, self-reported affect, and immediate auditory recall. Patients had more TD, poorer ability to discriminate between inner and verbalized speech, poorer immediate auditory recall, and reported more negative affect than controls. Both groups displayed more TD and negative affect in salient interviews, but only patients showed poorer performance on iSM. Immediate auditory recall did not change significantly across affective conditions. In patients, the relationship between self-reported negative affect and TD was mediated by deterioration in the ability to discriminate between inner speech and speech that was directed to others and socially shared (performance on the iSM) in both interviews. Furthermore, deterioration in patients' performance on iSM across conditions significantly predicted deterioration in TD across the interviews (affective reactivity of speech). Poor iSM is significantly associated with TD. Negative affect, leading to further impaired iSM, leads to increased TD in patients with psychosis. Avenues for future research as well as clinical implications of these findings are discussed. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sleep-dependent consolidation benefits fast transfer of time interval training.
Chen, Lihan; Guo, Lu; Bao, Ming
2017-03-01
A previous study showed that short training (15 min) in explicitly discriminating temporal intervals between two paired auditory beeps, or between two paired tactile taps, can significantly improve observers' ability to classify the perceptual states of visual Ternus apparent motion, whereas training on task-irrelevant sensory properties did not improve visual timing (Chen and Zhou in Exp Brain Res 232(6):1855-1864, 2014). The present study examined the role of 'consolidation' after training on temporally task-irrelevant properties, and whether a pure delay (i.e., blank consolidation) following the pretest of the target task would improve visual interval timing, typified in the visual Ternus display. A pretest-training-posttest procedure was adopted, with discrimination of Ternus apparent motion as the probe. Extended implicit training of timing, in which the time intervals between paired auditory beeps or paired tactile taps were manipulated but the task was to discriminate auditory pitch or tactile intensity, did not produce training benefits (Exps 1 and 3); however, a delay of 24 h after implicit training of timing, during which participants solved 'Sudoku puzzles,' made the otherwise absent training benefits observable (Exps 2, 4, 5 and 6). These improvements in performance were not due to a practice effect for Ternus motion (Exp 7). A general 'blank' consolidation period of 24 h also made improvements in visual timing observable (Exp 8). Taken together, the current findings indicate that sleep-dependent consolidation imposed a general effect, potentially by triggering and maintaining neuroplastic changes in the intrinsic (timing) network that enhance time perception.
Robson, Holly; Keidel, James L; Ralph, Matthew A Lambon; Sage, Karen
2012-01-01
Wernicke's aphasia is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in Wernicke's aphasia, a view consistent with current functional neuroimaging, which finds areas in the superior temporal cortex responsive to phonological stimuli. However, behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not yet been shown. This study extends seminal work by Blumstein, Baker, and Goodglass (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke's aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke's aphasia participants showed significantly elevated thresholds compared to age- and hearing-matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke's aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language; they suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke's aphasia and favour models of language which propose a leftward asymmetry in phonological analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
Speech perception task with pseudowords.
Appezzato, Mariana Martins; Hackerott, Maria Mercedes Saraiva; Avila, Clara Regina Brandão de
2018-01-01
Purpose: To prepare a list of pseudowords in Brazilian Portuguese to assess the auditory discrimination ability of schoolchildren, and to investigate the internal consistency of test items and the effect of school grade on discrimination performance. Methods: Study participants were 60 schoolchildren (60% female) enrolled in the 3rd (n=14), 4th (n=24) and 5th (n=22) grades of an elementary school in the city of Sao Paulo, Brazil, aged between eight years and two months and 11 years and eight months (99 to 136 months; mean=120.05; SD=10.26), with an average school performance score of 7.21 (minimum 5.0; maximum 10; SD=1.23). Forty-eight minimal pairs of Brazilian Portuguese pseudowords distinguished by a single phoneme were prepared. The participants' responses (whether the elements of the pairs were the same or different) were recorded and analyzed using Cronbach's alpha coefficient, Spearman's correlation coefficient, and the Bonferroni post-hoc test at a significance level of 0.05. Results: Internal consistency analysis indicated the deletion of 20 pairs. The remaining 28 items showed good internal consistency (α=0.84). The maximum and minimum scores of correct discrimination responses were 34 and 16, respectively (mean=30.79; SD=3.68). No correlation was observed between age, school performance, and discrimination performance, and no difference between school grades was found. Conclusion: Most of the items proposed for assessing the auditory discrimination of speech sounds showed good internal consistency in relation to the task. Age and school grade did not improve the auditory discrimination of speech sounds.
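The internal-consistency statistic reported above (Cronbach's α=0.84) can be computed from an items-by-participants score matrix. The following is a minimal sketch with invented toy data; the function and variable names are illustrative, not from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists.

    items: one inner list per test item, each containing one score per
    participant (all inner lists the same length).
    """
    k = len(items)                      # number of items
    n = len(items[0])                   # number of participants

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    # total score per participant across all items
    totals = [sum(items[i][p] for i in range(k)) for p in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))
```

Perfectly covarying items yield α = 1.0, while an item with no variance contributes nothing to the total-score variance and drags α down, which is the logic behind deleting the 20 low-consistency pairs.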
A selective impairment of perception of sound motion direction in peripheral space: A case study.
Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C
2016-01-08
It is still an open question whether the auditory system, like the visual system, processes auditory motion independently of other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion, and perceive motion direction in both central (straight ahead) and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary
2012-01-01
We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function, and with data from other modalities. PMID:23227019
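Several of the studies collected here rely on adaptive staircase procedures like the one named above. As a minimal sketch (not the authors' actual protocol), a 2-down/1-up staircase lowers the stimulus level after two consecutive correct responses and raises it after an error, converging near the 70.7%-correct point. The simulated observer below is a toy: it answers correctly whenever the level exceeds an assumed true threshold plus Gaussian noise.

```python
import random

def staircase(threshold, start=30.0, step=2.0, n_trials=200):
    """Simulate a 2-down/1-up adaptive staircase run.

    threshold: the toy observer's true threshold (hypothetical).
    Returns the list of stimulus levels presented on each trial;
    the tail of the list hovers near the ~70.7%-correct level.
    """
    level = start
    correct_streak = 0
    levels = []
    for _ in range(n_trials):
        levels.append(level)
        # toy observer: correct if level clears threshold plus unit noise
        correct = level + random.gauss(0.0, 1.0) > threshold
        if correct:
            correct_streak += 1
            if correct_streak == 2:   # two in a row -> make task harder
                level -= step
                correct_streak = 0
        else:                         # one error -> make task easier
            level += step
            correct_streak = 0
    return levels
```

Averaging the levels over the final reversals (or the final trials, as a rough proxy) then gives the threshold estimate.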
Albouy, Philippe; Cousineau, Marion; Caclin, Anne; Tillmann, Barbara; Peretz, Isabelle
2016-01-06
Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia or specific language impairment might be a low-level sensory dysfunction. In the present study we test this hypothesis in congenital amusia, a neurodevelopmental disorder characterized by severe deficits in the processing of pitch-based material. We manipulated the temporal characteristics of auditory stimuli and investigated the influence of the time given to encode pitch information on participants' performance in discrimination and short-term memory. Our results show that amusics' performance in such tasks scales with the duration available to encode acoustic information. This suggests that in auditory neurodevelopmental disorders, abnormalities in early steps of auditory processing can underlie the high-level deficits (here, musical disabilities). The observation that slowing down temporal dynamics improves amusics' pitch abilities suggests that this approach could serve as a potential tool for remediation in developmental auditory disorders.
Auditory and tactile gap discrimination by observers with normal and impaired hearing.
Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J
2014-02-01
Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
Abnormal frequency discrimination in children with SLI as indexed by mismatch negativity (MMN).
Rinker, Tanja; Kohls, Gregor; Richter, Cathrin; Maas, Verena; Schulz, Eberhard; Schecker, Michael
2007-02-14
For several decades, the aetiology of specific language impairment (SLI) has been associated with a central auditory processing deficit disrupting the normal language development of affected children. One important aspect for language acquisition is the discrimination of different acoustic features, such as frequency information. Concerning SLI, studies to date that examined frequency discrimination abilities have been contradictory. We hypothesized that an auditory processing deficit in children with SLI depends on the frequency range and the difference between the tones used. Using a passive mismatch negativity (MMN) design, 13 boys with SLI and 13 age- and IQ-matched controls (7-11 years) were tested with two sine tones of different frequency (700 Hz versus 750 Hz). Reversed hemispheric activity between groups indicated abnormal processing in SLI. In a second time window, MMN2 was absent for the children with SLI. It can therefore be assumed that a frequency discrimination deficit in children with SLI becomes particularly apparent for tones below 750 Hz and for a frequency difference of 50 Hz. This finding may have important implications for future research and integration of various research approaches.
ERIC Educational Resources Information Center
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Ojala, Pauliina; Huotilainen, Minna
2014-01-01
Adult musicians show superior auditory discrimination skills when compared to non-musicians. The enhanced auditory skills of musicians are reflected in the augmented amplitudes of their auditory event-related potential (ERP) responses. In the current study, we investigated longitudinally the development of auditory discrimination skills in…
Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.
Jain, Chandni; Sahoo, Jitesh Prasad
Tinnitus is the perception of a sound without an external source. It can affect auditory perception abilities in individuals with normal hearing sensitivity. The aim of the study was to determine the effect of tinnitus on psychoacoustic abilities in individuals with normal hearing sensitivity. The study was conducted on twenty subjects with tinnitus and twenty subjects without tinnitus. The tinnitus group was further divided into mild and moderate tinnitus based on the Tinnitus Handicap Inventory. Differential limen of intensity, differential limen of frequency, gap detection, and modulation detection thresholds were measured using the mlp toolbox in MATLAB, and speech-in-noise testing was done with the Quick SIN in Kannada. Results showed that the clinical group performed poorly on all the tests except differential limen of intensity. Tinnitus affects aspects of auditory perception such as temporal resolution, speech perception in noise, and frequency discrimination in individuals with normal hearing. This could be due to subtle changes in the central auditory system that are not reflected in the pure-tone audiogram.
Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination
ERIC Educational Resources Information Center
Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.
2005-01-01
Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…
Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease.
Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A
2014-01-01
It is well established that auditory cueing improves gait in patients with idiopathic Parkinson's disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients' performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.
Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti
2014-03-01
To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. 
Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction: Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective: The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method: Seventeen normal-hearing individuals participated in the study with informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure-tone stimuli, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Results: MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion: The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
Multisensory Cues Capture Spatial Attention Regardless of Perceptual Load
ERIC Educational Resources Information Center
Santangelo, Valerio; Spence, Charles
2007-01-01
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…
The ability for cocaine and cocaine-associated cues to compete for attention
Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin
2017-01-01
In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so that they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an intermittent-access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441
Developmental hearing loss impedes auditory task learning and performance in gerbils
von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N.; Sanes, Dan H.
2016-01-01
The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal-hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and to withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning and increase the risk of poor performance on a temporal task. PMID:27746215
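The "signal detection theory framework" mentioned above typically reduces Go/Nogo trial counts to a sensitivity index d′ (z-transformed hit rate minus z-transformed false-alarm rate). The sketch below is a generic illustration, not the study's analysis code; the 1/(2N) rate correction is one common convention, assumed here since the abstract does not specify one.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from Go/Nogo trial counts.

    Rates are clamped into (0, 1) with a 1/(2N) correction so the
    inverse normal CDF stays finite at perfect or zero performance.
    """
    n_go = hits + misses
    n_nogo = false_alarms + correct_rejections
    lo_go, lo_nogo = 1 / (2 * n_go), 1 / (2 * n_nogo)
    hit_rate = min(max(hits / n_go, lo_go), 1 - lo_go)
    fa_rate = min(max(false_alarms / n_nogo, lo_nogo), 1 - lo_nogo)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, 75% hits against 25% false alarms gives d′ ≈ 1.35, while equal hit and false-alarm rates give d′ = 0 (no discrimination).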
Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan
2014-01-01
This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination (ID) task was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet each individual dyslexic reader does not display a clear pattern of deficiencies across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors are expressed and interact with environmental and higher-order cognitive influences. PMID:25071512
Binaural speech processing in individuals with auditory neuropathy.
Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B
2012-12-13
Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. 
Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training
Zarate, Jean Mary; Delhommeau, Karine; Wood, Sean; Zatorre, Robert J.
2010-01-01
Background Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. Methodology/Principal Findings We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Conclusions/Significance Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production. PMID:20567521
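The micromelody stimuli described above (melodic intervals smaller than a semitone) can be sketched numerically with the standard cents-to-frequency-ratio formula. The base frequency, contour, and 25-cent step below are illustrative assumptions, not values taken from the study.

```python
# Sketch: building a "micromelody" from an interval scale given in cents
# (100 cents = 1 semitone). The 25-cent step, 440 Hz base, and contour
# are hypothetical values for illustration.

def interval_to_ratio(cents: float) -> float:
    """Convert a musical interval in cents to a frequency ratio."""
    return 2.0 ** (cents / 1200.0)

def micromelody(base_hz: float, steps: list, step_cents: float) -> list:
    """Note frequencies of a melody whose scale degrees are step_cents apart."""
    return [base_hz * interval_to_ratio(step_cents * s) for s in steps]

# A five-note contour on a quarter-semitone (25-cent) scale:
notes = micromelody(440.0, [0, 1, 3, 2, 0], 25.0)
```

With a 25-cent scale the largest excursion here (3 steps) is still under a semitone, which is what makes the discrimination task hard for untrained listeners.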
Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan
2015-11-01
There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.
Peter, Varghese; Wong, Kogo; Narne, Vijaya Kumar; Sharma, Mridula; Purdy, Suzanne C; McMahon, Catherine
2014-02-01
There are many clinically available tests for the assessment of auditory processing skills in children and adults. However, limited data are available on maturational effects on performance on these tests. The current study investigated maturational effects on auditory processing abilities using three psychophysical measures: temporal modulation transfer function (TMTF), iterated ripple noise (IRN) perception, and spectral ripple discrimination (SRD). A cross-sectional study. Three groups of subjects were tested: 10 adults (18-30 yr), 10 older children (12-18 yr), and 10 young children (8-11 yr). Temporal envelope processing was measured by obtaining thresholds for amplitude modulation detection as a function of modulation frequency (TMTF; 4, 8, 16, 32, 64, and 128 Hz). Temporal fine structure processing was measured using IRN, and spectral processing was measured using SRD. The results showed that young children had significantly higher modulation thresholds at 4 Hz (TMTF) compared to the other two groups and poorer SRD scores compared to adults. The results on IRN did not differ across groups. The results suggest that different aspects of auditory processing mature at different age periods, and these maturational effects need to be considered while assessing auditory processing in children. American Academy of Audiology.
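The TMTF measurement above rests on a simple stimulus: noise whose amplitude is sinusoidally modulated, with the detection threshold defined as the smallest modulation depth a listener can distinguish from unmodulated noise. A minimal sketch of such a stimulus follows; the sampling rate, duration, and depth are illustrative assumptions, not parameters from the study.

```python
# Sketch of a TMTF-style stimulus: Gaussian noise carrier with sinusoidal
# amplitude modulation. Parameter values are illustrative only.
import math
import random

def am_noise(duration_s=1.0, fs=16000, mod_hz=4.0, depth=0.5, seed=0):
    """Return samples of noise modulated by (1 + depth * sin(2*pi*f*t))."""
    rng = random.Random(seed)
    n = int(duration_s * fs)
    return [
        rng.gauss(0.0, 1.0)
        * (1.0 + depth * math.sin(2 * math.pi * mod_hz * t / fs))
        for t in range(n)
    ]

stimulus = am_noise(mod_hz=4.0, depth=0.5)  # one 4-Hz-modulated token
```

Sweeping `mod_hz` over 4-128 Hz and tracking the threshold `depth` at each rate traces out the modulation transfer function the study measured.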
Auditory Displays
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to ... pertaining to the major areas of auditory processing in humans. The areas covered in the reviews presented here are monaural and binaural signal detection ...
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear along with normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
ERIC Educational Resources Information Center
Flom, Ross; Bahrick, Lorraine E.
2007-01-01
This research examined the developmental course of infants' ability to perceive affect in bimodal (audiovisual) and unimodal (auditory and visual) displays of a woman speaking. According to the intersensory redundancy hypothesis (L. E. Bahrick, R. Lickliter, & R. Flom, 2004), detection of amodal properties is facilitated in multimodal stimulation…
2012-03-01
... of gunfire interrupting the relative quiet of the countryside, or a sudden reduction in typical city noise, a change in the soundscape serves as an ... in realistic soundscapes. Thereby, the Soldiers' abilities to detect, discriminate, localize, and track can be accurately measured in a controlled ...
Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).
Tierney, Adam; Kraus, Nina
2014-01-01
Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.
Auditory Confrontation Naming in Alzheimer’s Disease
Brandt, Jason; Bakker, Arnold; Maroof, David Aaron
2010-01-01
Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
2016-12-01
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Infant discrimination of rapid auditory cues predicts later language impairment.
Benasich, April A; Tallal, Paula
2002-10-17
The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, implicated in such impairments likely occurs early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history of language impairment. Initial assessments were obtained when infants were 6-9 months of age (M=7.5 months), and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues as well as global processing speed and memory were assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and being male together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development. 
Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.
Speech processing in children with functional articulation disorders.
Gósy, Mária; Horváth, Viktória
2015-03-01
This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.
Auditory stream segregation in children with Asperger syndrome
Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.
2009-01-01
Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798
Cognitive abilities relate to self-reported hearing disability.
Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E
2013-10-01
In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
Developmental hearing loss impedes auditory task learning and performance in gerbils.
von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N; Sanes, Dan H
2017-04-01
The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning, and increase the risk of poor performance on a temporal task. Copyright © 2016 Elsevier B.V. All rights reserved.
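In a Go/Nogo task like the one above, a signal detection theory analysis typically reduces hit and false-alarm rates to the sensitivity index d'. A minimal sketch follows; the trial counts are made up for illustration, and the 1/(2N) rate correction is one standard convention (the study's exact correction is not stated in the abstract).

```python
# Sketch: d' (sensitivity) from Go/Nogo hit and false-alarm counts.
# d' = z(hit rate) - z(false-alarm rate); rates are clipped away from
# 0 and 1 by 1/(2N) so the inverse normal CDF stays finite.
from statistics import NormalDist

def d_prime(hits, go_trials, false_alarms, nogo_trials):
    """Return d' for the given Go/Nogo counts."""
    h = min(max(hits / go_trials, 1 / (2 * go_trials)),
            1 - 1 / (2 * go_trials))
    fa = min(max(false_alarms / nogo_trials, 1 / (2 * nogo_trials)),
             1 - 1 / (2 * nogo_trials))
    z = NormalDist().inv_cdf
    return z(h) - z(fa)

d = d_prime(45, 50, 10, 50)  # 90% hits, 20% false alarms -> d' ~ 2.1
```

Tracking d' across testing blocks is what lets acquisition rate and asymptotic performance be compared between the CHL and control groups on a common scale.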
Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.
Kim, Woong Bin; Cho, Jun-Hyeong
2017-08-30
In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.
Speech training alters consonant and vowel responses in multiple auditory cortex fields
Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.
2015-01-01
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927
Dolivo, Vassilissa; Taborsky, Michael
2017-05-01
Sensory modalities individuals use to obtain information from the environment differ among conspecifics. The relative contributions of genetic divergence and environmental plasticity to this variance remain yet unclear. Numerous studies have shown that specific sensory enrichments or impoverishments at the postnatal stage can shape neural development, with potential lifelong effects. For species capable of adjusting to novel environments, specific sensory stimulation at a later life stage could also induce specific long-lasting behavioral effects. To test this possibility, we enriched young adult Norway rats with either visual, auditory, or olfactory cues. Four to 8 months after the enrichment period we tested each rat for their learning ability in 3 two-choice discrimination tasks, involving either visual, auditory, or olfactory stimulus discrimination, in a full factorial design. No sensory modality was more relevant than others for the proposed task per se, but rats performed better when tested in the modality for which they had been enriched. This shows that specific environmental conditions encountered during early adulthood have specific long-lasting effects on the learning abilities of rats. Furthermore, we disentangled the relative contributions of genetic and environmental causes of the response. The reaction norms of learning abilities in relation to the stimulus modality did not differ between families, so interindividual divergence was mainly driven by environmental rather than genetic factors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Language discrimination without language: Experiments on tamarin monkeys
NASA Astrophysics Data System (ADS)
Tincoff, Ruth; Hauser, Marc; Spaepen, Geertrui; Tsao, Fritz; Mehler, Jacques
2002-05-01
Human newborns can discriminate spoken languages differing on prosodic characteristics such as the timing of rhythmic units [T. Nazzi et al., JEP:HPP 24, 756-766 (1998)]. Cotton-top tamarins have also demonstrated a similar ability to discriminate a mora-timed (Japanese) vs a stress-timed (Dutch) language [F. Ramus et al., Science 288, 349-351 (2000)]. The finding that tamarins succeed in this task when either natural or synthesized utterances are played in a forward direction, but fail on backward utterances which disrupt the rhythmic cues, suggests that sensitivity to language rhythm may rely on general processes of the primate auditory system. However, the rhythm hypothesis also predicts that tamarins would fail to discriminate languages from the same rhythm class, such as English and Dutch. To assess the robustness of this ability, tamarins were tested on a different-rhythm-class distinction, Polish vs Japanese, and a new same-rhythm-class distinction, English vs Dutch. The stimuli were natural forward utterances produced by multiple speakers. As predicted by the rhythm hypothesis, tamarins discriminated between Polish and Japanese, but not English and Dutch. These findings strengthen the claim that discriminating the rhythmic cues of language does not require mechanisms specialized for human speech. [Work supported by NSF.]
Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind
Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.
2011-01-01
We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups showed cerebral blood flow increases in the occipital cortex; but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. 
Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in the occipital cortex. PMID:21716600
Interactions of cognitive and auditory abilities in congenitally blind individuals.
Rokem, Ariel; Ahissar, Merav
2009-02-01
Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured by resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal to noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not possess any advantage in cognitive executive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding, rather than from superiority at subsequent processing stages.
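Fixing each listener's signal-to-noise ratio at a constant percent-correct level, as above, is commonly done with an adaptive staircase. A minimal sketch of a 3-down/1-up rule follows (this rule converges near 79% correct, close to the 80% point used); the step size and starting SNR are illustrative assumptions, and the authors' exact adaptive procedure is not specified in the abstract.

```python
# Sketch: one trial of a 3-down/1-up adaptive staircase for setting SNR.
# Three consecutive correct responses make the task harder (lower SNR);
# any error makes it easier (higher SNR). Step size is illustrative.

def staircase_step(snr_db, correct_streak, correct, step_db=2.0):
    """Return (new_snr_db, new_streak) after one trial."""
    if correct:
        correct_streak += 1
        if correct_streak == 3:          # three in a row -> harder
            return snr_db - step_db, 0
        return snr_db, correct_streak
    return snr_db + step_db, 0           # any error -> easier

snr, streak = 0.0, 0
for was_correct in [True, True, True, False, True, True, True]:
    snr, streak = staircase_step(snr, streak, was_correct)
# sequence: -2 dB (3 correct), +2 dB (error), -2 dB (3 correct) -> -2.0 dB
```

Averaging the SNRs at the staircase's reversal points gives the level to use for the subsequent memory-span testing, so that all participants perform at the same perceptual difficulty.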
ERIC Educational Resources Information Center
Behrmann, Polly; Millman, Joan
The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…
English Auditory Discrimination Skills of Spanish-Speaking Children.
ERIC Educational Resources Information Center
Kramer, Virginia Reyes; Schell, Leo M.
1982-01-01
Eighteen Mexican American pupils in the grades 1-3 from two urban Kansas schools were tested, using 18 pairs of sound contrasts, for auditory discrimination problems related to their language-different background. Results showed v-b, ch-sh, and s-sp contrasts were the most difficult for subjects to discriminate. (LC)
Convergent-Discriminant Validity of the Jewish Employment Vocational System (JEVS).
ERIC Educational Resources Information Center
Tryjankowski, Elaine M.
This study investigated the construct validity of five perceptual traits (auditory discrimination, visual discrimination, visual memory, visual-motor coordination, and auditory to visual-motor coordination) with five simulated work samples (union assembly, resistor reading, budgette assembly, lock assembly, and nail and screw sort) from the Jewish…
Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T
2006-10-01
Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.
Morgan, Simeon J; Paolini, Antonio G
2012-06-06
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of an implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques, such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measures over repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding of freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown.
Subsequent implantation of brain electrodes into the cochlear nucleus, guided by monitoring of neural responses to acoustic stimuli, and fixation of the electrode in place for chronic use are likewise shown.
ERIC Educational Resources Information Center
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 27 titles deal with a variety of topics, including the following: facilitation of language development in disadvantaged preschool children; auditory-visual discrimination skills, language performance, and development of manual…
Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.
Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R
2003-05-01
A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16- or 32-channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with characteristics similar to those evoked by normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means for a graded information transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
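The 50% point the abstract describes is the current level at which an intensity generalization gradient crosses 50% detection probability. A minimal sketch of estimating it by linear interpolation between bracketing probe levels, using entirely hypothetical probe data (the function name and values are illustrative, not from the study):

```python
def fifty_percent_point(levels, p_detect, criterion=0.5):
    """Linearly interpolate the stimulus level at which the
    detection-probability gradient crosses `criterion` (e.g., 0.5)."""
    pairs = list(zip(levels, p_detect))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            # interpolate between the two probes that bracket the criterion
            return x0 + (criterion - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("criterion not crossed within the probed range")

# Hypothetical gradient: probe current (microA) vs. detection probability
currents = [0, 18, 36, 54, 72, 90]
p = [0.02, 0.10, 0.35, 0.70, 0.90, 0.98]
print(fifty_percent_point(currents, p))
```

With these made-up probabilities the criterion falls between the 36 and 54 microA probes, so the estimate lands partway between them; a sigmoid (e.g., logistic) fit would be the more common choice when trial counts per probe are small.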
Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat
2016-01-01
To assess discrimination of lexical stress pattern in infants with cochlear implant (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group of infants included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/).
(1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress patterns in infants with CI was reduced compared with that of infants with NH, (3) both groups showed directional asymmetry in discrimination, that is, increased discrimination from the uncommon to the common stress pattern in Hebrew (/dóti/ versus /dotí/) compared with the reversed condition. The CI device transmitted sufficient acoustic information (amplitude, duration, and fundamental frequency) to allow discrimination between stress patterns in young hearing-impaired infants with CI. The present pattern of results supports a discrimination model in which both auditory capabilities and "top-down" interactions are involved. That is, the CI infants detected changes between stressed and unstressed syllables, after which they developed a bias for the more common weak-strong stress pattern in Hebrew. The latter suggests that infants with CI were able to extract the statistical distribution of stress patterns by listening to the ambient language even after limited auditory experience with the CI device. To conclude, in relation to processing of lexical stress patterns, infants with CI followed similar developmental milestones as hearing infants, thus establishing important prerequisites for early language acquisition.
Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf
2012-08-01
This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.
Visual processing affects the neural basis of auditory discrimination.
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
2008-12-01
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that the visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease
NASA Astrophysics Data System (ADS)
Richardson, Kelly C.
Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task in which they were instructed to rate the loudness level of each signal intensity using a computerized 150-mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls were found to rate most stimuli along the continuum as significantly louder than the older controls and the individuals with PD did. Conclusions: The individuals with PD showed evidence of impaired auditory perception of intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
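The d' index used above is the standard signal-detection sensitivity measure: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch, with hypothetical trial counts (not the study's data), using only Python's standard library:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when an observed rate would be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from one discrimination block: d' well above chance
print(round(d_prime(45, 5, 10, 40), 2))
```

Because d' is computed from both hit and false-alarm rates, it separates sensitivity from response bias, which is why the abstract describes it as bias-free.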
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-12-20
The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) completed a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found to be beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
Mendez, M F
2001-02-01
After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.
Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija
2014-09-01
Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], which are critical in speech perception and language development, were compared between CWS and TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the Mismatch Negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations on stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve
2018-01-01
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…
Baguley, D M; Bird, J; Humphriss, R L; Prevost, A T
2006-02-01
Acquired unilateral sensorineural hearing loss reduces the ability to localize sounds and to discriminate in background noise. Four controlled trials attempted to determine the benefit of contralateral bone-anchored hearing aids over contralateral routing of signal (CROS) hearing aids and over the unaided condition. All found no significant improvement in auditory localization with either aid. Speech discrimination in noise and subjective questionnaire measures of auditory abilities showed an advantage for bone-anchored hearing aid (BAHA) > CROS > unaided conditions. All four studies have material shortfalls: (i) the BAHA was always trialled after the CROS aid; (ii) CROS aids were only trialled for 4 weeks; (iii) none used any measure of hearing handicap when selecting subjects; (iv) two studies have a bias in terms of patient selection; (v) all studies were underpowered; (vi) double reporting of patients occurred. There is a paucity of evidence to support the efficacy of BAHA in the treatment of acquired unilateral sensorineural hearing loss. Clinicians should proceed with caution and perhaps await a larger randomized trial. It is perhaps only appropriate to insert a BAHA peg at the time of vestibular schwannoma tumour excision in patients with good preoperative hearing, as their hearing handicap increases most.
Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers
Tervaniemi, Mari; Aalto, Daniel
2018-01-01
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level, or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756
Effect of musical training on static and dynamic measures of spectral-pattern discrimination.
Sheft, Stanley; Smayda, Kirsten; Shafiro, Valeriy; Maddox, W Todd; Chandrasekaran, Bharath
2013-06-01
Both behavioral and physiological studies have demonstrated enhanced processing of speech in challenging listening environments attributable to musical training. The relationship, however, of this benefit to auditory abilities as assessed by psychoacoustic measures remains unclear. Using tasks previously shown to relate to speech-in-noise perception, the present study evaluated discrimination ability for static and dynamic spectral patterns by 49 listeners grouped as either musicians or nonmusicians. The two static conditions measured the ability to detect a change in the phase of a logarithmic sinusoidal spectral ripple of wideband noise with ripple densities of 1.5 and 3.0 cycles per octave chosen to emphasize either timbre or pitch distinctions, respectively. The dynamic conditions assessed temporal-pattern discrimination of 1-kHz pure tones frequency modulated by different lowpass noise samples with thresholds estimated in terms of either stimulus duration or signal-to-noise ratio. Musicians performed significantly better than nonmusicians on all four tasks. Discriminant analysis showed that group membership was correctly predicted for 88% of the listeners with the structure coefficient of each measure greater than 0.51. Results suggest that enhanced processing of static and dynamic spectral patterns defined by low-rate modulation may contribute to the relationship between musical training and speech-in-noise perception. [Supported by NIH.].
Canopoli, Alessandro; Herbst, Joshua A; Hahnloser, Richard H R
2014-05-14
Many animals exhibit flexible behaviors that they can adjust to increase reward or avoid harm (learning by positive or aversive reinforcement). But what neural mechanisms allow them to restore their original behavior (motor program) after reinforcement is withdrawn? One possibility is that motor restoration relies on brain areas that have a role in memorization but no role in either motor production or in sensory processing relevant for expressing the behavior and its refinement. We investigated the role of a higher auditory brain area in the songbird for modifying and restoring the stereotyped adult song. We exposed zebra finches to aversively reinforcing white noise stimuli contingent on the pitch of one of their stereotyped song syllables. In response, birds significantly changed the pitch of that syllable to avoid the aversive reinforcer. After we withdrew reinforcement, birds recovered their original song within a few days. However, we found that large bilateral lesions in the caudal medial nidopallium (NCM, a high auditory area) impaired recovery of the original pitch even several weeks after withdrawal of the reinforcing stimuli. Because NCM lesions spared both successful noise-avoidance behavior and birds' auditory discrimination ability, our results show that NCM is not needed for directed motor changes or for auditory discriminative processing, but is implicated in memorizing or recalling the memory of the recent song target. Copyright © 2014 the authors 0270-6474/14/347018-09$15.00/0.
Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria
2007-01-01
Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
ERIC Educational Resources Information Center
Kudoh, Masaharu; Shibuki, Katsuei
2006-01-01
We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…
Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.
Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik
2014-01-01
Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.
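The phenotypic correlations reported above (Pearson r between 0.23 and 0.42) are product-moment correlations between task scores. A minimal sketch of the computation, with hypothetical scores invented purely for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # covariance numerator and the two standard-deviation denominators
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical rhythm- and melody-discrimination scores for six participants
rhythm = [12, 15, 9, 20, 17, 11]
melody = [14, 13, 10, 18, 19, 12]
print(round(pearson_r(rhythm, melody), 2))
```

Note that such phenotypic correlations say nothing by themselves about genetic versus environmental origins; partitioning the covariance, as the study does, requires a genetically informative design such as twin modelling.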
Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Dénes
2011-01-01
Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the speed of the rate of change of frequency information (formant transition duration) versus the speed of the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004), and suggest new avenues for the remediation of developmental dyslexia. © 2010 Blackwell Publishing Ltd.
A Perceptuo-Cognitive-Motor Approach to the Special Child.
ERIC Educational Resources Information Center
Kornblum, Rena Beth
A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…
Tervaniemi, Mari; Sannemann, Christian; Noyranen, Maiju; Salonen, Johanna; Pihko, Elina
2011-08-01
The brain basis behind musical competence in its various forms is not yet known. To determine the pattern of hemispheric lateralization during sound-change discrimination, we recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) responses in professional musicians, musical participants (with high scores in the musicality tests but without professional training in music) and non-musicians. While watching a silenced video, they were presented with short sounds with frequency and duration deviants and C major chords with C minor chords as deviants. MMNm to chord deviants was stronger in both musicians and musical participants than in non-musicians, particularly in their left hemisphere. No group differences were obtained in the MMNm strength in the right hemisphere in any of the conditions or in the left hemisphere in the case of frequency or duration deviants. Thus, in addition to professional training in music, musical aptitude (combined with lower-level musical training) is also reflected in brain functioning related to sound discrimination. The present magnetoencephalographic evidence therefore indicates that the sound discrimination abilities may be differentially distributed in the brain in musically competent and naïve participants, especially in a musical context established by chord stimuli: the higher forms of musical competence engage both auditory cortices in an integrative manner. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco
2008-04-01
Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.
Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.
Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi
2015-08-01
To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD), we studied three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure-tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds, environmental sounds, and sound lateralization, and strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRI of the three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected of having hearing impairment but show no abnormalities in pure-tone audiometry and/or ABR, the condition should not immediately be diagnosed as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Auditory-Cortex Short-Term Plasticity Induced by Selective Attention
Jääskeläinen, Iiro P.; Ahveninen, Jyrki
2014-01-01
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, "short-term plasticity", might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex, and possibly at even earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458
Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta
2014-03-15
Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study was aimed at improving auditory comprehension in aphasic patients following specific training in the perception of temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed by the Token Test, phonemic awareness and Voice-Onset-Time Test. The TO perception was assessed using auditory Temporal-Order-Threshold, defined as the shortest interval between two consecutive stimuli, necessary to report correctly their before-after relation. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11) aimed to improve sequencing abilities, or control non-temporal training (NT, n=7) focussed on volume discrimination. The TT yielded improved TO perception; moreover, a transfer of improvement was observed from the time domain to the language domain, which was untrained during the training. The NT did not improve either the TO perception or comprehension in any language test. These results are in agreement with previous literature studies which proved ameliorated language competency following the TT in language-learning-impaired or dyslexic children. Our results indicated for the first time such benefits also in aphasic patients. Copyright © 2013 Elsevier B.V. All rights reserved.
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Non-sensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed-context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable-context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed-context condition in which the target interval was presented repeatedly across trials, and two variable-context conditions differing in the frequencies of the tones marking the temporal intervals. Musicians outperformed non-musicians in all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
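Discrimination thresholds of the kind compared in these experiments are typically estimated with an adaptive staircase. The sketch below (a generic illustration, not the authors' actual procedure; the start level, step size, and units are hypothetical) implements the common two-down/one-up rule, which converges on the ~70.7%-correct point of the psychometric function:

```python
import statistics

def staircase_threshold(respond, start=50.0, step=4.0, n_reversals=8):
    """Two-down/one-up adaptive staircase.
    `respond(level)` returns True if the (simulated) listener answers
    correctly at that stimulus level. The threshold estimate is the mean
    of the levels at which the track reversed direction."""
    level, streak, direction = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:                 # two correct in a row -> harder
                streak = 0
                if direction == 'up':       # track turned: record reversal
                    reversals.append(level)
                direction = 'down'
                level -= step
        else:                               # one error -> easier
            streak = 0
            if direction == 'down':         # track turned: record reversal
                reversals.append(level)
            direction = 'up'
            level += step
    return statistics.mean(reversals)
```

With an idealized observer who responds correctly whenever the level is at least 20, the track oscillates around that value and the estimate settles at 20.0.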
Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent
2010-07-01
Persons with autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not those with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bornstein, Joan L.
The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…
Odour discrimination and identification are improved in early blindness.
Cuevas, Isabel; Plaza, Paula; Rombaux, Philippe; De Volder, Anne G; Renier, Laurent
2009-12-01
Previous studies have shown that early-blind humans develop superior abilities in the use of their remaining senses, hypothetically due to a functional reorganization of the deprived visual brain areas. While auditory and tactile functions have long been investigated, little is known about the effects of early visual deprivation on olfactory processing, even though blind humans make extensive use of olfactory information in their daily lives. Here we investigated olfactory discrimination and identification abilities in early-blind subjects and age-matched sighted controls. Three levels of cuing were used in the identification task: free identification (no cue), categorization (semantic cues), and multiple choice (semantic and phonological cues). Early-blind subjects significantly outperformed the controls in odour discrimination, free identification, and categorization. In addition, the largest group difference was observed in the free-identification condition as compared to the categorization and multiple-choice conditions. This indicated that better access to semantic information from odour perception accounted for part of the improved performance in odour identification in the blind. We concluded that early-blind subjects have both improved perceptual abilities and better access to information stored in semantic memory than sighted subjects.
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Auditory cortical change detection in adults with Asperger syndrome.
Lepistö, Tuulia; Nieminen-von Wendt, Taina; von Wendt, Lennart; Näätänen, Risto; Kujala, Teija
2007-03-06
The present study investigated whether auditory deficits reported in children with Asperger syndrome (AS) are also present in adulthood. To this end, event-related potentials (ERPs) were recorded from adults with AS for duration, pitch, and phonetic changes in vowels, and for acoustically matched non-speech stimuli. These subjects had enhanced mismatch negativity (MMN) amplitudes particularly for pitch and duration deviants, indicating enhanced sound-discrimination abilities. Furthermore, as reflected by the P3a, their involuntary orienting was enhanced for changes in non-speech sounds, but tended to be deficient for changes in speech sounds. The results are consistent with those reported earlier in children with AS, except for the duration-MMN, which was diminished in children and enhanced in adults.
Evaluation protocol for amusia: Portuguese sample.
Peixoto, Maria Conceição; Martins, Jorge; Teixeira, Pedro; Alves, Marisa; Bastos, José; Ribeiro, Carlos
2012-12-01
Amusia is a disorder that affects the processing of music. Part of this processing happens in the primary auditory cortex. The study of this condition allows us to evaluate the central auditory pathways. Our aim was to explore diagnostic evaluation tests for amusia. The authors propose an evaluation protocol for patients with suspected amusia (after brain injury or complaints of poor musical perception), in parallel with the assessment of central auditory processing already implemented in the department. The Montreal Battery of Evaluation of Amusia was the basis for the selection of the tests. From this comprehensive battery we selected musical examples to evaluate different musical aspects, including memory and perception of music and the ability to recognize and discriminate musical material. For memory, there is a test assessing delayed memory, adapted to the Portuguese culture. Prospective study. Although still experimental, with the possibility of adjustments in the assessment, we believe that this assessment, combined with the study of central auditory processing, will allow us to understand central lesions and congenital or acquired limitations of hearing perception.
Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams.
Centanni, Tracy Michelle; Booker, Anne B; Chen, Fuyi; Sloan, Andrew M; Carraway, Ryan S; Rennaker, Robert L; LoTurco, Joseph J; Kilgard, Michael P
2016-04-27
Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. Rats were subjected in utero to RNA interference targeting the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds in a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts with earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate dyslexia gene causes deficits on tasks of rapid stimulus processing.
These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population. Copyright © 2016 the authors 0270-6474/16/364895-12$15.00/0.
Auditory-Motor Processing of Speech Sounds
Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.
2013-01-01
The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846
Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M
2000-01-01
Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFMs yielded better classification results than the DA methods. Subsequently, measures from another 37 subjects, unknown to the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
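As a rough illustration of the SOFM idea used here (a toy sketch in plain Python, not the configuration or data from the study; unit count, learning-rate, and neighborhood parameters are arbitrary), a one-dimensional Kohonen map pulls its units toward the input distribution during training and then assigns each feature vector to its best-matching unit:

```python
import math
import random

def best_matching_unit(w, x):
    """Index of the unit whose weight vector is closest (squared
    Euclidean distance) to input vector x."""
    return min(range(len(w)),
               key=lambda u: sum((wi - xi) ** 2 for wi, xi in zip(w[u], x)))

def train_sofm(data, n_units=4, epochs=60, lr0=0.5, radius0=2.0, seed=1):
    """Toy 1-D self-organizing feature map (Kohonen network).
    Each training step moves the winning unit, and its grid
    neighbors to a lesser degree, toward the current input."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            b = best_matching_unit(w, x)
            for u in range(n_units):
                # Gaussian neighborhood on the 1-D unit grid
                h = math.exp(-((u - b) ** 2) / (2.0 * radius ** 2))
                w[u] = [wi + lr * h * (xi - wi) for wi, xi in zip(w[u], x)]
    return w
```

After training on two well-separated clusters, exemplars from different clusters map to different units, which is the property that makes the map usable as a classifier once units are labeled.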
Adaptive training diminishes distractibility in aging across species.
Mishra, Jyoti; de Villers-Sidani, Etienne; Merzenich, Michael; Gazzaley, Adam
2014-12-03
Aging is associated with deficits in the ability to ignore distractions, which has not yet been remediated by any neurotherapeutic approach. Here, in parallel auditory experiments with older rats and humans, we evaluated a targeted cognitive training approach that adaptively manipulated distractor challenge. Training resulted in enhanced discrimination abilities in the setting of irrelevant information in both species that was driven by selectively diminished distraction-related errors. Neural responses to distractors in auditory cortex were selectively reduced in both species, mimicking the behavioral effects. Sensory receptive fields in trained rats exhibited improved spectral and spatial selectivity. Frontal theta measures of top-down engagement with distractors were selectively restrained in trained humans. Finally, training gains generalized to group and individual level benefits in aspects of working memory and sustained attention. Thus, we demonstrate converging cross-species evidence for training-induced selective plasticity of distractor processing at multiple neural scales, benefitting distractor suppression and cognitive control. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders
ERIC Educational Resources Information Center
Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia
2006-01-01
Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…
Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.
ERIC Educational Resources Information Center
Horowitz, Frances D.
This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Perception of the pitch of unresolved harmonics by 3- and 7-month-old human infants.
Lau, Bonnie K; Werner, Lynne A
2014-08-01
Three-month-olds discriminate resolved harmonic complexes on the basis of missing fundamental (MF) pitch. In view of reported difficulty in discriminating unresolved complexes at 7 months and striking changes in the organization of the auditory system during early infancy, infants' ability to discriminate unresolved complexes is of some interest. This study investigated the ability of 3-month-olds, 7-month-olds, and adults to discriminate the pitch of unresolved harmonic complexes using an observer-based method. Stimuli were MF complexes bandpass filtered with a -12 dB/octave slope, combined in random phase, presented at 70 dB sound pressure level (SPL) for 650 ms with a 50 ms rise/fall, together with pink noise at 65 dB SPL. The conditions were (1) "LOW" unresolved harmonics (2500-4500 Hz) based on MFs of 160 and 200 Hz and (2) "HIGH" unresolved harmonics (4000-6000 Hz) based on MFs of 190 and 200 Hz. To demonstrate MF discrimination, participants had to ignore spectral changes in complexes with the same fundamental and respond only when the fundamental changed. Nearly all infants tested categorized complexes by MF pitch, suggesting discrimination of pitch extracted from unresolved harmonics by 3 months. Adults also categorized the complexes by MF pitch, although musically trained adults were more successful than musically untrained adults.
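The stimulus construction described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the sample rate, harmonic selection, and normalization are assumptions, and the rectangular passband here stands in for the -12 dB/octave filter slopes.

```python
import numpy as np

def mf_complex(f0, band, fs=44100, dur=0.65, rng=None):
    """Missing-fundamental complex: harmonics of f0 that fall inside
    the passband 'band', summed in random phase (fundamental absent)."""
    rng = np.random.default_rng(rng)
    t = np.arange(int(fs * dur)) / fs
    lo, hi = band
    # harmonic numbers whose frequencies lie inside the passband
    ns = [n for n in range(2, int(hi // f0) + 1) if lo <= n * f0 <= hi]
    x = sum(np.sin(2 * np.pi * n * f0 * t + rng.uniform(0, 2 * np.pi))
            for n in ns)
    return x / np.max(np.abs(x))  # normalize; level scaling applied later

# "LOW" condition: harmonics of 160 Hz within 2500-4500 Hz
low = mf_complex(160, (2500, 4500))
```

For the LOW condition this selects harmonics 16 through 28 of a 160-Hz fundamental, so the fundamental itself is never physically present in the waveform.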
Zhang, Y; Li, D D; Chen, X W
2017-06-20
Objective: To compare, in a case-control design, speech discrimination in patients with unilateral microtia and external auditory canal atresia against normal-hearing subjects in quiet and noisy environments, and to provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia combined with external auditory canal atresia and 20 age-matched normal-hearing subjects (control group) were tested. All subjects were assessed with Mandarin speech audiometry material to obtain speech discrimination scores (SDS) in quiet and in noise in the sound field. Result: There was no significant difference in speech discrimination scores between the two groups in quiet. Scores differed significantly when the speech signal was presented to the affected side and noise to the normal side (monosyllables, disyllables and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference when the speech signal was presented to the normal side and noise to the affected side. With signal and noise on the same side, monosyllabic word recognition differed significantly (S/N=0 and S/N=-5) (P<0.05), whereas disyllabic words and sentences showed no significant difference (P>0.05). Conclusion: In noise, the speech discrimination scores of patients with unilateral congenital microtia and external auditory canal atresia are lower than those of normal-hearing subjects. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
Kornilov, Sergey A; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L; Magnuson, James S
2014-01-01
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.
Transformation of temporal sequences in the zebra finch auditory system
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
2016-01-01
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or 'spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
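The mapping from stimuli to "distinct population vectors" is, in decoding terms, a pattern-classification problem. A minimal nearest-centroid sketch (illustrative only, not the authors' analysis; the response patterns are simulated) looks like this:

```python
import numpy as np

def decode(train, labels, probe):
    """Nearest-centroid decoding of population response vectors:
    each stimulus class is summarized by its mean response vector,
    and a held-out response is assigned to the closest centroid."""
    classes = sorted(set(labels))
    cents = {c: np.mean([v for v, l in zip(train, labels) if l == c], axis=0)
             for c in classes}
    return min(classes, key=lambda c: np.linalg.norm(probe - cents[c]))

rng = np.random.default_rng(0)
# two click-sequence classes evoking distinct 10-neuron response patterns
a = rng.normal(0.0, 0.1, (20, 10)) + np.linspace(0, 1, 10)
b = rng.normal(0.0, 0.1, (20, 10)) + np.linspace(1, 0, 10)
train = np.vstack([a, b])
labels = ["A"] * 20 + ["B"] * 20
# a fresh response drawn from class A should decode as "A"
probe = rng.normal(0.0, 0.1, 10) + np.linspace(0, 1, 10)
pred = decode(train, labels, probe)
```

If secondary-area responses to different click sequences form well-separated vectors, a decoder of this kind classifies held-out trials reliably; overlapping vectors would predict the behavioral breakdown seen at longer inter-click intervals.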
Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin
2012-01-01
Objective The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design Data were collected from 18 normal-hearing young adults and 18 participants who were at least 60 years old, nine with normal hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF), both in quiet and with a speech-babble masker present; stimulus duration; and signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results Results showed a significant effect of age, but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for the effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. 
Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may in part relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory processing abilities. PMID:22790319
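A stochastic low-rate FM stimulus of the kind described (a 1-kHz carrier frequency-modulated by 5-Hz lowpass noise, with excursion ΔF) can be sketched as follows. This is a hedged reconstruction: the sample rate, duration, and FFT-based lowpass method are assumptions, not details from the paper.

```python
import numpy as np

def stochastic_fm(delta_f, fs=22050, dur=1.0, cutoff=5.0, fc=1000.0, rng=None):
    """Carrier at fc frequency-modulated by lowpass noise below 'cutoff';
    delta_f sets the peak frequency excursion in Hz."""
    rng = np.random.default_rng(rng)
    n = int(fs * dur)
    # lowpass noise modulator: zero out FFT bins above the cutoff
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[freqs > cutoff] = 0
    mod = np.fft.irfft(spec, n)
    mod /= np.max(np.abs(mod))            # scale modulator to +/-1
    inst_freq = fc + delta_f * mod        # instantaneous frequency in Hz
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

x = stochastic_fm(delta_f=20.0, rng=0)
```

A discrimination trial would then contrast two independent draws of the modulator at the same ΔF; the threshold is the smallest ΔF at which the two patterns remain distinguishable.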
Homeostatic enhancement of sensory transduction
Milewski, Andrew R.; Ó Maoiléidigh, Dáibhid; Salvi, Joshua D.; Hudspeth, A. J.
2017-01-01
Our sense of hearing boasts exquisite sensitivity, precise frequency discrimination, and a broad dynamic range. Experiments and modeling imply, however, that the auditory system achieves this performance for only a narrow range of parameter values. Small changes in these values could compromise hair cells’ ability to detect stimuli. We propose that, rather than exerting tight control over parameters, the auditory system uses a homeostatic mechanism that increases the robustness of its operation to variation in parameter values. To slowly adjust the response to sinusoidal stimulation, the homeostatic mechanism feeds back a rectified version of the hair bundle’s displacement to its adaptation process. When homeostasis is enforced, the range of parameter values for which the sensitivity, tuning sharpness, and dynamic range exceed specified thresholds can increase by more than an order of magnitude. Signatures in the hair cell’s behavior provide a means to determine through experiment whether such a mechanism operates in the auditory system. Robustness of function through homeostasis may be ensured in any system through mechanisms similar to those that we describe here. PMID:28760949
ERIC Educational Resources Information Center
Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.
2005-01-01
It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany
2013-01-01
The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…
Infants' Auditory Enumeration: Evidence for Analog Magnitudes in the Small Number Range
ERIC Educational Resources Information Center
vanMarle, Kristy; Wynn, Karen
2009-01-01
Vigorous debate surrounds the issue of whether infants use different representational mechanisms to discriminate small and large numbers. We report evidence for ratio-dependent performance in infants' discrimination of small numbers of auditory events, suggesting that infants can use analog magnitudes to represent small values, at least in the…
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Satoh, Masayuki; Takeda, Katsuhiko; Kuzuhara, Shigeki
2007-01-01
There is fairly general agreement that melody and rhythm are independent components of music perception. In music theory, melody and harmony determine the tonality to which a piece belongs. Whether tonality is also an independent component of music perception, or a by-product of melody and harmony, remains unsettled. We describe a patient with auditory agnosia and expressive amusia that developed after bilateral infarction of the temporal lobes. We carried out a detailed examination of musical ability in the patient and in control subjects. Compared with the control population, we identified the following impairments in music perception: (a) discrimination of familiar melodies; (b) discrimination of unfamiliar phrases; and (c) discrimination of isolated chords. His performance in pitch discrimination and tonality was within normal limits. Although intrasubject statistical analysis revealed a significant difference only between the tonality task and unfamiliar-phrase performance, comparison with control subjects suggested a dissociation between preserved tonality analysis and impaired perception of melody and chords. Comparing our patient's results with those in the literature suggests a double dissociation between tonality and the other components. Thus, it seems reasonable to suppose that tonality is an independent component of music perception. Based on our present and previous studies, we propose a revised version of the cognitive model of musical processing in the brain. Copyright 2007 S. Karger AG, Basel.
A sound advantage: Increased auditory capacity in autism.
Remington, Anna; Fairnie, Jake
2017-09-01
Autism Spectrum Disorder (ASD) has an intriguing auditory processing profile. Individuals show enhanced pitch discrimination, yet often find seemingly innocuous sounds distressing. This study used two behavioural experiments to examine whether an increased capacity for processing sounds in ASD could underlie both the difficulties and enhanced abilities found in the auditory domain. Autistic and non-autistic young adults performed a set of auditory detection and identification tasks designed to tax processing capacity and establish the extent of perceptual capacity in each population. Tasks were constructed to highlight both the benefits and disadvantages of increased capacity. Autistic people were better at detecting additional unexpected and expected sounds (increased distraction and superior performance respectively). This suggests that they have increased auditory perceptual capacity relative to non-autistic people. This increased capacity may offer an explanation for the auditory superiorities seen in autism (e.g. heightened pitch detection). Somewhat counter-intuitively, this same 'skill' could result in the sensory overload that is often reported, which can subsequently interfere with social communication. Reframing autistic perceptual processing in terms of increased capacity, rather than a filtering deficit or inability to maintain focus, increases our understanding of this complex condition, and has important practical implications that could be used to develop intervention programs to minimise the distress that is often seen in response to sensory stimuli. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Sakurai, Y
2002-01-01
This study reports how individual hippocampal cells and cell assemblies cooperate in the neural coding of pitch and temporal information in memory processes for auditory stimuli. Each rat performed two tasks, one requiring discrimination of auditory pitch (high or low) and the other requiring discrimination of duration (long or short). Some CA1 and CA3 complex-spike neurons showed task-related differential activity between the high and low tones only in the pitch-discrimination task. However, without exception, neurons which showed task-related differential activity between the long and short tones in the duration-discrimination task were always task-related neurons in the pitch-discrimination task. These results suggest that temporal information (long or short), in contrast to pitch information (high or low), cannot be coded independently by specific neurons. The results also indicate that the two behavioral tasks cannot be fully differentiated by task-related single neurons alone, and suggest a model of cell-assembly coding of the tasks. Cross-correlation analysis among the activities of simultaneously recorded multiple neurons supported the suggested cell-assembly model. Considering these results, this study concludes that dual coding by hippocampal single neurons and cell assemblies operates in memory processing of pitch and temporal information of auditory stimuli. The single neurons encode both auditory pitches and their temporal lengths, and the cell assemblies encode the types of tasks (contexts or situations) in which the pitch and temporal information are processed.
Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-09-01
To test pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians, mismatch negativity (MMN) was recorded with a stimulus pair of 1000 Hz (frequent) and 1100 Hz (infrequent) tones. Onset, offset and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for amplitude analysis. Fifty participants were included: the experimental group comprised 25 adult Indian classical vocal musicians, and 25 age-matched non-musicians served as the control group. Experimental-group participants had at least 10 years of professional experience in Indian classical vocal music, whereas control-group participants had no formal training in music. Descriptive statistics showed better waveform morphology in the experimental group than in the control group. MANOVA showed significantly better onset latency, peak amplitude and area under the curve in the experimental group, but no significant difference in offset and peak latencies between the two groups. The present study thus points towards enhanced pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians, indicating that Indian classical musical training enhances pre-attentive auditory discrimination, leading to higher peak amplitude and a greater area under the curve than in non-musicians.
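The amplitude measures used here (peak amplitude and area under the curve of the MMN difference wave) can be computed along these lines. This is a generic sketch under assumed sampling parameters, not the authors' analysis pipeline.

```python
import numpy as np

def mmn_measures(deviant, standard, fs=250.0):
    """Peak amplitude/latency (most negative point of the deviant-minus-
    standard difference wave) and area under its negative portion."""
    diff = np.asarray(deviant) - np.asarray(standard)
    peak_idx = int(np.argmin(diff))
    peak_amp = float(diff[peak_idx])
    peak_lat = peak_idx / fs                # seconds after epoch onset
    neg = np.minimum(diff, 0.0)
    area = float(-np.sum(neg) / fs)         # rectangular integration, µV·s
    return peak_amp, peak_lat, area

# toy difference wave: a brief Gaussian negativity on a zero baseline
t = np.arange(0, 0.4, 1 / 250.0)
dev = -2.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))
amp, lat, area = mmn_measures(dev, np.zeros_like(dev))
```

In practice the difference wave is averaged across many deviant and standard epochs, and the area is often restricted to a latency window around the expected MMN peak rather than the whole epoch.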
Utilizing Oral-Motor Feedback in Auditory Conceptualization.
ERIC Educational Resources Information Center
Howard, Marilyn
The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…
Return of Function after Hair Cell Regeneration
Ryals, Brenda M.; Dent, Micheal L.; Dooling, Robert J.
2012-01-01
The ultimate goal of hair cell regeneration is to restore functional hearing. Because birds begin perceiving and producing song early in life, they provide a propitious model for studying not only whether regeneration of lost hair cells can return auditory sensitivity but also whether this regenerated periphery can restore complex auditory perception and production. They are the only animal where hair cell regeneration occurs naturally after hair cell loss and where the ability to correctly perceive and produce complex acoustic signals is critical to procreation and survival. The purpose of this review article is to survey the most recent literature on behavioral measures of auditory functional return in adult birds after hair cell regeneration. The first portion of the review summarizes the effect of ototoxic drug induced hair cell loss and regeneration on hearing loss and recovery for pure tones. The second portion reviews studies of complex, species-specific vocalization discrimination and recognition after hair cell regeneration. Finally, we discuss the relevance of temporary hearing loss and recovery through hair cell regeneration on complex call and song production. Hearing sensitivity is restored, except for the highest frequencies, after hair cell regeneration in birds, but there are enduring changes to complex auditory perception. These changes do not appear to provide any obstacle to future auditory or vocal learning. PMID:23202051
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team-sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating between participants with and without a history of concussion.
Auditory-prosodic processing in bipolar disorder; from sensory perception to emotion.
Van Rheenen, Tamsyn E; Rossell, Susan L
2013-12-01
Accurate emotion processing is critical to understanding the social world. Despite growing evidence of facial emotion processing impairments in patients with bipolar disorder (BD), comprehensive investigations of emotional prosodic processing are limited. The existing (albeit sparse) literature is inconsistent at best, and confounded by failures to control for the effects of gender or low-level sensory-perceptual impairments. The present study sought to address this paucity of research by utilizing a novel behavioural battery to comprehensively investigate the auditory-prosodic profile of BD. Fifty BD patients and 52 healthy controls completed tasks assessing emotional and linguistic prosody, and sensitivity for discriminating tones that deviate in amplitude, duration and pitch. BD patients were less sensitive than controls in discriminating amplitude and durational cues, but not pitch cues or linguistic prosody. They also demonstrated an impaired ability to recognize happy intonations, although this was specific to males with the disorder. Recognition of happy in the patient group was correlated with pitch and amplitude sensitivity in female patients only. The small sample size after stratification by current mood state prevented subgroup comparisons between symptomatic, euthymic and control participants to explicitly examine the effects of mood. Our findings indicate a female advantage for the processing of emotional prosody in BD, specifically for happy. Although male BD patients were impaired in their ability to recognize happy prosody, this was unrelated to reduced tone discrimination sensitivity. This study indicates the importance of examining both gender and low-order sensory-perceptual capacity when examining emotional prosody. © 2013 Elsevier B.V. All rights reserved.
The impact of negative affect on reality discrimination.
Smailes, David; Meins, Elizabeth; Fernyhough, Charles
2014-09-01
People who experience auditory hallucinations tend to show weak reality discrimination skills, so that they misattribute internal, self-generated events to an external, non-self source. We examined whether inducing negative affect in healthy young adults would increase their tendency to make external misattributions on a reality discrimination task. Participants (N = 54) received one of three mood inductions (one neutral, two negative) and then performed an auditory signal detection task to assess reality discrimination. Participants who received either of the two negative inductions made more false alarms, but not more hits, than participants who received the neutral induction, indicating that negative affect makes participants more likely to misattribute internal, self-generated events to an external, non-self source. These findings are drawn from an analogue sample, and research that examines whether negative affect also impairs reality discrimination in patients who experience auditory hallucinations is required. These findings show that negative affect disrupts reality discrimination and suggest one way in which negative affect may lead to hallucinatory experiences. Copyright © 2014 Elsevier Ltd. All rights reserved.
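The hit/false-alarm logic of the signal detection task can be made concrete with a standard sensitivity (d') computation. This is a minimal sketch assuming the usual equal-variance Gaussian model and a log-linear correction for extreme rates; it is not taken from the paper.

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a yes/no signal detection task,
    with a log-linear correction to avoid infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1)                        # hit rate
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# e.g. a participant with many false alarms (hypothetical counts) shows
# lower reality-discrimination sensitivity at the same hit count
weak = dprime(hits=30, misses=10, false_alarms=20, correct_rejections=20)
strong = dprime(hits=30, misses=10, false_alarms=5, correct_rejections=35)
```

Separating d' from response bias is what lets the authors' pattern (more false alarms, equal hits) be read as an externalizing shift rather than a general loss of accuracy.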
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Amusia and protolanguage impairments in schizophrenia
Kantrowitz, J. T.; Scaramello, N.; Jakubovitz, A.; Lehrfeld, J. M.; Laukka, P.; Elfenbein, H. A.; Silipo, G.; Javitt, D. C.
2017-01-01
Background Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results Highly significant deficits were seen between patients and controls across auditory tasks (p<0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia. PMID:25066878
Fam, Justine; Holmes, Nathan; Delaney, Andrew; Crane, James; Westbrook, R Frederick
2018-06-14
Oxytocin (OT) is a neuropeptide that influences the expression of social behavior and modulates it according to the social context: OT is associated with increased pro-social effects in the absence of social threat, and with defensive aggression when threats are present. The present experiments investigated the effects of OT beyond social behavior by using a discriminative Pavlovian fear conditioning protocol with rats. In Experiment 1, an OT receptor agonist (TGOT) microinjected into the basolateral amygdala facilitated the discrimination between an auditory cue that signaled shock and another auditory cue that signaled the absence of shock. This TGOT-facilitated discrimination was replicated in a second experiment in which the shocked and non-shocked auditory cues were accompanied by a common visual cue. Conditioned responding on probe trials of the auditory and visual elements indicated that TGOT administration produced a qualitative shift in the learning mechanisms underlying the discrimination between the two compounds. This was confirmed by comparisons between the present results and simulated predictions of elemental and configural associative learning models. Overall, the present findings demonstrate that the neuromodulatory effects of OT influence behavior outside of the social domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro
2017-01-01
When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses. PMID:28617861
ERIC Educational Resources Information Center
Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel
2012-01-01
This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…
ERIC Educational Resources Information Center
Steinhaus, Kurt A.
A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…
A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training
ERIC Educational Resources Information Center
Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.
2012-01-01
This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…
Kornilov, Sergey A.; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L.; Magnuson, James S.
2015-01-01
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n=23) and typically developing peers (n=16) from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder. PMID:25350759
Top-down and bottom-up modulation of brain structures involved in auditory discrimination.
Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver
2009-11-10
Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.
Magnetoencephalographic signatures of numerosity discrimination in fetuses and neonates.
Schleger, Franziska; Landerl, Karin; Muenssinger, Jana; Draganova, Rossitza; Reinl, Maren; Kiefer-Schmidt, Isabelle; Weiss, Magdalene; Wacker-Gußmann, Annette; Huotilainen, Minna; Preissl, Hubert
2014-01-01
Numerosity discrimination has been demonstrated in newborns, but not in fetuses. Fetal magnetoencephalography allows non-invasive investigation of neural responses in neonates and fetuses. During an oddball paradigm with auditory sequences differing in numerosity, evoked responses were recorded and mismatch responses were quantified as an indicator for auditory discrimination. Thirty pregnant women with healthy fetuses (last trimester) and 30 healthy term neonates participated. Fourteen adults were included as a control group. Based on measurements eligible for analysis, all adults, all neonates, and 74% of fetuses showed numerical mismatch responses. Numerosity discrimination appears to exist in the last trimester of pregnancy.
Castagna, Filomena; Montemagni, Cristiana; Maria Milani, Anna; Rocca, Giuseppe; Rocca, Paola; Casacchia, Massimo; Bogetto, Filippo
2013-02-28
This study aimed to evaluate the ability to decode emotion in the auditory and audiovisual modality in a group of patients with schizophrenia, and to explore the role of cognition and psychopathology in affecting these emotion recognition abilities. Ninety-four outpatients in a stable phase and 51 healthy subjects were recruited. Patients were assessed through a psychiatric evaluation and a wide neuropsychological battery. All subjects completed the comprehensive affect testing system (CATS), a group of computerized tests designed to evaluate emotion perception abilities. With respect to the controls, patients were not impaired in the CATS tasks involving discrimination of nonemotional prosody, naming of emotional stimuli expressed by voice and judging the emotional content of a sentence, whereas they showed a specific impairment in decoding emotion in a conflicting auditory condition and in the multichannel modality. Prosody impairment was affected by executive functions, attention and negative symptoms, while deficit in multisensory emotion recognition was affected by executive functions and negative symptoms. These emotion recognition deficits, rather than being associated purely with emotion perception disturbances in schizophrenia, are affected by core symptoms of the illness. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. Copyright © 2011 Elsevier Inc. All rights reserved.
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
2016-01-01
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. 
However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787
Auditory Evoked Responses in Neonates by MEG
NASA Astrophysics Data System (ADS)
Hernandez-Pavon, J. C.; Sosa, M.; Lutter, W. J.; Maier, M.; Wakai, R. T.
2008-08-01
Magnetoencephalography is a biomagnetic technique with outstanding potential for neurodevelopmental studies. In this work, we used MEG to determine whether newborns can discriminate between different stimuli during the first few months of life. Five neonates were stimulated for several minutes with auditory stimulation. The results suggest that newborns are able to discriminate between different stimuli despite their early age.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or the visual pathway, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from readily learning.
Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W
2017-06-01
We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task that required distinguishing stimulus location changes at the distal phalanges of different fingers. The somatosensory P50m was further reduced, and detection performance was higher, when the eyes were open. A similar reduction was found for the auditory P50m during a task that required distinguishing changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because spatial discrimination requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.
2015-01-01
This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention to the auditory input was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or to the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak, but significant, correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect the automaticity of speech processing. PMID:26119918
Stability of auditory discrimination and novelty processing in physiological aging.
Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele
2013-01-01
Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
2015-01-01
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
ERIC Educational Resources Information Center
Vercillo, Tiziana; Burr, David; Gori, Monica
2016-01-01
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…
Advanced Parkinson disease patients have impairment in prosody processing.
Albuquerque, Luisa; Martins, Maurício; Coelho, Miguel; Guedes, Leonor; Ferreira, Joaquim J; Rosa, Mário; Martins, Isabel Pavão
2016-01-01
The ability to recognize and interpret emotions in others is a crucial prerequisite of adequate social behavior. Impairments in emotion processing have been reported from the early stages of Parkinson's disease (PD). This study aims to characterize emotion recognition in advanced Parkinson's disease (APD) candidates for deep-brain stimulation and to compare emotion recognition abilities in visual and auditory domains. APD patients, defined as those with levodopa-induced motor complications (N = 42), and healthy controls (N = 43) matched by gender, age, and educational level, undertook the Comprehensive Affect Testing System (CATS), a battery that evaluates recognition of seven basic emotions (happiness, sadness, anger, fear, surprise, disgust, and neutral) on facial expressions and four emotions on prosody (happiness, sadness, anger, and fear). APD patients were assessed during the "ON" state. Group performance was compared with independent-samples t tests. Compared to controls, APD had significantly lower scores on the discrimination and naming of emotions in prosody, and visual discrimination of neutral faces, but no significant differences in visual emotional tasks. The contrasting performance in emotional processing between visual and auditory stimuli suggests that APD candidates for surgery have either a selective difficulty in recognizing emotions in prosody or a general defect in prosody processing. Studies investigating early-stage PD, and the effect of subcortical lesions in prosody processing, favor the latter interpretation. Further research is needed to understand these deficits in emotional prosody recognition and their possible contribution to later behavioral or neuropsychiatric manifestations of PD.
Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja
2015-12-01
One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG in regular intervals to study their discriminative ability starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that already 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs have reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as fast as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.
Perceptual consequences of disrupted auditory nerve activity.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
2005-06-01
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. The present methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might.
These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359
Pauletti, C; Mannarelli, D; Locuratolo, N; Vanacore, N; De Lucia, M C; Fattapposta, F
2014-04-01
To investigate whether pre-attentive auditory discrimination is impaired in patients with essential tremor (ET) and to evaluate the role of age at onset in this function. Seventeen non-demented patients with ET and seventeen age- and sex-matched healthy controls underwent an EEG recording during a classical auditory MMN paradigm. MMN latency was significantly prolonged in patients with elderly-onset ET (>65 years) (p=0.046), while no differences emerged in either latency or amplitude between young-onset ET patients and controls. This study represents a tentative indication of a dysfunction of auditory automatic change detection in elderly-onset ET patients, pointing to a selective attentive deficit in this subgroup of ET patients. The delay in pre-attentive auditory discrimination, which affects elderly-onset ET patients alone, further supports the hypothesis that ET represents a heterogeneous family of diseases united by tremor; these diseases are characterized by cognitive differences that may range from a disturbance in a selective cognitive function, such as the automatic part of the orienting response, to more widespread and complex cognitive dysfunctions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Immersive audiomotor game play enhances neural and perceptual salience of weak signals in noise
Whitton, Jonathon P.; Hancock, Kenneth E.; Polley, Daniel B.
2014-01-01
All sensory systems face the fundamental challenge of encoding weak signals in noisy backgrounds. Although discrimination abilities can improve with practice, these benefits rarely generalize to untrained stimulus dimensions. Inspired by recent findings that action video game training can impart a broader spectrum of benefits than traditional perceptual learning paradigms, we trained adult humans and mice in an immersive audio game that challenged them to forage for hidden auditory targets in a 2D soundscape. Both species learned to modulate their angular search vectors and target approach velocities based on real-time changes in the level of a weak tone embedded in broadband noise. In humans, mastery of this tone in noise task generalized to an improved ability to comprehend spoken sentences in speech babble noise. Neural plasticity in the auditory cortex of trained mice supported improved decoding of low-intensity sounds at the training frequency and an enhanced resistance to interference from background masking noise. These findings highlight the potential to improve the neural and perceptual salience of degraded sensory stimuli through immersive computerized games. PMID:24927596
Change deafness for real spatialized environmental scenes.
Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter
2017-01-01
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
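The change deafness study above analyzed same-different (AX) discrimination with signal detection theory. As a rough illustration of that style of analysis, the sketch below computes the standard sensitivity index d' from hit and false-alarm rates; the rates, trial count, and 1/(2N) correction rule are generic textbook assumptions, not values from the study.

```python
# Illustrative d' (sensitivity) computation for a same-different (AX) task.
from statistics import NormalDist

def dprime(hit_rate, fa_rate, n_trials=100):
    """Sensitivity d' = z(hits) - z(false alarms), with a standard
    1/(2N) correction so rates of exactly 0 or 1 stay finite."""
    adjust = 1.0 / (2 * n_trials)
    hit_rate = min(max(hit_rate, adjust), 1 - adjust)
    fa_rate = min(max(fa_rate, adjust), 1 - adjust)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 85% of "different" trials correctly called different,
# 20% of "same" trials incorrectly called different.
print(round(dprime(0.85, 0.20), 2))  # prints 1.88
```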
Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D
2009-10-01
By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while the studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discriminations of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster as compared to auditory stimuli and vice versa. On average, tactile stimuli were responded to faster as compared to auditory stimuli, and stimuli in the imagery condition were on average responded to slower as compared to baseline performance (left/right discrimination without imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
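The race model inequality analysis referenced above (commonly attributed to Miller, 1982) asks whether multisensory reaction times beat the bound P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t); if they do at any t, probability summation of independent unisensory races cannot explain the speed-up. A minimal sketch, using simulated reaction times rather than the ferret data:

```python
# Sketch of a race model inequality test on (simulated) reaction times.
import numpy as np

def ecdf(samples, t):
    """Empirical cumulative distribution of RT samples evaluated at times t."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, t, side="right") / len(samples)

def race_model_violation(rt_a, rt_v, rt_av, n_points=50):
    """True if the multisensory CDF exceeds the race model bound anywhere."""
    t = np.linspace(min(map(min, (rt_a, rt_v, rt_av))),
                    max(map(max, (rt_a, rt_v, rt_av))), n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return bool(np.any(ecdf(rt_av, t) > bound))

rng = np.random.default_rng(0)
rt_a = rng.normal(300, 40, 200)   # auditory-only RTs (ms), simulated
rt_v = rng.normal(340, 40, 200)   # visual-only RTs, simulated
rt_av = rng.normal(250, 30, 200)  # audiovisual RTs: faster than either
print(race_model_violation(rt_a, rt_v, rt_av))
```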
Auditory and motor imagery modulate learning in music performance
Brown, Rachel M.; Palmer, Caroline
2013-01-01
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. 
Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.
Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C
1995-01-01
Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to couples of musical sounds, and three discrimination tasks, one dealing with pitch, and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. Both discrimination tasks for pitch and timbre led to right hemisphere EEG changes, organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network including the right temporal neocortex and the right frontal lobe to maintain the acoustical information in an auditory working memory necessary to carry out the discrimination task.
Assessment of Auditory Functioning of Deaf-Blind Multihandicapped Children.
ERIC Educational Resources Information Center
Kukla, Deborah; Connolly, Theresa Thomas
The manual describes a procedure for assessing the extent to which a deaf-blind, multiply handicapped student uses residual hearing in the classroom. Six levels of auditory functioning (awareness/reflexive, attention/alerting, localization, auditory discrimination, recognition, and comprehension) are analyzed, and assessment activities are detailed for…
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. 
We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
Test of a motor theory of long-term auditory memory
Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer
2012-01-01
Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719
Linguistic and auditory temporal processing in children with specific language impairment.
Fortunato-Tavares, Talita; Rocha, Caroline Nunes; Andrade, Claudia Regina Furquim de; Befi-Lopes, Débora Maria; Schochat, Eliane; Hestvik, Arild; Schwartz, Richard G
2009-01-01
Several studies suggest the association of specific language impairment (SLI) with deficits in auditory processing. It has been evidenced that children with SLI present deficits in brief stimuli discrimination. Such deficits would lead to difficulties in developing the phonological abilities necessary to map phonemes and to effectively and automatically code and decode words and sentences. However, the correlation between temporal processing (TP) and specific deficits in language disorders--such as syntactic comprehension abilities--has received little or no attention. To analyze the correlation between TP (through the Frequency Pattern Test--FPT) and Syntactic Complexity Comprehension (through a Sentence Comprehension Task). Sixteen children with typical language development (8;9 +/- 1;1 years) and seven children with SLI (8;1 +/- 1;2 years) participated in the study. Accuracy of both groups decreased with the increase in syntactic complexity (both p < 0.01). In the between-groups comparison, the performance difference on the Test of Syntactic Complexity Comprehension (TSCC) was statistically significant (p = 0.02). As expected, children with SLI presented FPT performance outside reference values. In the SLI group, correlations between TSCC and FPT were positive and higher for high syntactic complexity (r = 0.97) than for low syntactic complexity (r = 0.51). Results suggest that FPT is positively correlated with syntactic complexity comprehension abilities. The low performance on FPT could serve as an additional indicator of deficits in complex linguistic processing. Future studies should consider, besides increasing the sample size, longitudinal studies that investigate the effect of frequency pattern auditory training on performance in high syntactic complexity comprehension tasks.
Bishop, Dorothy V.M.; McArthur, Genevieve M.
2005-01-01
It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory processing. We conducted a study of 16 young people with specific language impairment (SLI) and 16 control participants, 24 of whom had had auditory event-related potentials (ERPs) and frequency discrimination thresholds assessed 18 months previously. When originally assessed, around one third of the listeners with SLI had poor behavioural frequency discrimination thresholds, and these tended to be the younger participants. However, most of the SLI group had age-inappropriate late components of the auditory ERP, regardless of their frequency discrimination. At follow-up, the behavioural thresholds of those with poor frequency discrimination improved, though some remained outside the control range. At follow-up, ERPs for many of the individuals in the SLI group were still not age-appropriate. In several cases, waveforms of individuals in the SLI group resembled those of younger typically-developing children, though in other cases the waveform was unlike that of control cases at any age. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. This study emphasises the variability seen in SLI, and the importance of studying individual cases rather than focusing on group means. PMID:15871598
de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek
2018-05-01
The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.
Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E
2017-03-29
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. 
Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the identity of the vocalizer and its distance from the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons in the auditory cortex of zebra finches are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to changes in intensity, signal quality, and signal-to-noise ratio. Copyright © 2017 Mouterde et al.
Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine
2010-01-01
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…
Agonistic character displacement in social cognition of advertisement signals.
Pasch, Bret; Sanford, Rachel; Phelps, Steven M
2017-03-01
Interspecific aggression between sibling species may enhance discrimination of competitors when recognition errors are costly, but proximate mechanisms mediating increased discriminative ability are unclear. We studied behavioral and neural mechanisms underlying responses to conspecific and heterospecific vocalizations in Alston's singing mouse (Scotinomys teguina), a species in which males sing to repel rivals. We performed playback experiments using males in allopatry and sympatry with a dominant heterospecific (Scotinomys xerampelinus) and examined song-evoked induction of egr-1 in the auditory system to examine how neural tuning modulates species-specific responses. Heterospecific songs elicited stronger neural responses in sympatry than in allopatry, despite eliciting less singing in sympatry. Our results refute the traditional neuroethological concept of a matched filter and instead suggest expansion of sensory sensitivity to mediate competitor recognition in sympatry.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether dyslexia is caused by linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 second-grade dyslexics (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than in controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. It demonstrates that visual movement direction discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from learning readily. PMID:27551263
Reed, Amanda C.; Centanni, Tracy M.; Borland, Michael S.; Matney, Chanel J.; Engineer, Crystal T.; Kilgard, Michael P.
2015-01-01
Objectives: Hearing loss is a commonly experienced disability in a variety of populations, including veterans and the elderly, and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech would be differentially impaired in an animal model after two forms of hearing loss. Design: Sixteen female Sprague–Dawley rats were exposed to one of two types of broadband noise, either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. Results: Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. Conclusions: These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies. PMID:25072238
[A case of transient auditory agnosia and schizophrenia].
Kanzaki, Jin; Harada, Tatsuhiko; Kanzaki, Sho
2011-03-01
We report a case of transient functional auditory agnosia and schizophrenia and discuss their relationship. A 30-year-old woman with schizophrenia reporting bilateral hearing loss was found, on history taking, to be able to hear but could neither understand speech nor discriminate among environmental sounds. Audiometry showed normal hearing thresholds but low speech discrimination. Otoacoustic emissions and auditory brainstem responses were normal. Magnetic resonance imaging (MRI) performed elsewhere showed no abnormal findings. We surmised that caring for her grandparents, who had been discharged from the hospital, had unduly stressed her; her condition improved shortly after she stopped caring for them, returned home, and started taking a minor tranquilizer.
Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success
Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.
2013-01-01
The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
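The global and local efficiency measures described above are standard graph-theoretic quantities. As an illustrative sketch (not the study's analysis pipeline), global efficiency can be computed from a binarized functional network with plain breadth-first search; the region labels and edges below are hypothetical:

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an undirected graph given as an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        d = shortest_paths(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

# Hypothetical binarized functional network: nodes are cortical regions,
# edges are suprathreshold pairwise correlations (illustrative only).
adj = {
    "PFC_L": {"PFC_R", "PAR_L"},
    "PFC_R": {"PFC_L", "TMP_R"},
    "PAR_L": {"PFC_L", "TMP_R"},
    "TMP_R": {"PFC_R", "PAR_L"},
}
print(round(global_efficiency(adj), 3))  # 0.833 for this 4-node ring
```

The local efficiency contrasted with it in the abstract is the mean of this same quantity computed over each node's neighborhood subgraph, so the two measures index long-range versus clustered information transfer, respectively.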
Dyslexia risk gene relates to representation of sound in the auditory brainstem.
Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D
2017-04-01
Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in the spike timing precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk, and intelligence, partial correlation analyses associated a higher KIAA0319 dyslexia risk loading with noisier responses. In contrast, a higher DCDC2 risk loading was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus reduced neural discrimination of stop consonants, occurred in genotypes carrying a higher number of KIAA0319 risk alleles. The current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain–gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving our understanding of dyslexia as a complex, multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Human sensitivity to differences in the rate of auditory cue change.
Maloff, Erin S; Grantham, D Wesley; Ashmead, Daniel H
2013-05-01
Measurement of sensitivity to differences in the rate of change of auditory signal parameters is complicated by confounds among duration, extent, and velocity of the changing signal. Dooley and Moore [(1988) J. Acoust. Soc. Am. 84(4), 1332-1337] proposed a method for measuring sensitivity to rate of change using a duration discrimination task. They reported improved duration discrimination when an additional intensity or frequency change cue was present. The current experiments were an attempt to use this method to measure sensitivity to the rate of change in intensity and spatial position. Experiment 1 investigated whether duration discrimination was enhanced when additional cues of rate of intensity change, rate of spatial position change, or both were provided. Experiment 2 determined whether participant listening experience or the testing environment influenced duration discrimination task performance. Experiment 3 assessed whether duration discrimination could be used to measure sensitivity to rates of change in intensity and spatial position for stimuli with lower rates of change, while emphasizing the constancy of the velocity cue. Results of these experiments showed that duration discrimination was impaired rather than enhanced by the additional velocity cues. The findings are discussed in terms of the demands of listening to concurrent changes along multiple auditory dimensions.
Auditory fitness for duty: a review.
Tufts, Jennifer B; Vasil, Kristin A; Briggs, Sarah
2009-10-01
Auditory fitness for duty (AFFD) refers to the possession of hearing abilities sufficient for safe and effective job performance. In jobs such as law enforcement and piloting, where the ability to hear is critical to job performance and safety, hearing loss can decrease performance, even to the point of being hazardous to self and others. Tests of AFFD should provide an employer with a valid assessment of an employee's ability to perform the job safely, without discriminating against the employee purely on the basis of hearing loss. The purpose of this review is to provide a basic description of the functional hearing abilities required in hearing-critical occupations, and a summary of current practices in AFFD evaluation. In addition, we suggest directions for research and standardization to ensure best practices in the evaluation of AFFD in the future. We conducted a systematic review of the English-language peer-reviewed literature in AFFD. "Popular" search engines were consulted for governmental regulations and trade journal articles. We also contacted professionals with expertise in AFFD regarding research projects, unpublished material, and current standards. The literature review provided information regarding the functional hearing abilities required to perform hearing-critical tasks, the development of and characteristics of AFFD protocols, and the current implementation of AFFD protocols. This review paper provides evidence of the need to institute job-specific AFFD protocols, move beyond the pure-tone audiogram, and establish the validity of test protocols. These needs are arguably greater now than in times past.
Auditory phase and frequency discrimination: a comparison of nine procedures.
Creelman, C D; Macmillan, N A
1979-02-01
Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
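For readers unfamiliar with the signal detection framework invoked here, the standard equal-variance Gaussian model converts raw performance into the sensitivity index d′ differently for each psychophysical procedure, which is one reason apparent sensitivity can vary so widely across designs. A minimal sketch for two of the procedures named above (the numeric rates are illustrative, not data from the study):

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_yes_no(hit_rate, fa_rate):
    """Sensitivity in a yes-no task: d' = z(hit rate) - z(false-alarm rate)."""
    return z(hit_rate) - z(fa_rate)

def dprime_2ifc(prop_correct):
    """Sensitivity in a two-interval forced-choice task: d' = sqrt(2) * z(Pc)."""
    return sqrt(2) * z(prop_correct)

# Illustrative rates: the same underlying sensitivity yields different raw
# percent-correct scores under different procedures.
print(round(dprime_yes_no(0.84, 0.16), 2))  # ≈ 1.99
print(round(dprime_2ifc(0.76), 2))          # ≈ 1.0
```

Under these models, transforming raw data to d′ is the "appropriate transformation" the abstract refers to; the authors' finding is that this works for frequency discrimination but fails for phase discrimination.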
Choudhury, Naseem; Leppanen, Paavo H.T.; Leevers, Hilary J.; Benasich, April A.
2007-01-01
An infant’s ability to process auditory signals presented in rapid succession (i.e., rapid auditory processing [RAP] abilities) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study was to determine whether performance on infant information-processing measures designed to tap RAP and global processing skills differs as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific language impairment (FH+) and 29 control infants (FH−) participated in this study. Infants’ performance on two different RAP paradigms (head-turn procedure [HT] and auditory-visual habituation/recognition memory [AVH/RM]) and on a global processing task (visual habituation/recognition memory [VH/RM]) was assessed at 6 and 9 months. Toddler language and cognitive skills were evaluated at 12 and 16 months. A number of significant group differences were seen: FH+ infants showed significantly poorer discrimination of fast-rate stimuli on both RAP tasks, took longer to habituate on both habituation/recognition memory measures, and had lower novelty preference scores on the visual habituation/recognition memory task. Infants’ performance on the two RAP measures provided independent but converging contributions to outcome. Thus, different mechanisms appear to underlie performance on operantly conditioned tasks as compared to habituation/recognition memory paradigms. Further, infant RAP abilities predicted 12- and 16-month language scores above and beyond family history of SLI. The results of this study provide additional support for the validity of infant RAP abilities as a behavioral marker for later language outcome.
Finally, this is the first study to use a battery of infant tasks to demonstrate multi-modal processing deficits in infants at risk for SLI. PMID:17286846
Discrimination of sound source velocity in human listeners
NASA Astrophysics Data System (ADS)
Carlile, Simon; Best, Virginia
2002-02-01
The ability of six human subjects to discriminate the velocity of moving sound sources was examined using broadband stimuli presented in virtual auditory space. Subjects were presented with two successive stimuli moving in the frontal horizontal plane level with the ears, and were required to judge which moved faster. Discrimination thresholds were calculated for reference velocities of 15, 30, and 60 degrees/s under three stimulus conditions. In one condition, stimuli were centered on 0° azimuth and their duration varied randomly to prevent subjects from using displacement as an indicator of velocity. Performance varied between subjects, giving median thresholds of 5.5, 9.1, and 14.8 degrees/s for the three reference velocities, respectively. In a second condition, pairs of stimuli were presented for a constant duration and subjects would have been able to use displacement to assist their judgment, as faster stimuli traveled further. It was found that thresholds decreased significantly for all velocities (3.8, 7.1, and 9.8 degrees/s), suggesting that the subjects were using the additional displacement cue. The third condition differed from the second in that the stimuli were "anchored" on the same starting location rather than centered on the midline, thus doubling the spatial offset between stimulus endpoints. Subjects showed the lowest thresholds in this condition (2.9, 4.0, and 7.0 degrees/s). The results suggested that the auditory system is sensitive to velocity per se, but velocity comparisons are greatly aided if displacement cues are present.
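One way to read the thresholds from the displacement-controlled condition is as Weber fractions (threshold divided by reference velocity), a standard psychophysical normalization not computed in the abstract itself; the fraction declines somewhat with reference velocity, i.e., thresholds grow more slowly than strict Weber's-law proportionality would predict:

```python
# Median velocity-discrimination thresholds (deg/s) from the condition with
# randomized durations, paired with their reference velocities (deg/s),
# as reported in the abstract above.
references = [15, 30, 60]
thresholds = [5.5, 9.1, 14.8]

# Weber fraction: just-noticeable difference relative to the reference value.
weber = [round(t / r, 2) for t, r in zip(thresholds, references)]
print(weber)  # [0.37, 0.3, 0.25]
```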
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M
2017-01-03
Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Auditory discrimination therapy (ADT) for tinnitus management.
Herraiz, C; Diges, I; Cobo, P
2007-01-01
Auditory discrimination training (ADT) is a procedure designed to expand the cortical areas responding to trained frequencies (damaged cochlear areas with cortical misrepresentation) and to shrink the neighboring over-represented ones (the tinnitus pitch). In a prospective descriptive study of 27 patients with high-frequency tinnitus, tinnitus severity was measured using a visual analog scale (VAS) and the tinnitus handicap inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for one month, in which discontinuous 4 kHz pure tones were mixed randomly with short broadband noise sounds delivered through an MP3 system. After treatment, mean VAS scores decreased from 5.2 to 4.5 (p<0.001) and THI scores decreased from 26.2% to 21.3% (p<0.001). Forty percent of the patients reported improvement in tinnitus perception (RESP). Compared with a control group, the ADT group showed statistically significant improvement of their tinnitus as assessed by RESP, VAS, and THI.
Sustained attention in language production: an individual differences investigation.
Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S
2015-01-01
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.
Speech perception in medico-legal assessment of hearing disabilities.
Pedersen, Ellen Raben; Juhl, Peter Møller; Wetke, Randi; Andersen, Ture Dammann
2016-10-01
This study examines Danish data on medico-legal compensation for hearing disabilities. Its purposes are: (1) to investigate whether discrimination scores (DSs) relate to patients' subjective experience of their hearing and communication ability (the latter referring to audio-visual perception), (2) to compare DSs from different discrimination tests (auditory/audio-visual perception and without/with noise), and (3) to relate different handicap measures in the scaling used for compensation purposes in Denmark. Data from a 15-year period (1999-2014) were collected and analysed. The data set includes 466 patients, of whom 50 were omitted due to suspicion of having exaggerated their hearing disabilities. The DSs relate well to the patients' subjective experience of their speech perception ability. Comparing DSs for different test setups showed that adding noise entails a relatively more difficult listening condition than removing visual cues. The hearing and communication handicap degrees were found to agree, whereas the measured handicap degrees tended to be higher than the self-assessed handicap degrees. The DSs can be used to assess patients' hearing and communication abilities. The difference in the obtained handicap degrees emphasizes the importance of collecting self-assessed as well as measured handicap degrees.
Costa, Nayara Thais de Oliveira; Martinho-Carvalho, Ana Claudia; Cunha, Maria Claudia; Lewis, Doris Ruthi
2012-01-01
This study had the aim to investigate the auditory and communicative abilities of children diagnosed with Auditory Neuropathy Spectrum Disorder due to mutation in the Otoferlin gene. It is a descriptive and qualitative study in which two siblings with this diagnosis were assessed. The procedures conducted were: speech perception tests for children with profound hearing loss, and assessment of communication abilities using the Behavioral Observation Protocol. Because they were siblings, the subjects in the study shared family and communicative context. However, they developed different communication abilities, especially regarding the use of oral language. The study showed that the Auditory Neuropathy Spectrum Disorder is a heterogeneous condition in all its aspects, and it is not possible to make generalizations or assume that cases with similar clinical features will develop similar auditory and communicative abilities, even when they are siblings. It is concluded that the acquisition of communicative abilities involves subjective factors, which should be investigated based on the uniqueness of each case.
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.
Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine
2017-03-22
Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses from a standard, high probability sound and to a deviant, low probability sound. Previous research has established that such paradigms, such as the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants that have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language-learning. 
Results from this study suggest that brain responses to deviant sounds in an oddball paradigm follow a cascade of oscillatory modulations. This cascade begins with a gamma response that later emerges as a beta synchronization, which is temporally coupled with a theta modulation and followed by a second, subsequent theta modulation. The difference in frequency and timing of the theta modulations appears to reflect a measure of surprise. These insights into the neurophysiological mechanisms of auditory discrimination provide a basis for exploring the clinical utility of the MMR TF and other auditory oddball responses.
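The deviant-minus-standard time-frequency computation described above can be illustrated with a minimal sketch. Everything below is assumed for illustration (a hand-rolled Morlet wavelet, synthetic epochs, and single-frequency probes in the theta, beta, and gamma bands); it is not the study's actual pipeline.

```python
import numpy as np

def morlet_power(x, fs, freq, n_cycles=6):
    """Power envelope of x at `freq`, via convolution with a complex
    Morlet wavelet (hand-rolled to keep the sketch dependency-free)."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

def mmr_tf(std_epochs, dev_epochs, fs, freqs):
    """Deviant-minus-standard wavelet power: one row per frequency probe."""
    std_erp = std_epochs.mean(axis=0)   # average standard response
    dev_erp = dev_epochs.mean(axis=0)   # average deviant response
    return np.array([morlet_power(dev_erp, fs, f) - morlet_power(std_erp, fs, f)
                     for f in freqs])

# Synthetic check: a theta (5 Hz) component present only in the deviant
fs = 250
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
std = rng.normal(0, 1, (60, t.size))
dev = rng.normal(0, 1, (60, t.size)) + 2 * np.sin(2 * np.pi * 5 * t)
diff = mmr_tf(std, dev, fs, freqs=[5.0, 20.0, 40.0])  # theta, beta, gamma probes
```

In the synthetic example the difference map shows power concentrated at the theta probe, which is the kind of deviant-specific low-frequency modulation the abstract interprets as a surprise signal.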
The Encoding of Sound Source Elevation in the Human Auditory Cortex.
Trapeau, Régis; Schönwiesner, Marc
2018-03-28
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. 
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors.
Agnosia for accents in primary progressive aphasia
Fletcher, Phillip D.; Downey, Laura E.; Agustus, Jennifer L.; Hailstone, Julia C.; Tyndall, Marina H.; Cifelli, Alberto; Schott, Jonathan M.; Warrington, Elizabeth K.; Warren, Jason D.
2013-01-01
As an example of complex auditory signal processing, the analysis of accented speech is potentially vulnerable in the progressive aphasias. However, the brain basis of accent processing and the effects of neurodegenerative disease on this processing are not well understood. Here we undertook a detailed neuropsychological study of a patient, AA, with progressive nonfluent aphasia, in whom agnosia for accents was a prominent clinical feature. We designed a battery to assess AA's ability to process accents in relation to other complex auditory signals. AA's performance was compared with a cohort of 12 healthy age- and gender-matched control participants and with a second patient, PA, who had semantic dementia with phonagnosia and prosopagnosia but no reported difficulties with accent processing. Relative to healthy controls, the patients showed distinct profiles of accent agnosia. AA showed markedly impaired ability to distinguish change in an individual's accent despite being able to discriminate phonemes and voices (apperceptive accent agnosia), and, in addition, a severe deficit of accent identification. In contrast, PA was able to perceive changes in accents, phonemes and voices normally, but showed a relatively mild deficit of accent identification (associative accent agnosia). Both patients showed deficits of voice and environmental sound identification; however, PA showed an additional deficit of face identification, whereas AA was able to identify (though not name) faces normally. These profiles suggest that AA has conjoint (or interacting) deficits involving both apperceptive and semantic processing of accents, while PA has a primary semantic (associative) deficit affecting accents along with other kinds of auditory objects and extending beyond the auditory modality. Brain MRI revealed left peri-Sylvian atrophy in case AA and relatively focal asymmetric (predominantly right-sided) temporal lobe atrophy in case PA.
These cases provide further evidence for the fractionation of brain mechanisms for complex sound analysis, and for the stratification of progressive aphasia syndromes according to the signature of nonverbal auditory deficits they produce. PMID:23721780
Laterality of basic auditory perception.
Sininger, Yvonne S; Bhatara, Anjali
2012-01-01
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Ireland, Kierla; Parker, Averil; Foster, Nicholas; Penhune, Virginia
2018-01-01
Measuring musical abilities in childhood can be challenging. When music training and maturation occur simultaneously, it is difficult to separate the effects of specific experience from age-based changes in cognitive and motor abilities. The goal of this study was to develop age-equivalent scores for two measures of musical ability that could be reliably used with school-aged children (7-13) with and without musical training. The children's Rhythm Synchronization Task (c-RST) and the children's Melody Discrimination Task (c-MDT) were adapted from adult tasks developed and used in our laboratories. The c-RST is a motor task in which children listen and then try to synchronize their taps with the notes of a woodblock rhythm while it plays twice in a row. The c-MDT is a perceptual task in which the child listens to two melodies and decides if the second was the same or different. We administered these tasks to 213 children in music camps (musicians, n = 130) and science camps (non-musicians, n = 83). We also measured children's paced tapping, non-paced tapping, and phonemic discrimination as baseline motor and auditory abilities. We estimated internal-consistency reliability for both tasks, and compared children's performance to results from studies with adults. As expected, musically trained children outperformed those without music lessons, scores decreased as difficulty increased, and older children performed the best. Using non-musicians as a reference group, we generated a set of age-based z-scores, and used them to predict task performance with additional years of training. Years of lessons significantly predicted performance on both tasks, over and above the effect of age. We also assessed the relation between musicians' scores on music tasks, baseline tasks, auditory working memory, and non-verbal reasoning. Unexpectedly, musician children outperformed non-musicians in two of three baseline tasks.
The c-RST and c-MDT fill an important need for researchers interested in evaluating the impact of musical training in longitudinal studies, those interested in comparing the efficacy of different training methods, and for those assessing the impact of training on non-musical cognitive abilities such as language processing.
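The age-norming step described above (z-scoring each child against same-age non-musicians) can be sketched minimally. The bin edges and toy scores below are invented for illustration; the study's actual binning and norms are not given here.

```python
import numpy as np

def age_based_z(scores, ages, is_reference, bin_edges):
    """Z-score each child's raw task score against the mean and SD of the
    reference (non-musician) children in the same age bin.
    `bin_edges` splits ages into bins, e.g. [10] -> under-10 and 10-plus."""
    bins = np.digitize(ages, bin_edges)
    z = np.empty(len(scores), dtype=float)
    for b in np.unique(bins):
        in_bin = bins == b
        ref = scores[is_reference & in_bin]          # reference norms for this bin
        z[in_bin] = (scores[in_bin] - ref.mean()) / ref.std(ddof=1)
    return z

# Toy data: 6 non-musicians (reference) and 2 musicians across two age bins
scores = np.array([10.0, 12.0, 14.0, 15.0, 17.0, 19.0, 16.0, 21.0])
ages = np.array([7, 8, 8, 11, 12, 13, 8, 12])
is_ref = np.array([True] * 6 + [False] * 2)
z = age_based_z(scores, ages, is_ref, bin_edges=[10])
```

By construction the reference group averages to z = 0 within each bin, so a positive z for a trained child expresses an advantage over age-matched untrained peers; these z-scores could then be regressed on years of lessons, as the study reports.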
Scheperle, Rachel A; Abbas, Paul J
2015-01-01
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
Impaired short-term memory for pitch in congenital amusia.
Tillmann, Barbara; Lévêque, Yohana; Fornoni, Lesly; Albouy, Philippe; Caclin, Anne
2016-06-01
Congenital amusia is a neuro-developmental disorder of music perception and production. The hypothesis is that the musical deficits arise from altered pitch processing, with impairments in pitch discrimination (i.e., pitch change detection, pitch direction discrimination and identification) and short-term memory. The present review article focuses on the deficit of short-term memory for pitch. Overall, the data discussed here suggest impairments at each level of processing in short-term memory tasks: starting with the encoding of the pitch information and the creation of the adequate memory trace, the retention of the pitch traces over time, as well as the recollection and comparison of the stored information with newly incoming information. These impairments have been related to altered brain responses in a distributed fronto-temporal network, associated with decreased connectivity between these structures, as well as abnormalities in the connectivity between the two auditory cortices. In contrast, amusic participants' short-term memory abilities for verbal material are preserved. These findings show that short-term memory deficits in congenital amusia are specific to pitch, suggesting a pitch-memory system that is, at least partly, separated from verbal memory. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Basirat, Anahita
2017-01-01
Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.
Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela
2015-01-01
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
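A two-predictor commonality analysis of the kind reported above can be sketched as follows. The toy data and the variable names `smd` and `srd` are stand-ins invented for illustration, not the study's measures or pipeline; the point is only how the full model's R² partitions into unique and shared variance.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def commonality_two(x1, x2, y):
    """Two-predictor commonality analysis: split the full model's R^2 into
    variance unique to each predictor and variance they share."""
    full = r_squared(np.column_stack([x1, x2]), y)
    only1 = r_squared(x1[:, None], y)
    only2 = r_squared(x2[:, None], y)
    return {"unique_x1": full - only2,   # what x1 adds over x2
            "unique_x2": full - only1,   # what x2 adds over x1
            "common": only1 + only2 - full,
            "full": full}

# Toy example: two correlated spectral measures predicting a speech score
rng = np.random.default_rng(1)
smd = rng.normal(size=200)               # stand-in predictor 1
srd = smd + 0.6 * rng.normal(size=200)   # correlated stand-in predictor 2
speech = smd + 0.4 * rng.normal(size=200)
parts = commonality_two(smd, srd, speech)
```

The three components sum to the full-model R² by construction, which is what lets an analysis like the one above say that one measure "combined with" others to explain most of the captured variance.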
Cortical activity patterns predict robust speech discrimination ability in noise
Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.
2012-01-01
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
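The template-matching idea behind the classifier described above can be sketched simply: compare a single-trial spatiotemporal pattern against trial-averaged templates and pick the closest. This is a simplified stand-in (Euclidean distance plus a search over temporal lags, reflecting that the classifier is not given the stimulus onset time); the dimensions and data below are invented.

```python
import numpy as np

def classify_trial(trial, templates, max_lag=5):
    """Assign a single-trial pattern (neurons x time bins) to the sound whose
    average evoked pattern it matches most closely. Each template is tried at
    several temporal lags and the best alignment wins, since the classifier
    is not told when the stimulus occurred."""
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        for lag in range(-max_lag, max_lag + 1):
            dist = np.linalg.norm(trial - np.roll(template, lag, axis=1))
            if dist < best_dist:
                best_name, best_dist = name, dist
    return best_name

# Toy check: a noisy, time-shifted copy of the /d/ template is recovered
rng = np.random.default_rng(2)
templates = {"d": rng.normal(size=(20, 40)), "t": rng.normal(size=(20, 40))}
trial = np.roll(templates["d"], 3, axis=1) + 0.2 * rng.normal(size=(20, 40))
label = classify_trial(trial, templates)
```

Degrading the trial with stronger noise (as background noise degrades A1 responses) lowers accuracy gracefully, which is the behavior that let the study compare neural and behavioral discrimination.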
NASA Astrophysics Data System (ADS)
Hay, Jessica F.; Holt, Lori L.; Lotto, Andrew J.; Diehl, Randy L.
2005-04-01
The present study was designed to investigate the effects of long-term linguistic experience on the perception of non-speech sounds in English and Spanish speakers. Research using tone-onset-time (TOT) stimuli, a type of non-speech analogue of voice-onset-time (VOT) stimuli, has suggested that there is an underlying auditory basis for the perception of stop consonants based on a threshold for detecting onset asynchronies in the vicinity of +20 ms. For English listeners, stop consonant labeling boundaries are congruent with the positive auditory discontinuity, while Spanish speakers place their VOT labeling boundaries and discrimination peaks in the vicinity of 0 ms VOT. The present study addresses the question of whether long-term linguistic experience with different VOT categories affects the perception of non-speech stimuli that are analogous in their acoustic timing characteristics. A series of synthetic VOT stimuli and TOT stimuli were created for this study. Using language-appropriate labeling and ABX discrimination tasks, labeling boundaries (VOT) and discrimination peaks (VOT and TOT) are assessed for 24 monolingual English speakers and 24 monolingual Spanish speakers. The interplay between language experience and auditory biases is discussed. [Work supported by NIDCD.]
ERIC Educational Resources Information Center
Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.
2012-01-01
According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…
Anxiety sensitivity and auditory perception of heartbeat.
Pollock, R A; Carter, A S; Amir, N; Marks, L E
2006-12-01
Anxiety sensitivity (AS) is the fear of sensations associated with autonomic arousal. AS has been associated with the development and maintenance of panic disorder. Given that panic patients often rate cardiac symptoms as the most fear-provoking feature of a panic attack, AS individuals may be especially responsive to cardiac stimuli. Consequently, we developed a signal-in-white-noise detection paradigm to examine the strategies that high and low AS individuals use to detect and discriminate normal and abnormal heartbeat sounds. Compared to low AS individuals, high AS individuals demonstrated a greater propensity to report the presence of normal, but not abnormal, heartbeat sounds. High and low AS individuals did not differ in their ability to perceive normal heartbeat sounds against a background of white noise; however, high AS individuals consistently demonstrated lower ability to discriminate abnormal heartbeats from background noise and between abnormal and normal heartbeats. AS was characterized by an elevated false alarm rate across all tasks. These results suggest that heartbeat sounds may be fear-relevant cues for AS individuals, and may affect their attention and perception in tasks involving threat signals.
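The detection and discrimination pattern above (elevated false alarms with unchanged or lower discrimination ability) is the kind of result signal detection theory separates into sensitivity and response bias. Below is a generic d′/criterion sketch from raw counts, with invented numbers; it is not the study's exact analysis.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from raw counts, using a
    log-linear correction so perfect rates do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# A liberal responder: many hits but also many false alarms (invented counts)
d_lib, c_lib = sdt_indices(45, 5, 30, 20)
# A conservative responder: fewer hits, far fewer false alarms
d_con, c_con = sdt_indices(40, 10, 5, 45)
```

In this framing, the high-AS profile reported above (readily reporting heartbeats, high false-alarm rate, poorer discrimination of abnormal beats) corresponds to a liberal criterion (negative c) without a sensitivity advantage.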
Visual and auditory perception in preschool children at risk for dyslexia.
Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina
2014-11-01
Recently, there has been renewed interest in the perceptual problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.
Barcroft, Joe; Sommers, Mitchell S; Tye-Murray, Nancy; Mauzé, Elizabeth; Schroy, Catherine; Spehar, Brent
2011-11-01
Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.
Auditory processing disorders and problems with hearing-aid fitting in old age.
Antonelli, A R
1978-01-01
The hearing handicap experienced by elderly subjects depends only partially on end-organ impairment. Not only does neural unit loss along the central auditory pathways contribute to decreased speech discrimination, but learning processes are also slowed down. Diotic listening in elderly people seems to hasten the learning of discrimination in critical conditions, as in the case of sensitized speech. This fact, together with the binaural gain obtained through the binaural release from masking, stresses the superiority, on theoretical grounds, of binaural over monaural hearing-aid fitting.
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
ERIC Educational Resources Information Center
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-01-01
Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…
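Difference-limen tasks like the frequency-discrimination training described here are commonly run with an adaptive staircase. Below is a minimal sketch of a two-down/one-up procedure (which converges near 70.7% correct); the `answer` callback and all parameter values are illustrative assumptions, not details taken from this study:

```python
def run_staircase(answer, start=50.0, factor=2.0, stop_reversals=6):
    """Two-down/one-up adaptive staircase. `answer(delta)` returns True
    for a correct response at stimulus difference `delta`. Returns the
    geometric mean of the reversal deltas as the difference-limen estimate."""
    delta, streak, direction = start, 0, None
    reversals = []
    while len(reversals) < stop_reversals:
        if answer(delta):
            streak += 1
            if streak < 2:
                continue                 # need two correct before stepping down
            streak, new_dir = 0, "down"
        else:
            streak, new_dir = 0, "up"
        if direction is not None and new_dir != direction:
            reversals.append(delta)      # level at which the track turned
        direction = new_dir
        delta = delta / factor if new_dir == "down" else delta * factor
    gm = 1.0
    for d in reversals:
        gm *= d
    return gm ** (1.0 / len(reversals))
```

With a deterministic simulated listener whose true threshold is 10 (`lambda d: d >= 10`), the track oscillates between 6.25 and 12.5 and the estimate brackets the true value.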
Zhang, Juan; Meng, Yaxuan; Wu, Chenggang; Zhou, Danny Q
2017-01-01
Music and language share many attributes and a large body of evidence shows that sensitivity to acoustic cues in music is positively related to language development and even subsequent reading acquisition. However, such an association has mainly been found in alphabetic languages. What remains unclear is whether sensitivity to acoustic cues in music is associated with reading in Chinese, a morphosyllabic language. The present study aimed to answer this question by measuring music (i.e., musical metric perception and pitch discrimination), language (i.e., phonological awareness, lexical tone sensitivity), and reading abilities (i.e., word recognition) among 54 third-grade Chinese-English bilingual children. After controlling for age and non-verbal intelligence, we found that both musical metric perception and pitch discrimination accounted for unique variance in Chinese phonological awareness, while pitch discrimination rather than musical metric perception predicted Chinese lexical tone sensitivity. More importantly, neither musical metric perception nor pitch discrimination was associated with Chinese reading. As for English, musical metric perception and pitch discrimination were correlated with both English phonological awareness and English reading. Furthermore, sensitivity to acoustic cues in music was associated with English reading through the mediation of English phonological awareness. The current findings indicate that the association between sensitivity to acoustic cues in music and reading may be modulated by writing systems. In Chinese, the mapping between orthography and phonology is not as transparent as in alphabetic languages such as English. Thus, this opaque mapping may alter the contribution of auditory perceptual sensitivity in music to Chinese reading.
Kagerer, Florian A; Viswanathan, Priya; Contreras-Vidal, Jose L; Whitall, Jill
2014-04-01
Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high-threshold participants were more variable than the low-threshold participants in their responses in the gradual condition set. Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier sub-cortical circuitry in those with higher thresholds.
Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.
Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet
2017-06-01
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be tentatively suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
Giordano, Bruno L; Visell, Yon; Yao, Hsin-Yun; Hayward, Vincent; Cooperstock, Jeremy R; McAdams, Stephen
2012-05-01
Locomotion generates multisensory information about walked-upon objects. How perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic or audio-haptic conditions, and in a kinesthetic condition where tactile information was perturbed with a vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions, for both solids and the better-identified aggregates. Despite large mechanical differences between the response of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality, kinesthesia. When walking on loose materials such as gravel, individuals do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.
Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.
2018-01-01
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415
Winn, Matthew B; Won, Jong Ho; Moon, Il Joon
This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. 
Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
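The abstract above notes that categorization responses were quantified with logistic regression, whose fitted slope indexes perceptual sensitivity to an acoustic-phonetic cue. A minimal sketch of that idea with invented data (the continuum values and responses are hypothetical, not the study's):

```python
import math

def fit_logistic(x, y, lr=0.5, n_steps=4000):
    """Fit p(y=1) = sigmoid(b0 + b1*x) by gradient ascent on the
    log-likelihood; the fitted slope b1 indexes sensitivity to the cue."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(n_steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Hypothetical 7-step VOT continuum (centered step index) and one
# listener's binary "voiceless" responses, for illustration only.
steps_x = [-3, -2, -1, 0, 1, 2, 3]
resp    = [ 0,  0,  1, 0, 1, 1, 1]
b0, b1 = fit_logistic(steps_x, resp)
```

A steep positive slope corresponds to sharp categorization along the continuum; a shallow slope corresponds to the weaker cue use the abstract reports for some cochlear implant listeners.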
Different Timescales for the Neural Coding of Consonant and Vowel Sounds
Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.
2013-01-01
Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
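The count-versus-timing contrast drawn in this abstract can be illustrated with a toy decoder input: binning spike trains into one coarse bin yields a rate (spike-count) code, while fine bins preserve timing. The spike times below are invented for illustration, not recorded data:

```python
def bin_spikes(times, t_max=0.1, bin_ms=1.0):
    """Histogram spike times (in seconds) into fixed bins; one large bin
    reduces to a spike count, fine bins preserve spike timing."""
    n_bins = int(round(t_max * 1000 / bin_ms))
    counts = [0] * n_bins
    for t in times:
        i = min(int(t * 1000 / bin_ms), n_bins - 1)
        counts[i] += 1
    return counts

def distance(a, b):
    """Euclidean distance between two binned responses."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

# Two responses with identical spike counts but different timing:
resp_a = [0.005, 0.020, 0.040]
resp_b = [0.030, 0.060, 0.090]
count_d = distance(bin_spikes(resp_a, bin_ms=100.0),
                   bin_spikes(resp_b, bin_ms=100.0))  # one 100-ms bin
timing_d = distance(bin_spikes(resp_a, bin_ms=1.0),
                    bin_spikes(resp_b, bin_ms=1.0))   # 1-ms bins
```

The count-based distance is zero (the trains are indistinguishable to a rate code, as for consonants in the study), while the timing-based distance is not.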
Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R
2007-04-01
Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits of social interaction and abnormal perception, like hypo- or hypersensitivity in reacting to sounds and discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.
Input from the medial geniculate nucleus modulates amygdala encoding of fear memory discrimination.
Ferrara, Nicole C; Cullen, Patrick K; Pullins, Shane P; Rotondo, Elena K; Helmstetter, Fred J
2017-09-01
Generalization of fear can involve abnormal responding to cues that signal safety and is common in people diagnosed with post-traumatic stress disorder. Differential auditory fear conditioning can be used as a tool to measure changes in fear discrimination and generalization. Most prior work in this area has focused on elevated amygdala activity as a critical component underlying generalization. The amygdala receives input from auditory cortex as well as the medial geniculate nucleus (MgN) of the thalamus, and these synapses undergo plastic changes in response to fear conditioning and are major contributors to the formation of memory related to both safe and threatening cues. The requirement for MgN protein synthesis during auditory discrimination and generalization, as well as the role of MgN plasticity in amygdala encoding of discrimination or generalization, have not been directly tested. GluR1- and GluR2-containing AMPA receptors are found at synapses throughout the amygdala and their expression is persistently up-regulated after learning. Some of these receptors are postsynaptic to terminals from MgN neurons. We found that protein synthesis-dependent plasticity in MgN is necessary for elevated freezing to both aversive and safe auditory cues, and that this is accompanied by changes in the expression of AMPA receptors and synaptic scaffolding proteins (e.g., SHANK) at amygdala synapses. This work contributes to understanding the neural mechanisms underlying increased fear to safety signals after stress. © 2017 Ferrara et al.; Published by Cold Spring Harbor Laboratory Press.
Spectral context affects temporal processing in awake auditory cortex
Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.
2013-01-01
Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16-channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM were set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
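As an illustration of the tonal SAM stimuli described above (all parameter values are placeholders, not the study's), a sinusoidally amplitude-modulated tone is a carrier sinusoid multiplied by a slow modulator; the noise-carrier condition would replace the sinusoidal carrier with band-limited noise:

```python
import math

def sam_tone(fc=1000.0, fm=16.0, depth=1.0, dur=0.5, fs=16000):
    """Sinusoidally amplitude-modulated tone, sample by sample:
    s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), scaled to [-1, 1].
    fc: carrier frequency (Hz), fm: modulation frequency (Hz),
    depth: modulation depth m in [0, 1], fs: sampling rate (Hz)."""
    n = int(dur * fs)
    return [
        (1.0 + depth * math.sin(2 * math.pi * fm * t / fs))
        * math.sin(2 * math.pi * fc * t / fs)
        / (1.0 + depth)                      # normalize peak amplitude
        for t in range(n)
    ]
```

Sweeping `fm` over 4-512 Hz while holding the carrier at a neuron's best frequency, as in the abstract, yields the stimulus set from which an MTF can be measured.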
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.
2015-01-01
Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and in their sensitivity to the challenges that auditory environments pose for functioning. PMID:26136699
Cochlear implant rehabilitation outcomes in Waardenburg syndrome children.
de Sousa Andrade, Susana Margarida; Monteiro, Ana Rita Tomé; Martins, Jorge Humberto Ferreira; Alves, Marisa Costa; Santos Silva, Luis Filipe; Quadros, Jorge Manuel Cardoso; Ribeiro, Carlos Alberto Reis
2012-09-01
The purpose of this study was to review the outcomes of children with documented Waardenburg syndrome (WS) implanted in the ENT Department of Centro Hospitalar de Coimbra, concerning postoperative speech perception and production, in comparison with non-syndromic implanted children. A retrospective chart review was performed for congenitally deaf children diagnosed with Waardenburg syndrome who had undergone cochlear implantation with multichannel implants between 1992 and 2011. Postoperative performance outcomes were assessed and compared with results obtained by children with non-syndromic congenital deafness also implanted in our department. Open-set auditory perception skills were evaluated using European Portuguese speech discrimination tests (vowel test, monosyllabic word test, number word test and words-in-sentence test). The Meaningful Auditory Integration Scale (MAIS) and Categories of Auditory Performance (CAP) were also measured. Speech production was further assessed, including results on the Meaningful Use of Speech Scale (MUSS) and the Speech Intelligibility Rating (SIR). To date, 6 implanted children were clinically identified as having WS type I, and one met the diagnosis of type II. All WS children received multichannel cochlear implants, with a mean age at implantation of 30.6±9.7 months (ranging from 19 to 42 months). Postoperative outcomes in WS children were similar to those of other non-syndromic children. In addition, in the number word and vowel discrimination tests, the WS group showed slightly better performance, as it did on the MUSS and MAIS assessments. Our study has shown that cochlear implantation should be considered a rehabilitative option for Waardenburg syndrome children with profound deafness, enabling the development and improvement of speech perception and production abilities in this group of patients, reinforcing their candidacy for this audio-oral rehabilitation method. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.
Häkkinen, Suvi; Rinne, Teemu
2018-06-01
A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination (i.e. no directed auditory attention), pitch discrimination, and two versions of pitch n-back memory tasks. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.
Meaning in the avian auditory cortex: Neural representation of communication calls
Elie, Julie E; Theunissen, Frédéric E
2014-01-01
Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation have yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls into 9 different categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based on both spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understanding vocal communication. PMID:25728175
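The study's optimal decoder used spike counts and spike patterns. As a toy illustration of the spike-count half of that idea only, the sketch below classifies simulated single-unit responses with a nearest-centroid rule; the call-category names and firing rates are invented for illustration, not taken from the zebra finch data.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical mean firing rates (spikes per trial) for three call
# categories at one unit; names and numbers are invented.
RATES = {"distance call": 20, "tet": 8, "song": 14}

def simulate_trials(n=50):
    """Noisy spike counts per category (uniform jitter stands in for
    real trial-to-trial variability)."""
    return {cat: [max(0, rate + random.randint(-3, 3)) for _ in range(n)]
            for cat, rate in RATES.items()}

def train_centroids(data):
    return {cat: mean(counts) for cat, counts in data.items()}

def decode(count, centroids):
    """Assign a trial to the category with the nearest mean spike count."""
    return min(centroids, key=lambda c: abs(centroids[c] - count))

centroids = train_centroids(simulate_trials())
test_data = simulate_trials()
correct = sum(decode(c, centroids) == cat
              for cat, counts in test_data.items() for c in counts)
total = sum(len(v) for v in test_data.values())
print(f"decoding accuracy: {correct / total:.2f} (chance = 0.33)")
```

Because the three category rates are well separated relative to the jitter, the decoder performs well above the one-in-three chance level; a real analysis would also exploit spike timing, as the abstract notes.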
Contribution of Auditory Learning Style to Students’ Mathematical Connection Ability
NASA Astrophysics Data System (ADS)
Karlimah; Risfiani, F.
2017-09-01
This paper presents the results of research on mathematical connection ability, that is, the ability to relate mathematical concepts to one another, to other subjects, and to everyday life. The research examined students with an auditory learning style and correlated that style with their mathematical connection ability. The researchers used a combination model, the sequential exploratory design method, in which qualitative and quantitative research methods are applied in sequence. The results show that providing learning facilities ill-suited to a class of students with an auditory learning style yields barely sufficient mathematical connection ability. The average mathematical connection ability of the auditory students was initially at a medium level of qualification. After an intervention of varied learning activities suited to the auditory learning style, the average ability remained at the medium level; nevertheless, the number of students at the medium level increased, and the number at the very low and low levels decreased. This suggests that learning facilities appropriate to the students' auditory learning style contribute well to their mathematical connection ability. Mathematics learning for students with an auditory learning style should therefore include activities aimed at understanding mathematical concepts and their interrelations.
Speaker variability augments phonological processing in early word learning
Rost, Gwyneth C.; McMurray, Bob
2010-01-01
Infants in the early stages of word learning have difficulty learning lexical neighbors (i.e., word pairs that differ by a single phoneme), despite the ability to discriminate the same contrast in a purely auditory task. While prior work has focused on top-down explanations for this failure (e.g. task demands, lexical competition), none has examined if bottom-up acoustic-phonetic factors play a role. We hypothesized that lexical neighbor learning could be improved by incorporating greater acoustic variability in the words being learned, as this may buttress still developing phonetic categories, and help infants identify the relevant contrastive dimension. Infants were exposed to pictures accompanied by labels spoken by either a single or multiple speakers. At test, infants in the single-speaker condition failed to recognize the difference between the two words, while infants who heard multiple speakers discriminated between them. PMID:19143806
ERIC Educational Resources Information Center
Friar, John T.
Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractability, auditory discrimination, and visual discrimination.…
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Musical experience and Mandarin tone discrimination and imitation
NASA Astrophysics Data System (ADS)
Gottfried, Terry L.; Staby, Ann M.; Ziemer, Christine J.
2004-05-01
Previous work [T. L. Gottfried and D. Riester, J. Acoust. Soc. Am. 108, 2604 (2000)] showed that native speakers of American English with musical training performed better than nonmusicians when identifying the four distinctive tones of Mandarin Chinese (high-level, mid-rising, low-dipping, high-falling). Accuracy for both groups was relatively low since listeners were not trained on the phonemic contrasts. Current research compares musicians and nonmusicians on discrimination and imitation of unfamiliar tones. Listeners were presented with two different Mandarin words that had either the same or different tones; listeners indicated whether the tones were same or different. Thus, they were required to determine a categorical match (same or different tone), rather than an auditory match. All listeners had significantly more difficulty discriminating between mid-rising and low-dipping tones than with other contrasts. Listeners with more musical training showed significantly greater accuracy in their discrimination. Likewise, musicians' spoken imitations of Mandarin tones (model tokens presented by a native speaker) were rated as significantly more native-like than those of nonmusicians. These findings suggest that musicians may have abilities or training that facilitate their perception and production of Mandarin tones. However, further research is needed to determine whether this advantage transfers to language learning situations.
Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K
2013-11-01
Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Impey, Danielle; Knott, Verner
2015-08-01
Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.
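The MMN analyses in the studies above rest on a simple computation: the deviant-minus-standard ERP difference wave, whose amplitude in a post-stimulus window indexes auditory discrimination. A minimal sketch with synthetic averaged ERPs (the sampling rate, waveform shapes, and analysis window are all assumed for illustration):

```python
import math

FS = 500                          # sampling rate in Hz (assumed)
t = [i / FS for i in range(250)]  # 0-500 ms post-stimulus epoch

def toy_erp(peak_uv, latency_s, width_s=0.04):
    """Gaussian-shaped stand-in for an averaged ERP component."""
    return [peak_uv * math.exp(-((x - latency_s) ** 2) / (2 * width_s ** 2))
            for x in t]

standard = toy_erp(-1.0, 0.15)    # response to frequent standard tones
deviant = toy_erp(-3.5, 0.15)     # deviants evoke a larger negativity

# MMN = deviant-minus-standard difference wave
difference = [d - s for d, s in zip(deviant, standard)]

# Quantify the MMN as the mean amplitude in a 100-250 ms window
window = [v for v, x in zip(difference, t) if 0.10 <= x <= 0.25]
mmn_amplitude = sum(window) / len(window)
print(f"MMN mean amplitude: {mmn_amplitude:.2f} microvolts")
```

A more negative mean amplitude in the window corresponds to a stronger discriminative response, which is the quantity compared across stimulation conditions or groups.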
The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills
Tierney, Adam T.; Kraus, Nina
2013-01-01
Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117
Musicians and non-musicians are equally adept at perceiving masked speech
Boebinger, Dana; Evans, Samuel; Scott, Sophie K.; Rosen, Stuart; Lima, César F.; Manly, Tom
2015-01-01
There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that nonverbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise. PMID:25618067
Evaluating the Precision of Auditory Sensory Memory as an Index of Intrusion in Tinnitus.
Barrett, Doug J K; Pilling, Michael
The purpose of this study was to investigate the potential of measures of auditory short-term memory (ASTM) to provide a clinical measure of intrusion in tinnitus. Response functions for six normal listeners on a delayed pitch discrimination task were contrasted in three conditions designed to manipulate attention in the presence and absence of simulated tinnitus: (1) no-tinnitus, (2) ignore-tinnitus, and (3) attend-tinnitus. Delayed pitch discrimination functions were more variable in the presence of simulated tinnitus when listeners were asked to divide attention between the primary task and the amplitude of the tinnitus tone. Changes in the variability of auditory short-term memory may provide a novel means of quantifying the level of intrusion associated with the tinnitus percept during listening.
Engineering Data Compendium. Human Perception and Performance. Volume 2
1988-01-01
[Extraction-damaged index fragment; recoverable cross-references:] 5.1004 Auditory detection in the presence of visual stimulation; 5.1005 Tactual detection and discrimination in the presence of accessory stimulation; 5.1006 Tactile versus auditory localization of sound; 5.1007 Spatial localization in the presence of inter-…
The organization and reorganization of audiovisual speech perception in the first year of life.
Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F
2017-04-01
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Comparing Monotic and Diotic Selective Auditory Attention Abilities in Children
ERIC Educational Resources Information Center
Cherry, Rochelle; Rubinstein, Adrienne
2006-01-01
Purpose: Some researchers have assessed ear-specific performance of auditory processing ability using speech recognition tasks with normative data based on diotic administration. The present study investigated whether monotic and diotic administrations yield similar results using the Selective Auditory Attention Test. Method: Seventy-two typically…
Fu, Q Y; Liang, Y; Zou, A; Wang, T; Zhao, X D; Wan, J
2016-04-07
To investigate the relationships between the electrophysiological characteristics of the speech-evoked auditory brainstem response (s-ABR) and the Mandarin phonetically balanced maximum (PBmax) at different degrees of hearing impairment, so as to provide further clues to the mechanisms of speech cognition. Forty-one ears in 41 normal-hearing adults (NH), thirty ears in 30 conductive hearing loss patients (CHL) and twenty-seven ears in 27 sensorineural hearing loss patients (SNHL) were included in the present study. Speech discrimination scores were obtained with Mandarin phonemically balanced monosyllable lists via speech audiometric software. The s-ABRs were recorded with the speech syllable /da/ at the intensity of the phonetically balanced maximum (PBmax). The electrophysiological characteristics of the s-ABR, as well as the relationships between PBmax and s-ABR parameters, including latency in the time domain and fundamental frequency (F0) and first formant (F1) in the frequency domain, were analyzed statistically. All subjects completed the speech perception tests; PBmax of CHL and SNHL showed no significant difference (P>0.05), but both were significantly lower than that of NH (P<0.05). While divided the subjects into three groups by 90%
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
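The partialing-out step described above (removing nonverbal IQ before relating pitch thresholds to incongruent audio scores) amounts to a partial correlation: regress both variables on IQ and correlate the residuals. A sketch with simulated data; the variable names echo the abstract, but all coefficients and numbers are invented for illustration.

```python
import random
from statistics import mean

random.seed(1)

def residuals(y, x):
    """Residuals of y after a least-squares regression on x."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + slope * (xi - mx)) for xi, yi in zip(x, y)]

def pearson(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

# Simulated data: both measures are partly driven by nonverbal IQ,
# so their raw correlation is spurious.
n = 100
iq = [random.gauss(100, 15) for _ in range(n)]
pitch_threshold = [0.5 * q + random.gauss(0, 5) for q in iq]
incongruent_audio = [0.5 * q + random.gauss(0, 5) for q in iq]

raw_r = pearson(pitch_threshold, incongruent_audio)
partial_r = pearson(residuals(pitch_threshold, iq),
                    residuals(incongruent_audio, iq))
print(f"raw r = {raw_r:.2f}; partial r with IQ removed = {partial_r:.2f}")
```

In this simulation the raw correlation is sizeable while the partial correlation collapses toward zero, which is the pattern the abstract reports for pitch discrimination thresholds once nonverbal IQ is partialed out.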
Audiovisual Interval Size Estimation Is Associated with Early Musical Training
Abel, Mary Kathryn; Li, H. Charles; Russo, Frank A.; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants’ ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants’ ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception. PMID:27760134
Influence of musical and psychoacoustical training on pitch discrimination.
Micheyl, Christophe; Delhommeau, Karine; Perrot, Xavier; Oxenham, Andrew J
2006-09-01
This study compared the influence of musical and psychoacoustical training on auditory pitch discrimination abilities. In a first experiment, pitch discrimination thresholds for pure and complex tones were measured in 30 classical musicians and 30 non-musicians, none of whom had prior psychoacoustical training. The non-musicians' mean thresholds were more than six times larger than those of the classical musicians initially, and still about four times larger after 2 h of training using an adaptive two-interval forced-choice procedure; this difference is two to three times larger than suggested by previous studies. The musicians' thresholds were close to those measured in earlier psychoacoustical studies using highly trained listeners, and showed little improvement with training; this suggests that classical musical training can lead to optimal or nearly optimal pitch discrimination performance. A second experiment was performed to determine how much additional training was required for the non-musicians to obtain thresholds as low as those of the classical musicians from experiment 1. Eight new non-musicians with no prior training practiced the frequency discrimination task for a total of 14 h. It took between 4 and 8 h of training for their thresholds to become as small as those measured in the classical musicians from experiment 1. These findings supplement and qualify earlier data in the literature regarding the respective influence of musical and psychoacoustical training on pitch discrimination performance.
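Adaptive two-interval forced-choice threshold procedures of the kind used here are commonly implemented as a 2-down/1-up staircase, which converges on roughly the 70.7%-correct point of the psychometric function. A sketch with a simulated listener; the psychometric function, step factor, and stopping rule are assumptions chosen for illustration, not the study's actual parameters.

```python
import random

random.seed(2)

TRUE_DLF = 4.0  # the simulated listener's "true" difference limen in Hz

def listener_correct(delta_hz):
    """Toy psychometric function: accuracy rises from 50% (guessing)
    toward 100% as the frequency difference grows."""
    p = 0.5 + 0.5 / (1.0 + (TRUE_DLF / max(delta_hz, 1e-9)) ** 3)
    return random.random() < p

delta, factor = 32.0, 2.0            # start easy; multiplicative step
streak, reversals, last_dir = 0, [], None
while len(reversals) < 8:
    if listener_correct(delta):
        streak += 1
        if streak == 2:              # 2-down: harder after two correct
            streak = 0
            if last_dir == "up":
                reversals.append(delta)
            delta, last_dir = delta / factor, "down"
    else:                            # 1-up: easier after any error
        streak = 0
        if last_dir == "down":
            reversals.append(delta)
        delta, last_dir = delta * factor, "up"

threshold = sum(reversals[-6:]) / 6  # average the last six reversal points
print(f"estimated frequency difference limen: {threshold:.1f} Hz")
```

The staircase oscillates around the listener's limen, and averaging the final reversal points gives the threshold estimate; differences in these estimates are what separate the musician and non-musician groups in the abstract.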
Heim, Sabine; Choudhury, Naseem; Benasich, April A
2016-05-01
Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI, is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices, in LLI children after training.
Prefrontal consolidation supports the attainment of fear memory accuracy
Vieira, Philip A.; Lovelace, Jonathan W.; Corches, Alex; Rashid, Asim J.; Josselyn, Sheena A.
2014-01-01
The neural mechanisms underlying the attainment of fear memory accuracy for appropriate discriminative responses to aversive and nonaversive stimuli are unclear. Considerable evidence indicates that coactivator of transcription and histone acetyltransferase cAMP response element binding protein (CREB) binding protein (CBP) is critically required for normal neural function. CBP hypofunction leads to severe psychopathological symptoms in human and cognitive abnormalities in genetic mutant mice with severity dependent on the neural locus and developmental time of the gene inactivation. Here, we showed that an acute hypofunction of CBP in the medial prefrontal cortex (mPFC) results in a disruption of fear memory accuracy in mice. In addition, interruption of CREB function in the mPFC also leads to a deficit in auditory discrimination of fearful stimuli. While mice with deficient CBP/CREB signaling in the mPFC maintain normal responses to aversive stimuli, they exhibit abnormal responses to similar but nonrelevant stimuli when compared to control animals. These data indicate that improvement of fear memory accuracy involves mPFC-dependent suppression of fear responses to nonrelevant stimuli. Evidence from a context discriminatory task and a newly developed task that depends on the ability to distinguish discrete auditory cues indicated that CBP-dependent neural signaling within the mPFC circuitry is an important component of the mechanism for disambiguating the meaning of fear signals with two opposing values: aversive and nonaversive. PMID:25031365
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2007-01-01
This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…
Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon
2016-01-01
Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects’ categorization of VOT, nor with word recognition. 
Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart non-linguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (VOT) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. PMID:27438871
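Quantifying categorization responses with logistic regression, as described, amounts to fitting a sigmoid over the acoustic cue continuum: the fitted slope indexes perceptual sensitivity to the cue and the 50% point gives the category boundary. A sketch with simulated /b/-/p/ labeling responses along a voice onset time (VOT) continuum; all numbers are invented, and the fit here is a simple grid-search maximum likelihood rather than any particular statistics package.

```python
import math
import random

random.seed(3)

# Hypothetical listener: p('p' response) rises with VOT.
VOTS = [0, 10, 20, 30, 40, 50, 60]      # ms steps along the continuum
TRUE_BOUNDARY, TRUE_SLOPE = 30.0, 0.2   # assumed, for simulation only

def simulate_responses(n_per_step=50):
    trials = []
    for v in VOTS:
        p = 1 / (1 + math.exp(-TRUE_SLOPE * (v - TRUE_BOUNDARY)))
        trials += [(v, 1 if random.random() < p else 0)
                   for _ in range(n_per_step)]
    return trials

def fit_psychometric(trials):
    """Grid-search maximum-likelihood fit of
    p = 1 / (1 + exp(-slope * (VOT - boundary)))."""
    best = None
    for boundary in range(0, 61, 2):
        for k in range(1, 51):               # candidate slopes 0.02..1.00
            slope = k / 50
            ll = 0.0
            for v, y in trials:
                p = 1 / (1 + math.exp(-slope * (v - boundary)))
                p = min(max(p, 1e-9), 1 - 1e-9)
                ll += math.log(p if y else 1 - p)
            if best is None or ll > best[0]:
                best = (ll, boundary, slope)
    return best[1], best[2]

boundary_hat, slope_hat = fit_psychometric(simulate_responses())
print(f"boundary ≈ {boundary_hat} ms, slope ≈ {slope_hat:.2f} per ms")
```

A shallow fitted slope indicates weak sensitivity to the cue, which is the pattern the abstract reports for CI listeners relative to listeners with normal hearing.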
Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart
2004-12-01
To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, ages 6-14 yr (32 of whom were referred for susAPD, the rest being age-matched controls), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP); and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than that of the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated.
Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children in this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
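The age-corrected z-scores used in the battery above are typically obtained by regressing raw task scores on age in the control group and standardizing the residuals. A minimal sketch with invented control-group numbers (not the study's data):

```python
import numpy as np

def age_corrected_z(scores, ages):
    """Regress score on age (least squares) and return standardized residuals."""
    ages = np.asarray(ages, dtype=float)
    scores = np.asarray(scores, dtype=float)
    A = np.column_stack([np.ones_like(ages), ages])   # intercept + age predictor
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
    resid = scores - A @ coef                         # age-adjusted deviations
    return (resid - resid.mean()) / resid.std(ddof=1)

# Invented control-group data: raw task score improves with age (6-14 yr)
ages = [6, 7, 8, 9, 10, 11, 12, 13, 14]
scores = [55, 60, 63, 68, 70, 74, 77, 80, 85]
z = age_corrected_z(scores, ages)
```

A child's z-score then reflects how far the score falls from the level expected for that age, so a cutoff (e.g., z below -2) flags abnormal performance regardless of age.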
Children's discrimination of vowel sequences
NASA Astrophysics Data System (ADS)
Coady, Jeffry A.; Kluender, Keith R.; Evans, Julia
2003-10-01
Children's ability to discriminate sequences of steady-state vowels was investigated. Vowels (as in ``beet,'' ``bat,'' ``bought,'' and ``boot'') were synthesized at durations of 40, 80, 160, 320, 640, and 1280 ms. Four different vowel sequences were created by concatenating different orders of vowels for each duration, separated by 10-ms intervening silences. Thus, sequences differed in vowel order and duration (rate). Sequences were 12 s in duration, with amplitude ramped linearly over the first and last 2 s. Sequence pairs included both same trials (identical sequences) and different trials (sequences with vowels in different orders). Sequences with vowels of equal duration were presented on individual trials. Children aged 7;0 to 10;6 listened to pairs of sequences (with 100 ms between sequences) and responded whether sequences sounded the same or different. Results indicate that children are best able to discriminate sequences of intermediate-duration vowels, typical of conversational speaking rate. Children were less accurate with both shorter and longer vowels. Results are discussed in terms of auditory processing (shortest vowels) and memory (longest vowels). [Research supported by NIDCD DC-05263, DC-04072, and DC-005650.]
Borucki, Ewa; Berg, Bruce G
2017-05-01
This study investigated the psychophysical effects of distortion products in a listening task traditionally used to estimate the bandwidth of phase sensitivity. For a 2000 Hz carrier, estimates of the modulation depth necessary to discriminate amplitude modulated (AM) tones from quasi-frequency modulated (QFM) tones were measured in a two-interval forced-choice task as a function of modulation frequency. Temporal modulation transfer functions were often non-monotonic at modulation frequencies above 300 Hz. This was likely due to a spectral cue arising from the interaction of auditory distortion products and the lower sideband of the stimulus complex. When the stimulus duration was decreased from 200 ms to 20 ms, thresholds for low-frequency modulators rose to near-chance levels, whereas thresholds in the region of non-monotonicities were less affected. The decrease in stimulus duration appears to hinder the listener's ability to use temporal cues to discriminate between AM and QFM, whereas spectral information derived from distortion product cues appears more resilient. Copyright © 2017. Published by Elsevier B.V.
Relating age and hearing loss to monaural, bilateral, and binaural temporal sensitivity
Gallun, Frederick J.; McMillan, Garnett P.; Molis, Michelle R.; Kampel, Sean D.; Dann, Serena M.; Konrad-Martin, Dawn L.
2014-01-01
Older listeners are more likely than younger listeners to have difficulties in making temporal discriminations among auditory stimuli presented to one or both ears. In addition, the performance of older listeners is often observed to be more variable than that of younger listeners. The aim of this work was to relate age and hearing loss to temporal processing ability in a group of younger and older listeners with a range of hearing thresholds. Seventy-eight listeners were tested on a set of three temporal discrimination tasks (monaural gap discrimination, bilateral gap discrimination, and binaural discrimination of interaural differences in time). To examine the role of temporal fine structure in these tasks, four types of brief stimuli were used: tone bursts, broad-frequency chirps with rising or falling frequency contours, and random-phase noise bursts. Between-subject group analyses conducted separately for each task revealed substantial increases in temporal thresholds for the older listeners across all three tasks, regardless of stimulus type, as well as significant correlations among the performance of individual listeners across most combinations of tasks and stimuli. Differences in performance were associated with the stimuli in the monaural and binaural tasks, but not the bilateral task. Temporal fine structure differences among the stimuli had the greatest impact on monaural thresholds. Threshold estimates across all tasks and stimuli were no more variable for the older listeners than for the younger listeners. A linear mixed model applied to the data suggested that age and hearing loss are independent factors shaping temporal processing ability, supporting the increasingly accepted hypothesis that temporal processing can be impaired in older listeners compared to younger listeners with similar hearing and/or similar amounts of hearing loss. PMID:25009458
ERIC Educational Resources Information Center
Moore, D.R.; Rosenberg, J.F.; Coleman, J.S.
2005-01-01
Auditory perceptual learning has been proposed as effective for remediating impaired language and for enhancing normal language development. We examined the effect of phonemic contrast discrimination training on the discrimination of whole words and on phonological awareness in 8- to 10-year-old mainstream school children. Eleven phonemic contrast…
Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru
2016-01-01
The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
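The ``16% and 25% of the variance'' figures above are squared correlations (r of roughly 0.40 and 0.50). A minimal sketch of how such a proportion is computed, with entirely invented simulated data standing in for the synchrony-judgment and reading-comprehension scores:

```python
import numpy as np

def variance_explained(x, y):
    """Proportion of variance in y accounted for by x: the squared Pearson r."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Invented data: a predictor correlated ~0.45 with an outcome explains ~20% of variance
rng = np.random.default_rng(1)
sync = rng.normal(size=500)                  # hypothetical synchrony-judgment scores
reading = 0.5 * sync + rng.normal(size=500)  # hypothetical reading-comprehension scores
v = variance_explained(sync, reading)
```

Because variance explained grows with the square of r, a predictor must be correlated at r = 0.5 to account for a quarter of the outcome's variance.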
Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina
2013-02-01
Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
NASA Astrophysics Data System (ADS)
Leek, Marjorie R.; Neff, Donna L.
2004-05-01
Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even ``cognitive'') processes as a consequence of stimulus uncertainty, and can be distinguished from ``energetic'' masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]
Strategy Choice Mediates the Link between Auditory Processing and Spelling
Kwong, Tru E.; Brachman, Kyle J.
2014-01-01
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities. PMID:25198787
Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab
2005-08-01
Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). However, only in the temporal tasks, the STRF was changed along the temporal dimension by sharpening temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.
Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich
2015-03-01
Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme; 15 adults in the training group, 12 adults in the control group. Besides significant improvements on the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.
Vocal development and auditory perception in CBA/CaJ mice
NASA Astrophysics Data System (ADS)
Radziwon, Kelly E.
Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limen experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had thresholds similar to those of wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice use frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development.
Results showed large variation in calling rates among the three cages of adult mice but results were highly consistent across all infant vocalizations. Although the preference experiment did not reveal significant differences between various mouse vocalizations, suggestions are given for future attempts to identify mouse preferences for auditory stimuli.
McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia
2015-01-01
Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.
Subcortical Plasticity Following Perceptual Learning in a Pitch Discrimination Task
Carcagno, Samuele; Plack, Christopher J.
2010-01-01
Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change. PMID:20878201
ERIC Educational Resources Information Center
McKeown, Denis; Wellsted, David
2009-01-01
Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…
A Study of Semantic Features: Electrophysiological Correlates.
ERIC Educational Resources Information Center
Wetzel, Frederick; And Others
This study investigates whether words differing in a single contrastive semantic feature (positive/negative) can be discriminated by auditory evoked responses (AERs). Ten right-handed college students were provided with auditory stimuli consisting of 20 relational words (more/less; high/low, etc.) spoken with a middle American accent and computer…
Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes
ERIC Educational Resources Information Center
Getzmann, Stephan
2009-01-01
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…
Effect of Three Classroom Listening Conditions on Speech Intelligibility
ERIC Educational Resources Information Center
Ross, Mark; Giolas, Thomas G.
1971-01-01
Speech discrimination scores for 13 deaf children were obtained in a classroom under: usual listening condition (hearing aid or not), binaural listening situation using auditory trainer/FM receiver with wireless microphone transmitter turned off, and binaural condition with inputs from auditory trainer/FM receiver and wireless microphone/FM…
ERIC Educational Resources Information Center
Flowers, Arthur; Crandell, Edwin W.
Three auditory perceptual processes (resistance to distortion, selective listening in the form of auditory dedifferentiation, and binaural synthesis) were evaluated by five assessment techniques: (1) low pass filtered speech, (2) accelerated speech, (3) competing messages, (4) accelerated plus competing messages, and (5) binaural synthesis.…
Understanding music with cochlear implants
Bruns, Lisa; Mürbe, Dirk; Hahne, Anja
2016-01-01
Direct stimulation of the auditory nerve via a Cochlear Implant (CI) enables profoundly hearing-impaired people to perceive sounds. Many CI users find language comprehension satisfactory, but music perception is generally considered difficult. However, music contains different dimensions which might be accessible in different ways. We aimed to highlight three main dimensions of music processing in CI users which rely on different processing mechanisms: (1) musical discrimination abilities, (2) access to meaning in music, and (3) subjective music appreciation. All three dimensions were investigated in two CI user groups (post- and prelingually deafened CI users, all implanted as adults) and a matched normal hearing control group. The meaning of music was studied by using event-related potentials (with the N400 component as marker) during a music-word priming task while music appreciation was gathered by a questionnaire. The results reveal a double dissociation between the three dimensions of music processing. Despite impaired discrimination abilities of both CI user groups compared to the control group, appreciation was reduced only in postlingual CI users. While musical meaning processing was restorable in postlingual CI users, as shown by a N400 effect, data of prelingual CI users lack the N400 effect and indicate previous dysfunctional concept building. PMID:27558546
Riva, Valentina; Cantiani, Chiara; Benasich, April A; Molteni, Massimo; Piazza, Caterina; Giorda, Roberto; Dionne, Ginette; Marino, Cecilia
2018-06-01
Although it is clear that early language acquisition can be a target of CNTNAP2, the pathway between gene and language is still largely unknown. This research focused on the mediation role of rapid auditory processing (RAP). Using event-related potentials, we tested RAP at 6 months of age as a mediator between common variants of the CNTNAP2 gene (rs7794745 and rs2710102) and 20-month-old language outcome in a prospective longitudinal study of 96 Italian infants. The mediation model examines the hypothesis that language outcome is explained by a sequence of effects involving RAP and CNTNAP2. The ability to discriminate spectrotemporally complex auditory frequency changes at 6 months of age mediates the contribution of rs2710102 to expressive vocabulary at 20 months. The indirect effect revealed that rs2710102 C/C was associated with lower P3 amplitude in the right hemisphere, which, in turn, predicted poorer expressive vocabulary at 20 months of age. These findings add to a growing body of literature implicating RAP as a viable marker in genetic studies of language development. The results demonstrate a potential developmental cascade of effects, whereby CNTNAP2 drives RAP functioning that, in turn, contributes to early expressive outcome.
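The indirect-effect analysis described above can be illustrated with a product-of-coefficients mediation sketch. Everything below is simulated for illustration only: the variable names, effect sizes, and bootstrap settings are assumptions, not the study's data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 96  # cohort size, as in the study

# Simulated data (illustrative only): genotype -> P3 amplitude (RAP) -> vocabulary
genotype = rng.integers(0, 3, n).astype(float)  # 0/1/2 copies of a risk allele
p3 = -0.5 * genotype + rng.normal(0, 1, n)      # path a: genotype lowers P3
vocab = 0.8 * p3 + rng.normal(0, 1, n)          # path b: lower P3 -> poorer vocab

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate a*b of the mediated effect."""
    a = np.polyfit(x, m, 1)[0]                         # slope of m ~ x
    design = np.column_stack([np.ones_like(x), x, m])  # model y ~ x + m
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]   # slope of y on m, given x
    return a * b

# Percentile-bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(genotype[idx], p3[idx], vocab[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(indirect_effect(genotype, p3, vocab), (lo, hi))
```

With the simulated effect signs above, the indirect effect comes out negative (risk genotype reduces P3, and reduced P3 predicts poorer vocabulary), mirroring the direction of effects reported in the abstract.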
Early electrophysiological markers of atypical language processing in prematurely born infants.
Paquette, Natacha; Vannasing, Phetsamone; Tremblay, Julie; Lefebvre, Francine; Roy, Marie-Sylvie; McKerral, Michelle; Lepore, Franco; Lassonde, Maryse; Gallagher, Anne
2015-12-01
Because nervous system development may be affected by prematurity, many prematurely born children present language or cognitive disorders at school age. The goal of this study is to investigate whether these impairments can be identified early in life using electrophysiological auditory event-related potentials (AERPs) and mismatch negativity (MMN). Brain responses to speech and non-speech stimuli were assessed in prematurely born children to identify early electrophysiological markers of language and cognitive impairments. Participants were 74 children (41 full-term, 33 preterm) aged 3, 12, and 36 months. Pre-attentional auditory responses (MMN and AERPs) were assessed using an oddball paradigm, with speech and non-speech stimuli presented in counterbalanced order between participants. Language and cognitive development were assessed using the Bayley Scale of Infant Development, Third Edition (BSID-III). Results show that preterms as young as 3 months old had delayed MMN response to speech stimuli compared to full-terms. A significant negative correlation was also found between MMN latency to speech sounds and the BSID-III expressive language subscale. However, no significant differences between full-terms and preterms were found for the MMN to non-speech stimuli, suggesting preserved pre-attentional auditory discrimination abilities in these children. Identification of early electrophysiological markers for delayed language development could facilitate timely interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Bauer, Julia; Widmer, Susann; Meyer, Martin
2018-01-01
Cognitive abilities such as attention or working memory can support older adults during speech perception. However, cognitive abilities as well as speech perception decline with age, leading to the expenditure of effort during speech processing. This longitudinal study therefore investigated age-related differences in electrophysiological processes during speech discrimination and assessed the extent of enhancement to such cognitive auditory processes through repeated auditory exposure. For that purpose, accuracy and reaction time were compared between 13 older adults (62-76 years) and 15 middle-aged (28-52 years) controls in an active oddball paradigm which was administered at three consecutive measurement time points at an interval of 2 wk, while EEG was recorded. As a standard stimulus, the nonsense syllable /'a:ʃa/ was used, while the nonsense syllable /'a:sa/ and a morphing between /'a:ʃa/ and /'a:sa/ served as deviants. N2b and P3b ERP responses were evaluated as a function of age, deviant, and measurement time point using a data-driven topographical microstate analysis. From middle age to old age, age-related decline in attentive perception (as reflected in the N2b-related microstates) and in memory updating and attentional processes (as reflected in the P3b-related microstates) was found, as indicated by lower neural responses and later onsets of the respective cortical networks, as well as by age-related changes in frontal activation during attentional stimulus processing. Importantly, N2b- and P3b-related microstates changed as a function of repeated stimulus exposure in both groups. This research therefore suggests that experience with auditory stimuli can support auditory neurocognitive processes in normal hearing adults into advanced age. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G.
2013-01-01
Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination, and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones, than normal-hearing (NH) listeners—especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information, and could be due to degraded phase-locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to bandpass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed using shuffled cross-correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase-locking remained intact. These results suggest that perceptual “TFS-processing” deficits do not simply reflect degraded phase-locking at the level of the AN. 
To the extent that performance in F0 and harmonic/inharmonic discrimination tasks depends on TFS cues, it is likely through a more complicated (sub-optimal) decoding mechanism, which may involve “spatiotemporal” (place-time) neural representations. PMID:23716215
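As context for the correlational measure above: a shuffled-cross-correlogram analysis operates on spike times across stimulus repetitions, but its end product is a normalized similarity coefficient between REF and TEST responses. A deliberately simplified stand-in (not the study's SAC/XpAC method), assuming responses have already been binned into rate vectors, might look like:

```python
import numpy as np

def neural_correlation(ref, test):
    """Pearson-style correlation between two binned response vectors.

    Simplified stand-in for the shuffled-cross-correlogram coefficients
    in the study; the real analysis works on spike times across
    repetitions rather than pre-binned rates.
    """
    ref = ref - ref.mean()
    test = test - test.mean()
    denom = np.sqrt((ref ** 2).sum() * (test ** 2).sum())
    return float((ref * test).sum() / denom) if denom else 0.0

# Identical responses give a coefficient near 1; a frequency shift lowers it,
# which is the logic behind comparing REF and frequency-shifted TEST stimuli.
t = np.linspace(0, 0.05, 500)                    # 50 ms response window
ref = np.maximum(np.sin(2 * np.pi * 500 * t), 0)     # rectified 500 Hz "response"
shifted = np.maximum(np.sin(2 * np.pi * 520 * t), 0) # response shifted by 20 Hz
print(neural_correlation(ref, ref))      # close to 1.0
print(neural_correlation(ref, shifted))  # below 1.0
```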
Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.
2013-01-01
Background: Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods: We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results: Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations: The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion: Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845
Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.
Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly
2015-12-01
Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing-including impaired spectrotemporal processing and enhanced pitch perception-may both contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology have benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Squires, K. C.; Hillyard, S. A.; Lindsay, P. H.
1973-01-01
Vertex potentials elicited by visual feedback signals following an auditory intensity discrimination have been studied with eight subjects. Feedback signals which confirmed the prior sensory decision elicited small P3s, while disconfirming feedback elicited P3s that were larger. On the average, the latency of P3 was also found to increase with increasing disparity between the judgment and the feedback information. These effects were part of an overall dichotomy in wave shape following confirming vs disconfirming feedback. These findings are incorporated in a general model of the role of P3 in perceptual decision making.
Pitch perception prior to cortical maturation
NASA Astrophysics Data System (ADS)
Lau, Bonnie K.
Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.
ERIC Educational Resources Information Center
Reinertsen, Gloria M.
A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…
Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss
ERIC Educational Resources Information Center
Koravand, Amineh; Jutras, Benoit
2013-01-01
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Turning down the noise: the benefit of musical training on the aging auditory brain.
Alain, Claude; Zendel, Benjamin Rich; Hutka, Stefanie; Bidelman, Gavin M
2014-02-01
Age-related decline in hearing abilities is a ubiquitous part of aging, and commonly impacts speech understanding, especially when there are competing sound sources. While such age effects are partially due to changes within the cochlea, difficulties typically exist beyond measurable hearing loss, suggesting that central brain processes, as opposed to simple peripheral mechanisms (e.g., hearing sensitivity), play a critical role in governing hearing abilities late into life. Current training regimens aimed at improving central auditory processing abilities have experienced limited success in promoting listening benefits. Interestingly, recent studies suggest that in young adults, musical training positively modifies neural mechanisms, providing robust, long-lasting improvements to hearing abilities as well as to non-auditory tasks that engage cognitive control. These results offer the encouraging possibility that musical training might be used to counteract age-related changes in auditory cognition commonly observed in older adults. Here, we reviewed studies that have examined the effects of age and musical experience on auditory cognition with an emphasis on auditory scene analysis. We infer that musical training may offer potential benefits to complex listening and might be utilized as a means to delay or even attenuate declines in auditory perception and cognition that often emerge later in life. Copyright © 2013 Elsevier B.V. All rights reserved.
Kabella, Danielle M; Flynn, Lucinda; Peters, Amanda; Kodituwakku, Piyadasa; Stephen, Julia M
2018-05-24
Prior studies indicate that the auditory mismatch response is sensitive to early alterations in brain development in multiple developmental disorders. Prenatal alcohol exposure is known to impact early auditory processing. The current study hypothesized alterations in the mismatch response in young children with fetal alcohol spectrum disorders (FASD). Participants in this study were 9 children with a FASD and 17 control children (Control) aged 3 to 6 years. Participants underwent magnetoencephalography and structural magnetic resonance imaging scans separately. We compared groups on neurophysiological mismatch negativity (MMN) responses to auditory stimuli measured using the auditory oddball paradigm. Frequent (1,000 Hz) and rare (1,200 Hz) tones were presented at 72 dB. There was no significant group difference in MMN response latency or amplitude represented by the peak located ~200 ms after stimulus presentation in the difference time course between frequent and infrequent tones. Examining the time courses to the frequent and infrequent tones separately, repeated measures analysis of variance with condition (frequent vs. rare), peak (N100m and N200m), and hemisphere as within-subject factors and diagnosis and sex as the between-subject factors showed a significant interaction of peak by diagnosis (p = 0.001), with a pattern of decreased amplitude from N100m to N200m in Control children and the opposite pattern in children with FASD. However, no significant difference was found with the simple effects comparisons. No group differences were found in the response latencies of the rare auditory evoked fields. The results indicate that there was no detectable effect of alcohol exposure on the amplitude or latency of the MMNm response to simple tones modulated by frequency change in preschool-aged children with FASD. 
However, while discrimination abilities to simple tones may be intact, early auditory sensory processing revealed by the interaction between N100m and N200m amplitude indicates that auditory sensory processing may be altered in children with FASD. Copyright © 2018 by the Research Society on Alcoholism.
Late Maturation of Auditory Perceptual Learning
ERIC Educational Resources Information Center
Huyck, Julia Jones; Wright, Beverly A.
2011-01-01
Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…
ERIC Educational Resources Information Center
Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.
2013-01-01
A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…
Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers
1988-10-30
…and Creelman (1977) in a study of categorical perception. Tanner's model included a short-term decaying memory for the acoustic input to the system plus…auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of…
Auditory Stream Segregation and the Perception of Across-Frequency Synchrony
ERIC Educational Resources Information Center
Micheyl, Christophe; Hunter, Cynthia; Oxenham, Andrew J.
2010-01-01
This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally…
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia
2015-01-01
Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years of age. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160
Auditory Cortex Is Required for Fear Potentiation of Gap Detection
Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.
2014-01-01
Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510
Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie
2003-05-08
We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We conclude that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulate that the sensory gating system includes a cross-modal dimension.
Tuning the mind: Exploring the connections between musical ability and executive functions.
Slevc, L Robert; Davey, Nicholas S; Buschkuehl, Martin; Jaeggi, Susanne M
2016-07-01
A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear if these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF - inhibition, updating, and switching - in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition. Copyright © 2016 Elsevier B.V. All rights reserved.
Park, Jin; Park, So-yeon; Kim, Yong-wook; Woo, Youngkeun
2015-01-01
Treadmill training is generally an effective intervention, and rhythmic auditory stimulation is designed to provide feedback during gait training in stroke patients. The purpose of this study was to compare gait abilities in chronic stroke patients following either treadmill walking training with rhythmic auditory stimulation (TRAS) or overground walking training with rhythmic auditory stimulation (ORAS). Nineteen subjects were divided into two groups: a TRAS group (9 subjects) and an ORAS group (10 subjects). Temporal and spatial gait parameters and motor recovery ability were measured before and after the training period. Gait ability was measured by the Biodex Gait Trainer treadmill system, the Timed Up and Go test (TUG), 6-meter walking distance (6MWD), and Functional Gait Assessment (FGA). After the training periods, the TRAS group showed a significant improvement in walking speed, step cycle, step length of the unaffected limb, coefficient of variation, 6MWD, and FGA when compared to the ORAS group (p < 0.05). Treadmill walking training with rhythmic auditory stimulation may be useful for the rehabilitation of patients with chronic stroke.
Discrimination of Male Voice Quality by 8 and 9 Week Old Infants.
ERIC Educational Resources Information Center
Culp, Rex E.; Gallas, Howard B.
This paper reports a study which investigated 2-month-old infants' auditory discrimination of tone quality in the male voice, extending a previous study which found that voice quality changes (soft versus harsh) in a female voice were discriminable by infants at this age. Subjects were 20 infants, tested at 8 and 9 weeks of age. Each infant was…
The use of listening devices to ameliorate auditory deficit in children with autism.
Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna
2014-02-01
To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency-modulation (FM; radio transmission) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise (P < .001 [11.6% to 21.7%]). Furthermore, both participant and teacher questionnaire data revealed device-related benefits across a range of evaluation categories including Effect of Background Noise (P = .036 [-60.7% to -2.8%]) and Ease of Communication (P = .019 [-40.1% to -5.0%]). Eight of the 10 participants who undertook the 6-week device trial remained consistent FM users at study completion. Sustained use of FM listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Auditory discrimination therapy (ADT) for tinnitus management: preliminary results.
Herraiz, C; Diges, I; Cobo, P; Plaza, G; Aparicio, J M
2006-12-01
This clinical trial demonstrated the efficacy of auditory discrimination therapy (ADT) in tinnitus management compared with a waiting-list group. In all, 43% of the ADT patients improved their tinnitus, and both its intensity and its handicap decreased significantly (EBM rating: B-2). Our aim was to describe the effect of sound discrimination training on tinnitus. ADT is a procedure designed to increase the cortical representation of trained frequencies (damaged cochlear areas with a secondary reduction of cortical stimulation) and to shrink the over-represented neighbouring ones (corresponding to the tinnitus pitch). This prospective descriptive study included 14 patients with high-frequency matched tinnitus. Tinnitus severity was measured with a visual analogue scale (VAS) and the Tinnitus Handicap Inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for 1 month: discontinuous 8 kHz pure tones were randomly interleaved with 500 ms white-noise bursts, delivered through an MP3 system. ADT group results were compared with those of a waiting-list group (n=21). In all, 43% of our patients showed improvement in their tinnitus. A significant improvement in VAS (p=0.004) and THI mean scores (p=0.038) was achieved. Differences between the ADT and waiting-list groups were statistically significant for patients' self-evaluations (p=0.043) and VAS scores (p=0.004); the reduction in THI did not reach significance (p=0.113).
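The training stream described above (discontinuous 8 kHz pure tones randomly interleaved with 500 ms white-noise bursts) can be sketched as stimulus-generation code. The trial count, burst length, and gap duration below are illustrative assumptions; the abstract specifies only the tone frequency, the 500 ms noise bursts, and the 10-minute session length.

```python
import numpy as np

def adt_stimulus(n_trials=20, fs=44100, tone_freq=8000.0,
                 burst_dur=0.5, gap_dur=0.25, p_tone=0.5, rng=None):
    """Sketch of an ADT-style stream: pure-tone bursts at the trained
    frequency randomly interleaved with equal-length white-noise bursts,
    separated by silent gaps. Returns the waveform and per-trial labels."""
    rng = rng or np.random.default_rng(0)
    t = np.arange(int(burst_dur * fs)) / fs
    tone = np.sin(2 * np.pi * tone_freq * t)
    gap = np.zeros(int(gap_dur * fs))
    labels, segments = [], []
    for _ in range(n_trials):
        if rng.random() < p_tone:
            labels.append("tone")
            segments.append(tone)
        else:
            labels.append("noise")
            segments.append(rng.uniform(-1.0, 1.0, tone.size))
        segments.append(gap)
    return np.concatenate(segments), labels
```

In a listening task, each trial's label would be the ground truth against which the patient's "tone or noise" response is scored.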
Auditory sequence analysis and phonological skill
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.
2012-01-01
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. The applications of such an interface have numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of: signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. 
The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. By investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
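At its core, the binaural rendering behind such virtual auditory environments is convolution of a mono source with direction-specific head-related impulse responses (HRIRs), the time-domain counterpart of HRTF filtering. A minimal sketch, assuming HRIRs for the desired direction are available from a measured database such as the publicly available set mentioned above:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left- and
    right-ear HRIRs for one direction; returns a (2, N) stereo array.
    The HRIRs themselves are assumed to come from a measured database."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)], axis=0)
```

Moving a source then amounts to switching (or interpolating between) HRIR pairs as the listener walks through the environment.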
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
How differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing along the multiple dimensions that characterize timbre. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, identical or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses varied with timbre in both hemispheres, supporting the idea that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the response to S2 appeared in the M100 of the left hemisphere, whereas in the right hemisphere both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) under both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but by whether or not the two stimuli are identical in timbre.
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna
2015-03-01
Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.
Prefrontal consolidation supports the attainment of fear memory accuracy.
Vieira, Philip A; Lovelace, Jonathan W; Corches, Alex; Rashid, Asim J; Josselyn, Sheena A; Korzus, Edward
2014-08-01
The neural mechanisms underlying the attainment of fear memory accuracy for appropriate discriminative responses to aversive and nonaversive stimuli are unclear. Considerable evidence indicates that the coactivator of transcription and histone acetyltransferase cAMP response element binding protein (CREB) binding protein (CBP) is critically required for normal neural function. CBP hypofunction leads to severe psychopathological symptoms in humans and cognitive abnormalities in genetic mutant mice, with severity depending on the neural locus and developmental timing of the gene inactivation. Here, we show that acute hypofunction of CBP in the medial prefrontal cortex (mPFC) results in a disruption of fear memory accuracy in mice. In addition, interruption of CREB function in the mPFC also leads to a deficit in auditory discrimination of fearful stimuli. While mice with deficient CBP/CREB signaling in the mPFC maintain normal responses to aversive stimuli, they exhibit abnormal responses to similar but nonrelevant stimuli when compared to control animals. These data indicate that improvement of fear memory accuracy involves mPFC-dependent suppression of fear responses to nonrelevant stimuli. Evidence from a context discrimination task and a newly developed task that depends on the ability to distinguish discrete auditory cues indicates that CBP-dependent neural signaling within the mPFC circuitry is an important component of the mechanism for disambiguating the meaning of fear signals with two opposing values: aversive and nonaversive. © 2014 Vieira et al.; Published by Cold Spring Harbor Laboratory Press.
Ferguson, Melanie A; Henshaw, Helen; Clark, Daniel P A; Moore, David R
2014-01-01
The aims of this study were to (i) evaluate the efficacy of phoneme discrimination training for the hearing and cognitive abilities of adults aged 50 to 74 years with mild sensorineural hearing loss who were not users of hearing aids, and (ii) determine participant compliance with a self-administered, computer-delivered, home- and game-based auditory training program. This study was a randomized controlled trial with repeated measures and a crossover design. Participants were trained and tested over an 8- to 12-week period. One group (Immediate Training) trained between weeks 1 and 4. A second, waitlist group (Delayed Training) did no training between weeks 1 and 4, but then trained between weeks 5 and 8. On-task (phoneme discrimination) and transferable outcome measures (speech perception, cognition, self-report of hearing disability) for both groups were obtained at weeks 0, 4, and 8, and for the Delayed Training group only at week 12. Robust phoneme discrimination learning was found for both groups, with the largest improvements in threshold shown by those with the poorest initial thresholds. Between weeks 1 and 4, the Immediate Training group showed moderate, significant improvements in self-reported hearing disability, divided attention, and working memory, specifically for conditions or situations that were more complex and therefore more challenging. Training did not result in consistent improvements in speech perception in noise. There was no evidence of any test-retest effects between weeks 1 and 4 for the Delayed Training group. Retention of benefit at 4 weeks post-training was shown for phoneme discrimination, divided attention, working memory, and self-report of hearing disability. Improved divided attention and reduced self-reported hearing difficulties were highly correlated. It was observed that phoneme discrimination training benefits some but not all people with mild hearing loss. 
Evidence presented here, together with that of other studies that used different training stimuli, suggests that auditory training may facilitate cognitive skills that index executive function and the self-perception of hearing difficulty in challenging situations. The development of cognitive skills may be more important than the development of sensory skills for improving communication and speech perception in everyday life. However, improvements were modest. Outcome measures need to be appropriately challenging to be sensitive to the effects of the relatively small amount of training performed.
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
2009-12-01
The objective of this study was to assess whether training on speech processed with an eight-channel noise vocoder to simulate the output of a cochlear implant would produce transfer of auditory perceptual learning to the recognition of nonspeech environmental sounds, the identification of speaker gender, and the discrimination of talkers by voice. Twenty-four normal-hearing subjects were trained to transcribe meaningful English sentences processed with a noise vocoder simulation of a cochlear implant. An additional 24 subjects served as an untrained control group and transcribed the same sentences in their unprocessed form. All subjects completed pre- and post-test sessions in which they transcribed vocoded sentences to provide an assessment of training efficacy. Transfer of perceptual learning was assessed using a series of closed set, nonlinguistic tasks: subjects identified talker gender, discriminated the identity of pairs of talkers, and identified ecologically significant environmental sounds from a closed set of alternatives. Although both groups of subjects showed significant pre- to post-test improvements, subjects who transcribed vocoded sentences during training performed significantly better at post-test than those in the control group. Both groups performed equally well on gender identification and talker discrimination. Subjects who received explicit training on the vocoded sentences, however, performed significantly better on environmental sound identification than the untrained subjects. Moreover, across both groups, pre-test speech performance and, to a higher degree, post-test speech performance, were significantly correlated with environmental sound identification. For both groups, environmental sounds that were characterized as having more salient temporal information were identified more often than environmental sounds that were characterized as having more salient spectral information. 
Listeners trained to identify noise-vocoded sentences showed evidence of transfer of perceptual learning to the identification of environmental sounds. In addition, the correlation between environmental sound identification and sentence transcription indicates that subjects who were better able to use the degraded acoustic information to identify the environmental sounds were also better able to transcribe the linguistic content of novel sentences. Both trained and untrained groups performed equally well (approximately 75% correct) on the gender-identification task, indicating that training did not affect the ability to identify the gender of talkers. Although better than chance, performance on the talker discrimination task was poor overall (approximately 55%), suggesting either that explicit training is required to discriminate talkers' voices reliably or that additional information (perhaps spectral in nature) not present in the vocoded speech is required to excel in such tasks. Taken together, the results suggest that although transfer of auditory perceptual learning with spectrally degraded speech does occur, explicit task-specific training may be necessary for tasks that cannot rely on temporal information alone.
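The noise-vocoder processing used here to simulate cochlear-implant hearing can be sketched as follows: split the signal into a small number of frequency bands, extract each band's temporal envelope, and use that envelope to modulate band-limited noise before summing. The filter order and log-spaced band edges below are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Minimal n-channel noise vocoder: band-pass analysis, Hilbert
    envelope extraction, and envelope modulation of band-limited noise.
    Preserves temporal envelopes while discarding fine spectral detail."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    carrier = rng.uniform(-1.0, 1.0, x.size)       # broadband noise carrier
    out = np.zeros(x.size)
    for k in range(n_channels):
        b, a = butter(3, [edges[k], edges[k + 1]], btype="band", fs=fs)
        band = filtfilt(b, a, x)                   # analysis band
        env = np.abs(hilbert(band))                # temporal envelope
        out += env * filtfilt(b, a, carrier)       # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)     # peak-normalize
```

With eight channels the output retains enough envelope information to support sentence transcription after training, which is what makes it a useful simulation of electric hearing.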
ERIC Educational Resources Information Center
Bolen, L. M.; Kimball, D. J.; Hall, C. W.; Webster, R. E.
1997-01-01
Compares the visual and auditory processing factors of the Woodcock Johnson Tests of Cognitive Ability, Revised (WJR COG) and the visual and auditory memory factors of the Learning Efficiency Test, II (LET-II) among 120 college students. Results indicate two significant performance differences between the WJR COG and LET-II. (RJM)
Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized
ERIC Educational Resources Information Center
Golubock, Jason L.; Janata, Petr
2013-01-01
Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…
ERIC Educational Resources Information Center
Stevens, Catherine; Gallagher, Melinda
2004-01-01
This experiment investigated relational complexity and relational shift in judgments of auditory patterns. Pitch and duration values were used to construct two-note perceptually similar sequences (unary relations) and four-note relationally similar sequences (binary relations). It was hypothesized that 5-, 8- and 11-year-old children would perform…
Auditory/visual Duration Bisection in Patients with Left or Right Medial-Temporal Lobe Resection
ERIC Educational Resources Information Center
Melgire, Manuela; Ragot, Richard; Samson, Severine; Penney, Trevor B.; Meck, Warren H.; Pouthas, Viviane
2005-01-01
Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s, in the same test session. Patients and NC participants judged…
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
2011-07-01
A 27-year-old woman showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. She gradually developed hearing loss in her right ear at 19 years of age, followed by her left ear. While she retained some hearing, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem responses and distortion product otoacoustic emissions were largely intact in the left ear. Neuroimaging showed that her bilateral auditory cortices were preserved, whereas her auditory radiations were severely damaged by the progressive hydrocephalus. Although she had complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training in lip reading, she regained the ability to communicate fluently. The clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida, and that the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Hahn, Allison H; Campbell, Kimberley A; Congdon, Jenna V; Hoang, John; McMillan, Neil; Scully, Erin N; Yong, Joshua J H; Elie, Julie E; Sturdy, Christopher B
2017-07-01
Chickadees produce a multi-note chick-a-dee call in multiple socially relevant contexts. One component of this call is the D note, which is a low-frequency and acoustically complex note with a harmonic-like structure. In the current study, we tested black-capped chickadees on a between-category operant discrimination task using vocalizations with acoustic structures similar to black-capped chickadee D notes, but produced by various songbird species, in order to examine the role that phylogenetic distance plays in acoustic perception of vocal signals. We assessed the extent to which discrimination performance was influenced by the phylogenetic relatedness among the species producing the vocalizations and by the phylogenetic relatedness between the subjects' species (black-capped chickadees) and the vocalizers' species. We also conducted a bioacoustic analysis and discriminant function analysis in order to examine the acoustic similarities among the discrimination stimuli. A previous study has shown that neural activation in black-capped chickadee auditory and perceptual brain regions is similar following the presentation of these vocalization categories. However, we found that chickadees had difficulty discriminating between forward and reversed black-capped chickadee D notes, a result that directly corresponded to the bioacoustic analysis indicating that these stimulus categories were acoustically similar. In addition, our results suggest that the discrimination between vocalizations produced by two parid species (chestnut-backed chickadees and tufted titmice) is perceptually difficult for black-capped chickadees, a finding that is likely in part because these vocalizations contain acoustic similarities. Overall, our results provide evidence that black-capped chickadees' perceptual abilities are influenced by both phylogenetic relatedness and acoustic structure.
Investigations in mechanisms and strategies to enhance hearing with cochlear implants
NASA Astrophysics Data System (ADS)
Churchill, Tyler H.
Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. 
Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.
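The binaural cross-correlation cues evaluated in Study 2 can be illustrated with a minimal ITD estimator that locates the peak of the left-right cross-correlation. This is a simplified stand-in for the across-frequency metrics extracted from the auditory-nerve simulations, not the authors' model.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag of the
    peak of the broadband cross-correlation. A positive value means the
    left signal lags the right, i.e., the source is toward the right ear."""
    xcorr = np.correlate(left, right, mode="full")
    lag = np.argmax(xcorr) - (len(right) - 1)  # lag in samples
    return lag / fs
```

A physiologically motivated model would apply this per frequency channel after cochlear filtering and combine the resulting across-frequency pattern, which is the cue set the study suggests supports lateralization.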
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experience and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native-language phonology and musicality on behavioral and subcortical sound-feature processing in a population of musically diverse Finnish speakers, and to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound-feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. Musically sophisticated speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect for sound features that are explicitly encoded in the phonology of the native language, leaving an opportunity for music experience-based enhancement of features the language does not explicitly encode (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real-world musical situation. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the interaction of specific language features with musical experience.
Encoding frequency contrast in primate auditory cortex
Scott, Brian H.; Semple, Malcolm N.
2014-01-01
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
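The SAM and SFM stimuli described above follow directly from their definitions: SAM multiplies a carrier by a slow sinusoidal envelope, while the phase of an SFM signal is the running integral of its sinusoidally varying instantaneous frequency. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def sam(fc, fm, depth, dur, fs=44100):
    """Sinusoidal amplitude modulation: carrier fc (Hz) amplitude-modulated
    at rate fm (Hz) with the given modulation depth (0..1)."""
    t = np.arange(int(dur * fs)) / fs
    return (1 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def sfm(fc, fm, df, dur, fs=44100):
    """Sinusoidal frequency modulation: instantaneous frequency
    fc + df*sin(2*pi*fm*t). The phase is the integral of that frequency,
    giving a peak phase deviation (modulation index) of df/fm radians."""
    t = np.arange(int(dur * fs)) / fs
    phase = 2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t)
    return np.sin(phase)
```

Varying fm, depth (or df), and fc independently is what lets the study probe how each stimulus parameter is encoded in the temporal dynamics of cortical spike trains.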
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and to obtain objective thresholds with fewer assumptions than traditional modeling approaches require. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds observed in modulated noise conditions compared with stationary noise conditions.
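The core simulation idea, deciding each trial from an auditory feature representation of the observation intervals, can be caricatured with a toy 3-alternative tone-in-noise task. The log-spectrum "feature set" and template-matching decision rule below are deliberate simplifications for illustration, not the published model or its features:

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 16000  # sampling rate of the simulated intervals (Hz)

def log_spectrum(x, n_fft=512):
    """A crude stand-in for an auditory feature set: compressed magnitude spectrum."""
    return np.log1p(np.abs(np.fft.rfft(x, n_fft)))

def trial(level_db, dur=0.05):
    """One 3-alternative tone-in-noise trial: the 'model' picks the interval
    whose features deviate most from a noise-only template. True if correct."""
    n = int(dur * FS)
    t = np.arange(n) / FS
    tone = 10 ** (level_db / 20) * np.sin(2 * np.pi * 2000 * t)  # re unit-variance noise
    intervals = [rng.standard_normal(n) for _ in range(3)]
    target = int(rng.integers(3))
    intervals[target] = intervals[target] + tone
    template = log_spectrum(rng.standard_normal(n))  # noise-only reference interval
    deviation = [np.max(log_spectrum(x) - template) for x in intervals]
    return int(np.argmax(deviation)) == target

# Percent correct collapses toward chance (1/3) as the tone level drops;
# sweeping the level and finding the 70%-correct point would yield a threshold.
pc_high = np.mean([trial(30.0) for _ in range(100)])
pc_low = np.mean([trial(-40.0) for _ in range(150)])
```

Running an adaptive or constant-stimulus level sweep over `trial` is the "simulated listener" that produces reference-free thresholds in the spirit of the framework.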
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children
Shiller, Douglas M.; Rochon, Marie-Lyne
2015-01-01
Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
2012-06-01
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.
Headphone screening to facilitate web-based auditory experiments
Woods, Kevin J.P.; Siegel, Max; Traer, James; McDermott, Josh H.
2017-01-01
Psychophysical experiments conducted remotely over the internet permit data collection from large numbers of participants, but sacrifice control over sound presentation, and therefore are not widely employed in hearing research. To help standardize online sound presentation, we introduce a brief psychophysical test for determining if online experiment participants are wearing headphones. Listeners judge which of three pure tones is quietest, with one of the tones presented 180° out of phase across the stereo channels. This task is intended to be easy over headphones but difficult over loudspeakers due to phase-cancellation. We validated the test in the lab by testing listeners known to be wearing headphones or listening over loudspeakers. The screening test was effective and efficient, discriminating between the two modes of listening with a small number of trials. When run online, a bimodal distribution of scores was obtained, suggesting that some participants performed the task over loudspeakers despite instructions to use headphones. The ability to detect and screen out these participants mitigates concerns over sound quality for online experiments, a first step toward opening auditory perceptual research to the possibilities afforded by crowdsourcing. PMID:28695541
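The screening stimulus is straightforward to construct: three stereo pure tones, one physically quieter and one with its channels in antiphase. The sketch below builds such a trial; the specific tone frequency, level step, and duration are illustrative choices, not necessarily those of the published test:

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def stereo_tone(freq, dur, level_db, antiphase=False):
    """A pure tone as an (N, 2) stereo array. With antiphase=True the left
    channel is inverted, so over loudspeakers the two channels cancel
    acoustically while over headphones each ear hears a full-level tone."""
    t = np.arange(int(dur * FS)) / FS
    x = 10 ** (level_db / 20) * np.sin(2 * np.pi * freq * t)
    return np.column_stack([-x if antiphase else x, x])

# A three-tone trial: one tone is physically 6 dB quieter, another is antiphase.
tones = [stereo_tone(200.0, 0.2, 0.0),
         stereo_tone(200.0, 0.2, -6.0),            # the physically quietest tone
         stereo_tone(200.0, 0.2, 0.0, antiphase=True)]
# Over loudspeakers the antiphase tone phase-cancels (its mono sum is ~0),
# so loudspeaker listeners tend to misjudge it as the quietest and fail the task.
mono = [x.sum(axis=1) for x in tones]
```

The headphone/loudspeaker asymmetry is entirely in the antiphase tone: identical at each ear over headphones, near-silent in the acoustic sum over loudspeakers.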
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the dynamics of connections within and among assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
A corollary discharge maintains auditory sensitivity during sound production
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2002-08-01
Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.
Auditory cortical volumes and musical ability in Williams syndrome.
Martens, Marilee A; Reutens, David C; Wilson, Sarah J
2010-07-01
Individuals with Williams syndrome (WS) have been shown to have atypical morphology in the auditory cortex, an area associated with aspects of musicality. Some individuals with WS have demonstrated specific musical abilities, despite intellectual delays. Primary auditory cortex and planum temporale volumes were manually segmented in 25 individuals with WS and 25 control participants, and the participants also underwent testing of musical abilities. Left and right planum temporale volumes were significantly larger in the participants with WS than in controls, with no significant difference noted between groups in planum temporale asymmetry or primary auditory cortical volumes. Left planum temporale volume was significantly increased in a subgroup of the participants with WS who demonstrated specific musical strengths, as compared to the remaining WS participants, and was highly correlated with scores on a musical task. These findings suggest that differences in musical ability within WS may be in part associated with variability in the left auditory cortical region, providing further evidence of cognitive and neuroanatomical heterogeneity within this syndrome. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Finke, Mareike; Sandmann, Pascale; Bönitz, Hanna; Kral, Andrej; Büchner, Andreas
2016-01-01
Single-sided deaf subjects with a cochlear implant (CI) provide the unique opportunity to compare central auditory processing of the electrical input (CI ear) and the acoustic input (normal-hearing, NH, ear) within the same individual. In these individuals, sensory processing differs between their two ears, while cognitive abilities are the same irrespective of the sensory input. To better understand perceptual-cognitive factors modulating speech intelligibility with a CI, this electroencephalography study examined the central-auditory processing of words, the cognitive abilities, and the speech intelligibility in 10 postlingually single-sided deaf CI users. We found lower hit rates and prolonged response times for word classification during an oddball task for the CI ear when compared with the NH ear. Also, event-related potentials reflecting sensory (N1) and higher-order processing (N2/N4) were prolonged for word classification (targets versus nontargets) with the CI ear compared with the NH ear. Our results suggest that speech processing via the CI ear and the NH ear differs both at sensory (N1) and cognitive (N2/N4) processing stages, thereby affecting the behavioral performance for speech discrimination. These results provide objective evidence for cognition to be a key factor for speech perception under adverse listening conditions, such as the degraded speech signal provided from the CI. © 2016 S. Karger AG, Basel.
Ronacher, Bernhard; Wohlgemuth, Sandra; Vogel, Astrid; Krahe, Rüdiger
2008-08-01
A characteristic feature of hearing systems is their ability to resolve both fast and subtle amplitude modulations of acoustic signals. This applies also to grasshoppers, which for mate identification rely mainly on the characteristic temporal patterns of their communication signals. Usually the signals arriving at a receiver are contaminated by various kinds of noise. In addition to extrinsic noise, intrinsic noise caused by stochastic processes within the nervous system contributes to making signal recognition a difficult task. The authors asked to what degree intrinsic noise affects temporal resolution and, particularly, the discrimination of similar acoustic signals. This study aims at exploring the neuronal basis for sexual selection, which depends on exploiting subtle differences between basically similar signals. Applying a metric, by which the similarities of spike trains can be assessed, the authors investigated how well the communication signals of different individuals of the same species could be discriminated and correctly classified based on the responses of auditory neurons. This spike train metric yields clues to the optimal temporal resolution with which spike trains should be evaluated. (c) 2008 APA, all rights reserved
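The abstract does not name the spike-train metric it applied; a standard choice in this literature, used here purely as an illustration, is the van Rossum distance, in which each train is convolved with a causal exponential kernel and the filtered traces are compared:

```python
import numpy as np

def van_rossum_distance(train_a, train_b, tau, dt=1e-4, t_max=1.0):
    """van Rossum spike-train distance: map each train (spike times in seconds)
    to a function by convolving with a causal exponential kernel exp(-t/tau),
    then take a normalized L2 norm of the difference. Small tau emphasizes
    precise spike timing; large tau approaches a comparison of firing rates."""
    t = np.arange(0.0, t_max, dt)
    def filtered(train):
        f = np.zeros_like(t)
        for s in train:
            f += np.where(t >= s, np.exp(-np.maximum(t - s, 0.0) / tau), 0.0)
        return f
    diff = filtered(train_a) - filtered(train_b)
    # This normalization makes two well-separated single spikes come out near 1.
    return float(np.sqrt(np.sum(diff ** 2) * dt / tau))
```

Sweeping `tau` and asking at which value responses to different senders are best classified is exactly how such a metric "yields clues to the optimal temporal resolution" for evaluating spike trains.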
Intramodal and Intermodal Functioning of Normal and LD Children
ERIC Educational Resources Information Center
Heath, Earl J.; Early, George H.
1973-01-01
Assessed were the abilities of 50 normal 5-to 9-year-old children and 30 learning disabled 7-to 9-year-old children to recognize temporal patterns presented visually and auditorially (intramodal abilities) and to vocally produce the patterns whether presentation was visual or auditory (intramodal and cross-modal abilities). (MC)
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and impaired their auditory attention.
In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE has a key role in A1 and attention of stressed rats during tone discrimination. PMID:28082872
NASA Astrophysics Data System (ADS)
Allitt, B. J.; Benjaminsen, C.; Morgan, S. J.; Paolini, A. G.
2013-08-01
Objective. Auditory midbrain implants (AMI) provide inadequate frequency discrimination for open set speech perception. AMIs that can take advantage of the tonotopic laminae of the midbrain may be able to better deliver frequency-specific perception and lead to enhanced performance. Stimulation strategies that best elicit frequency-specific activity need to be identified. This research examined the characteristic frequency (CF) relationship between regions of the auditory cortex (AC), in response to stimulated regions of the inferior colliculus (IC), comparing monopolar and intralaminar bipolar electrical stimulation. Approach. Electrical stimulation using multi-channel micro-electrode arrays in the IC was used to elicit AC responses in anaesthetized male hooded Wistar rats. The rate of activity in AC regions with CFs within 3 kHz (CF-aligned) and unaligned CFs was used to assess the frequency specificity of responses. Main results. Both monopolar and bipolar IC stimulation led to CF-aligned neural activity in the AC. Altering the distance between the stimulation and reference electrodes in the IC led to changes in both threshold and dynamic range, with bipolar stimulation with 400 µm spacing evoking the lowest AC threshold and widest dynamic range. At saturation, bipolar stimulation elicited a significantly higher mean spike count in the AC at CF-aligned areas than at CF-unaligned areas when electrode spacing was 400 µm or less. Bipolar stimulation using electrode spacing of 400 µm or less also elicited a higher rate of elicited activity in the AC in both CF-aligned and CF-unaligned regions than monopolar stimulation. When electrodes were spaced 600 µm apart no benefit over monopolar stimulation was observed. Furthermore, monopolar stimulation of the external cortex of the IC resulted in more localized frequency responses than bipolar stimulation when stimulation and reference sites were 200 µm apart. Significance.
These findings have implications for the future development of AMI, as a bipolar stimulation strategy may improve the ability of implant users to discriminate between frequencies.
Auditory Pitch Perception in Autism Spectrum Disorder Is Associated With Nonverbal Abilities.
Chowdhury, Rakhee; Sharda, Megha; Foster, Nicholas E V; Germain, Esther; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L
2017-11-01
Atypical sensory perception and heterogeneous cognitive profiles are common features of autism spectrum disorder (ASD). However, previous findings on auditory sensory processing in ASD are mixed. Accordingly, auditory perception and its relation to cognitive abilities in ASD remain poorly understood. Here, children with ASD, and age- and intelligence quotient (IQ)-matched typically developing children, were tested on a low-level and a higher-level pitch processing task. Verbal and nonverbal cognitive abilities were measured using the Wechsler Abbreviated Scale of Intelligence. There were no group differences in performance on either auditory task or IQ measure. However, there was significant variability in performance on the auditory tasks in both groups that was predicted by nonverbal, not verbal skills. These results suggest that auditory perception is related to nonverbal reasoning rather than verbal abilities in ASD and typically developing children. In addition, these findings provide evidence for preserved pitch processing in school-age children with ASD with average IQ, supporting the idea that there may be a subgroup of individuals with ASD that do not present perceptual or cognitive difficulties. Future directions involve examining whether similar perceptual-cognitive relationships might be observed in a broader sample of individuals with ASD, such as those with language impairment or lower IQ.
Brüggemann, Petra; Szczepek, Agnieszka J.; Klee, Katharina; Gräbel, Stefan; Mazurek, Birgit; Olze, Heidi
2017-01-01
Cochlear implantation (CI) is increasingly being used in the auditory rehabilitation of deaf patients. Here, we investigated whether auditory rehabilitation can be influenced by the psychological burden caused by mental conditions. Our sample included 47 patients who underwent implantation. All patients were monitored before and 6 months after CI. Auditory performance was assessed using the Oldenburg Inventory (OI) and Freiburg monosyllable (FB MS) speech discrimination test. The health-related quality of life was measured with the Nijmegen Cochlear implantation Questionnaire (NCIQ), whereas tinnitus-related distress was measured with the German version of the Tinnitus Questionnaire (TQ). We additionally assessed the general perceived quality of life, the perceived stress, coping abilities, anxiety levels and the depressive symptoms. Finally, a structured interview to detect mental conditions (CIDI) was performed before and after surgery. We found that CI led to an overall improvement in auditory performance, as well as in anxiety and depression, quality of life, tinnitus distress and coping strategies. The CIDI revealed that 81% of patients in our sample had affective, anxiety, and/or somatoform disorders before or after CI. The affective disorders included dysthymia and depression, while anxiety disorders included agoraphobias and unspecified phobias. We also diagnosed cases of somatoform pain disorders and undifferentiated somatoform disorders. We found a positive correlation between the auditory performance and the decrease of anxiety and depression, tinnitus-related distress and perceived stress. There was no association between the presence of a mental condition itself and the outcome of auditory rehabilitation. We conclude that the CI candidates exhibit high rates of psychological disorders, and there is a particularly strong association between somatoform disorders and tinnitus.
The presence of mental disorders remained unaffected by CI but the degree of psychological burden decreased significantly post-CI. The implants benefitted patients in a number of psychosocial areas, improving the symptoms of depression and anxiety, tinnitus, and their quality of life and coping strategies. The prevalence of mental disorders in patients who are candidates for CI suggests the need for a comprehensive psychological and psychosomatic management of their treatment. PMID:28529479
ERIC Educational Resources Information Center
Langstaff, Nancy
This book, intended for use by inservice teachers, preservice teachers, and parents interested in open classrooms, contains three chapters. "Beginning Reading in an Open Classroom" discusses language development, sight vocabulary, visual discrimination, auditory discrimination, directional concepts, small muscle control, and measurement of…
Networks That Learn to Discriminate Similar Kanji Characters
Mori, Yoshihiro; Yokosawa, Kazuhiko (ATR Auditory and Visual Perception Research Laboratories)
1989-08-14
Auditory Discrimination of Frequency Ratios: The Octave Singularity
ERIC Educational Resources Information Center
Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent
2013-01-01
Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…
Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S
2014-03-01
The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
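Converting measured light-intensity changes into the oxy- and deoxyhemoglobin concentration changes mentioned above is conventionally done with the modified Beer-Lambert law, solving a small linear system per channel. The sketch below illustrates the inversion; the extinction coefficients, pathlength, and differential pathlength factor are placeholder values, not those of the study, and real analyses take them from published tables:

```python
import numpy as np

# Illustrative extinction coefficients: rows = wavelengths (e.g. ~690 and ~830 nm),
# columns = chromophores (HbO2, HbR). Real pipelines use tabulated values.
EXT = np.array([[1.0, 3.8],   # shorter wavelength: HbR absorbs more strongly
                [2.5, 1.8]])  # longer wavelength: HbO2 absorbs more strongly

def mbll(d_od, path_cm=3.0, dpf=6.0):
    """Modified Beer-Lambert law: invert
    d_OD(lambda) = sum_c eps(lambda, c) * dC_c * path * DPF
    to recover concentration changes (dHbO2, dHbR) from the optical-density
    changes measured at the two wavelengths."""
    rhs = np.asarray(d_od, dtype=float) / (path_cm * dpf)
    return np.linalg.solve(EXT, rhs)

# Round trip: a synthetic activation (HbO2 up, HbR down) survives the inversion.
true_dc = np.array([0.01, -0.005])
d_od = EXT @ true_dc * (3.0 * 6.0)
recovered = mbll(d_od)
```

Applying this per channel and mapping the recovered concentration changes onto optode positions is what produces the topographic maps described in the abstract.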
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
2017-02-01
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration was similar to that of younger adults as SOA expanded; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
Psychophysical Evaluation of Three-Dimensional Auditory Displays
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.
1996-01-01
This report describes the progress made during the second year of a three-year Cooperative Research Agreement. The CRA proposed a program of applied psychophysical research designed to determine the requirements and limitations of three-dimensional (3-D) auditory display systems. These displays present synthesized stimuli to a pilot or virtual workstation operator that evoke auditory images at predetermined positions in space. The images can be either stationary or moving. In previous years, we completed a number of studies that provided data on listeners' abilities to localize stationary sound sources with 3-D displays. The current focus is on the use of 3-D displays in 'natural' listening conditions, which include listeners' head movements, moving sources, multiple sources and 'echoic' sources. The results of our research on one of these topics, the localization of multiple sources, were reported in the most recent Semi-Annual Progress Report (Appendix A). That same progress report described work on two related topics, the influence of a listener's a-priori knowledge of source characteristics and the discriminability of real and virtual sources. In the period since the last Progress Report we have conducted several new studies to evaluate the effectiveness of a new and simpler method for measuring the HRTFs that are used to synthesize virtual sources and have expanded our studies of multiple sources. The results of this research are described below.
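The virtual-source synthesis referred to above conventionally works by convolving a mono signal with the pair of head-related impulse responses (HRIRs, the time-domain form of HRTFs) measured for the desired direction. The sketch below shows the rendering step; the delay-and-attenuate HRIRs are toy stand-ins for measured filters, with the delay and gain chosen only to mimic an interaural time and level difference:

```python
import numpy as np

FS = 44100  # sample rate (Hz)

def binaural(mono, hrir_left, hrir_right):
    """Render a virtual source: convolve a mono signal with the left- and
    right-ear head-related impulse responses for the desired direction."""
    return np.column_stack([np.convolve(mono, hrir_left),
                            np.convolve(mono, hrir_right)])

# Toy HRIRs for a source off to the right: the left ear receives a delayed
# (ITD ~0.6 ms) and attenuated (ILD) copy. Real displays use measured HRTF sets.
itd = int(0.0006 * FS)                 # interaural time difference in samples
hrir_r = np.zeros(64); hrir_r[0] = 1.0
hrir_l = np.zeros(64); hrir_l[itd] = 0.5
sig = np.random.default_rng(1).standard_normal(1024)
out = binaural(sig, hrir_l, hrir_r)    # stereo output, shape (1024 + 63, 2)
```

Swapping in HRIRs measured at different azimuths (and updating them as the listener's head moves) is what lets such a display place stationary or moving images at arbitrary positions.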
Painted Goby Larvae under High-CO2 Fail to Recognize Reef Sounds.
Castro, Joana M; Amorim, M Clara P; Oliveira, Ana P; Gonçalves, Emanuel J; Munday, Philip L; Simpson, Stephen D; Faria, Ana M
2017-01-01
Atmospheric CO2 levels have been increasing at an unprecedented rate due to anthropogenic activity. Consequently, ocean pCO2 is increasing and pH decreasing, affecting marine life, including fish. For many coastal marine fishes, selection of the adult habitat occurs at the end of the pelagic larval phase. Fish larvae use a range of sensory cues, including sound, for locating settlement habitat. This study tested the effect of elevated CO2 on the ability of settlement-stage temperate fish to use auditory cues from adult coastal reef habitats. Wild late larval stages of painted goby (Pomatoschistus pictus) were exposed to control pCO2 (532 μatm, pH 8.06) and high pCO2 (1503 μatm, pH 7.66) conditions, likely to occur in nearshore regions subjected to upwelling events by the end of the century, and tested in an auditory choice chamber for their preference or avoidance to nighttime reef recordings. Fish reared in control pCO2 conditions discriminated reef soundscapes and were attracted by reef recordings. This behaviour changed in fish reared in the high CO2 conditions, with settlement-stage larvae strongly avoiding reef recordings. This study provides evidence that ocean acidification might affect the auditory responses of larval stages of temperate reef fish species, with potentially significant impacts on their survival. PMID:28125690
The precedence effect and its buildup and breakdown in ferrets and humans
Tolnai, Sandra; Litovsky, Ruth Y.; King, Andrew J.
2014-01-01
Although many studies have examined the precedence effect (PE), few have tested whether it shows a buildup and breakdown in nonhuman animals comparable to that seen in humans. These processes are thought to reflect the ability of the auditory system to adjust to a listener's acoustic environment, and their mechanisms are still poorly understood. In this study, ferrets were trained on a two-alternative forced-choice task to discriminate the azimuthal direction of brief sounds. In one experiment, pairs of noise bursts were presented from two loudspeakers at different interstimulus delays (ISDs). Results showed that localization performance changed as a function of ISD in a manner consistent with the PE being operative. A second experiment investigated buildup and breakdown of the PE by measuring the ability of ferrets to discriminate the direction of a click pair following presentation of a conditioning train. Human listeners were also tested using this paradigm. In both species, performance was better when the test clicks and conditioning train had the same ISD but deteriorated following a switch in the direction of the leading and lagging sounds between the conditioning train and test clicks. These results suggest that ferrets, like humans, experience a buildup and breakdown of the PE. PMID:24606278
Influence of musical expertise and musical training on pitch processing in music and language.
Besson, Mireille; Schön, Daniele; Moreno, Sylvain; Santos, Andréia; Magne, Cyrille
2007-01-01
We review a series of experiments aimed at studying pitch processing in music and speech. These studies were conducted with musician and non-musician adults and children. We found that musical expertise improved pitch processing not only in music but also in speech. Demonstrating transfer of training between music and language has interesting applications for second language learning. We also addressed the issue of whether the positive effects of musical expertise are linked with specific predispositions for music or with extensive musical practice. Results of longitudinal studies argue for the latter. Finally, we also examined pitch processing in dyslexic children and found that they had difficulties discriminating strong pitch changes that are easily discriminated by normal readers. These results argue for a strong link between basic auditory perception abilities and reading abilities. We used the behavioral method (reaction times and error rates) in conjunction with the electrophysiological method (recording of the changes in brain electrical activity time-locked to stimulus presentation, i.e., event-related brain potentials or ERPs). A set of common processes may be responsible for pitch processing in music and in speech, and these processes are shaped by musical practice. These data add evidence in favor of brain plasticity and open interesting perspectives for the remediation of dyslexia using musical training.
Perception of non-verbal auditory stimuli in Italian dyslexic children.
Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo
2010-01-01
Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).
ERIC Educational Resources Information Center
McArthur, Genevieve M.; Hogben, John H.
2012-01-01
Children with specific reading disability (SRD) or specific language impairment (SLI), who scored poorly on an auditory discrimination task, did up to 140 runs on the failed task. Forty-one percent of the children produced widely fluctuating scores that did not improve across runs (untrainable errant performance), 23% produced widely fluctuating…
Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters
ERIC Educational Resources Information Center
Smith, Nicholas A.; Trainor, Laurel J.
2011-01-01
This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…
ERIC Educational Resources Information Center
Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi
2005-01-01
Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
[Auditory and corporal laterality, logoaudiometry, and monaural hearing aid gain].
Benavides, Mariela; Peñaloza-López, Yolanda R; de la Sancha-Jiménez, Sabino; García Pedroza, Felipe; Gudiño, Paula K
2007-12-01
To identify the auditory or clinical test that correlates best with the ear in which a monaural hearing aid is fitted in symmetric bilateral hearing loss. A total of 37 adult patients with symmetric bilateral hearing loss were examined for correlations among the best score on a speech discrimination test, corporal laterality, auditory laterality assessed with dichotic digits in Spanish, and the score for filtered words with a monaural hearing aid. The best correlation was obtained between auditory laterality and hearing aid gain (0.940). The dichotic test for auditory laterality is a good tool for identifying the best ear in which to fit a monaural hearing aid. These results suggest that this test should be applied before a hearing aid is indicated.
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Thus, perception of the deviants as two separate sound events (the top-down effect) occurred without a corresponding change in the stimulus-driven sound organization, which still represented the same deviants as one event (indexed by the MMN).
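For readers unfamiliar with how ERP indices such as the MMN are derived: the mismatch response is conventionally computed as a difference wave, the average deviant response minus the average standard response. A minimal sketch of that subtraction follows (an illustration only, not the authors' pipeline, which would also include filtering, artifact rejection, and baseline correction):

```python
def difference_wave(standard_trials, deviant_trials):
    """Average the ERP across trials per condition, then subtract:
    deviant minus standard. The MMN appears as a negativity in this
    difference wave roughly 100-250 ms after change onset."""
    def mean_erp(trials):
        n = len(trials)
        return [sum(samples) / n for samples in zip(*trials)]
    std_avg = mean_erp(standard_trials)
    dev_avg = mean_erp(deviant_trials)
    return [d - s for d, s in zip(dev_avg, std_avg)]

# Toy two-trial example with three time samples per trial:
standards = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
deviants = [[0.0, 0.0, -1.0], [0.0, 0.0, -1.0]]
print(difference_wave(standards, deviants))  # [0.0, -1.0, -1.0]
```

In practice the same subtraction is done per electrode and the MMN amplitude is read off in a predefined latency window.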
Hearing loss in older adults affects neural systems supporting speech comprehension.
Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur
2011-08-31
Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays. Appendix 1 contains citations of the scientific literature on which the answers to each question were based. There are nineteen questions and answers, and more than two hundred citations in the list of references given in Appendix 2. This is one of two related works, the other of which reviewed the literature in the areas of auditory attention, recognition memory, and auditory perception of patterns, pitch, and loudness.
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Yang, Lixue; Chen, Kean
2015-11-01
To improve the design of underwater target recognition systems based on auditory perception, this study compared human listeners with automatic classifiers. Performance measures and strategies were examined in three discrimination experiments: between man-made and natural targets, between ships and submarines, and among three types of ships. In the experiments, the subjects were asked to assign a score to each sound based on how confident they were about the category to which it belonged, and logistic regression, which represents linear discriminative models, also completed three similar tasks by utilizing many auditory features. The results indicated that the performance of logistic regression improved as the ratio between inter- and intra-class differences became larger, whereas the performance of the human subjects was limited by their unfamiliarity with the targets. Logistic regression performed better than the human subjects in all tasks but the discrimination between man-made and natural targets, and the strategies employed by excellent human subjects were similar to that of logistic regression. Logistic regression and several human subjects demonstrated similar performances when discriminating man-made and natural targets, but in this case, their strategies were not similar. An appropriate fusion of their strategies led to further improvement in recognition accuracy.
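The linear discriminative model named in the abstract, logistic regression, can be sketched in a few lines. This is an illustration only: the feature values below are invented, and the authors' actual auditory features, solver, and any regularization are not specified in the abstract.

```python
import math

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Fit a logistic-regression discriminant by stochastic gradient
    descent. features: list of feature vectors; labels: 0/1 classes."""
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Confidence score in [0, 1] that x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 2-D auditory features for two target classes:
feats = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
labels = [0, 0, 1, 1]
w, b = train_logistic(feats, labels)
```

Note the parallel to the listening task: both the human subjects and the model output a graded confidence per sound rather than a hard category label.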
Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain
2015-05-01
Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements on neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers further-reaching enhancements to auditory function, tuning both pitch- and timbre-related brain processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
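Difference limen tasks like the F0 and F1 tasks described are commonly run as adaptive staircases. The following is a generic 2-down/1-up sketch, an assumption for illustration rather than the authors' exact protocol; it converges on the roughly 70.7%-correct point and estimates the threshold from the reversal values.

```python
def staircase(respond, start_delta=50.0, step=0.8, reversals_needed=6):
    """Generic 2-down/1-up adaptive staircase for a difference limen.
    `respond(delta)` returns True when the listener correctly
    discriminates a difference of size `delta`. The threshold is
    estimated as the mean delta at the reversal points."""
    delta, streak, direction = start_delta, 0, -1
    reversal_deltas = []
    while len(reversal_deltas) < reversals_needed:
        if respond(delta):
            streak += 1
            if streak == 2:            # two correct in a row: harder
                streak = 0
                if direction == +1:    # track just turned downward
                    reversal_deltas.append(delta)
                direction = -1
                delta *= step
        else:                          # one error: easier
            streak = 0
            if direction == -1:        # track just turned upward
                reversal_deltas.append(delta)
            direction = +1
            delta /= step
    return sum(reversal_deltas) / len(reversal_deltas)

# Deterministic simulated listener with a true threshold of 10 units:
estimate = staircase(lambda d: d >= 10.0)
```

With the deterministic listener above, the track oscillates around the true threshold, so the reversal mean lands close to 10.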
A biologically plausible computational model for auditory object recognition.
Larson, Eric; Billimoria, Cyrus P; Sen, Kamal
2009-01-01
Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, the spike similarity metrics can be used to classify the stimuli into the groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric, using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
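The nearest-prototype classification scheme described can be illustrated with a deliberately crude spike-train distance (binned Euclidean distance, a simple stand-in for published spike metrics; the spike times, labels, and bin settings below are invented):

```python
def spike_distance(train_a, train_b, bin_size=0.01, duration=1.0):
    """Euclidean distance between two spike trains after binning spike
    times (in seconds) into fixed-width count vectors."""
    n_bins = int(duration / bin_size)
    def binned(train):
        counts = [0] * n_bins
        for t in train:
            counts[min(int(t / bin_size), n_bins - 1)] += 1
        return counts
    a, b = binned(train_a), binned(train_b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_prototype(test_train, prototypes):
    """Label a spike train with the class of its nearest prototype.
    `prototypes` is a list of (label, spike_train) pairs."""
    return min(prototypes, key=lambda p: spike_distance(test_train, p[1]))[0]

# Invented prototype responses to two songs:
protos = [("songA", [0.10, 0.20, 0.30]), ("songB", [0.60, 0.70, 0.80])]
```

A test spike train is assigned to whichever song's prototype response it most resembles, which is the computation the modeled neural circuit approximates.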
Altvater-Mackensen, Nicole; Mani, Nivedita; Grossmann, Tobias
2016-02-01
Recent studies suggest that infants' audiovisual speech perception is influenced by articulatory experience (Mugitani et al., 2008; Yeung & Werker, 2013). The current study extends these findings by testing if infants' emerging ability to produce native sounds in babbling impacts their audiovisual speech perception. We tested 44 6-month-olds on their ability to detect mismatches between concurrently presented auditory and visual vowels and related their performance to their productive abilities and later vocabulary size. Results show that infants' ability to detect mismatches between auditory and visually presented vowels differs depending on the vowels involved. Furthermore, infants' sensitivity to mismatches is modulated by their current articulatory knowledge and correlates with their vocabulary size at 12 months of age. This suggests that, aside from infants' ability to match nonnative audiovisual cues (Pons et al., 2009), their ability to match native auditory and visual cues continues to develop during the first year of life. Our findings point to a potential role of salient vowel cues and productive abilities in the development of audiovisual speech perception, and further indicate a relation between infants' early sensitivity to audiovisual speech cues and their later language development. PsycINFO Database Record (c) 2016 APA, all rights reserved.
White matter microstructural properties correlate with sensorimotor synchronization abilities.
Blecher, Tal; Tal, Idan; Ben-Shachar, Michal
2016-09-01
Sensorimotor synchronization (SMS) to an external auditory rhythm is a developed ability in humans, particularly evident in dancing and singing. This ability is typically measured in the lab via a simple task of finger tapping to an auditory beat. While simplistic, there is some evidence that poor performance on this task could be related to impaired phonological and reading abilities in children. Auditory-motor synchronization is hypothesized to rely on a tight coupling between auditory and motor neural systems, but the specific pathways that mediate this coupling have not been identified yet. In this study, we test this hypothesis and examine the contribution of fronto-temporal and callosal connections to specific measures of rhythmic synchronization. Twenty participants went through SMS and diffusion magnetic resonance imaging (dMRI) measurements. We quantified the mean asynchrony between an auditory beat and participants' finger taps, as well as the time to resynchronize (TTR) with an altered meter, and examined the correlations between these behavioral measures and diffusivity in a small set of predefined pathways. We found significant correlations between asynchrony and fractional anisotropy (FA) in the left (but not right) arcuate fasciculus and in the temporal segment of the corpus callosum. On the other hand, TTR correlated with FA in the precentral segment of the callosum. To our knowledge, this is the first demonstration that relates these particular white matter tracts with performance on an auditory-motor rhythmic synchronization task. We propose that left fronto-temporal and temporal-callosal fibers are involved in prediction and constant comparison between auditory inputs and motor commands, while inter-hemispheric connections between the motor/premotor cortices contribute to successful resynchronization of motor responses with a new external rhythm, perhaps via inhibition of tapping to the previous rhythm. 
Our results indicate that auditory-motor synchronization skills are associated with anatomical pathways that have been previously related to phonological awareness, thus offering a possible anatomical basis for the behavioral covariance between these abilities. Copyright © 2016 Elsevier Inc. All rights reserved.
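Analytically, the tract-behavior results described above reduce to correlating per-subject behavioral measures (mean asynchrony, TTR) with per-subject FA values in each tract. A plain Pearson correlation sketch follows (the study's actual statistics, e.g. any correction for multiple comparisons, may differ; the numbers below are invented):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. per-subject tapping asynchrony vs. FA in a tract of interest."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: larger asynchrony paired with lower FA gives r near -1.
asynchrony_ms = [20.0, 35.0, 50.0, 65.0]
fa_values = [0.52, 0.48, 0.44, 0.40]
```

The same function applies unchanged to TTR vs. FA in the precentral callosal segment; only the input vectors differ.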
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Hellier, Jennifer L; Arevalo, Nicole L; Blatner, Megan J; Dang, An K; Clevenger, Amy C; Adams, Catherine E; Restrepo, Diego
2010-10-28
Previous studies have shown that schizophrenics have decreased expression of α7-nicotinic acetylcholine (α7) receptors in the hippocampus and other brain regions, paranoid delusions, disorganized speech, deficits in auditory gating (i.e., inability to inhibit neuronal responses to repetitive auditory stimuli), and difficulties in odor discrimination and detection. Here we use mice with decreased α7 expression that also show a deficit in auditory gating to determine if these mice have similar deficits in olfaction. In the adult mouse olfactory bulb (OB), α7 expression localizes in the glomerular layer; however, the functional role of α7 is unknown. We show that inbred mouse strains (i.e., C3H and C57) with varying α7 expressions (e.g., α7 wild-type [α7+/+], α7 heterozygous knock-out [α7+/-] and α7 homozygous knock-out mice [α7-/-]) significantly differ in odor discrimination and detection of chemically-related odorant pairs. Using [(125)I] α-bungarotoxin (α-BGT) autoradiography, α7 expression was measured in the OB. As previously demonstrated, α-BGT binding was localized to the glomerular layer. Significantly more expression of α7 was observed in C57 α7+/+ mice compared to C3H α7+/+ mice. Furthermore, C57 α7+/+ mice were able to detect a significantly lower concentration of an odor in a mixture compared to C3H α7+/+ mice. Both C57 and C3H α7+/+ mice discriminated between chemically-related odorants sooner than α7+/- or α7-/- mice. These data suggest that α7-nicotinic-receptors contribute strongly to olfactory discrimination and detection in mice and may be one of the mechanisms producing olfactory dysfunction in schizophrenics. Copyright © 2010 Elsevier B.V. All rights reserved.
From bird to sparrow: Learning-induced modulations in fine-grained semantic discrimination.
De Meo, Rosanna; Bourquin, Nathalie M-P; Knebel, Jean-François; Murray, Micah M; Clarke, Stephanie
2015-09-01
Recognition of environmental sounds is believed to proceed through discrimination steps from broad to narrower categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control species (C), which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre vs. post changes in AEPs were significantly different between T and C i) at 206-232ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291ms in the left middle frontal gyrus; and iii) 512-545ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human vs. animal vocalizations. Moreover, the training-induced plasticity is reflected by the sharpening of a left lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. However, training to identify birdsongs also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step which results from lower-level feature analysis such as apperception. We therefore suggest that the access to objects within an auditory semantic category is different and depends on the subject's level of expertise.
More specifically, correct intra-categorical auditory discrimination for untrained items follows the temporal hierarchy and transpires in a late stage of semantic processing. On the other hand, correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with human vs. animal vocalization discrimination, and involves a parallel semantic pathway requiring expertise. Copyright © 2015 Elsevier Inc. All rights reserved.
Shared neural substrates for song discrimination in parental and parasitic songbirds.
Louder, Matthew I M; Voss, Henning U; Manna, Thomas J; Carryl, Sophia S; London, Sarah E; Balakrishnan, Christopher N; Hauber, Mark E
2016-05-27
In many social animals, early exposure to conspecific stimuli is critical for the development of accurate species recognition. Obligate brood parasitic songbirds, however, forego parental care and young are raised by heterospecific hosts in the absence of conspecific stimuli. Having evolved from non-parasitic, parental ancestors, how brood parasites recognize their own species remains unclear. In parental songbirds (e.g. zebra finch Taeniopygia guttata), the primary and secondary auditory forebrain areas are known to be critical in the differential processing of conspecific vs. heterospecific songs. Here we demonstrate that the same auditory brain regions underlie song discrimination in adult brood parasitic pin-tailed whydahs (Vidua macroura), a close relative of the zebra finch lineage. Similar to zebra finches, whydahs showed stronger behavioral responses during conspecific vs. heterospecific song and tone pips as well as increased neural responses within the auditory forebrain, as measured by both functional magnetic resonance imaging (fMRI) and immediate early gene (IEG) expression. Given parallel behavioral and neuroanatomical patterns of song discrimination, our results suggest that the evolutionary transition to brood parasitism from parental songbirds likely involved an "evolutionary tinkering" of existing proximate mechanisms, rather than the wholesale reworking of the neural substrates of species recognition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
The dispersion-focalization theory of sound systems
NASA Astrophysics Data System (ADS)
Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie
2005-04-01
The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion, driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)], and focalization, driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectral distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks, which are easier to process and memorize in the auditory system. Evidence for increased stability of focal vowels in short-term memory was provided in a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data on children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, in relation to available databases of human language inventories.
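The dispersion term described in this abstract (a sum of inverse squared inter-spectral distances) can be sketched in a few lines. This is a simplified illustration, not the authors' implementation: it works in a two-dimensional (F1, F2) perceptual space with made-up Bark-like values, and omits the non-linear F2' estimation based on the 3.5 Bark critical distance.

```python
import itertools

def dispersion_energy(vowels):
    """Sum of inverse squared pairwise distances between vowel spectra.

    `vowels` is a list of (F1, F2) points in a perceptual space.
    Lower energy corresponds to a better-dispersed vowel system,
    since close pairs contribute large 1/d^2 penalties.
    """
    energy = 0.0
    for a, b in itertools.combinations(vowels, 2):
        d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        energy += 1.0 / d2
    return energy

# Illustrative (hypothetical) Bark-like values:
# a well-spread three-vowel system /i a u/ ...
spread = [(2.5, 13.5), (7.0, 10.0), (3.0, 6.0)]
# ... versus a crowded system with two close front vowels
crowded = [(2.5, 13.5), (3.0, 13.0), (3.0, 6.0)]
assert dispersion_energy(spread) < dispersion_energy(crowded)
```

Minimizing this energy over candidate inventories pushes vowels apart in perceptual space, which is the intuition behind the dispersion constraint.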
Schumacher, Joseph W.; Schneider, David M.
2011-01-01
The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals. PMID:21543752
Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns
Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia
2017-01-01
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, the different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to an interaction between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to the suppression of irrelevant speech processing that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Web-based auditory self-training system for adult and elderly users of hearing aids.
Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan
2015-01-01
Adults and elderly users of hearing aids suffer psychosocial reactions as a result of hearing loss. Auditory rehabilitation is typically carried out with support from a speech therapist, usually in a clinical center. For these cases, there is a lack of computer-based self-training tools for minimizing the psychosocial impact of hearing deficiency. To develop and evaluate a web-based auditory self-training system for adult and elderly users of hearing aids. Two modules were developed for the web system: an information module based on guidelines for using hearing aids; and an auditory training module presenting a sequence of training exercises for auditory abilities along the lines of the auditory skill steps within auditory processing. We built a web system using the PHP programming language and a MySQL database, from requirements surveyed through focus groups conducted by healthcare information technology experts. The web system was evaluated by speech therapists and hearing aid users. An initial sample of 150 patients at DSA/HRAC/USP was defined to apply the system, with the inclusion criteria that the individuals should be over the age of 25 years, presently have hearing impairment, be hearing aid users, have a computer, and have internet experience. They were divided into two groups: a control group (G1) and an experimental group (G2). These patients were evaluated clinically using the HHIA for adults and the HHIE for elderly people, before and after system implementation. A third group (G3) was formed from web users who were invited through social networks to give their opinions on using the system. A questionnaire evaluating hearing complaints was given to all three groups. The study hypothesis was that G2 would present greater auditory perception, higher satisfaction and fewer complaints than G1 after the auditory training. It was expected that G3 would have fewer complaints regarding use and acceptance of the system. 
The web system, which was named SisTHA portal, was finalized, rated by experts and hearing aid users and approved for use. The system comprised auditory skills training along five lines: discrimination; recognition; comprehension and temporal sequencing; auditory closure; and cognitive-linguistic and communication strategies. Users needed to undergo auditory training over a minimum period of 1 month: 5 times a week for 30 minutes a day. Comparisons were made between G1 and G2 and web system use by G3. The web system developed was approved for release to hearing aid users. It is expected that the self-training will help improve effective use of hearing aids, thereby decreasing their rejection.
Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech
ERIC Educational Resources Information Center
Tyson, Na'im R.
2012-01-01
In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…
Auditory velocity discrimination in the horizontal plane at very high velocities.
Frissen, Ilja; Féron, François-Xavier; Guastavino, Catherine
2014-10-01
We determined velocity discrimination thresholds and Weber fractions for sounds revolving around the listener at very high velocities. Sounds used were a broadband white noise and two harmonic sounds with fundamental frequencies of 330 Hz and 1760 Hz. Experiment 1 used velocities ranging between 288°/s and 720°/s in an acoustically treated room and Experiment 2 used velocities between 288°/s and 576°/s in a highly reverberant hall. A third experiment addressed potential confounds in the first two experiments. The results show that people can reliably discriminate velocity at very high velocities and that both thresholds and Weber fractions decrease as velocity increases. These results violate Weber's law but are consistent with the empirical trend observed in the literature. While thresholds for the noise and 330 Hz harmonic stimulus were similar, those for the 1760 Hz harmonic stimulus were substantially higher. There were no reliable differences in velocity discrimination between the two acoustical environments, suggesting that auditory motion perception at high velocities is robust against the effects of reverberation. Copyright © 2014 Elsevier B.V. All rights reserved.
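The Weber-fraction analysis this abstract refers to is just the discrimination threshold expressed relative to the reference velocity; Weber's law predicts a constant ratio, whereas the study found it decreases at high velocities. A minimal sketch with hypothetical numbers (not the paper's data):

```python
def weber_fraction(reference, threshold):
    """Weber fraction: just-noticeable difference / reference value.

    Weber's law predicts this ratio is constant across references;
    the velocity-discrimination result described above is a case
    where it instead declines as the reference velocity grows.
    """
    return threshold / reference

# Hypothetical discrimination thresholds (deg/s) at the study's
# reference velocities -- illustrative values only.
measurements = {288: 90.0, 432: 110.0, 576: 120.0, 720: 130.0}
fractions = {v: weber_fraction(v, dv) for v, dv in measurements.items()}

# Thresholds rise, but more slowly than the velocity itself,
# so the Weber fraction falls: a violation of Weber's law.
assert fractions[288] > fractions[720]
```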
ERIC Educational Resources Information Center
Choudhury, Naseem; Leppanen, Paavo H. T.; Leevers, Hilary J.; Benasich, April A.
2007-01-01
An infant's ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this…
Scheperle, Rachel A.; Abbas, Paul J.
2014-01-01
Objectives The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech; spectral ACC amplitude was the strongest predictor. Conclusions The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. PMID:25658746
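The simple linear regression used in this abstract (predicting a speech-perception score from a single electrophysiological measure) reduces to an ordinary least-squares fit. The sketch below uses invented numbers standing in for spectral ACC amplitude and BKB-SIN-style scores; it illustrates the analysis, not the study's data.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

# Hypothetical data: cortical response amplitude (uV) per subject
# vs. a speech-in-noise score (percent correct).
acc_amplitude = [0.5, 1.0, 1.5, 2.0, 2.5]
speech_score = [40, 55, 58, 70, 82]
slope, intercept = linear_fit(acc_amplitude, speech_score)

# A positive slope would mirror the reported direction of the effect:
# larger cortical discrimination responses, better speech perception.
assert slope > 0
```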
Auditory temporal processing skills in musicians with dyslexia.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
2014-08-01
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Rizza, Aurora; Terekhov, Alexander V.; Montone, Guglielmo; Olivetti-Belardinelli, Marta; O’Regan, J. Kevin
2018-01-01
Tactile speech aids, though extensively studied in the 1980’s and 1990’s, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ to one tactile stimulus and /VA/ to another tactile stimulus. After training, we tested if auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy, rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), where we created a tactile version of the McGurk effect. We extensively trained two Subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association, and asked Subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. 
These findings strengthen our hypothesis according to which tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than truly at a perceptual level. PMID:29875719
Enhanced attention-dependent activity in the auditory cortex of older musicians.
Zendel, Benjamin Rich; Alain, Claude
2014-01-01
Musical training improves auditory processing abilities, which correlates with neuroplastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates age-related decline in exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age- and musical-training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.
Lense, Miriam D; Shivers, Carolyn M; Dykens, Elisabeth M
2013-01-01
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Cognitive Abilities Relate to Self-Reported Hearing Disability
ERIC Educational Resources Information Center
Zekveld, Adriana A.; George, Erwin L. J.; Houtgast, Tammo; Kramer, Sophia E.
2013-01-01
Purpose: In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Method: Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and…
Text as a Supplement to Speech in Young and Older Adults
Krull, Vidya; Humes, Larry E.
2015-01-01
Objective The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults, with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information will result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) in young adults will be greater than that in older adults; and 3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance in the auditory-only and visual-text-only conditions. Finally, the relationships between the perceptual measures and the other independent measures were examined using principal-component factor analyses, followed by regression analyses. 
Results Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131
The Measurement of Auditory Abilities of Blind, Partially Sighted, and Sighted Children.
ERIC Educational Resources Information Center
Stankov, Lazar; Spilsbury, Georgina
1979-01-01
Auditory tests were administered to 30 blind, partially sighted, and sighted children. Overall, the blind and sighted were equal on most of the measured abilities. Blind children performed well on tonal memory tests. Partially sighted children performed more poorly than the other two groups. (MH)
ERIC Educational Resources Information Center
Sutcliffe, Paul A.; Bishop, Dorothy V. M.; Houghton, Stephen; Taylor, Myra
2006-01-01
Debate continues over the hypothesis that children with language or literacy difficulties have a genuine auditory processing deficit. Several recent studies have reported deficits in frequency discrimination (FD), but it is unclear whether these are genuine perceptual impairments or reflective of the comorbid attentional problems that exist in…
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
Input from the Medial Geniculate Nucleus Modulates Amygdala Encoding of Fear Memory Discrimination
ERIC Educational Resources Information Center
Ferrara, Nicole C.; Cullen, Patrick K.; Pullins, Shane P.; Rotondo, Elena K.; Helmstetter, Fred J.
2017-01-01
Generalization of fear can involve abnormal responding to cues that signal safety and is common in people diagnosed with post-traumatic stress disorder. Differential auditory fear conditioning can be used as a tool to measure changes in fear discrimination and generalization. Most prior work in this area has focused on elevated amygdala activity…
Identification of a motor to auditory pathway important for vocal learning
Roberts, Todd F.; Hisey, Erin; Tanaka, Masashi; Kearney, Matthew; Chattree, Gaurav; Yang, Cindy F.; Shah, Nirao M.; Mooney, Richard
2017-01-01
Summary Learning to vocalize depends on the ability to adaptively modify the temporal and spectral features of vocal elements. Neurons that convey motor-related signals to the auditory system are theorized to facilitate vocal learning, but the identity and function of such neurons remain unknown. Here we identify a previously unknown neuron type in the songbird brain that transmits vocal motor signals to the auditory cortex. Genetically ablating these neurons in juveniles disrupted their ability to imitate features of an adult tutor’s song. Ablating these neurons in adults had little effect on previously learned songs, but interfered with their ability to adaptively modify the duration of vocal elements and largely prevented the degradation of song’s temporal features normally caused by deafening. These findings identify a motor to auditory circuit essential to vocal imitation and to the adaptive modification of vocal timing. PMID:28504672
Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing
Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.
2013-01-01
Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
Temporal plasticity in auditory cortex improves neural discrimination of speech sounds
Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.
2017-01-01
Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording of the Mismatch Negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in auditory processing when a complex task involves repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System
Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.
2015-01-01
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843
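The two decoding strategies compared in the preceding abstract (discrimination based on spike counts versus spike timing reliability) can be illustrated with a toy simulation. This is a hedged sketch, not the authors' computational analysis: the nearest-centroid classifier, trial counts, and firing patterns below are illustrative assumptions.

```python
import numpy as np

def count_code(spikes):
    # Spike-count code: collapse each trial's spike train to a single count
    return spikes.sum(axis=1, keepdims=True)

def timing_code(spikes, bin_size):
    # Spike-timing code: keep a binned temporal pattern per trial
    n_bins = spikes.shape[1] // bin_size
    return spikes[:, :n_bins * bin_size].reshape(len(spikes), n_bins, bin_size).sum(axis=2)

def discriminate(resp_a, resp_b):
    """Nearest-centroid discrimination accuracy between two stimulus classes."""
    mu_a, mu_b = resp_a.mean(axis=0), resp_b.mean(axis=0)
    correct = 0
    for r in resp_a:
        correct += np.linalg.norm(r - mu_a) < np.linalg.norm(r - mu_b)
    for r in resp_b:
        correct += np.linalg.norm(r - mu_b) < np.linalg.norm(r - mu_a)
    return correct / (len(resp_a) + len(resp_b))

rng = np.random.default_rng(1)
# 50 trials x 100 time bins per class; both classes fire ~10 spikes per trial,
# but class A fires early and class B fires late (same count, different timing)
a = np.zeros((50, 100))
b = np.zeros((50, 100))
a[:, :50] = rng.random((50, 50)) < 0.2
b[:, 50:] = rng.random((50, 50)) < 0.2

acc_count = discriminate(count_code(a), count_code(b))   # ~chance
acc_time = discriminate(timing_code(a, 10), timing_code(b, 10))  # near perfect
```

Because the two classes differ only in when spikes occur, a count-based readout is near chance while a timing-based readout separates them, mirroring the distinction the study draws.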
Auditory perception and the control of spatially coordinated action of deaf and hearing children.
Savelsbergh, G J; Netelenbos, J B; Whiting, H T
1991-03-01
From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.
Autism-specific covariation in perceptual performances: "g" or "p" factor?
Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent
2014-01-01
Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). 
Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and non-autistic individuals.
Verbal auditory agnosia in a patient with traumatic brain injury: A case report.
Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi
2018-03-01
Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, with a preserved ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury. Our patient was able to clearly distinguish between language and nonverbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat or dictate words. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. Diagnosis: Verbal auditory agnosia. Intervention: He received speech therapy and cognitive rehabilitation during his hospitalization, and he practiced understanding verbal language with written sentences provided alongside. Two months after hospitalization, he regained the ability to understand some verbal words. Six months after hospitalization, his understanding of verbal language had improved to an understandable level when one spoke slowly in front of his eyes, but his comprehension of verbal sound language was still at word level, not sentence level. This case teaches that the evaluation of auditory functions, as well as cognitive and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.
Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T
2013-02-01
Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation concentrations conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, producing up to a 17% decline in performance. These deficits are seen in the ability to accurately detect a change in oxygen saturation and in speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.
Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng
2014-12-01
To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used their cochlear implants for a period of 12-84 months. We divided the children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) of aided hearing thresholds ranged from 17.5 to 57.5 dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good treatment option for many children with ANSD. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Kanaya, Shoko; Fujisaki, Waka; Nishida, Shin'ya; Furukawa, Shigeto; Yokosawa, Kazuhiko
2015-02-01
Temporal phase discrimination is a useful psychophysical task for evaluating how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To directly compare these binding mechanisms with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory binding. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing intersequence frequency separation, or presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. However, unlike vision, auditory phase discrimination limits were higher and more variable across participants. © 2015 SAGE Publications.
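The stimulus structure described in the preceding abstract (two pitch sequences alternating in-phase or anti-phase) can be sketched as follows. This is an illustrative reconstruction, not the study's stimulus code: the pitch pairs, alternation rate, and sample rate are assumed values.

```python
import numpy as np

def phase_discrimination_trial(alt_hz, same_phase, dur=1.0, fs=44100):
    """Two pitch sequences alternating between two states at alt_hz.

    Illustrative only: the 400/500 Hz and 1600/2000 Hz pitch pairs are
    assumptions, not values taken from the study.
    """
    t = np.arange(int(dur * fs)) / fs
    # Square wave selecting state 1 or 0 at the alternation rate
    state = (np.floor(2 * alt_hz * t) % 2).astype(int)
    # Sequence 1 alternates 400 <-> 500 Hz
    f1 = np.where(state == 1, 400.0, 500.0)
    # Sequence 2 alternates 1600 <-> 2000 Hz, paired either in-phase
    # with sequence 1 or in the opposite temporal phase
    if same_phase:
        f2 = np.where(state == 1, 1600.0, 2000.0)
    else:
        f2 = np.where(state == 1, 2000.0, 1600.0)
    # Phase-continuous tones via cumulative phase
    s1 = np.sin(2 * np.pi * np.cumsum(f1) / fs)
    s2 = np.sin(2 * np.pi * np.cumsum(f2) / fs)
    return s1 + s2  # diotic mix; present s1/s2 to separate ears for dichotic

trial_same = phase_discrimination_trial(alt_hz=8.0, same_phase=True)
trial_diff = phase_discrimination_trial(alt_hz=8.0, same_phase=False)
```

The listener's task would be to discriminate `trial_same` from `trial_diff`; raising `alt_hz` until discrimination fails gives the alternation frequency limit the study measures.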
Saltuklaroglu, Tim; Harkrider, Ashley W; Thornton, David; Jenson, David; Kittilstved, Tiffani
2017-06-01
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting evidence of heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together, these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that also may be influenced by basal ganglia deficits. Copyright © 2017 Elsevier Inc. All rights reserved.
Current understanding of auditory neuropathy.
Boo, Nem-Yun
2008-12-01
Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The sites of lesion could be at the cochlear inner hair cells, the spiral ganglion cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the auditory nerve itself. Genetic, infectious, or neonatal/perinatal insults are the 3 most commonly identified underlying causes. Children usually present with delay in speech and language development, while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implant is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, but not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention.
Emergent selectivity for task-relevant stimuli in higher-order auditory cortex
Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.
2014-01-01
A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467
Kujala, T; Kuuluvainen, S; Saalasti, S; Jansson-Verkasalo, E; von Wendt, L; Lepistö, T
2010-09-01
Asperger syndrome, belonging to the autistic spectrum of disorders, involves deficits in social interaction and prosodic use of language but normal development of formal language abilities. Auditory processing involves both hyper- and hypoactive reactivity to acoustic changes. Responses composed of mismatch negativity (MMN) and obligatory components were recorded for five types of deviations in syllables (vowel, vowel duration, consonant, syllable frequency, syllable intensity) with the multi-feature paradigm from 8- to 12-year-old children with Asperger syndrome. Children with Asperger syndrome had larger MMNs for intensity and smaller MMNs for frequency changes than typically developing children, whereas no MMN group differences were found for the other deviant stimuli. Furthermore, children with Asperger syndrome performed more poorly than controls in the Comprehension of Instructions subtest of a language test battery. Cortical speech-sound discrimination is aberrant in children with Asperger syndrome. This is evident both as hypersensitive and depressed neural reactions to speech-sound changes, and is associated with features (frequency, intensity) which are relevant for prosodic processing. The multi-feature MMN paradigm, which includes variation and thereby resembles natural speech hearing circumstances, suggests an abnormal pattern of speech discrimination in Asperger syndrome, including both hypo- and hypersensitive responses to speech features. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Männel, Claudia; Schaadt, Gesa; Illner, Franziska K; van der Meer, Elke; Friederici, Angela D
2017-02-01
Intact phonological processing is crucial for successful literacy acquisition. While individuals with difficulties in reading and spelling (i.e., developmental dyslexia) are known to experience deficient phoneme discrimination (i.e., segmental phonology), findings concerning their prosodic processing (i.e., suprasegmental phonology) are controversial. Because there are no behavior-independent studies on the underlying neural correlates of prosodic processing in dyslexia, these controversial findings might be explained by different task demands. To provide an objective behavior-independent picture of segmental and suprasegmental phonological processing in impaired literacy acquisition, we investigated event-related brain potentials during passive listening in typically and poor-spelling German school children. For segmental phonology, we analyzed the Mismatch Negativity (MMN) during vowel length discrimination, capturing automatic auditory deviancy detection in repetitive contexts. For suprasegmental phonology, we analyzed the Closure Positive Shift (CPS) that automatically occurs in response to prosodic boundaries. Our results revealed spelling group differences for the MMN, but not for the CPS, indicating deficient segmental, but intact suprasegmental phonological processing in poor spellers. The present findings point towards a differential role of segmental and suprasegmental phonology in literacy disorders and call for interventions that invigorate impaired literacy by utilizing intact prosody in addition to training deficient phonemic awareness. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Ups and Downs in Auditory Development: Preschoolers' Sensitivity to Pitch Contour and Timbre.
Creel, Sarah C
2016-03-01
Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound-picture association. Melodic contour--a musically relevant property--and instrumental timbre, which is (arguably) less musically relevant, were tested. In Experiment 1, children failed to associate cartoon characters to melodies with maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2, but with a large timbre change instead of a contour change. Here, discrimination and association were both excellent. Preschool-aged children may have stronger or more durable representations of timbre than contour, particularly in more difficult tasks. Reasons for weaker association of contour than timbre information are discussed, along with implications for auditory development. Copyright © 2015 Cognitive Science Society, Inc.
Evidence for distinct human auditory cortex regions for sound location versus identity processing
Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi
2014-01-01
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634
Static length changes of cochlear outer hair cells can tune low-frequency hearing
Ciganović, Nikola; Warren, Rebecca L.; Keçeli, Batu; Jacob, Stefan
2018-01-01
The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ’s motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend auditory signals in noisy surroundings. PMID:29351276
Daikhin, Luba; Raviv, Ofri; Ahissar, Merav
2017-02-01
The reading deficit in people with dyslexia is typically associated with linguistic, memory, and perceptual-discrimination difficulties, whose relation to reading impairment is disputed. We proposed that individuals with dyslexia are impaired in the automatic detection and usage of serial sound regularities (the anchoring deficit hypothesis), leading to the formation of less reliable sound predictions. Agus, Carrión-Castillo, Pressnitzer, and Ramus (2014) reported seemingly contradictory evidence by showing similar performance by participants with and without dyslexia in a demanding auditory task that contained task-relevant regularities. To carefully assess the sensitivity of participants with dyslexia to the regularities of this task, we replicated their study. Thirty participants with and 24 without dyslexia performed the replicated task. On each trial, a 1-s noise stimulus was presented. Participants had to decide whether the stimulus contained repetitions (was constructed from a 0.5-s noise segment repeated twice) or not. It is implicit in this structure that some of the stimuli with repetitions were themselves repeated across trials. We measured the ability to detect within-noise repetitions and the sensitivity to cross-trial repetitions of the same noise stimuli. We replicated the finding of similar mean performance. However, individuals with dyslexia were less sensitive to the cross-trial repetition of noise stimuli and tended to be more sensitive to repetitions in novel noise stimuli. These findings indicate that online auditory processing in individuals with dyslexia is adequate but their implicit retention and usage of sound regularities is indeed impaired.
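Detection-task sensitivity of the kind measured here is conventionally expressed as d′ from signal detection theory, combining hit and false-alarm rates. A minimal sketch; the function name and example rates are illustrative, not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative rates (not from the study): a listener who detects 80% of
# within-noise repetitions but false-alarms on 20% of novel stimuli.
print(round(d_prime(0.8, 0.2), 2))  # → 1.68
```

A d′ of 0 means hits and false alarms are equally likely (chance); larger values mean better discrimination of repeated from novel noise.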
Torppa, Ritva; Huotilainen, Minna; Leminen, Miika; Lipsanen, Jari; Tervaniemi, Mari
2014-01-01
Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound change detection in children with cochlear implants (CIs) and normal hearing (NH) aged 4-13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14-17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was to determine whether singing can facilitate auditory perception and attention of CI children. It was found that, compared to the NH group, the CI group had smaller and later timbre P3a and later pitch P3a, implying degraded discrimination and attention shift. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes was not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses over all change types. In contrast, CI non-singers had rapidly enlarging pitch MMN without enlargement of P3a, and their timbre P3a became smaller and later over time. These novel results show interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply an augmented development of neural networks for attention and more accurate neural discrimination associated with singing. In future studies, differential development of P3a between CI and NH children should be taken into account in comparisons of these groups. Moreover, further studies are needed to assess whether singing enhances auditory perception and attention of children with CIs.
Enhanced perception of pitch changes in speech and music in early blind adults.
Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie
2018-06-12
It is well known that congenitally blind adults have enhanced auditory processing for some tasks. For instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated if pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and if pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Whereas within the blind group, the discrimination thresholds were smaller for musical stimuli than speech stimuli, replicating previous findings in sighted participants, we did not find this effect in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds compared to speech sounds. This effect of age was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
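Adaptive paradigms for estimating a frequency difference limen commonly use a 2-down-1-up staircase, which converges near the 70.7%-correct point of the psychometric function. A minimal simulation sketch, assuming an idealized listener who responds correctly whenever the difference exceeds its true limen; all names and parameter values here are illustrative, not taken from the study:

```python
import math

def simulate_staircase(true_dlf, start_delta=64.0, step=2.0, n_reversals=8):
    """2-down-1-up staircase: halve the frequency difference after two
    consecutive correct responses, double it after one error.  The
    idealized listener is correct whenever delta exceeds its true limen."""
    delta = start_delta
    streak = 0
    direction = None
    reversal_deltas = []
    while len(reversal_deltas) < n_reversals:
        correct = delta > true_dlf
        if correct:
            streak += 1
            if streak == 2:                    # two in a row: make it harder
                streak = 0
                if direction == 'up':
                    reversal_deltas.append(delta)
                direction = 'down'
                delta /= step
        else:                                  # one error: make it easier
            streak = 0
            if direction == 'down':
                reversal_deltas.append(delta)
            direction = 'up'
            delta *= step
    # Threshold estimate: geometric mean of the reversal points.
    return math.exp(sum(math.log(d) for d in reversal_deltas) / n_reversals)

print(round(simulate_staircase(true_dlf=4.0), 2))  # brackets the true 4 Hz limen
```

With a real listener the responses are probabilistic rather than deterministic, but the same reversal-averaging logic yields the threshold estimate.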
Prosody perception and musical pitch discrimination in adults using cochlear implants.
Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine
2015-07-01
This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (profiling elements of prosody in speech-communication), the adult paralanguage subtest of the DANVA 2 (diagnostic analysis of non verbal accuracy 2), and the contour and interval subtests of the MBEA (Montreal battery of evaluation of amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults for PEPS-C turn-end, affect, and contrastive stress reception subtests, but were not different from the norm for the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.
Lense, Miriam D.; Shivers, Carolyn M.; Dykens, Elisabeth M.
2013-01-01
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia. PMID:23966965
A Circuit for Motor Cortical Modulation of Auditory Cortical Activity
Nelson, Anders; Schneider, David M.; Takatoh, Jun; Sakurai, Katsuyasu; Wang, Fan
2013-01-01
Normal hearing depends on the ability to distinguish self-generated sounds from other sounds, and this ability is thought to involve neural circuits that convey copies of motor command signals to various levels of the auditory system. Although such interactions at the cortical level are believed to facilitate auditory comprehension during movements and drive auditory hallucinations in pathological states, the synaptic organization and function of circuitry linking the motor and auditory cortices remain unclear. Here we describe experiments in the mouse that characterize circuitry well suited to transmit motor-related signals to the auditory cortex. Using retrograde viral tracing, we established that neurons in superficial and deep layers of the medial agranular motor cortex (M2) project directly to the auditory cortex and that the axons of some of these deep-layer cells also target brainstem motor regions. Using in vitro whole-cell physiology, optogenetics, and pharmacology, we determined that M2 axons make excitatory synapses in the auditory cortex but exert a primarily suppressive effect on auditory cortical neuron activity mediated in part by feedforward inhibition involving parvalbumin-positive interneurons. Using in vivo intracellular physiology, optogenetics, and sound playback, we also found that directly activating M2 axon terminals in the auditory cortex suppresses spontaneous and stimulus-evoked synaptic activity in auditory cortical neurons and that this effect depends on the relative timing of motor cortical activity and auditory stimulation. These experiments delineate the structural and functional properties of a corticocortical circuit that could enable movement-related suppression of auditory cortical activity. PMID:24005287
Harnsberger, James D.; Svirsky, Mario A.; Kaiser, Adam R.; Pisoni, David B.; Wright, Richard; Meyer, Ted A.
2012-01-01
Cochlear implant (CI) users differ in their ability to perceive and recognize speech sounds. Two possible reasons for such individual differences may lie in their ability to discriminate formant frequencies or to adapt to the spectrally shifted information presented by cochlear implants, a basalward shift related to the implant’s depth of insertion in the cochlea. In the present study, we examined these two alternatives using a method-of-adjustment (MOA) procedure with 330 synthetic vowel stimuli varying in F1 and F2 that were arranged in a two-dimensional grid. Subjects were asked to label the synthetic stimuli that matched ten monophthongal vowels in visually presented words. Subjects then provided goodness ratings for the stimuli they had chosen. The subjects’ responses to all ten vowels were used to construct individual perceptual “vowel spaces.” If CI users fail to adapt completely to the basalward spectral shift, then the formant frequencies of their vowel categories should be shifted lower in both F1 and F2. However, with one exception, no systematic shifts were observed in the vowel spaces of CI users. Instead, the vowel spaces differed from one another in the relative size of their vowel categories. The results suggest that differences in formant frequency discrimination may account for the individual differences in vowel perception observed in cochlear implant users. PMID:11386565
Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.
2013-01-01
Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Only little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brain. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263
Language impairment is reflected in auditory evoked fields.
Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit
2008-05-01
Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.
Speech processing and production in two-year-old children acquiring isiXhosa: A tale of two children
Rossouw, Kate; Fish, Laura; Jansen, Charne; Manley, Natalie; Powell, Michelle; Rosen, Loren
2016-01-01
We investigated the speech processing and production of 2-year-old children acquiring isiXhosa in South Africa. Two children (2 years, 5 months; 2 years, 8 months) are presented as single cases. Speech input processing, stored phonological knowledge and speech output are described, based on data from auditory discrimination, naming, and repetition tasks. Both children were approximating adult levels of accuracy in their speech output, although naming was constrained by vocabulary. Performance across tasks was variable: One child showed a relative strength with repetition, and experienced most difficulties with auditory discrimination. The other performed equally well in naming and repetition, and obtained 100% for her auditory task. There is limited data regarding typical development of isiXhosa, and the focus has mainly been on speech production. This exploratory study describes typical development of isiXhosa using a variety of tasks understood within a psycholinguistic framework. We describe some ways in which speech and language therapists can devise and carry out assessment with children in situations where few formal assessments exist, and also detail the challenges of such work. PMID:27245131
1984-06-01
types of conditions, discriminable differences in intensity, pitch, or use of beats or harmonics shall be provided. If absolute discrimination is...shall be directed to the operator’s headset as well as to the work area. Binaural headsets should not be used in any operational environment below 85...signals are to be used to alert an operator to different types of conditions, discriminable difference in Intensity, pitch, or use of beats and harmonics
Vieira, Philip A; Corches, Alex; Lovelace, Jonathan W; Westbrook, Kevin B; Mendoza, Michael; Korzus, Edward
2015-03-01
N-methyl-D-aspartate receptors (NMDARs) are critically involved in various learning mechanisms including modulation of fear memory, brain development and brain disorders. While NMDARs mediate opposite effects on medial prefrontal cortex (mPFC) interneurons and excitatory neurons, NMDAR antagonists trigger profound cortical activation. The objectives of the present study were to determine the involvement of NMDARs expressed specifically in excitatory neurons in mPFC-dependent adaptive behaviors, specifically fear discrimination and fear extinction. To achieve this, we tested mice with locally deleted Grin1 gene encoding the obligatory NR1 subunit of the NMDAR from prefrontal CamKIIα positive neurons for their ability to distinguish frequency modulated (FM) tones in fear discrimination test. We demonstrated that NMDAR-dependent signaling in the mPFC is critical for effective fear discrimination following initial generalization of conditioned fear. While mice with deficient NMDARs in prefrontal excitatory neurons maintain normal responses to a dangerous fear-conditioned stimulus, they exhibit abnormal generalization decrement. These studies provide evidence that NMDAR-dependent neural signaling in the mPFC is a component of a neural mechanism for disambiguating the meaning of fear signals and supports discriminative fear learning by retaining proper gating information, viz. both dangerous and harmless cues. We also found that selective deletion of NMDARs from excitatory neurons in the mPFC leads to a deficit in fear extinction of auditory conditioned stimuli. These studies suggest that prefrontal NMDARs expressed in excitatory neurons are involved in adaptive behavior. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
LANE, HARLAN; AND OTHERS
THIS DOCUMENT IS THE FIRST IN A SERIES REPORTING ON PROGRESS OF AN EXPERIMENTAL RESEARCH PROGRAM IN SPEECH CONTROL. THE TOPICS DISCUSSED ARE--(1) THE DISCONTINUITY OF AUDITORY DISCRIMINATION LEARNING IN HUMAN ADULTS, (2) DISCRIMINATIVE CONTROL OF CONCURRENT RESPONSES--THE RELATIONS AMONG RESPONSE FREQUENCY, LATENCY, AND TOPOGRAPHY IN AUDITORY…
Do informal musical activities shape auditory skill development in preschool-age children?
Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari
2013-08-29
The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children. PMID:24009597
Representations of temporal information in short-term memory: Are they modality-specific?
Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M
2016-10-01
Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.
Movement goals and feedback and feedforward control mechanisms in speech production
Perkell, Joseph S.
2010-01-01
Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences. PMID:22661828
Dissociative Experiences and Vividness of Auditory Imagery
ERIC Educational Resources Information Center
Pérez-Fabello, María José; Campos, Alfredo
2017-01-01
The relationship between dissociation and vividness of auditory imagery was assessed, 2 variables that sometimes influence artistic creativity. A total of 170 fine arts undergraduates (94 women and 76 men) received 2 dissociation questionnaires--the Dissociative Ability Scale (DAS), and the Dissociative Experiences Scale (DES)--and 2 auditory imagery…
Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.
Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.
Yoshimura, Yuko; Kikuchi, Mitsuru; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Remijn, Gerard B; Oi, Manabu; Munesue, Toshio; Higashida, Haruhiro; Minabe, Yoshio
2016-01-01
The auditory-evoked P1m, recorded by magnetoencephalography, reflects a central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the report of P1m asynchrony in children with ADHD, we investigated whether voice-evoked P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule-Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD.
Auditory-visual fusion in speech perception in children with cochlear implants
Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.
2005-01-01
Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.
Woolley, Sarah M N; Portfors, Christine V
2013-11-01
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance
ERIC Educational Resources Information Center
Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina
2013-01-01
Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…
Musician enhancement for speech-in-noise.
Parbery-Clark, Alexandra; Skoe, Erika; Lam, Carrie; Kraus, Nina
2009-12-01
To investigate the effect of musical training on speech-in-noise (SIN) performance, a complex task requiring the integration of working memory and stream segregation as well as the detection of time-varying perceptual cues. Previous research has indicated that, in combination with lifelong experience with musical stream segregation, musicians have better auditory perceptual skills and working memory. It was hypothesized that musicians would benefit from these factors and perform better on speech perception in noise than age-matched nonmusician controls. The performance of 16 musicians and 15 nonmusicians was compared on clinical measures of speech perception in noise-QuickSIN and Hearing-In-Noise Test (HINT). Working memory capacity and frequency discrimination were also assessed. All participants had normal hearing and were between the ages of 19 and 31 yr. To be categorized as a musician, participants needed to have started musical training before the age of 7 yr, have 10 or more years of consistent musical experience, and have practiced more than three times weekly within the 3 yr before study enrollment. Nonmusicians were categorized by the failure to meet the musician criteria, along with not having received musical training within the 7 yr before the study. Musicians outperformed the nonmusicians on both QuickSIN and HINT, in addition to having more fine-grained frequency discrimination and better working memory. Years of consistent musical practice correlated positively with QuickSIN, working memory, and frequency discrimination but not HINT. The results also indicate that working memory and frequency discrimination are more important for QuickSIN than for HINT. Musical experience appears to enhance the ability to hear speech in challenging listening environments. Large group differences were found for QuickSIN, and the results also suggest that this enhancement is derived in part from musicians' enhanced working memory and frequency discrimination. 
For HINT, in which performance was not linked to frequency discrimination ability and was only moderately linked to working memory, musicians still performed significantly better than the nonmusicians. The group differences for HINT were evident in the most difficult condition in which the speech and noise were presented from the same location and not spatially segregated. Understanding which cognitive and psychoacoustic factors as well as which lifelong experiences contribute to SIN may lead to more effective remediation programs for clinical populations for whom SIN poses a particular perceptual challenge. These results provide further evidence for musical training transferring to nonmusical domains and highlight the importance of taking musical training into consideration when evaluating a person's SIN ability in a clinical setting.
It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children.
DeBonis, David A
2015-06-01
The purpose of this article is to review the literature that pertains to ongoing concerns regarding the central auditory processing construct among school-aged children and to assess whether the degree of uncertainty surrounding central auditory processing disorder (CAPD) warrants a change in current protocols. The methodology included a review of relevant and recent literature through electronic search tools (e.g., ComDisDome, PsycINFO, Medline, and Cochrane databases); published texts; and published articles from the Journal of the American Academy of Audiology; the American Journal of Audiology; the Journal of Speech, Language, and Hearing Research; and Language, Speech, and Hearing Services in Schools. This review revealed strong support for the following: (a) Current testing of CAPD is highly influenced by nonauditory factors, including memory, attention, language, and executive function; (b) the lack of agreement regarding the performance criteria for diagnosis is concerning; (c) the contribution of auditory processing abilities to language, reading, and academic and listening abilities, as assessed by current measures, is not significant; and (d) the effectiveness of auditory interventions for improving communication abilities has not been established. Routine use of CAPD test protocols cannot be supported, and strong consideration should be given to redirecting focus toward assessing overall listening abilities. Also, intervention needs to be contextualized and functional. A suggested protocol is provided for consideration. All of these issues warrant ongoing research.
Assessment of short-term memory in Arabic speaking children with specific language impairment.
Kaddah, F A; Shoeib, R M; Mahmoud, H E
2010-12-15
Children with Specific Language Impairment (SLI) may have some kind of memory disorder that could compound their linguistic impairment. This study assessed short-term memory skills in Arabic-speaking children with either Expressive Language Impairment (ELI) or Receptive/Expressive Language Impairment (R/ELI) in comparison to controls, in order to estimate the nature and extent of any specific deficits in these children that could explain the different prognostic results of language intervention. Eighteen children were included in each group. Receptive, expressive, and total language quotients were calculated using the Arabic language test. Auditory and visual short-term memory were assessed using the Arabic version of the Illinois Test of Psycholinguistic Abilities. Both SLI groups showed significantly lower linguistic abilities and poorer auditory and visual short-term memory than normal children. The R/ELI group performed worse than the ELI group on all measured parameters. A strong association was found between most tasks of auditory and visual short-term memory and linguistic abilities. The results of this study highlighted a specific deficit of auditory and visual short-term memory in both SLI groups. These deficits were more prominent in the R/ELI group. Moreover, the strong association between the different auditory and visual short-term memory tasks and language abilities in children with SLI must be taken into account when planning an intervention program for these children.
Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening.
Ong, Jia Hoong; Burnham, Denis; Escudero, Paola
2015-01-01
This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both the Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants' auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group showed above-chance improvement on only one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally, but only when auditory attention is encouraged in the acquisition phase. 
This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution.