Sample records for auditory language comprehension

  1. Sentence Comprehension in Adolescents with Down Syndrome and Typically Developing Children: Role of Sentence Voice, Visual Context, and Auditory-Verbal Short-Term Memory.

    ERIC Educational Resources Information Center

    Miolo, Giuliana; Chapman, Robin S.; Sindberg, Heidi A.

    2005-01-01

    The authors evaluated the roles of auditory-verbal short-term memory, visual short-term memory, and group membership in predicting language comprehension, as measured by an experimental sentence comprehension task (SCT) and the Test for Auditory Comprehension of Language--Third Edition (TACL-3; E. Carrow-Woolfolk, 1999) in 38 participants: 19 with…

  2. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  3. Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study.

    PubMed

    Szelag, Elzbieta; Lewandowska, Monika; Wolak, Tomasz; Seniow, Joanna; Poniatowska, Renata; Pöppel, Ernst; Szymaszek, Aneta

    2014-03-15

    Experimental studies have often reported close associations between rapid auditory processing and language competency. The present study aimed to improve auditory comprehension in aphasic patients through specific training in the perception of temporal order (TO) of events. We tested 18 aphasic patients showing both comprehension and TO perception deficits. Auditory comprehension was assessed with the Token Test, a phonemic awareness test, and the Voice-Onset-Time Test. TO perception was assessed using the auditory Temporal-Order Threshold, defined as the shortest interval between two consecutive stimuli necessary to report their before-after relation correctly. Aphasic patients participated in eight 45-minute sessions of either specific temporal training (TT, n=11), aimed at improving sequencing abilities, or control non-temporal training (NT, n=7), focused on volume discrimination. The TT improved TO perception; moreover, the improvement transferred from the time domain to the language domain, which was not itself trained. The NT improved neither TO perception nor comprehension on any language test. These results agree with previous studies reporting improved language competency following TT in language-learning-impaired or dyslexic children; our results indicate such benefits for the first time in aphasic patients. Copyright © 2013 Elsevier B.V. All rights reserved.
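
    A temporal-order threshold of the kind described above is typically estimated with an adaptive staircase that shrinks the inter-stimulus gap after correct order judgements and widens it after errors. The following is a minimal illustrative sketch with a simulated, deterministic observer and hypothetical parameters, not the authors' actual procedure:

```python
def simulate_response(gap_ms, true_threshold_ms=60.0):
    # Hypothetical deterministic observer: the before-after relation is
    # reported correctly whenever the gap is at or above its true threshold.
    return gap_ms >= true_threshold_ms

def staircase_tot(start_gap_ms=200.0, step_ms=10.0, reversals_needed=8):
    """Simple 1-down/1-up staircase for a temporal-order threshold:
    the shortest gap at which temporal order is still reported correctly.
    Returns the mean gap across the reversal points."""
    gap = start_gap_ms
    direction = -1  # start by shrinking the gap
    reversal_gaps = []
    while len(reversal_gaps) < reversals_needed:
        new_direction = -1 if simulate_response(gap) else 1
        if new_direction != direction:  # response flipped: record a reversal
            reversal_gaps.append(gap)
            direction = new_direction
        gap = max(step_ms, gap + direction * step_ms)
    return sum(reversal_gaps) / len(reversal_gaps)
```

    The threshold estimate is the mean gap at the reversal points; real psychophysical procedures use probabilistic observers, more reversals, and progressively smaller step sizes.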

  4. The effect of written text on comprehension of spoken English as a foreign language.

    PubMed

    Diao, Yali; Chandler, Paul; Sweller, John

    2007-01-01

    Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.

  5. The cortical language circuit: from auditory perception to sentence comprehension.

    PubMed

    Friederici, Angela D

    2012-05-01

    Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Auditory perception modulated by word reading.

    PubMed

    Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja

    2016-10-01

    Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.

  7. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study.

    PubMed

    van Zuijen, Titia L; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-10-18

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or without familial risk for dyslexia to investigate whether their auditory system was able to detect a temporal change in a tone pattern. The children were followed longitudinally and completed intelligence and language development tests at ages 4 and 4.5 years. Literacy-related skills were measured at the beginning of second grade, and word- and pseudo-word reading fluency were measured at the end of second grade. The EEG responses showed that control children could detect the temporal change, as indicated by a mismatch response (MMR). The MMR was not observed in at-risk children. Furthermore, the fronto-central MMR amplitude correlated with preliterate language comprehension and with later word reading fluency, but not with phonological awareness. We conclude that temporal auditory processing differentiates young children at risk for dyslexia from controls and is a precursor of preliterate language comprehension and reading fluency. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Revealing and quantifying the impaired phonological analysis underpinning impaired comprehension in Wernicke's aphasia.

    PubMed

    Robson, Holly; Keidel, James L; Lambon Ralph, Matthew A; Sage, Karen

    2012-01-01

    Wernicke's aphasia is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in Wernicke's aphasia, a view consistent with current functional neuroimaging which finds areas in the superior temporal cortex responsive to phonological stimuli. However, behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not yet been shown. This study extends seminal work by Blumstein, Baker, and Goodglass (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke's aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke's aphasia participants showed significantly elevated thresholds compared to age- and hearing-matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke's aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language; they suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke's aphasia and favour models of language which propose a leftward asymmetry in phonological analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Auditory Attention and Comprehension During a Simulated Night Shift: Effects of Task Characteristics.

    PubMed

    Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A

    2016-11-01

    The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.

  10. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  11. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as auditory object (sound recognition) and Token Test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways: along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension: the initial word discrimination process can be considered an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and in naming. These findings demonstrate a finely graded, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Lesion localization of speech comprehension deficits in chronic aphasia

    PubMed Central

    Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.

    2017-01-01

    Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469

  13. Lesion localization of speech comprehension deficits in chronic aphasia.

    PubMed

    Pillay, Sara B; Binder, Jeffrey R; Humphries, Colin; Gross, William L; Book, Diane S

    2017-03-07

    Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. © 2017 American Academy of Neurology.
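
    VLSM is a mass-univariate analysis: at every voxel, patients whose lesion covers that voxel are compared on the behavioral score with patients whose lesion spares it. The following is a toy sketch of the core computation with made-up data shapes; real analyses such as this study's add covariates (here, PN impairment) and multiple-comparison correction:

```python
import numpy as np

def vlsm_t_map(lesion_masks, scores):
    """Mass-univariate lesion-symptom mapping.
    lesion_masks: (n_patients, n_voxels) binary array, 1 = voxel lesioned.
    scores: (n_patients,) behavioral scores (e.g., sentence comprehension).
    Returns a Welch t statistic per voxel (lesioned vs. spared patients);
    NaN where either group is too small to compare."""
    n_voxels = lesion_masks.shape[1]
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_masks[:, v] == 1]
        spared = scores[lesion_masks[:, v] == 0]
        if len(lesioned) < 2 or len(spared) < 2:
            continue  # not enough patients at this voxel
        # Lower scores in the lesioned group yield a negative t.
        se = np.sqrt(lesioned.var(ddof=1) / len(lesioned) +
                     spared.var(ddof=1) / len(spared))
        t_map[v] = (lesioned.mean() - spared.mean()) / se
    return t_map
```

    A strongly negative t at a voxel indicates that lesions there are associated with worse scores; thresholding the map is what localizes deficits to regions such as the mid-to-posterior MTG.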

  14. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    PubMed

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, while retaining the ability to identify nonverbal sounds. To the best of our knowledge, however, there has been no previous report of verbal auditory agnosia in an adult patient with traumatic brain injury. The patient was able to clearly distinguish between language and nonverbal sounds and had no difficulty identifying environmental sounds. However, he did not follow oral commands and could not repeat or write words from dictation. On the other hand, he had fluent and comprehensible speech and was able to read and understand written words and sentences. Diagnosis: verbal auditory agnosia. Intervention: he received speech therapy and cognitive rehabilitation during his hospitalization, practicing comprehension of verbal language with written sentences provided alongside the spoken material. Two months after hospitalization, he regained the ability to understand some spoken words. Six months after hospitalization, his understanding of verbal language had improved to a comprehensible level when the speaker spoke slowly in front of him, but his comprehension of spoken language remained at the word level, not the sentence level. This case teaches that evaluation of auditory functions, as well as cognitive and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  15. Language Comprehension and Performance.

    ERIC Educational Resources Information Center

    Tanaka, Masako N.; Massad, Carolyn E.

    The effectiveness of the CIRCUS language instruments for determining language comprehension and performance in the 4- and 5-year-old child is discussed. In these instruments, the use of content words is studied primarily through single-word measures, such as a picture vocabulary test and an auditory discrimination test, whereas the use…

  16. The development of a multimedia online language assessment tool for young children with autism.

    PubMed

    Lin, Chu-Sui; Chang, Shu-Hui; Liou, Wen-Ying; Tsai, Yu-Show

    2013-10-01

    This study aimed to provide early childhood special education professionals with a standardized and comprehensive language assessment tool for the early identification of language learning characteristics (e.g., hyperlexia) of young children with autism. We used computer technology to develop a multimedia online language assessment tool that presents auditory or visual stimuli. This online comprehensive language assessment consists of six subtests: decoding, homographs, auditory vocabulary comprehension, visual vocabulary comprehension, auditory sentence comprehension, and visual sentence comprehension. Three hundred typically developing children and 35 children with autism, aged 4-6, from Tao-Yuan County in Taiwan participated in this study. The Cronbach α values of the six subtests ranged from .64 to .97. The variance explained by the six subtests ranged from 14% to 56%, the concurrent validity of each subtest with the Peabody Picture Vocabulary Test-Revised ranged from .21 to .45, and the predictive validity of each subtest with the WISC-III ranged from .47 to .75. The tool also differentiated children with autism with up to 92% accuracy. These results indicate that the assessment tool has adequate reliability and validity. Additionally, all 35 children with autism completed the entire assessment without exhibiting any severely problematic behaviors. However, future research is needed to increase the sample size of both typically developing children and young children with autism and to overcome the technical challenges associated with internet issues. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    ERIC Educational Resources Information Center

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  18. Nonverbal auditory agnosia with lesion to Wernicke's area.

    PubMed

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  19. Sentence level auditory comprehension treatment program for aphasic adults.

    PubMed

    Naeser, M A; Haas, G; Mazurski, P; Laughlin, S

    1986-06-01

    The purpose of this study was to investigate whether a newly developed sentence level auditory comprehension (SLAC) treatment program could be used to improve language comprehension test scores in adults with chronic aphasia. Results indicate that the SLAC treatment program can be used with chronic patients; performance on a standardized test (the Token Test) was improved after treatment; and improved performance could not be predicted from either anatomic CT scan lesion sites or pretreatment test scores. One advantage to the SLAC treatment program is that the patient can practice listening independently with a tape recorder device (Language Master) and earphones either in the hospital or at home.

  20. Language Processing in Children with Cochlear Implants: A Preliminary Report on Lexical Access for Production and Comprehension

    ERIC Educational Resources Information Center

    Schwartz, Richard G.; Steinman, Susan; Ying, Elizabeth; Mystal, Elana Ying; Houston, Derek M.

    2013-01-01

    In this plenary paper, we present a review of language research in children with cochlear implants along with an outline of a 5-year project designed to examine the lexical access for production and recognition. The project will use auditory priming, picture naming with auditory or visual interfering stimuli (Picture-Word Interference and…

  21. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    ERIC Educational Resources Information Center

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  22. The neural consequences of age-related hearing loss

    PubMed Central

    Peelle, Jonathan E.; Wingfield, Arthur

    2016-01-01

    During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension. PMID:27262177

  23. Linguistic and auditory temporal processing in children with specific language impairment.

    PubMed

    Fortunato-Tavares, Talita; Rocha, Caroline Nunes; Andrade, Claudia Regina Furquim de; Befi-Lopes, Débora Maria; Schochat, Eliane; Hestvik, Arild; Schwartz, Richard G

    2009-01-01

    Several studies suggest an association between specific language impairment (SLI) and deficits in auditory processing. There is evidence that children with SLI show deficits in discriminating brief stimuli. Such deficits would lead to difficulties in developing the phonological abilities necessary to map phonemes and to effectively and automatically code and decode words and sentences. However, the correlation between temporal processing (TP) and specific deficits in language disorders, such as syntactic comprehension abilities, has received little or no attention. Aim: to analyze the correlation between TP (measured with the Frequency Pattern Test, FPT) and syntactic complexity comprehension (measured with a sentence comprehension task). Sixteen children with typical language development (8;9 +/- 1;1 years) and seven children with SLI (8;1 +/- 1;2 years) participated in the study. Accuracy in both groups decreased as syntactic complexity increased (both p < 0.01). In the between-groups comparison, the performance difference on the Test of Syntactic Complexity Comprehension (TSCC) was statistically significant (p = 0.02). As expected, children with SLI performed outside reference values on the FPT. In the SLI group, correlations between TSCC and FPT were positive and higher for high syntactic complexity (r = 0.97) than for low syntactic complexity (r = 0.51). Results suggest that FPT performance is positively correlated with syntactic complexity comprehension abilities. Low FPT performance could serve as an additional indicator of deficits in complex linguistic processing. Future studies should consider, in addition to larger samples, longitudinal designs that investigate the effect of frequency-pattern auditory training on performance in high-syntactic-complexity comprehension tasks.

  24. Temporal information processing as a basis for auditory comprehension: clinical evidence from aphasic patients.

    PubMed

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with the auditory temporal-order threshold (TOT), i.e. the minimum time gap separating two successive stimuli necessary for a subject to report their temporal order correctly, thus the relation 'before-after'. Neuropsychological evidence has indicated elevated TOT values (corresponding to deteriorated time perception) in different clinical groups, such as aphasic patients, dyslexic subjects, and children with specific language impairment. To test relationships between elevated TOT and declined cognitive functions in brain-injured patients suffering from post-stroke aphasia, we tested 30 aphasic patients (13 male, 17 female), aged between 50 and 81 years. TIP assessment comprised the TOT. Auditory comprehension was assessed with selected language tests, i.e. the Token Test, Phoneme Discrimination Test (PDT), and Voice-Onset-Time (VOT) Test, while two aspects of attentional resources (i.e. alertness and vigilance) were measured using the Test of Attentional Performance (TAP) battery. Significant correlations were found between elevated TOT values and deteriorated performance on all applied language tests. Moreover, significant correlations were evidenced between elevated TOT and alertness. Finally, positive correlations were found between particular language tests, i.e. (1) Token Test and PDT; (2) Token Test and VOT Test; and (3) PDT and VOT Test, as well as between PDT and both attentional tasks. These results provide further clinical evidence supporting the thesis that TIP constitutes a core process incorporated in both language and attentional resources. The novel value of the present study is that it demonstrates, for the first time in Slavic language users, a clear coexistence of the 'timing-auditory comprehension-attention' relationships. © 2015 Royal College of Speech and Language Therapists.

  5. Unique Auditory Language-Learning Needs of Hearing-Impaired Children: Implications for Intervention.

    ERIC Educational Resources Information Center

    Johnson, Barbara Ann; Paterson, Marietta M.

    Twenty-seven hearing-impaired young adults with hearing potentially usable for language comprehension and a history of speech-language therapy participated in this study of training in the use of residual hearing for learning spoken language. Evaluation of their recalled therapy experiences indicated that listening to spoken language did…

  6. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but rather part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Facilitating Comprehension and Processing of Language in Classroom and Clinic.

    ERIC Educational Resources Information Center

    Lasky, Elaine Z.

    A speech/language remediation-intervention model is proposed to enhance processing of auditory information in students with language or learning disabilities. Such children have difficulty attending to language signals (verbal and nonverbal responses ranging from facial expressions and gestures to those requiring the generation of complex…

  8. Hearing loss in older adults affects neural systems supporting speech comprehension.

    PubMed

    Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-08-31

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.

  9. Hearing loss in older adults affects neural systems supporting speech comprehension

    PubMed Central

    Peelle, Jonathan E.; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-01-01

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging (fMRI) to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry (VBM), demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task. PMID:21880924

  10. Cognitive Process in Second Language Reading: Transfer of L1 Reading Skills and Strategies.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1988-01-01

    Experiments with skilled readers (N=83) from four native-language orthographic backgrounds examined the effects of: (1) blocked visual or auditory information on lexical decision-making; and (2) heterographic homophones on reading comprehension. Native and second language transfer does occur in second language reading, and orthographic structure…

  11. Lévy-like diffusion in eye movements during spoken-language comprehension.

    PubMed

    Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
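    The standard deviation analysis mentioned above can be sketched with the standard library alone: measure how the spread of displacements grows with time lag and fit the scaling exponent on log-log axes. This is a minimal sketch run on simulated Brownian (Gaussian-increment) motion as a baseline, where the exponent should sit near 0.5; an exponent reliably above 0.5, as the study reports for gaze trajectories, would indicate Lévy-like hyperdiffusion. Diffusion entropy analysis and the maximum-likelihood comparison are not shown.

```python
import math
import random

def sd_exponent(steps, max_lag=64):
    """Standard deviation analysis: fit delta in SD(lag) ~ lag**delta
    by log-log least squares over dyadic lags, where SD(lag) is the
    standard deviation of displacements spanning that lag."""
    pos = [0.0]
    for s in steps:                      # cumulative trajectory
        pos.append(pos[-1] + s)
    xs, ys = [], []
    lag = 1
    while lag <= max_lag:
        disp = [pos[i + lag] - pos[i] for i in range(len(steps) - lag + 1)]
        m = sum(disp) / len(disp)
        sd = math.sqrt(sum((d - m) ** 2 for d in disp) / len(disp))
        xs.append(math.log(lag))
        ys.append(math.log(sd))
        lag *= 2
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Baseline: ordinary Brownian motion, built from Gaussian increments.
rng = random.Random(1)
brownian = [rng.gauss(0.0, 1.0) for _ in range(20000)]
```

    For heavy-tailed (Lévy-like) increments the sample standard deviation is unstable, which is why the study pairs this analysis with diffusion entropy analysis.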

  12. Lévy-like diffusion in eye movements during spoken-language comprehension

    NASA Astrophysics Data System (ADS)

    Stephen, Damian G.; Mirman, Daniel; Magnuson, James S.; Dixon, James A.

    2009-05-01

    This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.

  13. Auditory integration training for children with autism: no behavioral benefits detected.

    PubMed

    Mudford, O C; Cross, B A; Breen, S; Cullen, C; Reeves, D; Gould, J; Douglas, J

    2000-03-01

    Auditory integration training and a control treatment were provided for 16 children with autism in a crossover experimental design. Measures, blind to treatment order, included parent and teacher ratings of behavior, direct observational recordings, IQ, language, and social/adaptive tests. Significant differences tended to show that the control condition was superior on parent-rated measures of hyperactivity and on direct observational measures of ear-occlusion. No differences were detected on teacher-rated measures. Children's IQs and language comprehension did not increase, but adaptive/social behavior scores and expressive language quotients decreased. The majority of parents (56%) were unable to report in retrospect when their child had received auditory integration training. No individual child was identified as benefiting clinically or educationally from the treatment.

  14. Dialect Usage as a Factor in Developmental Language Performance of Primary Grade School Children.

    ERIC Educational Resources Information Center

    Levine, Madlyn A.; Hanes, Michael L.

    This study investigated the relationship between dialect usage and performance on four language tasks designed to reflect features developmental in nature: articulation, grammatical closure, auditory discrimination, and sentence comprehension. Predictor and criterion language tasks were administered to 90 kindergarten, first-, and second-grade…

  15. The development of sentence interpretation: effects of perceptual, attentional and semantic interference.

    PubMed

    Leech, Robert; Aydelott, Jennifer; Symons, Germaine; Carnevale, Julia; Dick, Frederic

    2007-11-01

    How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5-17) and adults' (ages 18-51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects' syntactic comprehension and their word reading efficiency and general 'speed of processing'. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.

  16. Lesion characteristics driving right-hemispheric language reorganization in congenital left-hemispheric brain damage.

    PubMed

    Lidzba, Karen; de Haan, Bianca; Wilke, Marko; Krägeloh-Mann, Ingeborg; Staudt, Martin

    2017-10-01

    Pre- or perinatally acquired ("congenital") left-hemispheric brain lesions can be compensated for by reorganizing language into homotopic brain regions in the right hemisphere. Language comprehension may be hemispherically dissociated from language production. We investigated the lesion characteristics driving inter-hemispheric reorganization of language comprehension and language production in 19 patients (7-32 years; eight females) with congenital left-hemispheric brain lesions (periventricular lesions [n=11] and middle cerebral artery infarctions [n=8]) by fMRI. 16/17 patients demonstrated reorganized language production, while 7/19 patients had reorganized language comprehension. Lesions to the insular cortex and the temporo-parietal junction (predominantly supramarginal gyrus) were significantly more common in patients in whom both language production and comprehension were reorganized. These areas belong to the dorsal stream of the language network, participating in the auditory-motor integration of language. Our data suggest that the integrity of this stream might be crucial for normal left-lateralized language development. Copyright © 2017. Published by Elsevier Inc.

  17. Working Memory for Patterned Sequences of Auditory Objects in a Songbird

    ERIC Educational Resources Information Center

    Comins, Jordan A.; Gentner, Timothy Q.

    2010-01-01

    The capacity to remember sequences is critical to many behaviors, such as navigation and communication. Adult humans readily recall the serial order of auditory items, and this ability is commonly understood to support, in part, the speech processing for language comprehension. Theories of short-term serial recall posit either use of absolute…

  18. Bilingualism influences inhibitory control in auditory comprehension

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Bilinguals have been shown to outperform monolinguals at suppressing task-irrelevant information. The present study aimed to identify how processing linguistic ambiguity during auditory comprehension may be associated with inhibitory control. Monolinguals and bilinguals listened to words in their native language (English) and identified them among four pictures while their eye-movements were tracked. Each target picture (e.g., hamper) appeared together with a similar-sounding within-language competitor picture (e.g., hammer) and two neutral pictures. Following each eye-tracking trial, priming probe trials indexed residual activation of target words, and residual inhibition of competitor words. Eye-tracking showed similar within-language competition across groups; priming showed stronger competitor inhibition in monolinguals than in bilinguals, suggesting differences in how inhibitory control was used to resolve within-language competition. Notably, correlation analyses revealed that inhibition performance on a nonlinguistic Stroop task was related to linguistic competition resolution in bilinguals but not in monolinguals. Together, monolingual-bilingual comparisons suggest that cognitive control mechanisms can be shaped by linguistic experience. PMID:21159332

  19. Anatomical Substrates of Visual and Auditory Miniature Second-language Learning

    PubMed Central

    Newman-Norlund, Roger D.; Frey, Scott H.; Petitto, Laura-Ann; Grafton, Scott T.

    2007-01-01

    Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent proficiency levels. Brain activity during sentence comprehension while hearing/viewing stimuli was assessed at low, medium, and high levels of proficiency in three separate fMRI sessions. Activation in the left inferior frontal gyrus (Broca’s area) correlated positively with improving L2 proficiency, whereas activity in the right-hemisphere (RH) homologue was negatively correlated for both auditory and visual forms of the language. Activity in sequence learning areas including the premotor cortex and putamen also correlated with L2 proficiency. Modality-specific differences in the blood oxygenation level-dependent signal accompanying L2 acquisition were localized to the planum temporale (PT). Participants learning the auditory form exhibited decreasing reliance on bilateral PT sites across sessions. In the visual form, bilateral PT sites increased in activity between Session 1 and Session 2, then decreased in left PT activity from Session 2 to Session 3. Comparison of L2 laterality (as compared to L1 laterality) in auditory and visual groups failed to demonstrate greater RH lateralization for the visual versus auditory L2. These data establish a common role for Broca’s area in language acquisition irrespective of the perceptual form of the language and suggest that L2s are processed similar to first languages even when learned after the ‘‘critical period.’’ The right frontal cortex was not preferentially recruited by visual language after accounting for phonetic/structural complexity and performance. PMID:17129186

  20. Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition

    PubMed Central

    Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John

    2013-01-01

    BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition. PMID:24961428
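    A full MANCOVA of the kind used above requires a statistics package, but the core idea the abstract stresses, adjusting an effect estimate for a language-proficiency covariate, can be sketched with ordinary least squares via residualisation (the Frisch-Waugh device). All variable names below (`prof`, `signal`, `bold`) and the simulated effect sizes are hypothetical illustrations, not the study's data.

```python
import random

def slope(x, y):
    """Ordinary least-squares slope of y on x (single predictor)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

def adjusted_slope(x, y, cov):
    """Effect of x on y adjusted for one covariate: regress x and y on
    the covariate separately, then regress the y-residuals on the
    x-residuals (Frisch-Waugh residualisation)."""
    bx, by = slope(cov, x), slope(cov, y)
    mc = sum(cov) / len(cov)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    rx = [a - (mx + bx * (c - mc)) for a, c in zip(x, cov)]
    ry = [b - (my + by * (c - mc)) for b, c in zip(y, cov)]
    return slope(rx, ry)

# Synthetic data: the outcome depends on the predictor (true effect 2.0)
# and on proficiency (effect 3.0); because the predictor is itself
# correlated with proficiency, the naive slope is inflated.
rng = random.Random(2)
prof = [rng.gauss(0.0, 1.0) for _ in range(5000)]
signal = [p + rng.gauss(0.0, 1.0) for p in prof]
bold = [2.0 * s + 3.0 * p + rng.gauss(0.0, 0.1)
        for s, p in zip(signal, prof)]
```

    Running `adjusted_slope(signal, bold, prof)` recovers an estimate near the true effect of 2.0, while the naive `slope(signal, bold)` is biased upward; this is the sense in which covariate proficiency data change the interpretation of imaging results.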

  1. Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition.

    PubMed

    Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John

    2013-05-28

    BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition.

  2. Spontaneous Language Production of Italian Children with Cochlear Implants and Their Mothers in Two Interactive Contexts

    ERIC Educational Resources Information Center

    Majorano, Marinella; Guidotti, Laura; Guerzoni, Letizia; Murri, Alessandra; Morelli, Marika; Cuda, Domenico; Lavelli, Manuela

    2018-01-01

    Background: In recent years many studies have shown that the use of cochlear implants (CIs) improves children's skills in processing the auditory signal and, consequently, the development of both language comprehension and production. Nevertheless, many authors have also reported that the development of language skills in children with CIs is…

  3. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    ERIC Educational Resources Information Center

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  4. A New Measure for Assessing the Contributions of Higher Level Processes to Language Comprehension Performance in Preschoolers

    ERIC Educational Resources Information Center

    Hannon, Brenda; Frias, Sarah

    2012-01-01

    The present study reports the development of a theoretically motivated measure that provides estimates of a preschooler's ability to recall auditory text, to make text-based inferences, to access knowledge from long-term memory, and to integrate this accessed knowledge with new information from auditory text. This new preschooler component…

  5. Functional significance of the electrocorticographic auditory responses in the premotor cortex.

    PubMed

    Tanji, Kazuyo; Sakurada, Kaori; Funiu, Hayato; Matsuda, Kenichiro; Kayama, Takamasa; Ito, Sayuri; Suzuki, Kyoko

    2015-01-01

    In addition to the well-known motor activities in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus, with electrocorticographic recordings while she performed a verb generation task during awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it was proposed that sensory representations are used to guide motor-articulatory processes.

  6. On pure word deafness, temporal processing, and the left hemisphere.

    PubMed

    Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean

    2005-07-01

    Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.

  7. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. 
The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.
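    The multiple-deviant oddball paradigm described above can be sketched as a simple sequence generator. This is a minimal sketch under common MMN conventions (mostly standard tokens, several deviant types, no two deviants back to back); the deviant labels and probability below are assumptions for illustration, not the study's actual stimulus design.

```python
import random

def oddball_sequence(n_trials=400, deviants=("dev_perceptible", "dev_subtle"),
                     p_deviant=0.2, seed=3):
    """Generate a multiple-deviant oddball trial sequence: standards
    interspersed with deviants drawn from several types, with each
    deviant preceded by at least one standard (no adjacent deviants),
    as MMN paradigms typically require."""
    rng = random.Random(seed)
    seq = ["standard"]
    while len(seq) < n_trials:
        # A deviant may only follow a standard; otherwise insert a standard.
        if seq[-1] == "standard" and rng.random() < p_deviant:
            seq.append(rng.choice(deviants))
        else:
            seq.append("standard")
    return seq
```

    Because deviants are blocked after deviants, the realised deviant rate comes out slightly below `p_deviant` (about p/(1+p) of trials).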

  8. Tactile Aid Usage with Young Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Proctor, Adele

    Five hearing impaired children (2 to 4 years old) were followed longitudinally while using a single channel, vibrotactile aid as a supplement to hearing aids. Standardized language tests (including the Scales of Early Communication Skills for Hearing Impaired Children, the Test for Auditory Comprehension of Language, and the Test for Auditory…

  9. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    PubMed Central

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  10. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    PubMed

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  11. PRESCHOOL SPEECH ARTICULATION AND NONWORD REPETITION ABILITIES MAY HELP PREDICT EVENTUAL RECOVERY OR PERSISTENCE OF STUTTERING

    PubMed Central

    Spencer, Caroline; Weber-Fox, Christine

    2014-01-01

    Purpose In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods Participants included 65 children: 25 children who do not stutter (CWNS) and 40 children who stutter (CWS), recruited between ages 3;9 and 5;8. At initial testing, participants were administered the Test for Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
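
    The binary logistic regression used above to predict recovery status can be sketched in miniature. The scores below are hypothetical standardized values (not the study's data), and the hand-rolled gradient-descent fit stands in for what a statistics package would do:

```python
import math

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit p(y=1) = sigmoid(w0 + w1*x1 + w2*x2) by stochastic
    gradient descent on the log-loss."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), y in zip(xs, ys):
            z = w[0] + w[1] * x1 + w[2] * x2
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of log-loss w.r.t. z
            w[0] -= lr * err
            w[1] -= lr * err * x1
            w[2] -= lr * err * x2
    return w

def predict(w, x1, x2):
    z = w[0] + w[1] * x1 + w[2] * x2
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standardized (articulation, nonword-repetition) scores;
# label 1 = eventual recovery, 0 = persistence. Illustrative only.
xs = [(-1.0, -1.2), (-0.8, -0.5), (-1.3, -0.9), (-0.2, -0.4),
      (0.9, 1.1), (0.7, 0.4), (1.2, 0.8), (0.3, 0.6)]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
w = train_logistic(xs, ys)
```

    After fitting, `predict(w, x1, x2)` returns an estimated recovery probability; higher articulation and nonword-repetition scores push the prediction toward recovery, mirroring the direction of the reported result.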

  12. [Agraphia and preservation of music writing in a bilingual piano teacher].

    PubMed

    Assal, G; Buttet, J

    1983-01-01

    A bilingual virtuoso piano teacher developed aphasia and amusia, probably due to cerebral embolism. The perfectly demarcated and unique lesion was located in the left posterior temporoparietal region. Language examinations in French and Italian demonstrated entirely comparable difficulties in both languages. The linguistic course was favorable after a period of auditory agnosia and global aphasia. Language became fluent again 3 months after the onset, with a marked vocabulary loss and phonemic paraphasias with attempts at self-correction. Repetition was altered markedly with a deficit in auditory comprehension but no remaining elements of auditory agnosia. Reading was possible, but with some difficulty and total agraphia and acalculia persisted. Musical ability was better conserved, particularly with respect to repetition and above all to writing, the sparing of the latter constituting a fairly uncommon dissociation in relation to agraphia. Findings are discussed in relation to data in the literature concerning hemispheric participation in various musical tasks.

  13. Auditory evoked fields predict language ability and impairment in children.

    PubMed

    Oram Cardy, Janis E; Flagg, Elissa J; Roberts, Wendy; Roberts, Timothy P L

    2008-05-01

    Recent evidence suggests that a subgroup of children with autism shows similarities to children with Specific Language Impairment (SLI) in the pattern of their linguistic impairments, but the source of this overlap is unclear. We examined the ability of auditory evoked magnetic fields to predict language and other developmental abilities in children and adolescents. Following standardized assessment of language ability, nonverbal IQ, and autism-associated behaviors, 110 trials of a tone were binaurally presented to 45 7- to 18-year-olds who had typical development, autism (with LI), Asperger Syndrome (i.e., without LI), or SLI. Using a 151-channel MEG system, latency of left hemisphere (LH) and right hemisphere (RH) auditory M50 and M100 peaks was recorded. RH M50 latency (and to a lesser extent, RH M100 latency) predicted overall oral language ability, accounting for 36% of the variance. Nonverbal IQ and autism behavior ratings were not predicted by any of the evoked fields. Latency of the RH M50 was the best predictor of clinical LI (i.e., irrespective of autism diagnosis), and demonstrated 82% accuracy in predicting Receptive LI; a cutoff of 84.6 ms achieved 92% specificity and 70% sensitivity in classifying children with and without Receptive LI. Auditory evoked responses appear to reflect language functioning and impairment rather than non-specific brain (dys)function (e.g., IQ, behavior). RH M50 latency proved to be a relatively useful indicator of impaired language comprehension, suggesting that delayed auditory perceptual processing in the RH may be a key neural dysfunction underlying the overlap between subgroups of children with autism and SLI.
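
    The reported cutoff analysis (84.6 ms yielding 92% specificity and 70% sensitivity) boils down to thresholding a latency and tabulating the resulting confusion matrix. A minimal sketch, with hypothetical latencies and impairment labels rather than the study's data:

```python
def classify_by_latency(latencies_ms, cutoff_ms=84.6):
    """Flag impairment when M50 latency exceeds the cutoff."""
    return [lat > cutoff_ms for lat in latencies_ms]

def sensitivity_specificity(predicted, actual):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical latencies (ms) and true impairment labels, illustrative only
latencies = [78.0, 92.3, 86.1, 70.5, 95.0, 83.0, 88.2, 76.4]
impaired  = [False, True, True, False, True, True, True, False]
pred = classify_by_latency(latencies)
sens, spec = sensitivity_specificity(pred, impaired)
```

    In this toy set one impaired child falls below the cutoff (a false negative), so sensitivity is 0.8 while specificity stays at 1.0; the study's 70%/92% trade-off arises the same way at scale.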

  14. Parallel language activation and inhibitory control in bimodal bilinguals.

    PubMed

    Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen

    2015-08-01

    Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Predicting Longitudinal Change in Language Production and Comprehension in Individuals with Down Syndrome: Hierarchical Linear Modeling.

    ERIC Educational Resources Information Center

    Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.

    2002-01-01

    Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory.

  16. Brain potentials reveal unconscious translation during foreign-language comprehension.

    PubMed

    Thierry, Guillaume; Wu, Yan Jing

    2007-07-24

    Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial "dual-language" context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese-English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware of the fact that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension.

  17. A FUNCTIONAL NEUROIMAGING INVESTIGATION OF THE ROLES OF STRUCTURAL COMPLEXITY AND TASK-DEMAND DURING AUDITORY SENTENCE PROCESSING

    PubMed Central

    Love, Tracy; Haist, Frank; Nicol, Janet; Swinney, David

    2009-01-01

    Using functional magnetic resonance imaging (fMRI), this study directly examined an issue that bridges the potential language processing and multi-modal views of the role of Broca’s area: the effects of task-demands in language comprehension studies. We presented syntactically simple and complex sentences for auditory comprehension under three different (differentially complex) task-demand conditions: passive listening, probe verification, and theme judgment. Contrary to many language imaging findings, we found that both simple and complex syntactic structures activated left inferior frontal cortex (L-IFC). Critically, we found activation in these frontal regions increased together with increased task-demands. Specifically, tasks that required greater manipulation and comparison of linguistic material recruited L-IFC more strongly, independent of syntactic structure complexity. We argue that much of the presumed syntactic effects previously found in sentence imaging studies of L-IFC may, among other things, reflect the tasks employed in these studies and that L-IFC is a region underlying mnemonic and other integrative functions, on which much language processing may rely. PMID:16881268

  18. Bilingualism influences inhibitory control in auditory comprehension.

    PubMed

    Blumenfeld, Henrike K; Marian, Viorica

    2011-02-01

    Bilinguals have been shown to outperform monolinguals at suppressing task-irrelevant information. The present study aimed to identify how processing linguistic ambiguity during auditory comprehension may be associated with inhibitory control. Monolinguals and bilinguals listened to words in their native language (English) and identified them among four pictures while their eye-movements were tracked. Each target picture (e.g., hamper) appeared together with a similar-sounding within-language competitor picture (e.g., hammer) and two neutral pictures. Following each eye-tracking trial, priming probe trials indexed residual activation of target words, and residual inhibition of competitor words. Eye-tracking showed similar within-language competition across groups; priming showed stronger competitor inhibition in monolinguals than in bilinguals, suggesting differences in how inhibitory control was used to resolve within-language competition. Notably, correlation analyses revealed that inhibition performance on a nonlinguistic Stroop task was related to linguistic competition resolution in bilinguals but not in monolinguals. Together, monolingual-bilingual comparisons suggest that cognitive control mechanisms can be shaped by linguistic experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Simultaneous perception of a spoken and a signed language: The brain basis of ASL-English code-blends

    PubMed Central

    Weisberg, Jill; McCullough, Stephen; Emmorey, Karen

    2018-01-01

    Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161

  20. Racial-Ethnic Differences in Word Fluency and Auditory Comprehension Among Persons With Poststroke Aphasia.

    PubMed

    Ellis, Charles; Peach, Richard K

    2017-04-01

    To examine aphasia outcomes and to determine whether the observed language profiles vary by race-ethnicity. Retrospective cross-sectional study using a convenience sample of persons with aphasia (PWA) obtained from AphasiaBank, a database designed for the study of aphasia outcomes. Aphasia research laboratories. PWA (N=381; 339 white and 42 black individuals). Not applicable. Western Aphasia Battery-Revised (WAB-R) total scale score (Aphasia Quotient) and subtest scores were analyzed for racial-ethnic differences. The WAB-R is a comprehensive assessment of communication function designed to evaluate PWA in the areas of spontaneous speech, auditory comprehension, repetition, and naming in addition to reading, writing, apraxia, and constructional, visuospatial, and calculation skills. In univariate comparisons, black PWA exhibited lower word fluency (5.7 vs 7.6; P=.004), auditory word comprehension (49.0 vs 53.0; P=.021), and comprehension of sequential commands (44.2 vs 52.2; P=.012) when compared with white PWA. In multivariate comparisons, adjusted for age and years of education, black PWA exhibited lower word fluency (5.5 vs 7.6; P=.015), auditory word recognition (49.3 vs 53.3; P=.02), and comprehension of sequential commands (43.7 vs 53.2; P=.017) when compared with white PWA. This study identified racial-ethnic differences in word fluency and auditory comprehension ability among PWA. Both skills are critical to effective communication, and racial-ethnic differences in outcomes must be considered in treatment approaches designed to improve overall communication ability. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  2. Neuronal basis of speech comprehension.

    PubMed

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion on interactions between the dorsal and ventral streams, particularly the involvement of motor related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Bilingual language processing after a lesion in the left thalamic and temporal regions. A case report with early childhood onset

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Lieshout, P.; Renier, W.; Eling, P.

    1990-02-01

    This case study concerns an 18-year-old bilingual girl who suffered a radiation lesion in the left (dominant) thalamic and temporal region when she was 4 years old. Language and memory assessment revealed deficits in auditory short-term memory, auditory word comprehension, nonword repetition, syntactic processing, word fluency, and confrontation naming tasks. Both languages (English and Dutch) were found to be affected in a similar manner, despite the fact that one language (English) was acquired before and the other (Dutch) after the period of lesion onset. Most of the deficits appear to be related to verbal (short-term) memory dysfunction. Several hypotheses of subcortical involvement in memory processes are discussed with reference to existing theories in this area.

  4. Training-induced brain plasticity in aphasia.

    PubMed

    Musso, M; Weiller, C; Kiebel, S; Müller, S P; Bülau, P; Rijntjes, M

    1999-09-01

    It has long been a matter of debate whether recovery from aphasia after left perisylvian lesions is mediated by the preserved left hemispheric language zones or by the homologous right hemisphere regions. Using PET, we investigated the short-term changes in the cortical network involved in language comprehension during recovery from aphasia. In 12 consecutive measurements of regional cerebral blood flow (rCBF), four patients with Wernicke's aphasia, caused by a posterior left middle cerebral artery infarction, were tested with a language comprehension task. Comprehension was estimated directly after each scan with a modified version of the Token Test. In the interval between the scans, the patients participated in brief, intense language comprehension training. A significant improvement in performance was observed in all patients. We correlated changes in blood flow measured during the language comprehension task with the scores achieved in the Token Test. The regions which best correlated with the training-induced improvement in verbal comprehension were the posterior part of the right superior temporal gyrus and the left precuneus. This study supports the role of the right hemisphere in recovery from aphasia and demonstrates that the improvement in auditory comprehension induced by specific training is associated with functional brain reorganization.

  5. Can very early music interventions promote at-risk infants' development?

    PubMed

    Virtala, Paula; Partanen, Eino

    2018-04-30

    Music and musical activities are often a natural part of parenting. As accumulating evidence shows, music can promote auditory and language development in infancy and early childhood. It may even help to support auditory and language skills in infants whose development is compromised by heritable conditions, like the reading deficit dyslexia, or by environmental factors, such as premature birth. For example, infants born to dyslexic parents can have atypical brain responses to speech sounds and subsequent challenges in language development. Children born very preterm, in turn, have an increased likelihood of sensory, cognitive, and motor deficits. To ameliorate these deficits, we have developed early interventions focusing on music. Preliminary results of our ongoing longitudinal studies suggest that music making and parental singing promote infants' early language development and auditory neural processing. Together with previous findings in the field, the present studies highlight the role of active, social music making in supporting auditory and language development in at-risk children and infants. Once completed, the studies will illuminate both risk and protective factors in development and offer a comprehensive model of understanding the promises of music activities in promoting positive developmental outcomes during the first years of life. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals Inc. on behalf of The New York Academy of Sciences.

  6. Language in Context: MEG Evidence for Modality-General and -Specific Responses to Reference Resolution

    PubMed Central

    2016-01-01

    Abstract Successful language comprehension critically depends on our ability to link linguistic expressions to the entities they refer to. Without reference resolution, newly encountered language cannot be related to previously acquired knowledge. The human experience includes many different types of referents, some visual, some auditory, some very abstract. Does the neural basis of reference resolution depend on the nature of the referents, or do our brains use a modality-general mechanism for linking meanings to referents? Here we report evidence for both. Using magnetoencephalography (MEG), we varied both the modality of referents, which consisted either of visual or auditory objects, and the point at which reference resolution was possible within sentences. Source-localized MEG responses revealed brain activity associated with reference resolution that was independent of the modality of the referents, localized to the medial parietal lobe and starting ∼415 ms after the onset of reference resolving words. A modality-specific response to reference resolution in auditory domains was also found, in the vicinity of auditory cortex. Our results suggest that referential language processing cannot be reduced to processing in classical language regions and representations of the referential domain in modality-specific neural systems. Instead, our results suggest that reference resolution engages medial parietal cortex, which supports a mechanism for referential processing regardless of the content modality. PMID:28058272

  7. A decrease in brain activation associated with driving when listening to someone speak.

    PubMed

    Just, Marcel Adam; Keller, Timothy A; Cynkar, Jacquelyn

    2008-04-18

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual-task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone.

  8. A Decrease in Brain Activation Associated with Driving When Listening to Someone Speak

    PubMed Central

    Just, Marcel Adam; Keller, Timothy A.; Cynkar, Jacquelyn

    2009-01-01

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory language comprehension on the brain activity associated with a simulated driving task. Participants steered a vehicle along a curving virtual road, either undisturbed or while listening to spoken sentences that they judged as true or false. The dual task condition produced a significant deterioration in driving accuracy caused by the processing of the auditory sentences. At the same time, the parietal lobe activation associated with spatial processing in the undisturbed driving task decreased by 37% when participants concurrently listened to sentences. The findings show that language comprehension performed concurrently with driving draws mental resources away from the driving and produces deterioration in driving performance, even when it does not require holding or dialing a phone. PMID:18353285

  9. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    PubMed

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  10. Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

    PubMed

    Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F

    2017-08-16

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.
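
    Using "low-level sound features to model neural responses," as this record describes, is at its core a linear encoding model fit by least squares. A single-feature sketch on synthetic data (the feature name and all values are illustrative, not from the study):

```python
def fit_encoding_model(feature, response):
    """Least-squares fit of the encoding model
    response[t] ≈ a + b * feature[t]
    via the closed-form simple-regression solution."""
    n = len(feature)
    mf = sum(feature) / n
    mr = sum(response) / n
    b = (sum((f - mf) * (r - mr) for f, r in zip(feature, response))
         / sum((f - mf) ** 2 for f in feature))
    a = mr - b * mf
    return a, b

# Hypothetical sound-envelope time course and a noiseless synthetic
# "neural response" constructed as response = 2 + 1.5 * envelope
envelope = [0.0, 0.3, 0.8, 1.0, 0.6, 0.2, 0.1, 0.5]
response = [2.0 + 1.5 * e for e in envelope]
a, b = fit_encoding_model(envelope, response)
```

    Real encoding models use many features (spectrogram bands, temporal modulations) and regularized regression, but the principle is the same: the fitted weights describe how strongly each stimulus feature drives the measured response.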

  11. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    PubMed Central

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing defined as a disability to recognize spoken languages and/or nonverbal environmental sounds and music despite adequate hearing while spontaneous speech, reading and writing are preserved. Usually, either the bilateral or unilateral temporal lobe, especially the transverse gyral lesions, are responsible for auditory agnosia. Subcortical lesions without cortical damage rarely causes auditory agnosia. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was not able to repeat or dictate but to perform fluent and comprehensible speech. He could understand and read written words and phrases. His auditory brainstem evoked potential and audiometry were intact. This case suggested that the subcortical lesion involving unilateral acoustic radiation could cause generalized auditory agnosia. PMID:23342322

  12. Neural Mechanism Underlying Comprehension of Narrative Speech and Its Heritability: Study in a Large Population.

    PubMed

    Babajani-Feremi, Abbas

    2017-09-01

    Comprehension of narratives constitutes a fundamental part of our everyday life experience. Although the neural mechanism of auditory narrative comprehension has been investigated in some studies, the neural correlates underlying this mechanism and its heritability remain poorly understood. We investigated comprehension of naturalistic speech in a large, healthy adult population (n = 429; 176/253 M/F; 22-36 years of age) consisting of 192 twins (49 monozygotic and 47 dizygotic pairs) and 237 of their siblings. We used high-quality functional MRI datasets from the Human Connectome Project (HCP), in which a story-based paradigm was utilized for auditory narrative comprehension. Our results revealed that narrative comprehension was associated with activation of the classical language regions, including superior temporal gyrus (STG), middle temporal gyrus (MTG), and inferior frontal gyrus (IFG) in both hemispheres, though STG and MTG were activated symmetrically and activation in IFG was left-lateralized. Our results further showed that narrative comprehension was associated with activation in areas beyond the classical language regions, e.g. medial superior frontal gyrus (SFGmed), middle frontal gyrus (MFG), and supplementary motor area (SMA). Of subcortical structures, only the hippocampus was involved. The results of heritability analysis revealed that oral reading recognition and picture vocabulary comprehension were significantly heritable (h² > 0.56, p < 10⁻¹³). In addition, the extent of activation of five areas in the left hemisphere (STG, IFG pars opercularis, SFGmed, SMA, and precuneus) and of one area in the right hemisphere (MFG) was significantly heritable (h² > 0.33, p < 0.0004). The current study, to the best of our knowledge, is the first to investigate auditory narrative comprehension and its heritability in a large healthy population. 
Given the excellent quality of the HCP data, our results help clarify the functional contributions of linguistic and extra-linguistic cortices during narrative comprehension.
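The heritability figures above come from a twin-plus-sibling design. The abstract does not name the estimator used, so as a hedged illustration only, here is a minimal sketch of Falconer's classic twin formula, h² = 2(r_MZ − r_DZ); the function name and correlation values are invented and are not from the study.

```python
# Hedged sketch: Falconer's formula estimates heritability from the
# intraclass correlations of monozygotic (r_MZ) and dizygotic (r_DZ)
# twin pairs. The abstract does not specify its estimator; this is a
# classic textbook approximation, not the study's method.

def falconer_h2(r_mz: float, r_dz: float) -> float:
    """h2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Invented correlations, purely for illustration:
h2 = falconer_h2(r_mz=0.70, r_dz=0.42)
print(round(h2, 2))  # -> 0.56
```

Modern analyses of samples like the HCP typically fit variance-components models instead, but the quantity being estimated is the same h².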

  13. Reading comprehension in Parkinson's disease.

    PubMed

    Murray, Laura L; Rutledge, Stefanie

    2014-05-01

    Although individuals with Parkinson's disease (PD) self-report reading problems and experience difficulties in cognitive-linguistic functions that support discourse-level reading, prior research has primarily focused on sentence-level processing and auditory comprehension. Accordingly, the authors investigated the presence and nature of reading comprehension deficits in PD, hypothesizing that (a) individuals with PD would display impaired accuracy and/or speed on reading comprehension tests and (b) reading performances would be correlated with cognitive test results. Eleven adults with PD and 9 age- and education-matched control participants completed tests that evaluated reading comprehension; general language and cognitive abilities; and aspects of attention, memory, and executive functioning. The PD group obtained significantly lower scores on several, but not all, reading comprehension, language, and cognitive measures. Memory, language, and disease severity were significantly correlated with reading comprehension for the PD group. Individuals in the early stages of PD without dementia or broad cognitive deficits can display reading comprehension difficulties, particularly for high- versus basic-level reading tasks. These reading difficulties are most closely related to memory, high-level language, and PD symptom severity status. The findings warrant additional research to delineate further the types and nature of reading comprehension impairments experienced by individuals with PD.

  14. The Effects of YouTube Listening/Viewing Activities on Taiwanese EFL Learners' Listening Comprehension

    ERIC Educational Resources Information Center

    Kuo, Li-Li

    2009-01-01

    Declared the year of YouTube, 2007 was hailed as bringing a technological revolution in relation to pedagogy, one that may provide more convenient access to materials for language input, such as auditory, visual, and other types of authentic resources in order to promote advancement in all four language learning skills--listening, speaking,…

  15. Disentangling syntax and intelligibility in auditory language comprehension.

    PubMed

    Friederici, Angela D; Kotz, Sonja A; Scott, Sophie K; Obleser, Jonas

    2010-03-01

    Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech. 2009 Wiley-Liss, Inc.

  16. Describing the trajectory of language development in the presence of severe-to-profound hearing loss: a closer look at children with cochlear implants versus hearing aids.

    PubMed

    Yoshinaga-Itano, Christine; Baca, Rosalinda L; Sedey, Allison L

    2010-10-01

    The objective of this investigation was to describe the language growth of children with severe or profound hearing loss with cochlear implants versus those children with the same degree of hearing loss using hearing aids. A prospective longitudinal observation and analysis. University of Colorado Department of Speech Language and Hearing Sciences. There were 87 children with severe-to-profound hearing loss from 48 to 87 months of age. All children received early intervention services through the Colorado Home Intervention Program. Most children received intervention services from a certified auditory-verbal therapist or an auditory-oral therapist and weekly sign language instruction from an instructor who was deaf or hard of hearing and native or fluent in American Sign Language. The Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, were the assessment tools for children 4 to 7 years of age. The expressive language subscale of the Minnesota Child Development was used in the infant/toddler period (birth to 36 mo). Average language estimates at 84 months of age were nearly identical to the normative sample for receptive language and 7 months delayed for expressive vocabulary. Children demonstrated a mean rate of growth from 4 years through 7 years on these 2 assessments that was equivalent to their normal-hearing peers. As a group, children with hearing aids deviated more from the age equivalent trajectory on the Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, than children with cochlear implants. When a subset of children was divided into performance categories, we found that children with cochlear implants were more likely to be "gap closers" and less likely to be "gap openers," whereas the reverse was true for the children with hearing aids for both measures. 
Children educated through an oral-aural approach combined with sign language instruction can achieve age-appropriate language levels on expressive vocabulary and receptive syntax from ages 4 through 7 years. However, from birth through 84 months it proved easier to maintain a constant rate of development than to accelerate; a constant rate characterized approximately 80% of our sample. Nevertheless, acceleration of language development is possible in some children and could result from cochlear implantation.

  17. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

    Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. 
Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174

  18. Helping Remedial Readers Master the Reading Vocabulary through a Seven Step Method.

    ERIC Educational Resources Information Center

    Aaron, Robert L.

    1981-01-01

    An outline of seven important steps for teaching vocabulary development includes components of language development, visual memory, visual-auditory perception, speeded recall, spelling, reading the word in a sentence, and word comprehension in written context. (JN)

  19. Lexical prosody beyond first-language boundary: Chinese lexical tone sensitivity predicts English reading comprehension.

    PubMed

    Choi, William; Tong, Xiuli; Cain, Kate

    2016-08-01

    This 1-year longitudinal study examined the role of Cantonese lexical tone sensitivity in predicting English reading comprehension and the pathways underlying their relation. Multiple measures of Cantonese lexical tone sensitivity, English lexical stress sensitivity, Cantonese segmental phonological awareness, general auditory sensitivity, English word reading, and English reading comprehension were administered to 133 Cantonese-English unbalanced bilingual second graders. Structural equation modeling analysis identified transfer of Cantonese lexical tone sensitivity to English reading comprehension. This transfer was realized through a direct pathway via English stress sensitivity and also an indirect pathway via English word reading. These results suggest that prosodic sensitivity is an important factor influencing English reading comprehension and that it needs to be incorporated into theoretical accounts of reading comprehension across languages. Copyright © 2016 Elsevier Inc. All rights reserved.
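The pathways in a structural equation model such as the one above combine multiplicatively: an effect carried through a mediator is the product of its standardized path coefficients. As a hedged illustration only (the abstract reports no numeric coefficients; every value below is invented), the product-of-coefficients idea can be sketched as:

```python
# Hedged sketch of the product-of-coefficients rule for a mediated path
# in a path/SEM model. All coefficients are invented for illustration;
# none come from the study.

def indirect_effect(*path_coefficients: float) -> float:
    """Product-of-coefficients estimate of one mediated pathway."""
    out = 1.0
    for c in path_coefficients:
        out *= c
    return out

# Hypothetical standardized paths:
# tone sensitivity -> English stress sensitivity -> reading comprehension
via_stress = indirect_effect(0.45, 0.38)
# tone sensitivity -> English word reading -> reading comprehension
via_word_reading = indirect_effect(0.30, 0.52)

total_mediated = via_stress + via_word_reading
print(round(total_mediated, 3))  # -> 0.327
```

Summing the two products gives the total effect transmitted through both mediators, which is how transfer across pathways is quantified in such models.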

  20. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    PubMed

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  1. Prelingual auditory-perceptual skills as indicators of initial oral language development in deaf children with cochlear implants.

    PubMed

    Pianesi, Federica; Scorpecci, Alessandro; Giannantonio, Sara; Micardi, Mariella; Resca, Alessandra; Marsella, Pasquale

    2016-03-01

    To assess when prelingually deaf children with a cochlear implant (CI) achieve the First Milestone of Oral Language, to study the progression of their prelingual auditory skills in the first year after CI and to investigate a possible correlation between such skills and the timing of initial oral language development. The sample included 44 prelingually deaf children (23 M and 21 F) from the same tertiary care institution, who received unilateral or bilateral cochlear implants. Achievement of the First Milestone of Oral Language (FMOL) was defined as speech comprehension of at least 50 words and speech production of a minimum of 10 words, as established by administration of a validated Italian test for the assessment of initial language competence in infants. Prelingual auditory-perceptual skills were assessed over time by means of a test battery consisting of: the Infant Toddler Meaningful Integration Scale (IT-MAIS); the Infant Listening Progress Profile (ILiP) and the Categories of Auditory Performance (CAP). On average, the 44 children received their CI at 24±9 months and experienced FMOL after 8±4 months of continuous CI use. The IT-MAIS, ILiP and CAP scores increased significantly over time, the greatest improvement occurring between baseline and six months of CI use. On multivariate regression analysis, age at diagnosis and age at CI did not appear to correlate with FMOL timing; instead, the only variables contributing to its variance were IT-MAIS and ILiP scores after six months of CI use, accounting for 43% and 55%, respectively. Prelingual auditory skills of implanted children, assessed via a test battery six months after CI treatment, can act as indicators of the timing of initial oral language development. 
Accordingly, the period from CI switch-on to six months can be considered a window of opportunity for appropriate intervention in children who fail to show the expected progression of auditory skills and who are at higher risk of delayed oral language development. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. A preliminary investigation of the relationship between language and gross motor skills in preschool children.

    PubMed

    Merriman, W J; Barnett, B E

    1995-12-01

    This study was undertaken to explore the relationship between language skills and gross-motor skills of 28 preschool children from two private preschools in New York City. Pearson product-moment correlation coefficients were calculated for language (revised Preschool Language Scale) and gross motor (Test of Gross Motor Development) scores. Locomotor skills were significantly related to both auditory comprehension and verbal ability while object control scores did not correlate significantly with either language score. These results were discussed in terms of previous research and with reference to dynamical systems theory. Suggestions for research were made.
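The Pearson product-moment correlation used in that analysis is straightforward to reproduce. A minimal self-contained sketch, with made-up locomotor and comprehension scores standing in for the study's data (the function and values are illustrative, not the authors'):

```python
import math

# Hedged sketch of the Pearson product-moment correlation coefficient.
# The score lists are invented; they are not the study's data.

def pearson_r(x, y):
    """r = cov(x, y) / (sd(x) * sd(y)), computed from raw scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

locomotor = [4, 7, 5, 9, 6]       # hypothetical gross-motor scores
auditory  = [52, 61, 55, 70, 58]  # hypothetical comprehension scores
print(round(pearson_r(locomotor, auditory), 3))
```

A coefficient near +1 would correspond to the significant locomotor-comprehension relation the abstract reports; in practice one would also test the coefficient against n − 2 degrees of freedom.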

  3. Accessibility of spoken, written, and sign language in Landau-Kleffner syndrome: a linguistic and functional MRI study.

    PubMed

    Sieratzki, J S; Calvert, G A; Brammer, M; David, A; Woll, B

    2001-06-01

    Landau-Kleffner syndrome (LKS) is an acquired aphasia which begins in childhood and is thought to arise from an epileptic disorder within the auditory speech cortex. Although the epilepsy usually subsides at puberty, a severe communication impairment often persists. Here we report on a detailed study of a 26-year old, left-handed male, with onset of LKS at age 5 years, who is aphasic for English but who learned British Sign Language (BSL) at age 13. We have investigated his skills in different language modalities, recorded EEGs during wakefulness, sleep, and under conditions of auditory stimulation, measured brain stem auditory-evoked potentials (BAEP), and performed functional MRI (fMRI) during a range of linguistic tasks. Our investigation demonstrated severe restrictions in comprehension and production of spoken English as well as lip-reading, while reading was comparatively less impaired. BSL was by far the most efficient mode of communication. All EEG recordings were normal, while BAEP showed minor abnormalities. fMRI revealed: 1) powerful and extensive bilateral (R > L) activation of auditory cortices in response to heard speech, much stronger than when listening to music; 2) very little response to silent lip-reading; 3) strong activation in the temporo-parieto-occipital association cortex, exclusively in the right hemisphere (RH), when viewing BSL signs. Analysis of these findings provides novel insights into the disturbance of the auditory speech cortex which underlies LKS and its diagnostic evaluation by fMRI, and underpins a strategy of restoring communication abilities in LKS through a natural sign language of the deaf (with Video)

  4. Differences between conduction aphasia and Wernicke's aphasia.

    PubMed

    Anzaki, F; Izumi, S

    2001-07-01

    Conduction aphasia and Wernicke's aphasia have been differentiated by the degree of auditory language comprehension. We quantitatively compared the speech sound errors of two conduction aphasia patients and three Wernicke's aphasia patients on various language modality tests. All of the patients were Japanese. The two conduction aphasia patients had "conduites d'approche" errors and phonological paraphasia. The patient with mild Wernicke's aphasia made various errors. In the patient with severe Wernicke's aphasia, neologism was observed. Phonological paraphasia in the two conduction aphasia patients seemed to occur when the examinee searched for the target word. They made more errors in vowels than in consonants of target words on the naming and repetition tests. They seemed to search for the target word using the correct consonant phoneme but an incorrect vocalic phoneme in the table of the Japanese syllabary. The Wernicke's aphasia patients, who had severe impairment of auditory comprehension, made more errors in consonants than in vowels of target words. In conclusion, the utterances of conduction aphasia and those of Wernicke's aphasia are qualitatively distinct.

  5. Preschool speech articulation and nonword repetition abilities may help predict eventual recovery or persistence of stuttering.

    PubMed

    Spencer, Caroline; Weber-Fox, Christine

    2014-09-01

    In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Participants were 65 children: 25 who do not stutter (CWNS) and 40 who stutter (CWS), recruited at ages 3;9-5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. 
At the end of this activity the reader will be able to: (1) describe the current status of nonlinguistic and linguistic predictors for recovery and persistence of stuttering; (2) summarize current evidence regarding the potential value of consonant cluster articulation and nonword repetition abilities in helping to predict stuttering outcome in preschool children; (3) discuss the current findings in relation to potential implications for theories of developmental stuttering; (4) discuss the current findings in relation to potential considerations for the evaluation and treatment of developmental stuttering. Copyright © 2014 Elsevier Inc. All rights reserved.
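A binary logistic regression like the one reported above maps predictor scores to a probability through the logistic function. As a hedged sketch only: `recovery_probability` and its coefficients are invented for illustration, since the abstract does not report the fitted model.

```python
import math

# Hedged sketch of how a fitted binary logistic regression converts
# predictor scores into a probability of recovery. The intercept and
# slopes below are invented; the study's actual coefficients are not
# reported in the abstract.

def recovery_probability(bbtop_ci: float, nrt: float,
                         b0: float = -8.0, b1: float = 0.06,
                         b2: float = 0.05) -> float:
    """P(recovery) = 1 / (1 + exp(-(b0 + b1*BBTOP_CI + b2*NRT)))."""
    z = b0 + b1 * bbtop_ci + b2 * nrt
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical standard scores for one child:
p = recovery_probability(bbtop_ci=95, nrt=80)
print(round(p, 2))
```

With this toy model, higher consonant-production and nonword-repetition scores push the predicted probability of recovery upward, mirroring the direction of the reported effect.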

  6. Is selective mutism associated with deficits in memory span and visual memory?: An exploratory case-control study.

    PubMed

    Kristensen, Hanne; Oerbeck, Beate

    2006-01-01

    Our main aim in this study was to explore the association between selective mutism (SM) and aspects of nonverbal cognition such as visual memory span and visual memory. Auditory-verbal memory span was also examined. The etiology of SM is unclear, and it probably represents a heterogeneous condition. SM is associated with language impairment, but nonspecific neurodevelopmental factors, including motor problems, are also reported in SM without language impairment. Furthermore, SM is described in Asperger's syndrome. Studies on nonverbal cognition in SM thus merit further investigation. Neuropsychological tests were administered to a clinical sample of 32 children and adolescents with SM (ages 6-17 years, 14 boys and 18 girls) and 62 nonreferred controls matched for age, gender, and socioeconomic status. We used independent t-tests to compare groups with regard to auditory-verbal memory span, visual memory span, and visual memory (Benton Visual Retention Test), and employed linear regression analysis to study the impact of SM on visual memory, controlling for IQ and measures of language and motor function. The SM group differed from controls on auditory-verbal memory span but not on visual memory span. Controlled for IQ, language, and motor function, the SM group did not differ from controls on visual memory. Motor function was the strongest predictor of visual memory performance. SM does not appear to be associated with deficits in visual memory span or visual memory. The reduced auditory-verbal memory span supports the association between SM and language impairment. More comprehensive neuropsychological studies are needed.

  7. Transcortical sensory aphasia: revisited and revised.

    PubMed

    Boatman, D; Gordon, B; Hart, J; Selnes, O; Miglioretti, D; Lenz, F

    2000-08-01

    Transcortical sensory aphasia (TSA) is characterized by impaired auditory comprehension with intact repetition and fluent speech. We induced TSA transiently by electrical interference during routine cortical function mapping in six adult seizure patients. For each patient, TSA was associated with multiple posterior cortical sites, including the posterior superior and middle temporal gyri, in classical Wernicke's area. A number of TSA sites were immediately adjacent to sites where Wernicke's aphasia was elicited in the same patients. Phonological decoding of speech sounds was assessed by auditory syllable discrimination and found to be intact at all sites where TSA was induced. At a subset of electrode sites where the pattern of language deficits otherwise resembled TSA, naming and word reading remained intact. Language lateralization testing by intracarotid amobarbital injection showed no evidence of independent right hemisphere language. These results suggest that TSA may result from a one-way disruption between left hemisphere phonology and lexical-semantic processing.

  8. Damage to ventral and dorsal language pathways in acute aphasia

    PubMed Central

    Hartwigsen, Gesa; Kellmeyer, Philipp; Glauche, Volkmar; Mader, Irina; Klöppel, Stefan; Suchan, Julia; Karnath, Hans-Otto; Weiller, Cornelius; Saur, Dorothee

    2013-01-01

    Converging evidence from neuroimaging studies and computational modelling suggests an organization of language in a dual dorsal–ventral brain network: a dorsal stream connects temporoparietal with frontal premotor regions through the superior longitudinal and arcuate fasciculus and integrates sensorimotor processing, e.g. in repetition of speech. A ventral stream connects temporal and prefrontal regions via the extreme capsule and mediates meaning, e.g. in auditory comprehension. The aim of our study was to test, in a large sample of 100 aphasic stroke patients, how well acute impairments of repetition and comprehension correlate with lesions of either the dorsal or ventral stream. We combined voxelwise lesion-behaviour mapping with the dorsal and ventral white matter fibre tracts determined by probabilistic fibre tracking in our previous study in healthy subjects. We found that repetition impairments were mainly associated with lesions located in the posterior temporoparietal region with a statistical lesion maximum in the periventricular white matter in projection of the dorsal superior longitudinal and arcuate fasciculus. In contrast, lesions associated with comprehension deficits were found more ventral-anterior in the temporoprefrontal region with a statistical lesion maximum between the insular cortex and the putamen in projection of the ventral extreme capsule. Individual lesion overlap with the dorsal fibre tract showed a significant negative correlation with repetition performance, whereas lesion overlap with the ventral fibre tract revealed a significant negative correlation with comprehension performance. To summarize, our results from patients with acute stroke lesions support the claim that language is organized along two segregated dorsal–ventral streams. 
In particular, this is the first lesion study demonstrating that performance on auditory comprehension measures requires an interaction between temporal and prefrontal brain regions via the ventral extreme capsule pathway. PMID:23378217

  9. When instructions fail. The effects of stimulus control training on brain injury survivors' attending and reporting during hearing screenings.

    PubMed

    Schlund, M W

    2000-10-01

    Bedside hearing screenings are routinely conducted by speech and language pathologists for brain injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patients' ability to follow procedural instructions. This article describes the effects of jointly applying behavioral and psychophysical methods to improve two severely brain-injured survivors' attending and reporting during auditory test stimulus presentation. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings were conducted and estimates of auditory thresholds were obtained.

  10. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford-Kowal-Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. Calculated curves for percent correct speech perception scores (LNT and Bamford-Kowal-Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.

  11. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    PubMed Central

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.

    2013-01-01

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240

  12. The Neural Mechanisms of Word Order Processing Revisited: Electrophysiological Evidence from Japanese

    ERIC Educational Resources Information Center

    Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina

    2008-01-01

    We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing…

  13. Acquiring L2 Sentence Comprehension: A Longitudinal Study of Word Monitoring in Noise

    ERIC Educational Resources Information Center

    Oliver, Georgina; Gullberg, Marianne; Hellwig, Frauke; Mitterer, Holger; Indefrey, Peter

    2012-01-01

    This study investigated the development of second language online auditory processing with ab initio German learners of Dutch. We assessed the influence of different levels of background noise and different levels of semantic and syntactic target word predictability on word-monitoring latencies. There was evidence of syntactic, but not…

  14. Tracking Real-Time Neural Activation of Conceptual Knowledge Using Single-Trial Event-Related Potentials

    ERIC Educational Resources Information Center

    Amsel, Ben D.

    2011-01-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed…

  15. Preschool-Aged Children Have Difficulty Constructing and Interpreting Simple Utterances Composed of Graphic Symbols

    ERIC Educational Resources Information Center

    Sutton, Ann; Trudeau, Natacha; Morford, Jill; Rios, Monica; Poirier, Marie-Andree

    2010-01-01

    Children who require augmentative and alternative communication (AAC) systems while they are in the process of acquiring language face unique challenges because they use graphic symbols for communication. In contrast to the situation of typically developing children, they use different modalities for comprehension (auditory) and expression…

  16. Early human communication helps in understanding language evolution.

    PubMed

    Lenti Boero, Daniela

    2014-12-01

    Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the "several building blocks of a comprehensive theory of the evolution of spoken language" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.

  17. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497

  18. Bilingual Language Switching in the Laboratory versus in the Wild: The Spatiotemporal Dynamics of Adaptive Language Control

    PubMed Central

    2017-01-01

    For a bilingual human, every utterance requires a choice about which language to use. This choice is commonly regarded as part of general executive control, engaging prefrontal and anterior cingulate cortices similarly to many types of effortful task switching. However, although language control within artificial switching paradigms has been heavily studied, the neurobiology of natural switching within socially cued situations has not been characterized. Additionally, although theoretical models address how language control mechanisms adapt to the distinct demands of different interactional contexts, these predictions have not been empirically tested. We used MEG (RRID: NIFINV:nlx_inv_090918) to investigate language switching in multiple contexts ranging from completely artificial to the comprehension of a fully natural bilingual conversation recorded “in the wild.” Our results showed less anterior cingulate and prefrontal cortex involvement for more natural switching. In production, voluntary switching did not engage the prefrontal cortex or elicit behavioral switch costs. In comprehension, while laboratory switches recruited executive control areas, fully natural switching within a conversation only engaged auditory cortices. Multivariate pattern analyses revealed that, in production, interlocutor identity was represented in a sustained fashion throughout the different stages of language planning until speech onset. In comprehension, however, a biphasic pattern was observed: interlocutor identity was first represented at the presentation of the interlocutor and then again at the presentation of the auditory word. In all, our findings underscore the importance of ecologically valid experimental paradigms and offer the first neurophysiological characterization of language control in a range of situations simulating real life to various degrees. SIGNIFICANCE STATEMENT Bilingualism is an inherently social phenomenon, with interactional context fully determining language choice. This research addresses the neural mechanisms underlying multilingual individuals' ability to successfully adapt to varying conversational contexts both while speaking and listening. Our results showed that interactional context critically determines language control networks' engagement: switching under external constraints heavily recruited prefrontal control regions, whereas natural, voluntary switching did not. These findings challenge conclusions derived from artificial switching paradigms, which suggested that language switching is intrinsically effortful. Further, our results predict that the so-called bilingual advantage should be limited to individuals who need to control their languages according to external cues and thus would not occur by virtue of an experience in which switching is fully free. PMID:28821648

  19. Evaluation of the language profile in children with rolandic epilepsy and developmental dysphasia: Evidence for distinct strengths and weaknesses.

    PubMed

    Verly, M; Gerrits, R; Lagae, L; Sunaert, S; Rommel, N; Zink, I

    2017-07-01

    Although benign, rolandic epilepsy (RE), or benign childhood epilepsy with centro-temporal spikes, is often associated with language impairment. Recently, fronto-rolandic EEG abnormalities have been described in children with developmental dysphasia (DD), suggesting an interaction between language impairment and interictal epileptiform discharges. To investigate whether a behavioral-linguistic continuum between RE and DD exists, a clinical prospective study was carried out to evaluate the language profile of 15 children with RE and 22 children with DD. Language skills were assessed using an extensive, standardized test battery. Language was found to be impaired in both study groups; however, RE and DD were associated with distinct language impairment profiles. Children with RE had difficulties with sentence comprehension, semantic verbal fluency and auditory short-term memory, which were unrelated to age of epilepsy onset and laterality of the epileptic focus. In children with DD, sentence comprehension and verbal fluency were among their relative strengths, whereas sentence and lexical production constituted relative weaknesses. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. The Efficacy of Fast ForWord-Language Intervention in School-Age Children with Language Impairment: A Randomized Controlled Trial

    PubMed Central

    Gillam, Ronald B.; Loeb, Diane Frome; Hoffman, LaVae M.; Bohman, Thomas; Champlin, Craig A.; Thibodeau, Linda; Widen, Judith; Brandel, Jayne; Friel-Patti, Sandy

    2008-01-01

    Purpose A randomized controlled trial (RCT) was conducted to compare the language and auditory processing outcomes of children assigned to Fast ForWord-Language (FFW-L) to the outcomes of children assigned to nonspecific or specific language intervention comparison treatments that did not contain modified speech. Method Two hundred and sixteen children between the ages of 6 and 9 years with language impairments were randomly assigned to one of four arms: Fast ForWord-Language (FFW-L), academic enrichment (AE), computer-assisted language intervention (CALI), or individualized language intervention (ILI) provided by a speech-language pathologist. All children received 1 hour and 40 minutes of treatment, 5 days per week, for 6 weeks. Language and auditory processing measures were administered to the children by blinded examiners before treatment, immediately after treatment, 3 months after treatment, and 6 months after treatment. Results The children in all four arms improved significantly on a global language test and a test of backward masking. Children with poor backward masking scores who were randomized to the FFW-L arm did not present greater improvement on the language measures than children with poor backward masking scores who were randomized to the other three arms. Effect sizes, analyses of standard error of measurement, and normalization percentages supported the clinical significance of the improvements on the Comprehensive Assessment of Spoken Language (CASL). There was a treatment effect for the Blending Words subtest on the Comprehensive Test of Phonological Processing (Wagner, Torgesen, & Rashotte, 1999). Participants in the FFW-L and CALI arms earned higher phonological awareness scores than children in the ILI and AE arms at the six-month follow-up testing. Conclusion Fast ForWord-Language, the language intervention that provided modified speech to address a hypothesized underlying auditory processing deficit, was not more effective at improving general language skills or temporal processing skills than a nonspecific comparison treatment (AE) or specific language intervention comparison treatments (CALI and ILI) that did not contain modified speech stimuli. These findings call into question the temporal processing hypothesis of language impairment and the hypothesized benefits of using acoustically modified speech to improve language skills. The finding that children in the three treatment arms and the active comparison arm made clinically relevant gains on measures of language and temporal auditory processing informs our understanding of the variety of intervention activities that can facilitate development. PMID:18230858

  1. Musical metaphors: evidence for a spatial grounding of non-literal sentences describing auditory events.

    PubMed

    Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara

    2015-03-01

    This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence-implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory-motor experiences are elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Comprehensive evaluation of a child with an auditory brainstem implant.

    PubMed

    Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P

    2008-02-01

    We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Study design: case study. Setting: tertiary referral center. Patient: pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December 2005 in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Intervention: auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.

  3. Response latencies in auditory sentence comprehension: effects of linguistic versus perceptual challenge.

    PubMed

    Tun, Patricia A; Benichov, Jonathan; Wingfield, Arthur

    2010-09-01

    Older adults with good hearing and with mild-to-moderate hearing loss were tested for comprehension of spoken sentences that required perceptual effort (hearing speech at lower sound levels), and two degrees of cognitive load (sentences with simpler or more complex syntax). Although comprehension accuracy was equivalent for both participant groups and for young adults with good hearing, hearing loss was associated with longer response latencies to the correct comprehension judgments, especially for complex sentences heard at relatively low amplitudes. These findings demonstrate the need to take into account both sensory and cognitive demands of speech materials in older adults' language comprehension. (c) 2010 APA, all rights reserved.

  4. Areas of Left Perisylvian Cortex Mediate Auditory-Verbal Short-Term Memory

    ERIC Educational Resources Information Center

    Koenigs, Michael; Acheson, Daniel J.; Barbey, Aron K.; Solomon, Jeffrey; Postle, Bradley R.; Grafman, Jordan

    2011-01-01

    A contentious issue in memory research is whether verbal short-term memory (STM) depends on a neural system specifically dedicated to the temporary maintenance of information, or instead relies on the same brain areas subserving the comprehension and production of language. In this study, we examined a large sample of adults with acquired brain…

  5. Auditory brainstem response to complex sounds: a tutorial

    PubMed Central

    Skoe, Erika; Kraus, Nina

    2010-01-01

    This tutorial provides a comprehensive overview of the methodological approach to collecting and analyzing auditory brainstem responses to complex sounds (cABRs). cABRs provide a window into how behaviorally relevant sounds such as speech and music are processed in the brain. Because temporal and spectral characteristics of sounds are preserved in this subcortical response, cABRs can be used to assess specific impairments and enhancements in auditory processing. Notably, subcortical function is neither passive nor hardwired but dynamically interacts with higher-level cognitive processes to refine how sounds are transcribed into neural code. This experience-dependent plasticity, which can occur on a number of time scales (e.g., life-long experience with speech or music, short-term auditory training, online auditory processing), helps shape sensory perception. Thus, as an objective and non-invasive means of examining cognitive function and experience-dependent processes in sensory activity, cABRs have considerable utility in the study of populations where auditory function is of interest (e.g., auditory experts such as musicians, persons with hearing loss, auditory processing and language disorders). This tutorial is intended for clinicians and researchers seeking to integrate cABRs into their clinical and/or research programs. PMID:20084007
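
    The claim that stimulus spectrotemporal structure is preserved in the response can be made concrete with a toy spectral analysis. The sketch below is synthetic data only, not a real cABR pipeline: it builds a noisy, delayed "response" to a 100 Hz fundamental and recovers that fundamental from the response spectrum:

```python
import numpy as np

fs = 10_000                                  # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)                # 500 ms analysis epoch
f0 = 100                                     # stimulus fundamental frequency (Hz)

# Toy "brainstem response": attenuated, ~7 ms delayed copy of the stimulus
# periodicity plus measurement noise
rng = np.random.default_rng(2)
response = 0.3 * np.sin(2 * np.pi * f0 * (t - 0.007)) + rng.normal(0, 0.1, t.size)

# If timing is preserved, the response spectrum peaks at the stimulus F0
spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

    With a 500 ms epoch the frequency resolution is 2 Hz, so the spectral peak lands on the 100 Hz bin; real cABR analyses apply the same logic to averaged evoked responses rather than simulated sinusoids.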

  6. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bilingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10 dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated a monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. In contrast, we found differential speech encoding between groups within inferior frontal gyrus (IFG), adjacent to Broca's area, where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer that higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.
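
    For readers unfamiliar with the unit, the ∼10 dB figure is easy to unpack: decibels are a logarithmic ratio, so a 10 dB SNR advantage corresponds to roughly a 3.16× amplitude ratio (10× power ratio). A minimal helper, with hypothetical argument names:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB from RMS amplitudes: 20 * log10(S/N)."""
    return 20 * math.log10(signal_rms / noise_rms)

# The amplitude ratio hiding behind a 10 dB SNR difference (~3.162)
amplitude_ratio = 10 ** (10 / 20)
```

    In other words, matching the monolinguals' performance required the speech to be roughly three times louder relative to the noise for the late bilinguals.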

  7. Listening comprehension across the adult lifespan.

    PubMed

    Sommers, Mitchell S; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

    Although age-related declines in perceiving spoken language are well established, the primary focus of research has been on perception of phonemes, words, and sentences. In contrast, relatively few investigations have been directed at establishing the effects of age on the comprehension of extended spoken passages. Moreover, most previous work has used extreme-group designs in which the performance of a group of young adults is contrasted with that of a group of older adults and little if any information is available regarding changes in listening comprehension across the adult lifespan. Accordingly, the goals of the current investigation were to determine whether there are age differences in listening comprehension across the adult lifespan and, if so, whether similar trajectories are observed for age-related changes in auditory sensitivity and listening comprehension. This study used a cross-sectional lifespan design in which approximately 60 individuals in each of 7 decades, from age 20 to 89 yr (a total of 433 participants), were tested on three different measures of listening comprehension. In addition, we obtained measures of auditory sensitivity from all participants. Changes in auditory sensitivity across the adult lifespan exhibited the progressive high-frequency loss typical of age-related hearing impairment. Performance on the listening comprehension measures, however, demonstrated a very different pattern, with scores on all measures remaining relatively stable until age 65 to 70 yr, after which significant declines were observed. Follow-up analyses indicated that this same general pattern was observed across three different types of passages (lectures, interviews, and narratives) and three different question types (information, integration, and inference). Multiple regression analyses indicated that low-frequency pure-tone average was the single largest contributor to age-related variance in listening comprehension for individuals older than 65 yr, but that age accounted for significant variance even after controlling for auditory sensitivity. Results suggest that age-related reductions in auditory sensitivity account for a sizable portion of individual variance in listening comprehension that was observed across the adult lifespan. Other potential contributors including a possible role for age-related declines in perceptual and cognitive abilities are discussed. Clinically, the results suggest that amplification is likely to improve listening comprehension but that increased audibility alone may not be sufficient to maintain listening comprehension beyond age 65 to 70 yr. Additional research will be needed to identify potential target abilities for training or other rehabilitation procedures that could supplement sensory aids to provide additional improvements in listening comprehension.
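
    The variance-partitioning logic used here (does age explain comprehension variance beyond auditory sensitivity?) can be sketched with ordinary least squares on synthetic data. Variable names and effect sizes below are illustrative assumptions, not the study's actual data or model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
pta = rng.normal(30, 10, n)      # low-frequency pure-tone average (dB HL), synthetic
age = rng.uniform(20, 89, n)     # age in years, synthetic
# Synthetic comprehension score depending on both predictors plus noise
score = 100 - 0.4 * pta - 0.2 * age + rng.normal(0, 5, n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_pta = r_squared(pta.reshape(-1, 1), score)            # step 1: sensitivity only
r2_full = r_squared(np.column_stack([pta, age]), score)  # step 2: add age
delta_r2 = r2_full - r2_pta    # variance explained by age beyond hearing sensitivity
```

    A nonzero `delta_r2` is the hierarchical-regression signature of age contributing over and above pure-tone thresholds, which is the pattern the study reports for listeners older than 65 yr.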

  8. Comorbidity of Auditory Processing, Language, and Reading Disorders

    ERIC Educational Resources Information Center

    Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.

    2009-01-01

    Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…

  9. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps

    PubMed Central

    2016-01-01

    Abstract Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor‐preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface‐based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory‐motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory‐motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M‐I. Hum Brain Mapp 37:2784–2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771

  10. Grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents.

    PubMed

    Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang

    2015-01-01

    Previous studies have shown brain reorganization after early auditory deprivation. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and that connectivity between language and visual systems is also reduced. Notably, significantly increased connectivity was found between auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of auditory input in prelingually deaf adolescents, especially between auditory and visual systems. In addition, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents.
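
    Sparse inverse covariance estimation, as named in the abstract, is commonly implemented via the graphical lasso. The sketch below shows the shape of such an analysis on random stand-in data (32 hypothetical subjects, 14 ROIs matching the abstract); it is not the authors' pipeline, and the penalty value is an arbitrary choice:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical data: grey matter volumes for 32 subjects x 14 ROIs
rng = np.random.default_rng(0)
volumes = rng.normal(size=(32, 14))

# Sparse inverse covariance estimation (graphical lasso);
# alpha sets the l1 penalty and hence the sparsity of the graph
model = GraphicalLasso(alpha=0.5).fit(volumes)
precision = model.precision_                 # 14 x 14 precision matrix

# Nonzero off-diagonal entries mark conditional dependence between two
# regions given all the others, i.e., an edge in the connectivity graph
edges = np.abs(precision) > 1e-6
np.fill_diagonal(edges, False)
n_edges = int(edges.sum()) // 2              # undirected edge count
```

    The appeal of SICE over plain correlation is exactly this conditional-dependence reading: an edge survives only if two regions covary after accounting for all other regions, which is why it is used to define "connectivity" between the 14 ROIs here.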

  11. Assessment and Treatment of Short-Term and Working Memory Impairments in Stroke Aphasia: A Practical Tutorial

    ERIC Educational Resources Information Center

    Salis, Christos; Kelly, Helen; Code, Chris

    2015-01-01

    Background: Aphasia following stroke refers to impairments that affect the comprehension and expression of spoken and/or written language, and co-occurring cognitive deficits are common. In this paper we focus on short-term and working memory impairments that impact on the ability to retain and manipulate auditory-verbal information. Evidence from…

  12. Auditory Serial Position Effects in Story Retelling for Non-Brain-Injured Participants and Persons with Aphasia

    ERIC Educational Resources Information Center

    Brodsky, Martin B.; McNeil, Malcolm R.; Doyle, Patrick J.; Fossett, Tepanata R. D.; Timm, Neil H.

    2003-01-01

    Using story retelling as an index of language ability, it is difficult to disambiguate comprehension and memory deficits. Collecting data on the serial position effect (SPE), however, illuminates the memory component. This study examined the SPE of the percentage of information units (%IU) produced in the connected speech samples of adults with…

  13. Hearing, Auditory Processing, and Language Skills of Male Youth Offenders and Remandees in Youth Justice Residences in New Zealand.

    PubMed

    Lount, Sarah A; Purdy, Suzanne C; Hand, Linda

    2017-01-01

    International evidence suggests youth offenders have greater difficulties with oral language than their nonoffending peers. This study examined the hearing, auditory processing, and language skills of male youth offenders and remandees (YORs) in New Zealand. Thirty-three male YORs, aged 14-17 years, were recruited from 2 youth justice residences, plus 39 similarly aged male students from local schools for comparison. Testing comprised tympanometry, self-reported hearing, pure-tone audiometry, 4 auditory processing tests, 2 standardized language tests, and a nonverbal intelligence test. Twenty-one (64%) of the YORs were identified as language impaired (LI), compared with 4 (10%) of the controls. Performance on all language measures was significantly worse in the YOR group, as were their hearing thresholds. Nine (27%) of the YOR group versus 7 (18%) of the control group fulfilled criteria for auditory processing disorder. Only 1 YOR versus 5 controls had an auditory processing disorder without LI. Language was an area of significant difficulty for YORs. Difficulties with auditory processing were more likely to be accompanied by LI in this group, compared with the controls. Provision of speech-language therapy services and awareness of auditory and language difficulties should be addressed in youth justice systems.

  14. Neural correlates of language comprehension in autism spectrum disorders: when language conflicts with world knowledge.

    PubMed

    Tesink, Cathelijne M J Y; Buitelaar, Jan K; Petersson, Karl Magnus; van der Gaag, Rutger Jan; Teunisse, Jan-Pieter; Hagoort, Peter

    2011-04-01

    In individuals with ASD, difficulties with language comprehension are most evident when higher-level semantic-pragmatic language processing is required, for instance when context has to be used to interpret the meaning of an utterance. It has remained unclear, however, at what level of processing and for what type of context these difficulties in language comprehension occur. In the current fMRI study, we therefore investigated the neural correlates of the integration of contextual information during auditory language comprehension in 24 adults with ASD and 24 matched control participants. Different levels of context processing were manipulated by using spoken sentences that were correct or contained either a semantic or world knowledge anomaly. Our findings demonstrated significant differences between the groups in inferior frontal cortex that were only present for sentences with a world knowledge anomaly. Relative to the ASD group, the control group showed significantly increased activation in left inferior frontal gyrus (LIFG) for sentences with a world knowledge anomaly compared to correct sentences. This effect possibly indicates reduced integrative capacities of the ASD group. Furthermore, world knowledge anomalies elicited significantly stronger activation in right inferior frontal gyrus (RIFG) in the control group compared to the ASD group. This additional RIFG activation probably reflects revision of the situation model after new, conflicting information. The lack of recruitment of RIFG is possibly related to difficulties with exception handling in the ASD group. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  16. Opposite brain laterality in analogous auditory and visual tests.

    PubMed

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed these findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
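
    Ear and half-field advantages of the kind reported here are commonly summarised with a laterality index computed from correct-report counts; a minimal sketch with hypothetical counts:

```python
def laterality_index(right_correct, left_correct):
    """Standard laterality index in percent: positive values indicate a
    right-ear / right half-field advantage (left-hemisphere language
    dominance). The trial counts below are hypothetical."""
    return 100 * (right_correct - left_correct) / (right_correct + left_correct)

# Pattern mirroring the reported dissociation:
print(laterality_index(30, 20))  # dichotic listening: 20.0 (right-ear advantage)
print(laterality_index(18, 27))  # visual half-field: -20.0 (left-VHF advantage)
```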

  17. Central auditory processing disorder (CAPD) in children with specific language impairment (SLI). Central auditory tests.

    PubMed

    Dlouha, Olga; Novak, Alexej; Vokral, Jan

    2007-06-01

    The aim of this project was to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We developed special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words, presented as binaural interaction tests. Children with SLI were unable to create simple sentences from two words heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average success rate of children with SLI was 56% in test 1, 64% in test 2, and 63% in test 3; the control group achieved 92%, 93%, and 92%, respectively (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
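
    A group contrast on percent-correct scores like the one reported (56% vs 92%, p<0.001) can be illustrated with Welch's t statistic. The abstract does not state which test produced the p-value, and the per-child scores below are hypothetical.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances).
    A generic illustration, not necessarily the authors' analysis."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# Hypothetical per-child percent-correct scores on a dichotic word test,
# loosely mirroring the reported 56% (SLI) vs 92% (control) averages.
sli = [50, 55, 60, 52, 58, 61, 54]
controls = [90, 94, 92, 89, 95, 93, 91]
print(welch_t(sli, controls))  # strongly negative: SLI well below controls
```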

  18. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    PubMed

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following unilateral cochlear implantation between two groups of prelingually deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility and 22 children qualified to enter the study. They were aged between 27 and 66 months and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups, auditory-only mode and auditory-visual mode, with 11 participants analyzed in each group. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments that had been validated and standardized for the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or the auditory-visual mode acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language developed significantly in both the unisensory and the bisensory group; thus, both modes were effective. Therefore, when teaching hearing, language, and speech to children with cochlear implants who are exposed to spoken language at home and at school, both before and after implantation, it is not essential to limit access to the visual modality and rely solely on the auditory modality. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Story retelling and language ability in school-aged children with cerebral palsy and speech impairment.

    PubMed

    Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela

    2015-01-01

    Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. The aim was to explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the retelling ability of children with CP and speech impairment. Fifteen speaking children with speech impairment (seven girls, eight boys; mean age = 11 years, SD = 1;4 years), with different types of CP and different levels of gross motor and cognitive function, participated in the present study. Story retelling skills were tested and analysed with the Bus Story Test (BST) and the Narrative Assessment Profile (NAP). Receptive language ability was tested with the Test for Reception of Grammar-2 (TROG-2) and the Peabody Picture Vocabulary Test - IV (PPVT-IV). Non-verbal cognitive level was tested with Raven's Coloured Progressive Matrices (RCPM), and memory functions were assessed with the Corsi block-tapping task (CB) and the Digit Span from the Wechsler Intelligence Scale for Children-III. ToM was assessed with the false belief items of the two story tests "Kiki and the Cat" and "Birthday Puppy". The children had severe problems with retelling ability, corresponding to an age-equivalent of 5;2-6;9 years. Receptive and expressive language, visuo-spatial and auditory memory, non-verbal cognitive level and ToM varied widely within and among the children. Both expressive and receptive language correlated significantly with narrative ability in terms of NAP total scores, as did auditory memory. The results suggest that retelling ability in the children with CP in the present study is dependent on language comprehension and production, and on memory functions. Consequently, it is important to examine retelling ability together with language and cognitive abilities in these children in order to provide appropriate support. 
© 2015 Royal College of Speech and Language Therapists.

  20. Effects of Written and Auditory Language-Processing Skills on Written Passage Comprehension in Middle and High School Students

    ERIC Educational Resources Information Center

    Caplan, David; Waters, Gloria; Bertram, Julia; Ostrowski, Adam; Michaud, Jennifer

    2016-01-01

    The authors assessed 4,865 middle and high school students for the ability to recognize and understand written and spoken morphologically simple words, morphologically complex words, and the syntactic structure of sentences and for the ability to answer questions about facts presented in a written passage and to make inferences based on those…

  1. LAMP: 100+ Systematic Exercise Lessons for Developing Linguistic Auditory Memory Patterns in Beginning Readers.

    ERIC Educational Resources Information Center

    Valett, Robert E.

    Research findings on auditory sequencing and auditory blending and fusion, auditory-visual integration, and language patterns are presented in support of the Linguistic Auditory Memory Patterns (LAMP) program. LAMP consists of 100 developmental lessons for young students with learning disabilities or language problems. The lessons are included in…

  2. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  3. Animacy-based predictions in language comprehension are robust: contextual cues modulate but do not nullify them.

    PubMed

    Muralikrishnan, R; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina

    2015-05-22

    Couldn't a humble coconut hurt a gardener? At least in the first instance, the brain seems to assume that it should not: we perceive inanimate entities such as coconuts as poor event instigators ("Actors"). Ideally, entities causing a change in another entity should be animate and this assumption not only influences event perception but also carries over to language comprehension. We present three auditory event-related brain potential (ERP) studies on the processing of inanimate and animate subjects and objects in simple transitive sentences in Tamil. ERP responses were measured at the second argument (event participant) in all three studies. Experiment 1 employed all possible animacy combinations of Actors and Undergoers (affected participants) in Actor- and Undergoer-initial verb-final orders. Experiments 2 and 3 employed a fairly novel context design that enabled us to compare ERPs evoked by identical auditory material to differing contextual expectations: Experiment 2 focussed on constructions in which an inanimate Actor acts upon an inanimate Undergoer, whereas Experiment 3 examined whether and how a preceding context modulates the prediction for an ideal Actor. Results showed an N400 effect when the prediction for an ideal (animate) Actor following an Undergoer was not met, thus further supporting the cross-linguistically robust nature of animacy preferences. In addition, though specific contextual cues that are indicative of a forthcoming non-ideal Actor may reduce this negativity in comparison to when such cues are not available, they nevertheless do not nullify it, suggesting that animacy-based predictions are stronger than contextual cues in online language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. The interictal language profile in adult epilepsy.

    PubMed

    Bartha-Doering, Lisa; Trinka, Eugen

    2014-10-01

    The purpose of this study was to systematically review the literature on the interictal language profile in adult patients with epilepsy. An extensive literature search was performed using MEDLINE, Embase, PsycINFO, Cochrane Central Register of Controlled Trials, PASCAL, and PSYNDEXplus databases. Key aspects of the inclusion criteria were adult patients with epilepsy, patient number >10, and in-depth qualitative investigation of a specific language modality or administration of tests of at least two different language modalities, including comprehension, naming, repetition, reading, writing, and spontaneous speech. Our search strategy yielded 933 articles on epilepsy and language. Of these, 31 met final eligibility criteria. Most included articles focused on temporal lobe epilepsy; only three studies examined the language profile of patients with idiopathic generalized epilepsies, and one study on frontal lobe epilepsy met inclusion criteria. Study results showed a pronounced heterogeneity of language abilities in patients with epilepsy, varying from intact language profiles to impairment in several language functions. However, at least 17% of patients displayed deficits in more than one language function, with naming, reading comprehension, spontaneous speech, and discourse production being most often affected. This review underscores the need to evaluate different language functions (including spontaneous speech, discourse abilities, naming, auditory and reading comprehension, reading, writing, and repetition) individually in order to obtain a reliable profile of language functioning in patients with epilepsy. Moreover, our findings show that, in contrast to the strong scientific interest in memory functions in epilepsy, the examination of language functions has so far played a minor role in epilepsy research, emphasizing the need for future research activities in this field. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  5. Auditory and language development in Mandarin-speaking children after cochlear implantation.

    PubMed

    Lu, Xing; Qin, Zhaobing

    2018-04-01

    To evaluate early auditory performance, speech perception and language skills in Mandarin-speaking prelingual deaf children in the first two years after they received a cochlear implant (CI) and analyse the effects of possible associated factors. The Infant-Toddler Meaningful Auditory Integration Scale (ITMAIS)/Meaningful Auditory Integration Scale (MAIS), Mandarin Early Speech Perception (MESP) test and Putonghua Communicative Development Inventory (PCDI) were used to assess auditory and language outcomes in 132 Mandarin-speaking children pre- and post-implantation. Children with CIs exhibited an ITMAIS/MAIS and PCDI developmental trajectory similar to that of children with normal hearing. The increased number of participants who achieved MESP categories 1-6 at each test interval showed a significant improvement in speech perception by paediatric CI recipients. Age at implantation and socioeconomic status were consistently associated with both auditory and language outcomes in the first two years post-implantation. Mandarin-speaking children with CIs exhibit significant improvements in early auditory and language development. Though these improvements followed the normative developmental trajectories, they still exhibited a gap compared with normative values. Earlier implantation and higher socioeconomic status are consistent predictors of greater auditory and language skills in the early stage. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    PubMed

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  7. Exploring the role of auditory analysis in atypical compared to typical language development.

    PubMed

    Grube, Manon; Cooper, Freya E; Kumar, Sukhbinder; Kelly, Tom; Griffiths, Timothy D

    2014-02-01

    The relationship between auditory processing and language skills has been debated for decades. Previous findings have been inconsistent, both in typically developing and impaired subjects, including those with dyslexia or specific language impairment. Whether correlations between auditory and language skills are consistent between different populations has hardly been addressed at all. The present work presents an exploratory approach of testing for patterns of correlations in a range of measures of auditory processing. In a recent study, we reported findings from a large cohort of eleven-year-olds on a range of auditory measures, and the data supported a specific role for the processing of short sequences in pitch and time in typical language development. Here we tested whether a group of individuals with dyslexic traits (DT group; n = 28) from the same year group would show the same pattern of correlations between auditory and language skills as the typically developing group (TD group; n = 173). Regarding the raw scores, the DT group showed significantly poorer performance on the language but not the auditory measures, including measures of pitch, time and rhythm, and timbre (modulation). In terms of correlations, there was a tendency toward weaker correlations between short-sequence processing and language skills, contrasted by a significantly stronger correlation for basic, single-sound processing, in particular in the domain of modulation. The data support the notion that the fundamental relationship between auditory and language skills might differ in atypical compared to typical language development, with the implication that merging data or drawing inferences between populations might be problematic. Further examination of the relationship between both basic sound feature analysis and music-like sound analysis and language skills in impaired populations might allow the development of appropriate training strategies. 
These might include types of musical training to augment language skills via their common bases in sound sequence analysis. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
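
    A difference in correlation between two independent groups, as reported here for the DT and TD groups, is commonly tested with Fisher's r-to-z transformation; a minimal sketch with hypothetical correlation values (the group sizes match the abstract, the r values do not come from it):

```python
from math import atanh, sqrt
from statistics import NormalDist

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for the difference between two independent
    correlations, a standard way to contrast group correlation patterns."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical r values for the DT (n = 28) and TD (n = 173) groups.
z, p = compare_correlations(0.55, 28, 0.15, 173)
print(round(z, 2), p < 0.05)
```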

  8. Neuropsychological impairments on the NEPSY-II among children with FASD.

    PubMed

    Rasmussen, Carmen; Tamana, Sukhpreet; Baugh, Lauren; Andrew, Gail; Tough, Suzanne; Zwaigenbaum, Lonnie

    2013-01-01

    We examined the pattern of neuropsychological impairments of children with FASD (compared to controls) on NEPSY-II measures of attention and executive functioning, language, memory, visuospatial processing, and social perception. Participants included 32 children with FASD and 30 typically developing control children, ranging in age from 6 to 16 years. Children were tested on the following subtests of the NEPSY-II: Attention and Executive Functioning (animal sorting, auditory attention/response set, and inhibition), Language (comprehension of instructions and speeded naming), Memory (memory for names/delayed memory for names), Visual-Spatial Processing (arrows), and Social Perception (theory of mind). Groups were compared using MANOVA. Children with FASD were impaired relative to controls on the following subtests: animal sorting, response set, inhibition (naming and switching conditions), comprehension of instructions, speeded naming, and memory for names total and delayed, but group differences were not significant on auditory attention, inhibition (inhibition condition), arrows, and theory of mind. Among the FASD group, IQ scores were not correlated with performance on the NEPSY-II subtests, and there were no significant differences between those with and without comorbid ADHD. The NEPSY-II is an effective and useful tool for measuring a variety of neuropsychological impairments among children with FASD. Children with FASD displayed a pattern of results with impairments (relative to controls) on measures of executive functioning (set shifting, concept formation, and inhibition), language, and memory, and relative strengths on measures of basic attention, visual spatial processing, and social perception.

  9. Speech comprehension training and auditory and cognitive processing in older adults.

    PubMed

    Pichora-Fuller, M Kathleen; Levitt, Harry

    2012-12-01

    To provide a brief history of speech comprehension training systems and an overview of research on auditory and cognitive aging as background to recommendations for future directions for rehabilitation. Two distinct domains were reviewed: one concerning technological and the other concerning psychological aspects of training. Historical trends and advances in these 2 domains were interrelated to highlight converging trends and directions for future practice. Over the last century, technological advances have influenced both the design of hearing aids and training systems. Initially, training focused on children and those with severe loss for whom amplification was insufficient. Now the focus has shifted to older adults with relatively little loss but difficulties listening in noise. Evidence of brain plasticity from auditory and cognitive neuroscience provides new insights into how to facilitate perceptual (re-)learning by older adults. There is a new imperative to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge of language and the world. Advances in digital technologies enable the development of increasingly sophisticated training systems incorporating complex meaningful materials such as music, audiovisual interactive displays, and conversation.

  10. How Age, Linguistic Status, and the Nature of the Auditory Scene Alter the Manner in Which Listening Comprehension Is Achieved in Multitalker Conversations.

    PubMed

    Avivi-Reich, Meital; Jakubczyk, Agnes; Daneman, Meredyth; Schneider, Bruce A

    2015-10-01

    We investigated how age and linguistic status affected listeners' ability to follow and comprehend 3-talker conversations, and the extent to which individual differences in language proficiency predict speech comprehension under difficult listening conditions. Younger and older EL1s (native English speakers) as well as young EL2s (non-native speakers) listened to 3-talker conversations, with or without spatial separation between talkers, either in quiet or against a moderate or high level of 12-talker babble, and were asked to answer questions regarding their content. After compensating for individual differences in speech recognition, no significant differences in conversation comprehension were found among the groups. As expected, conversation comprehension decreased as the babble level increased. Individual differences in reading comprehension skill contributed positively to performance in younger EL1s and, to a lesser degree, in young EL2s, but not in older EL1s. Vocabulary knowledge was significantly and positively related to performance only at the intermediate babble level. The results indicate that the manner in which spoken language comprehension is achieved is modulated by the listeners' age and linguistic status.

  11. Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke

    2013-01-01

    These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…

  12. Relation between language, audio-vocal psycholinguistic abilities and P300 in children having specific language impairment.

    PubMed

    Shaheen, Elham Ahmed; Shohdy, Sahar Saad; Abd Al Raouf, Mahmoud; Mohamed El Abd, Shereen; Abd Elhamid, Asmss

    2011-09-01

    Specific language impairment is a relatively common developmental condition in which a child fails to develop language at the typical rate despite normal general intellectual abilities, adequate exposure to language, and the absence of hearing impairments or neurological or psychiatric disorders. There is much controversy about the extent to which auditory processing deficits are important in the genesis of specific language impairment. The objective of this paper is to assess the higher cortical functions in children with specific language impairment by assessing neurophysiological changes and correlating the results with the patients' clinical picture, in order to choose an appropriate rehabilitation training program. This study was carried out on 40 children diagnosed with specific language impairment and 20 normal children as a control group. All children were subjected to the assessment protocol applied in Kasr El-Aini hospital. They were also given a language test (receptive, expressive and total language items), the audio-vocal items of the Illinois Test of Psycholinguistic Abilities (auditory reception, auditory association, verbal expression, grammatical closure, auditory sequential memory and sound blending), as well as an audiological assessment that included peripheral audiological evaluation and measurement of P300 amplitude and latency. The results revealed a highly significant difference in P300 amplitude and latency between the specific language impairment group and the control group. There were also strong correlations between P300 latency and grammatical closure, auditory sequential memory and sound blending, and significant correlations between P300 amplitude and auditory association and verbal expression. Children with specific language impairment, in spite of normal peripheral hearing, show evidence of cognitive and central auditory processing deficits on the P300 auditory event-related potential: prolonged latency, indicating a slow rate of processing, and small amplitude, indicating defective memory. These findings affect cognitive and language development in children with specific language impairment and should be considered when planning the intervention program. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  13. Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss.

    PubMed

    Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart

    2017-09-01

    There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 children aged 8-16 years with MMHL and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, and were consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, and were consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, and were consistent with an association model. 
Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Infant discrimination of rapid auditory cues predicts later language impairment.

    PubMed

    Benasich, April A; Tallal, Paula

    2002-10-17

    The etiology and mechanisms of specific language impairment (SLI) in children are unknown. Differences in basic auditory processing abilities have been suggested to underlie their language deficits. Studies suggest that the neuropathology implicated in such impairments, such as atypical patterns of cerebral lateralization and cortical cellular anomalies, likely occurs early in life. Such anomalies may play a part in the rapid processing deficits seen in this disorder. However, the prospective, longitudinal studies in infant populations that are critical to examining these hypotheses have not been done. In the study described, performance on brief, rapidly presented, successive auditory processing and perceptual-cognitive tasks was assessed in two groups of infants: normal control infants with no family history of language disorders and infants from families with a positive family history of language impairment. Initial assessments were obtained when infants were 6-9 months of age (M=7.5 months) and the sample was then followed through age 36 months. At the first visit, infants' processing of rapid auditory cues as well as global processing speed and memory were assessed. Significant differences in mean thresholds were seen in infants born into families with a history of SLI as compared with controls. Examination of relations between infant processing abilities and emerging language through 24 months of age revealed that threshold for rapid auditory processing at 7.5 months was the single best predictor of language outcome. At age 3, rapid auditory processing threshold and male sex together predicted 39-41% of the variance in language outcome. Thus, early deficits in rapid auditory processing abilities both precede and predict subsequent language delays. These findings support an essential role for basic nonlinguistic, central auditory processes, particularly rapid spectrotemporal processing, in early language development. 
Further, these findings provide a temporal diagnostic window during which future language impairments may be addressed.

  15. Reality of auditory verbal hallucinations.

    PubMed

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the subjects experienced a hallucination to be depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  17. Auditory and verbal memory predictors of spoken language skills in children with cochlear implants.

    PubMed

    de Hoog, Brigitte E; Langereis, Margreet C; van Weerdenburg, Marjolijn; Keuning, Jos; Knoors, Harry; Verhoeven, Ludo

    2016-10-01

    Large variability in individual spoken language outcomes remains a persistent finding in the group of children with cochlear implants (CIs), particularly in their grammatical development. In the present study, we examined the extent of delay in lexical and morphosyntactic spoken language levels of children with CIs as compared to those of a normative sample of age-matched children with normal hearing. Furthermore, the predictive value of auditory and verbal memory factors in the spoken language performance of implanted children was analyzed. Thirty-nine profoundly deaf children with CIs were assessed using a test battery that included lexical, grammatical, auditory, and verbal memory measures. Furthermore, child-related demographic characteristics were taken into account. The majority of the children with CIs did not reach age-equivalent lexical and morphosyntactic language skills. Multiple linear regression analyses revealed that lexical spoken language performance in children with CIs was best predicted by age at testing, phoneme perception, and auditory word closure. The morphosyntactic language outcomes of the CI group were best predicted by lexicon, auditory word closure, and auditory memory for words. Good speech perception skills appear to be crucial for lexical and grammatical development in children with CIs. Furthermore, strongly developed vocabulary skills and verbal memory abilities predict morphosyntactic language skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Using Eye Movements to Assess Language Comprehension in Toddlers Born Preterm and Full Term.

    PubMed

    Loi, Elizabeth C; Marchman, Virginia A; Fernald, Anne; Feldman, Heidi M

    2017-01-01

    To assess language skills in children born preterm and full term by the use of a standardized language test and eye-tracking methods. Children born ≤32 weeks' gestation (n = 44) were matched on sex and socioeconomic status to children born full term (n = 44) and studied longitudinally. The Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) were administered at 18 months (corrected for prematurity as applicable). The Looking-While-Listening Task (LWL) simultaneously presents 2 pictures and an auditory stimulus that directs the child's attention to one image. The pattern of eye movements reflects visual processing and the efficiency of language comprehension. Children born preterm were evaluated on LWL 3 times between 18 and 24 months. Children born full term were evaluated at ages corresponding to chronological and corrected ages of their preterm match. Results were compared between groups for the BSID-III and 2 LWL measures: accuracy (proportion of time looking at target) and reaction time (latency to shift gaze from distracter to target). Children born preterm had lower BSID-III scores than children born full term. Children born preterm had poorer performance than children born full term on LWL measures for chronological age but similar performance for corrected age. Accuracy and reaction time at 18 months' corrected age displaced preterm-full term group membership as significant predictors of BSID-III scores. Performance and rate of change on language comprehension measures were similar in children born preterm and full term compared at corrected age. Individual variation in language comprehension efficiency was a robust predictor of scores on a standardized language assessment in both groups. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. The Effect of Noise on the Relationship Between Auditory Working Memory and Comprehension in School-Age Children.

    PubMed

    Sullivan, Jessica R; Osman, Homira; Schafer, Erin C

    2015-06-01

    The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to demands placed on working memory, supporting the theory that degraded listening conditions draw resources away from the primary task.

  20. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon

    PubMed Central

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091

  1. Neural networks mediating sentence reading in the deaf

    PubMed Central

    Hirshorn, Elizabeth A.; Dye, Matthew W. G.; Hauser, Peter C.; Supalla, Ted R.; Bavelier, Daphne

    2014-01-01

    The present work addresses the neural bases of sentence reading in deaf populations. To better understand the relative roles of deafness and spoken language knowledge in shaping the neural networks that mediate sentence reading, three populations with different degrees of English knowledge and depth of hearing loss were included: deaf signers, oral deaf, and hearing individuals. The three groups were matched for reading comprehension and scanned while reading sentences. A similar neural network of left perisylvian areas was observed, supporting the view of a shared network of areas for reading despite differences in hearing and English knowledge. However, differences were observed, in particular in the auditory cortex, with deaf signers and oral deaf showing greatest bilateral superior temporal gyrus (STG) recruitment as compared to hearing individuals. Importantly, within deaf individuals, the same STG area in the left hemisphere showed greater recruitment as hearing loss increased. To further understand the functional role of such auditory cortex reorganization after deafness, connectivity analyses were performed from the STG regions identified above. Connectivity from the left STG toward areas typically associated with semantic processing (BA45 and thalami) was greater in deaf signers and in oral deaf as compared to hearing. In contrast, connectivity from left STG toward areas identified with speech-based processing was greater in hearing and in oral deaf as compared to deaf signers. These results support the growing literature indicating recruitment of auditory areas after congenital deafness for visually-mediated language functions, and establish that both auditory deprivation and language experience shape its functional reorganization. Implications for differential reliance on semantic vs. phonological pathways during reading in the three groups are discussed. PMID:24959127

  2. Characterizing speech and language pathology outcomes in stroke rehabilitation.

    PubMed

    Hatfield, Brooke; Millet, Deborah; Coles, Janice; Gassaway, Julie; Conroy, Brendan; Smout, Randall J

    2005-12-01

    To describe a subset of speech-language pathology (SLP) patients in the Post-Stroke Rehabilitation Outcomes Project and to examine outcomes for patients with low admission FIM levels of auditory comprehension and verbal expression. Observational cohort study in five inpatient rehabilitation hospitals. Patients (N=397) receiving post-stroke SLP with admission FIM cognitive components at levels 1 through 5. The outcome measure was the increase in comprehension and expression FIM scores from admission to discharge. Cognitively and linguistically complex SLP activities (problem-solving and executive functioning skills) were associated with greater likelihood of success in low- to mid-level functioning communicators in the acute post-stroke rehabilitation period. The results challenge common clinical practice by suggesting that use of high-level cognitively and linguistically complex SLP activities early in a patient's stay may result in more efficient practice and better outcomes regardless of the patient's functional communication severity level on admission.

  3. The anterior temporal lobes support residual comprehension in Wernicke's aphasia.

    PubMed

    Robson, Holly; Zahn, Roland; Keidel, James L; Binney, Richard J; Sage, Karen; Lambon Ralph, Matthew A

    2014-03-01

    Wernicke's aphasia occurs after a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory-verbal comprehension is significantly impaired in Wernicke's aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used functional magnetic resonance imaging to investigate the neural basis of written word and picture semantic processing in Wernicke's aphasia, with the wider aim of examining how the semantic system is altered after damage to the classical comprehension regions. Twelve participants with chronic Wernicke's aphasia and 12 control participants performed semantic animate-inanimate judgements and a visual height judgement baseline task. Whole brain and region of interest analysis in Wernicke's aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke's aphasia group displayed an 'over-activation' in comparison with control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke's aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results provide support for models in which the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions.

  4. Developmental Trends in Auditory Processing Can Provide Early Predictions of Language Acquisition in Young Infants

    ERIC Educational Resources Information Center

    Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R.; Shao, Jie; Lozoff, Betsy

    2013-01-01

    Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with…

  5. Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology

    ERIC Educational Resources Information Center

    Erdener, Dogu

    2017-01-01

    Speech perception is beyond the auditory domain and a multimodal process, specifically, an auditory-visual one--we process lip and face movements during speech. In this paper, the findings in cross-language studies of auditory-visual speech perception in the past two decades are interpreted to the applied domain of second language (L2)…

  6. Playing Music for a Smarter Ear: Cognitive, Perceptual and Neurobiological Evidence

    PubMed Central

    Strait, Dana; Kraus, Nina

    2012-01-01

    Human hearing depends on a combination of cognitive and sensory processes that function by means of an interactive circuitry of bottom-up and top-down neural pathways, extending from the cochlea to the cortex and back again. Given that similar neural pathways are recruited to process sounds related to both music and language, it is not surprising that the auditory expertise gained over years of consistent music practice fine-tunes the human auditory system in a comprehensive fashion, strengthening neurobiological and cognitive underpinnings of both music and speech processing. In this review we argue not only that common neural mechanisms for speech and music exist, but that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. Of specific interest is the potential for music training to bolster neural mechanisms that undergird language-related skills, such as reading and hearing speech in background noise, which are critical to academic progress, emotional health, and vocational success. PMID:22993456

  7. The effect of fMRI task combinations on determining the hemispheric dominance of language functions.

    PubMed

    Niskanen, Eini; Könönen, Mervi; Villberg, Ville; Nissi, Mikko; Ranta-Aho, Perttu; Säisänen, Laura; Karjalainen, Pasi; Aikiä, Marja; Kälviäinen, Reetta; Mervaala, Esa; Vanninen, Ritva

    2012-04-01

    The purpose of this study was to establish the most suitable combination of functional magnetic resonance imaging (fMRI) language tasks for clinical use in determining language dominance and to define the variability in laterality index (LI) and activation power between different combinations of language tasks. Activation patterns of different fMRI analyses of five language tasks (word generation, responsive naming, letter task, sentence comprehension, and word pair) were defined for 20 healthy volunteers (16 right-handed). LIs and sums of T values were calculated for each task separately and for four combinations of tasks in predefined regions of interest. Variability in terms of activation power and lateralization was defined in each analysis. In addition, visual assessment of the lateralization of language functions, based on the individual fMRI activation maps, was conducted by an experienced neuroradiologist. A combination analysis of word generation, responsive naming, and sentence comprehension was the most suitable in terms of activation power, robustness to detect essential language areas, and scanning time. In general, combination analyses of the tasks provided higher overall activation levels than single tasks and reduced the number of outlier voxels disturbing the calculation of LI. A combination of auditory and visually presented tasks that activate different aspects of language functions with sufficient activation power may be a useful task battery for determining language dominance in patients.
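    The laterality index referred to in this record is conventionally computed from left- and right-hemisphere activation measures in homologous regions of interest, such as the sums of T values the study describes. A minimal sketch of that convention follows; the activation values are hypothetical and the study's specific thresholding and ROI definitions are not reproduced:

    ```python
    def laterality_index(left_activation: float, right_activation: float) -> float:
        """Conventional fMRI laterality index LI = (L - R) / (L + R).

        Returns +1 for fully left-lateralized activation, -1 for fully
        right-lateralized activation, and 0 for perfectly bilateral activation.
        """
        total = left_activation + right_activation
        if total == 0:
            raise ValueError("no activation in either hemisphere")
        return (left_activation - right_activation) / total

    # Hypothetical summed T values in left vs right language ROIs:
    li = laterality_index(180.0, 60.0)  # left-dominant (LI = 0.5)
    ```

    Values above roughly +0.2 are often read as left dominance and below -0.2 as right dominance, although cutoffs vary across studies; outlier voxels inflate one hemisphere's sum, which is why the combination analyses described above help stabilize the index.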

  8. Contextual Constraint Treatment for coarse coding deficit in adults with right hemisphere brain damage: Generalization to narrative discourse comprehension

    PubMed Central

    Blake, Margaret Lehman; Tompkins, Connie A.; Scharp, Victoria L.; Meigh, Kimberly M.; Wambaugh, Julie

    2014-01-01

    Coarse coding is the activation of broad semantic fields that can include multiple word meanings and a variety of features, including those peripheral to a word’s core meaning. It is a partially domain-general process related to general discourse comprehension and contributes to both literal and non-literal language processing. Adults with damage to the right cerebral hemisphere (RHD) and a coarse coding deficit are particularly slow to activate features of words that are relatively distant or peripheral. This manuscript reports a pre-efficacy study of Contextual Constraint Treatment (CCT), a novel, implicit treatment designed to increase the efficiency of coarse coding with the goal of improving narrative comprehension and other language performance that relies on coarse coding. Participants were four adults with RHD. The study used a single-subject controlled experimental design across subjects and behaviors. The treatment involves pre-stimulation, using a hierarchy of strong- and moderately-biased contexts, to prime the intended distantly-related features of critical stimulus words. Three of the four participants exhibited gains in auditory narrative discourse comprehension, the primary outcome measure. All participants exhibited generalization to untreated items. No strong generalization to processing nonliteral language was evident. The results indicate that CCT yields both improved efficiency of the coarse coding process and generalization to narrative comprehension. PMID:24983133

  9. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Music and language: relations and disconnections.

    PubMed

    Kraus, Nina; Slater, Jessica

    2015-01-01

    Music and language provide an important context in which to understand the human auditory system. While they perform distinct and complementary communicative functions, music and language are both rooted in the human desire to connect with others. Since sensory function is ultimately shaped by what is biologically important to the organism, the human urge to communicate has been a powerful driving force in both the evolution of auditory function and the ways in which it can be changed by experience within an individual lifetime. This chapter emphasizes the highly interactive nature of the auditory system as well as the depth of its integration with other sensory and cognitive systems. From the origins of music and language to the effects of auditory expertise on the neural encoding of sound, we consider key themes in auditory processing, learning, and plasticity. We emphasize the unique role of the auditory system as the temporal processing "expert" in the brain, and explore relationships between communication and cognition. We demonstrate how experience with music and language can have a significant impact on underlying neural function, and that auditory expertise strengthens some of the very same aspects of sound encoding that are deficient in impaired populations. © 2015 Elsevier B.V. All rights reserved.

  11. Working memory, short-term memory and reading proficiency in school-age children with cochlear implants.

    PubMed

    Bharadwaj, Sneha V; Maricle, Denise; Green, Laura; Allman, Tamby

    2015-10-01

    The objective of the study was to examine short-term memory and working memory through both visual and auditory tasks in school-age children with cochlear implants. The relationships between performance on these cognitive skills and reading and language outcomes were examined in these children. Ten children between the ages of 7 and 11 years with early-onset bilateral severe-profound hearing loss participated in the study. Auditory and visual short-term memory, auditory and visual working memory subtests, and verbal knowledge measures were assessed using the Woodcock Johnson III Tests of Cognitive Abilities, the Wechsler Intelligence Scale for Children-IV Integrated, and the Kaufman Assessment Battery for Children II. Reading outcomes were assessed using the Woodcock Reading Mastery Test III. Performance on visual short-term memory and visual working memory measures in children with cochlear implants was within the average range when compared to the normative mean. However, auditory short-term memory and auditory working memory measures were below average when compared to the normative mean. Performance was also below average on all verbal knowledge measures. Regarding reading outcomes, children with cochlear implants scored below average for listening and passage comprehension tasks, and these measures were positively correlated with visual short-term memory, visual working memory, and auditory short-term memory. Performance on auditory working memory subtests was not related to reading or language outcomes. The children with cochlear implants in this study demonstrated better performance in visual (spatial) working memory and short-term memory skills than in auditory working memory and auditory short-term memory skills. Significant positive relationships were found between visual working memory and reading outcomes. The results of the study provide support for the idea that working memory capacity is modality specific in children with hearing loss. 
Based on these findings, reading instruction that capitalizes on the strengths in visual short-term memory and working memory is suggested for young children with early-onset hearing loss. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  13. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. The aim of this study was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in the visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing, while the other part of this region processed both lexical and nonlexical symbols.

  14. Reproduction of auditory and visual standards in monochannel cochlear implant users.

    PubMed

    Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna

    2004-01-01

    The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modalities. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users who displayed relatively good auditory comprehension did not differ from those of normally hearing subjects in either modality. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s) compared to both patients with good comprehension and controls. For the visual modality, the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension at the level of words and phrases. We postulate that deficits in the time reproduction of short standards may be one possible reason for poor speech understanding in monochannel CI users.

  15. Robust Resilience of the Frontotemporal Syntax System to Aging

    PubMed Central

    Samu, Dávid; Davis, Simon W.; Geerligs, Linda; Mustafa, Abdur; Tyler, Lorraine K.

    2016-01-01

    Brain function is thought to become less specialized with age. However, this view is largely based on findings of increased activation during tasks that fail to separate task-related processes (e.g., attention, decision making) from the cognitive process under examination. Here we take a systems-level approach to separate processes specific to language comprehension from those related to general task demands and to examine age differences in functional connectivity both within and between those systems. A large population-based sample (N = 111; 22–87 years) from the Cambridge Centre for Aging and Neuroscience (Cam-CAN) was scanned using functional MRI during two versions of an experiment: a natural listening version in which participants simply listened to spoken sentences and an explicit task version in which they rated the acceptability of the same sentences. Independent components analysis across the combined data from both versions showed that although task-free language comprehension activates only the auditory and frontotemporal (FTN) syntax networks, performing a simple task with the same sentences recruits several additional networks. Remarkably, functionality of the critical FTN is maintained across age groups, showing no difference in within-network connectivity or responsivity to syntactic processing demands despite gray matter loss and reduced connectivity to task-related networks. We found no evidence for reduced specialization or compensation with age. Overt task performance was maintained across the lifespan and performance in older, but not younger, adults related to crystallized knowledge, suggesting that decreased between-network connectivity may be compensated for by older adults' richer knowledge base. SIGNIFICANCE STATEMENT Understanding spoken language requires the rapid integration of information at many different levels of analysis. Given the complexity and speed of this process, it is remarkably well preserved with age. 
Although previous work claims that this preserved functionality is due to compensatory activation of regions outside the frontotemporal language network, we use a novel systems-level approach to show that these “compensatory” activations simply reflect age differences in response to experimental task demands. Natural, task-free language comprehension solely recruits auditory and frontotemporal networks, the latter of which is similarly responsive to language-processing demands across the lifespan. These findings challenge the conventional approach to neurocognitive aging by showing that the neural underpinnings of a given cognitive function depend on how you test it. PMID:27170120

  16. Auditory processing deficits in growth restricted fetuses affect later language development.

    PubMed

    Kisilevsky, Barbara S; Davies, Gregory A L

    2007-01-01

    An increased risk for language deficits in infants born growth restricted has been reported in follow-up studies for more than 20 years, suggesting a relation between fetal auditory system development and later language learning. Work with animal models indicates that there are at least two ways in which growth restriction could affect the development of auditory perception in human fetuses: a delay in myelination or conduction and an increase in sensorineural threshold. Systematic study of auditory function in growth restricted human fetuses has not been reported. However, results of studies of low-risk fetuses later delivered as healthy full-term infants demonstrate that, by late gestation, the fetus can hear, sound properties modulate behavior, and sensory information is available from both inside (e.g., maternal vascular sounds) and outside (e.g., noise, voices, music) the maternal body. These data provide substantive evidence that the auditory system is functioning and that environmental sounds are available for shaping neural networks and laying the foundation for language acquisition before birth. We hypothesize that fetal growth restriction affects auditory system development, resulting in atypical auditory information processing in growth restricted fetuses compared to healthy, appropriately-grown-for-gestational-age fetuses. Speech perception, which lays the foundation for later language competence, will therefore differ between growth restricted and normally grown fetuses and be associated with later language abilities.

  17. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    PubMed

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels, and discrimination of consonants for the right ear, and showed a greater left-ear advantage for vowels, indicating undeveloped language laterality.

  18. Immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention: a mismatch negativity study.

    PubMed

    Li, X; Yang, Y; Ren, G

    2009-06-16

    Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to investigate whether prosodic information from speech can likewise be immediately integrated into a visual scene context, focusing on the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials comprised pairs of Chinese spoken sentences and pictures. In the audiovisual condition, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory condition, the spoken sentences were presented without pictures. The deviants evoked mismatch responses in both the audiovisual and purely auditory conditions; the mismatch negativity in the purely auditory condition peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual condition. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.

  19. The Impacts of Language Background and Language-Related Disorders in Auditory Processing Assessment

    ERIC Educational Resources Information Center

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Rosen, Stuart

    2013-01-01

    Purpose: To examine the impact of language background and language-related disorders (LRDs--dyslexia and/or language impairment) on performance in English speech and nonspeech tests of auditory processing (AP) commonly used in the clinic. Method: A clinical database concerning 133 multilingual children (mostly with English as an additional…

  20. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.

    PubMed

    Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R

    2018-02-14

    Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent, or AV incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition). Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.

  1. The language profile of Posterior Cortical Atrophy

    PubMed Central

    Crutch, Sebastian J.; Lehmann, Manja; Warren, Jason D.; Rohrer, Jonathan D.

    2015-01-01

    Background Posterior Cortical Atrophy (PCA) is typically considered to be a visual syndrome, primarily characterised by progressive impairment of visuoperceptual and visuospatial skills. However, patients commonly describe early difficulties with word retrieval. This paper details the first systematic analysis of linguistic function in PCA. Characterising and quantifying the aphasia associated with PCA is important for clarifying diagnostic and selection criteria for clinical and research studies. Methods Fifteen patients with PCA, 7 patients with logopenic/phonological aphasia (LPA) and 18 age-matched healthy participants completed a detailed battery of linguistic tests evaluating auditory input processing, repetition and working memory, lexical and grammatical comprehension, single word retrieval and fluency, and spontaneous speech. Results Relative to healthy controls, PCA patients exhibited language impairments across all the domains examined, but with anomia, reduced phonemic fluency and slowed speech rate the most prominent deficits. PCA performance most closely resembled that of LPA patients on tests of auditory input processing, repetition and digit span, but was relatively stronger on tasks of comprehension and spontaneous speech. Conclusions The study demonstrates that in addition to the well-reported degradation of vision, literacy and numeracy, PCA is characterised by a progressive oral language dysfunction with prominent word retrieval difficulties. Overlap in the linguistic profiles of PCA and LPA, which are both most commonly caused by Alzheimer’s disease, further emphasises the notion of a phenotypic continuum between typical and atypical manifestations of the disease. Clarifying the boundaries between AD phenotypes has important implications for diagnosis, clinical trial recruitment and investigations into biological factors driving phenotypic heterogeneity in AD. 
Rehabilitation strategies to ameliorate the phonological deficit in PCA are required. PMID:23138762

  2. Auditory sensory memory and language abilities in former late talkers: a mismatch negativity study.

    PubMed

    Grossheinrich, Nicola; Kademann, Stefanie; Bruder, Jennifer; Bartling, Juergen; Von Suchodoletz, Waldemar

    2010-09-01

    The present study investigated whether (a) a reduced duration of auditory sensory memory is found in late talking children and (b) whether deficits of sensory memory are linked to persistent difficulties in language acquisition. Former late talkers and children without delayed language development were examined at the age of 4 years and 7 months using mismatch negativity (MMN) with interstimulus intervals (ISIs) of 500 ms and 2000 ms. Additionally, short-term memory, language skills, and nonverbal intelligence were assessed. MMN mean amplitude was reduced for the ISI of 2000 ms in former late talking children both with and without persistent language deficits. In summary, our findings suggest that late talkers are characterized by a reduced duration of auditory sensory memory. However, deficits in auditory sensory memory are not sufficient for persistent language difficulties and may be compensated for by some children.
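
    For readers unfamiliar with the measure, the MMN is simply the deviant-minus-standard difference wave of the averaged event-related potentials, summarized as a mean amplitude in a post-stimulus latency window. The following is a minimal, hypothetical sketch on synthetic single-trial data; the function name, sampling rate, and latency window are illustrative assumptions, not the study's pipeline:

    ```python
    import numpy as np

    def mmn_amplitude(standard, deviant, times, window=(0.1, 0.25)):
        """Mismatch negativity: deviant-minus-standard ERP difference wave,
        summarized as its mean amplitude in a post-stimulus latency window."""
        diff_wave = deviant.mean(axis=0) - standard.mean(axis=0)  # average over trials
        mask = (times >= window[0]) & (times <= window[1])
        return diff_wave, diff_wave[mask].mean()

    rng = np.random.default_rng(1)
    fs = 250
    times = np.arange(0, 0.5, 1 / fs)  # 0-500 ms epoch
    standard = rng.standard_normal((200, times.size))
    # Deviant trials carry an extra negative deflection peaking near 150 ms
    deflection = -2.0 * np.exp(-((times - 0.15) ** 2) / (2 * 0.03**2))
    deviant = rng.standard_normal((100, times.size)) + deflection
    diff_wave, amp = mmn_amplitude(standard, deviant, times)
    print(amp)  # clearly negative: the simulated MMN
    ```

    A reduced (less negative) mean amplitude at the longer ISI is the pattern the study interprets as a shortened duration of auditory sensory memory.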

  3. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

    Early hearing deprivation can affect the development of auditory, language, and vision abilities: insufficient or absent stimulation of the auditory cortex during sensitive periods of plasticity impairs the development of hearing, language, and vision function. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal hearing subjects were recruited. The amplitude of low frequency fluctuations (ALFF) and regional homogeneity (ReHo) of the auditory, language, and vision related brain areas were compared between deaf infants and normal subjects. Compared with normal hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex, while increased ALFF and ReHo were observed in vision related cortex, suggesting that hearing and language function were impaired and vision function was enhanced owing to the loss of hearing. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deafness duration in infants with CSSHL, whereas ALFF of right BA39 was positively correlated with deafness duration. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and vision processing function is affected by congenital severe sensorineural hearing loss before 4 years of age.
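
    ALFF, named but not defined above, is conventionally the amplitude of a voxel's BOLD spectrum within the slow 0.01-0.08 Hz band. A hypothetical single-voxel sketch on synthetic data follows (the band, TR, and scaling are illustrative assumptions; ReHo, by contrast, is Kendall's coefficient of concordance over a voxel's neighborhood and is not shown):

    ```python
    import numpy as np

    def alff(bold, tr, band=(0.01, 0.08)):
        """ALFF of one BOLD time series: summed spectral amplitude inside
        the low-frequency band, after removing the mean."""
        bold = bold - bold.mean()
        freqs = np.fft.rfftfreq(bold.size, d=tr)
        amp = 2 * np.abs(np.fft.rfft(bold)) / bold.size  # amplitude spectrum
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return amp[mask].sum()

    tr = 2.0                             # one volume every 2 s
    t = np.arange(240) * tr              # 8-minute scan
    slow = np.sin(2 * np.pi * 0.05 * t)  # oscillation inside the ALFF band
    fast = np.sin(2 * np.pi * 0.20 * t)  # above the band (Nyquist is 0.25 Hz)
    print(alff(slow, tr) > alff(fast, tr))  # True: only the slow signal scores high
    ```

    Group differences in such per-voxel values, computed over auditory, language, and visual regions, are what the study compares between deaf and hearing infants.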

  4. Gamma phase locking modulated by phonological contrast during auditory comprehension in reading disability.

    PubMed

    Han, Jooman; Mody, Maria; Ahlfors, Seppo P

    2012-10-03

    Children with specific reading impairment may have subtle deficits in speech perception related to difficulties in phonological processing. The aim of this study was to examine brain oscillatory activity related to phonological processing in the context of auditory sentence comprehension using magnetoencephalography to better understand these deficits. Good and poor readers, 16-18 years of age, were tested on speech perception of sentence-terminal incongruent words that were phonologically manipulated to be similar or dissimilar to corresponding congruent target words. Functional coupling between regions was measured using phase-locking values (PLVs). Gamma-band (30-45 Hz) PLV between auditory cortex and superior temporal sulcus in the right hemisphere was differentially modulated in the two groups by the degree of phonological contrast between the congruent and the incongruent target words in the latency range associated with semantic processing. Specifically, the PLV was larger in the phonologically similar than in the phonologically dissimilar condition in the good readers. This pattern was reversed in the poor readers, whose lower PLV in the phonologically similar condition may be indicative of the impaired phonological coding abilities of the group, and consequent vulnerability under perceptually demanding conditions. Overall, the results support the role of gamma oscillations in spoken language processing.
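
    The phase-locking value can be made concrete: a common recipe (not necessarily the authors' exact pipeline) is to band-pass each signal, extract instantaneous phase with the Hilbert transform, and take the magnitude of the averaged unit phasors of the phase difference. The sketch below uses synthetic gamma-band signals and illustrative parameter choices:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def phase_locking_value(x, y, fs, band=(30.0, 45.0)):
        """PLV = |mean(exp(i*(phi_x - phi_y)))|: 1 means a perfectly
        consistent phase lag between the two signals, 0 means none."""
        b, a = butter(4, band, btype="band", fs=fs)
        phi_x = np.angle(hilbert(filtfilt(b, a, x)))
        phi_y = np.angle(hilbert(filtfilt(b, a, y)))
        return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

    fs = 500
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 40 * t)        # 40 Hz "source" signal
    y = np.sin(2 * np.pi * 40 * t + 0.8)  # same rhythm, fixed phase lag
    noise = np.random.default_rng(0).standard_normal(t.size)
    plv_locked = phase_locking_value(x, y, fs)     # near 1
    plv_noise = phase_locking_value(x, noise, fs)  # much lower
    print(plv_locked > plv_noise)  # True
    ```

    In the study's terms, it is the modulation of such gamma-band PLVs between auditory cortex and superior temporal sulcus sources by phonological contrast that distinguished good from poor readers.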

  5. Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm.

    PubMed

    Limb, Charles J; Molloy, Anne T; Jiradejvong, Patpong; Braun, Allen R

    2010-03-01

    Despite the significant advances in language perception for cochlear implant (CI) recipients, music perception continues to be a major challenge for implant-mediated listening. Our understanding of the neural mechanisms that underlie successful implant listening remains limited. To our knowledge, this study represents the first neuroimaging investigation of music perception in CI users, with the hypothesis that CI subjects would demonstrate greater auditory cortical activation than normal hearing controls. H(2) (15)O positron emission tomography (PET) was used here to assess auditory cortical activation patterns in ten postlingually deafened CI patients and ten normal hearing control subjects. Subjects were presented with language, melody, and rhythm tasks during scanning. Our results show significant auditory cortical activation in implant subjects in comparison to control subjects for language, melody, and rhythm. The greatest activity in CI users compared to controls was seen for language tasks, which is thought to reflect both implant and neural specializations for language processing. For musical stimuli, PET scanning revealed significantly greater activation during rhythm perception in CI subjects (compared to control subjects), and the least activation during melody perception, which was the most difficult task for CI users. These results may suggest a possible relationship between auditory performance and degree of auditory cortical activation in implant recipients that deserves further study.

  6. Spoken Language Processing Model: Bridging Auditory and Language Processing to Guide Assessment and Intervention

    ERIC Educational Resources Information Center

    Medwetsky, Larry

    2011-01-01

    Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…

  7. Auditory Technology and Its Impact on Bilingual Deaf Education

    ERIC Educational Resources Information Center

    Mertes, Jennifer

    2015-01-01

    Brain imaging studies suggest that children can simultaneously develop, learn, and use two languages. A visual language, such as American Sign Language (ASL), facilitates development at the earliest possible moments in a child's life. Spoken language development can be delayed due to diagnostic evaluations, device fittings, and auditory skill…

  8. Dyslexia and Specific Language Impairment: The Role of Phonology and Auditory Processing

    ERIC Educational Resources Information Center

    Fraser, Jill; Goswami, Usha; Conti-Ramsden, Gina

    2010-01-01

    We explore potential similarities between developmental dyslexia (specific reading disability [SRD]) and specific language impairment (SLI) in terms of phonological skills, underlying auditory processing abilities, and nonphonological language skills. Children aged 9 to 11 years with reading and/or language difficulties were recruited and compared…

  9. Peeling the Onion of Auditory Processing Disorder: A Language/Curricular-Based Perspective

    ERIC Educational Resources Information Center

    Wallach, Geraldine P.

    2011-01-01

    Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…

  10. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity (-10 dB and -15 dB pink-noise levels). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms) or less, that the image was not helpful when the delay was 8 frames (264 ms) or more, and that in some cases of the largest delay (32 frames) the video image interfered with comprehension.

  11. THE EFFECT OF PLAUSIBILITY ON SENTENCE COMPREHENSION AMONG OLDER ADULTS AND ITS RELATION TO COGNITIVE FUNCTIONS

    PubMed Central

    Yoon, Jungmee; Campanelli, Luca; Goral, Mira; Marton, Klara; Eichorn, Naomi; Obler, Loraine K.

    2016-01-01

    Background/Study Context Older adults show age-related decline in complex-sentence comprehension. This has been attributed to a decrease in cognitive abilities that may support language processing, such as working memory (e.g., Caplan, DeDe, Waters, & Michaud, 2011,Psychology and Aging, 26, 439–450). The authors examined whether older adults have difficulty comprehending semantically implausible sentences and whether specific executive functions contribute to their comprehension performance. Methods Forty-two younger adults (aged 18–35) and 42 older adults (aged 55–75) were tested on two experimental tasks: a multiple negative comprehension task and an information processing battery. Results Both groups, older and younger adults, showed poorer performance for implausible sentences than for plausible sentences; however, no interaction was found between plausibility and age group. A regression analysis revealed that inhibition efficiency, as measured by a task that required resistance to proactive interference, predicted comprehension of implausible sentences in older adults only. Consistent with the compensation hypothesis, the older adults with better inhibition skills showed better comprehension than those with poor inhibition skills. Conclusion The findings suggest that semantic implausibility, along with syntactic complexity, increases linguistic and cognitive processing loads on auditory sentence comprehension. Moreover, the contribution of inhibitory control to the processing of semantic plausibility, particularly among older adults, suggests that the relationship between cognitive ability and language comprehension is strongly influenced by age. PMID:25978447

  12. The effect of plausibility on sentence comprehension among older adults and its relation to cognitive functions.

    PubMed

    Yoon, Jungmee; Campanelli, Luca; Goral, Mira; Marton, Klara; Eichorn, Naomi; Obler, Loraine K

    2015-01-01

    BACKGROUND/STUDY CONTEXT: Older adults show age-related decline in complex-sentence comprehension. This has been attributed to a decrease in cognitive abilities that may support language processing, such as working memory (e.g., Caplan, DeDe, Waters, & Michaud, 2011,Psychology and Aging, 26, 439-450). The authors examined whether older adults have difficulty comprehending semantically implausible sentences and whether specific executive functions contribute to their comprehension performance. Forty-two younger adults (aged 18-35) and 42 older adults (aged 55-75) were tested on two experimental tasks: a multiple negative comprehension task and an information processing battery. Both groups, older and younger adults, showed poorer performance for implausible sentences than for plausible sentences; however, no interaction was found between plausibility and age group. A regression analysis revealed that inhibition efficiency, as measured by a task that required resistance to proactive interference, predicted comprehension of implausible sentences in older adults only. Consistent with the compensation hypothesis, the older adults with better inhibition skills showed better comprehension than those with poor inhibition skills. The findings suggest that semantic implausibility, along with syntactic complexity, increases linguistic and cognitive processing loads on auditory sentence comprehension. Moreover, the contribution of inhibitory control to the processing of semantic plausibility, particularly among older adults, suggests that the relationship between cognitive ability and language comprehension is strongly influenced by age.

  13. Auditory Processing Disorder and Foreign Language Acquisition

    ERIC Educational Resources Information Center

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  14. The impact of hearing loss on language performance in older adults with different stages of cognitive function

    PubMed Central

    Lodeiro-Fernández, Leire; Lorenzo-López, Laura; Maseda, Ana; Núñez-Naveira, Laura; Rodríguez-Villamil, José Luis; Millán-Calenti, José Carlos

    2015-01-01

    Purpose The possible relationship between audiometric hearing thresholds and cognitive performance on language tests was analyzed in a cross-sectional cohort of older adults aged ≥65 years (N=98) with different degrees of cognitive impairment. Materials and methods Participants were distributed into two groups according to Reisberg’s Global Deterioration Scale (GDS): a normal/predementia group (GDS scores 1–3) and a moderate/moderately severe dementia group (GDS scores 4 and 5). Hearing loss (pure-tone audiometry) and receptive and production-based language function (Verbal Fluency Test, Boston Naming Test, and Token Test) were assessed. Results Results showed that the dementia group achieved significantly lower scores than the predementia group in all language tests. A moderate negative correlation between hearing loss and verbal comprehension was observed in the whole sample (r=−0.298; P<0.003) and in the predementia group (r=−0.363; P<0.007). However, no significant relationship between hearing loss and verbal fluency or naming scores was observed, regardless of cognitive impairment. Conclusion In the predementia group, reduced hearing level partially explains comprehension performance but not language production. In the dementia group, hearing loss cannot be considered an explanatory factor for poor receptive and production-based language performance. These results suggest cognitive rather than simply auditory problems underlie the language impairment in the elderly. PMID:25914528

  15. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    ERIC Educational Resources Information Center

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  16. Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine

    2010-01-01

    Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…

  17. Impact of Language on Development of Auditory-Visual Speech Perception

    ERIC Educational Resources Information Center

    Sekiyama, Kaoru; Burnham, Denis

    2008-01-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…

  18. Grammatical Language Impairment and the Specificity of Cognitive Domains: Relations between Auditory and Language Abilities

    ERIC Educational Resources Information Center

    van der Lely, Heather K. J.; Rosen, Stuart; Adlard, Alan

    2004-01-01

    Grammatical-specific language impairment (G-SLI) in children, arguably, provides evidence for the existence of a specialised grammatical sub-system in the brain, necessary for normal language development. Some researchers challenge this, claiming that domain-general, low-level auditory deficits, particular to rapid processing, cause phonological…

  19. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    PubMed

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  20. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    PubMed

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  1. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain

    PubMed Central

    Martínez, I.; Rosa, M.; Arsuaga, J.-L.; Jarabo, P.; Quam, R.; Lorenzo, C.; Gracia, A.; Carretero, J.-M.; de Castro, J.-M. Bermúdez; Carbonell, E.

    2004-01-01

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range. PMID:15213327

  2. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie.

    PubMed

    Hanke, Michael; Baumgartner, Florian J; Ibe, Pierre; Kaule, Falko R; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg

    2014-01-01

    Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset - 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film ("Forrest Gump"). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures - from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized.

  3. Auditory processing and speech perception in children with specific language impairment: relations with oral language and literacy skills.

    PubMed

    Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of children aged 6;3 to 6;8 (years;months) attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. The two normal-reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and made a unique predictive contribution to reading growth in grade 3, even after controlling for reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicate that speech perception has a direct impact on reading development, not only an indirect one through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
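
The "unique predictive contribution ... after controlling" claim above follows standard hierarchical-regression logic: fit a baseline model with the control variables, add the predictor of interest, and inspect the increase in R². A minimal sketch with synthetic data (the variable names and numbers are invented for illustration, not taken from the study):

```python
# Hedged sketch of hierarchical regression: does a predictor add unique
# variance (Delta R^2) beyond a control variable? Synthetic data only.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (X includes an intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
n = 40
control = rng.normal(size=n)                           # e.g. grade-1 reading level
speech = 0.5 * control + rng.normal(size=n)            # speech perception score
y = control + 0.8 * speech + 0.3 * rng.normal(size=n)  # grade-3 reading outcome

ones = np.ones((n, 1))
X_base = np.hstack([ones, control[:, None]])
X_full = np.hstack([ones, control[:, None], speech[:, None]])
delta_r2 = r_squared(X_full, y) - r_squared(X_base, y)
print(round(delta_r2, 3))  # positive: the predictor explains unique variance
```

A positive ΔR² is what licenses the phrase "unique predictive contribution" in the abstract; in practice one would also test its significance with an F-test.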

  4. Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children

    ERIC Educational Resources Information Center

    Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard

    2009-01-01

    Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…

  5. Visual and Auditory Input in Second-Language Speech Processing

    ERIC Educational Resources Information Center

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  6. Assessment of short-term memory in Arabic speaking children with specific language impairment.

    PubMed

    Kaddah, F A; Shoeib, R M; Mahmoud, H E

    2010-12-15

    Children with Specific Language Impairment (SLI) may have some form of memory disorder that could compound their linguistic impairment. This study assessed short-term memory skills in Arabic-speaking children with either Expressive Language Impairment (ELI) or Receptive/Expressive Language Impairment (R/ELI), in comparison to controls, in order to estimate the nature and extent of any specific deficits that could explain the different prognostic results of language intervention. Eighteen children were included in each group. Receptive, expressive and total language quotients were calculated using the Arabic language test. Auditory and visual short-term memory were assessed using the Arabic version of the Illinois Test of Psycholinguistic Abilities. Both SLI groups showed significantly poorer linguistic abilities and poorer auditory and visual short-term memory than normal children. The R/ELI group performed worse than the ELI group on all measured parameters. Strong associations were found between most auditory and visual short-term memory tasks and linguistic abilities. The results of this study highlight specific deficits of auditory and visual short-term memory in both SLI groups, more prominent in the R/ELI group. Moreover, the strong association between auditory and visual short-term memory and language abilities in children with SLI must be taken into account when planning intervention programs for these children.

  7. Occupational Styrene Exposure on Auditory Function Among Adults: A Systematic Review of Selected Workers.

    PubMed

    Pleban, Francis T; Oketope, Olutosin; Shrestha, Laxmi

    2017-12-01

    A review was conducted to examine the adverse effects of styrene, styrene mixtures, or styrene combined with noise on the auditory system in humans employed in occupational settings. The search included peer-reviewed articles published in English involving human volunteers over a 25-year period (1990-2015). Studies included peer-reviewed journal articles, case-control studies, and case reports; animal studies were excluded. An initial search identified 40 studies; after screening for inclusion, 13 studies were retrieved for full-text examination and review. As a whole, the results ranged from no to mild associations between styrene exposure and auditory dysfunction, with relatively small sample sizes. However, four studies investigating styrene together with other organic solvent mixtures and noise suggested that combined exposure to styrene/organic-solvent mixtures and noise may be more ototoxic than exposure to noise alone. There is little literature examining the effect of styrene on auditory function in humans. Nonetheless, the findings suggest that public health professionals and policy makers should be made aware of future research needs pertaining to hearing impairment and ototoxicity from styrene. It is recommended that chronically styrene-exposed individuals be routinely evaluated with a comprehensive audiological test battery to detect early signs of auditory dysfunction.

  8. In-Vivo Animation of Auditory-Language-Induced Gamma-Oscillations in Children with Intractable Focal Epilepsy

    PubMed Central

    Brown, Erik C.; Rothermel, Robert; Nishida, Masaaki; Juhász, Csaba; Muzik, Otto; Hoechstetter, Karsten; Sood, Sandeep; Chugani, Harry T.; Asano, Eishi

    2008-01-01

    We determined whether high-frequency gamma-oscillations (50- to 150-Hz) were induced by simple auditory communication over the language network areas in children with focal epilepsy. Four children (ages: 7, 9, 10 and 16 years) with intractable left-hemispheric focal epilepsy underwent extraoperative electrocorticography (ECoG) as well as language mapping using neurostimulation and auditory-language-induced gamma-oscillations on ECoG. The audible communication was recorded concurrently and integrated with the ECoG recording to allow accurate time-locking in the ECoG analysis. In three children, who successfully completed the auditory-language task, high-frequency gamma-augmentation sequentially involved: i) the posterior superior temporal gyrus when listening to the question, ii) the posterior lateral temporal region and the posterior frontal region in the time interval between question completion and the patient’s vocalization, and iii) the pre- and post-central gyri immediately preceding and during the patient’s vocalization. The youngest child, who had attention deficits, failed to cooperate during the auditory-language task, and high-frequency gamma-augmentation was noted only in the posterior superior temporal gyrus when audible questions were given. The size of the language areas suggested by statistically significant high-frequency gamma-augmentation was larger than that defined by neurostimulation. The present method can provide in-vivo imaging of electrophysiological activities over the language network areas during language processes. Further studies are warranted to determine whether recording of language-induced gamma-oscillations can supplement language mapping using neurostimulation in the presurgical evaluation of children with focal epilepsy. PMID:18455440

  9. Brain Bases of Morphological Processing in Young Children

    PubMed Central

    Arredondo, Maria M.; Ip, Ka I; Hsu, Lucy Shih-Ju; Tardif, Twila; Kovelman, Ioulia

    2017-01-01

    How does the developing brain support the transition from spoken language to print? Two spoken language abilities form the initial base of child literacy across languages: knowledge of language sounds (phonology) and knowledge of the smallest units that carry meaning (morphology). While phonology has received much attention from the field, the brain mechanisms that support morphological competence for learning to read remain largely unknown. In the present study, young English-speaking children completed an auditory morphological awareness task behaviorally (n = 69, ages 6–12) and in fMRI (n = 16). The data revealed two findings: First, children with better morphological abilities showed greater activation in left temporo-parietal regions previously thought to be important for supporting phonological reading skills, suggesting that this region supports multiple language abilities for successful reading acquisition. Second, children showed activation in left frontal regions previously found active in young Chinese readers, suggesting morphological processes for reading acquisition might be similar across languages. These findings offer new insights for developing a comprehensive model of how spoken language abilities support children’s reading acquisition across languages. PMID:25930011

  10. Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language

    ERIC Educational Resources Information Center

    Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter

    2011-01-01

    Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…

  11. Differences in neural activation between preterm and full term born adolescents on a sentence comprehension task: implications for educational accommodations.

    PubMed

    Barde, Laura H F; Yeatman, Jason D; Lee, Eliana S; Glover, Gary; Feldman, Heidi M

    2012-02-15

    Adolescent survivors of preterm birth experience persistent functional problems that negatively impact academic outcomes, even when standardized measures of cognition and language suggest normal ability. In this fMRI study, we compared the neural activation supporting auditory sentence comprehension in two groups of adolescents (ages 9-16 years); sentences varied in length and syntactic difficulty. Preterms (n=18, mean gestational age 28.8 weeks) and full terms (n=14) had scores on verbal IQ, receptive vocabulary, and receptive language tests that were within or above normal limits and similar between groups. In both the early and late phases of the trial we found group × length interactions; in the late phase we also found a group × syntactic difficulty interaction. Post hoc tests revealed that preterms demonstrated significant activation in the left and right middle frontal gyri as syntactic difficulty increased. ANCOVA showed that the interactions could not be attributed to differences in age, receptive language skill, or reaction time. Results are consistent with the hypothesis that preterm birth modulates brain-behavior relations in sentence comprehension as task demands increase. We suggest preterms' differences in neural processing may indicate a need for educational accommodations, even when formal test scores indicate normal academic achievement. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Noise on, voicing off: Speech perception deficits in children with specific language impairment.

    PubMed

    Ziegler, Johannes C; Pech-Georgel, Catherine; George, Florence; Lorenzi, Christian

    2011-11-01

    Speech perception of four phonetic categories (voicing, place, manner, and nasality) was investigated in children with specific language impairment (SLI) (n=20) and age-matched controls (n=19) in quiet and various noise conditions using an AXB two-alternative forced-choice paradigm. Children with SLI exhibited robust speech perception deficits in silence, stationary noise, and amplitude-modulated noise. Comparable deficits were obtained for fast, intermediate, and slow modulation rates, and this speaks against the various temporal processing accounts of SLI. Children with SLI exhibited normal "masking release" effects (i.e., better performance in fluctuating noise than in stationary noise), again suggesting relatively spared spectral and temporal auditory resolution. In terms of phonetic categories, voicing was more affected than place, manner, or nasality. The specific nature of this voicing deficit is hard to explain with general processing impairments in attention or memory. Finally, speech perception in noise correlated with an oral language component but not with either a memory or IQ component, and it accounted for unique variance beyond IQ and low-level auditory perception. In sum, poor speech perception seems to be one of the primary deficits in children with SLI that might explain poor phonological development, impaired word production, and poor word comprehension. Copyright © 2011 Elsevier Inc. All rights reserved.
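In the AXB two-alternative forced-choice paradigm mentioned above, each trial presents three stimuli; the middle one (X) matches either the first (A) or the last (B), and the listener reports which. A hedged sketch of how such a block might be scored (the trial structure and syllable labels are invented for illustration, not the study's materials):

```python
# Illustrative scorer for an AXB two-alternative forced-choice block.
def score_axb(trials):
    """Percent correct over AXB trials.

    Each trial is (a, x, b, response), where response is 'A' or 'B' and
    x is identical to exactly one of a or b.
    """
    correct = 0
    for a, x, b, response in trials:
        answer = 'A' if x == a else 'B'
        correct += (response == answer)
    return 100.0 * correct / len(trials)

trials = [
    ('ba', 'ba', 'pa', 'A'),   # voicing contrast, correct
    ('da', 'ga', 'ga', 'B'),   # place contrast, correct
    ('ma', 'ba', 'ba', 'A'),   # nasality contrast, incorrect
    ('sa', 'sa', 'sha', 'A'),  # manner contrast, correct
]
print(score_axb(trials))  # 3 of 4 correct -> 75.0
```

Percent-correct scores per phonetic category (voicing, place, manner, nasality) and per noise condition are what allow the category-specific comparisons reported in the abstract.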

  13. The Effect of Noise on the Relationship between Auditory Working Memory and Comprehension in School-Age Children

    ERIC Educational Resources Information Center

    Sullivan, Jessica R.; Osman, Homira; Schafer, Erin C.

    2015-01-01

    Purpose: The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Method: Children with normal hearing between the ages…

  14. The neural basis for writing from dictation in the temporoparietal cortex.

    PubMed

    Roux, Franck-Emmanuel; Durand, Jean-Baptiste; Réhault, Emilie; Planton, Samuel; Draper, Louisa; Démonet, Jean-François

    2014-01-01

    Cortical electrical stimulation mapping was used to study the neural substrates of writing in the temporoparietal cortex. We identified the sites involved in oral language (sentence reading and naming) and writing from dictation, in order to spare these areas during removal of brain tumours in 30 patients (23 in the left, and 7 in the right hemisphere). Electrostimulation of the cortex impaired writing ability in 62 restricted cortical areas (0.25 cm²). These were found in the left temporoparietal lobes and were mostly located along the superior temporal gyrus (Brodmann's areas 22 and 42). Stimulation of the right temporoparietal lobes in right-handed patients produced no writing impairments. However, there was high variability of location between individuals. Stimulation resulted in combined symptoms (affecting oral language and writing) in fourteen patients, whereas in eight other patients stimulation induced pure agraphia symptoms, with no oral language disturbance, in twelve of the identified areas. Each detected area affected writing in a different way. We detected the various stages of the auditory-to-motor pathway of writing from dictation: through comprehension of the dictated sentences (word deafness areas), lexico-semantic retrieval, or phonologic processing. In group analysis, barycentres of all the different types of writing interference reveal a hierarchical functional organization along the superior temporal gyrus, from initial word recognition to lexico-semantic and phonologic processes along the ventral and dorsal comprehension pathways, supporting the previously described auditory-to-motor process. The left posterior Sylvian region supports different aspects of writing function that are extremely specialized and localized, sometimes being segregated in a way that could account for the occurrence of the pure agraphia that has long been described in cases of damage to this region. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Rate of Language Growth in Children with Hearing Loss in an Auditory-Verbal Early Intervention Program

    ERIC Educational Resources Information Center

    Jackson, Carla Wood; Schatschneider, Christopher

    2013-01-01

    This longitudinal study explored the rate of language growth of children in an early intervention program providing auditory-verbal therapy. A retrospective investigation, the study applied a linear growth model to estimate a mean growth curve and the extent of individual variation in language performance on the Preschool Language Scale, 4th ed.…

  16. Speech comprehension aided by multiple modalities: behavioural and neural interactions

    PubMed Central

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.

    2014-01-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262

  17. Speech comprehension aided by multiple modalities: behavioural and neural interactions.

    PubMed

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K

    2012-04-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. A longitudinal study of auditory evoked field and language development in young children.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Ueno, Sanae; Shitamichi, Kiyomi; Remijn, Gerard B; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Furutani, Naoki; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio

    2014-11-01

    The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Does long term use of piracetam improve speech disturbances due to ischemic cerebrovascular diseases?

    PubMed

    Güngör, Levent; Terzi, Murat; Onar, Musa Kazim

    2011-04-01

    Aphasia causes significant disability and handicap among stroke survivors. Language therapy is recommended for aphasic patients, but is not always available. Piracetam, an old drug with novel properties, has been shown to have mild beneficial effects on post-stroke aphasia. In the current study, we investigated the effects of 6 months of treatment with piracetam on aphasia following stroke. Thirty patients with first-ever ischemic strokes and related aphasia were enrolled in the study. The scores for the National Institutes of Health Stroke Scale (NIHSS), Barthel Index (BI), modified Rankin Scale (mRS), and Gülhane Aphasia Test were recorded. The patients were randomly scheduled to receive either 4.8 g piracetam daily or placebo treatment for 6 months. At the end of 24 weeks, clinical assessments and aphasia tests were repeated. The level of improvement in the clinical parameters and aphasia scores was compared between the two groups. All patients had large lesions and severe aphasia. No significant difference was observed between the piracetam and placebo groups regarding the improvements in the NIHSS, BI and mRS scores at the end of the treatment. The improvements observed in spontaneous speech, reading fluency, auditory comprehension, reading comprehension, repetition, and naming did not differ significantly between the piracetam and placebo groups, except for auditory comprehension, for which the difference reached significance in favor of piracetam at the end of the treatment. Piracetam is well-tolerated in patients with post-stroke aphasia. Piracetam taken orally in a daily dose of 4.8 g for 6 months has no clear beneficial effect on post-stroke language disorders. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.

    PubMed

    Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly

    2015-12-01

    Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing, including impaired spectrotemporal processing and enhanced pitch perception, may contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology has benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 KO mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.

  1. Assessing Auditory Processing Abilities in Typically Developing School-Aged Children.

    PubMed

    McDermott, Erin E; Smart, Jennifer L; Boiano, Julie A; Bragg, Lisa E; Colon, Tiffany N; Hanson, Elizabeth M; Emanuel, Diana C; Kelly, Andrea S

    2016-02-01

    Large discrepancies exist in the literature regarding definition, diagnostic criteria, and appropriate assessment for auditory processing disorder (APD). Therefore, a battery of tests with normative data is needed. The purpose of this study is to collect normative data on a variety of tests for APD on children aged 7-12 yr, and to examine effects of outside factors on test performance. Children aged 7-12 yr with normal hearing, speech and language abilities, cognition, and attention were recruited for participation in this normative data collection. One hundred and forty-seven children were recruited using flyers and word of mouth. Of the participants recruited, 137 children qualified for the study. Participants attended schools located in areas that varied in terms of socioeconomic status, and resided in six different states. Audiological testing included a hearing screening (15 dB HL from 250 to 8000 Hz), word recognition testing, tympanometry, ipsilateral and contralateral reflexes, and transient-evoked otoacoustic emissions. The language, nonverbal IQ, phonological processing, and attention skills of each participant were screened using the Clinical Evaluation of Language Fundamentals-4 Screener, Test of Nonverbal Intelligence, Comprehensive Test of Phonological Processing, and Integrated Visual and Auditory-Continuous Performance Test, respectively. The behavioral APD battery included the following tests: Dichotic Digits Test, Frequency Pattern Test, Duration Pattern Test, Random Gap Detection Test, Compressed and Reverberated Words Test, Auditory Figure Ground (signal-to-noise ratio of +8 and +0), and Listening in Spatialized Noise-Sentences Test. Mean scores and standard deviations of each test were calculated, and analysis of variance tests were used to determine effects of factors such as gender, handedness, and birth history on each test. 
Normative data tables for the test battery were created for the following age groups: 7- and 8-yr-olds (n = 49), 9- and 10-yr-olds (n = 40), and 11- and 12-yr-olds (n = 48). No significant effects were seen for gender or handedness on any of the measures. The data collected in this study are appropriate for use in clinical diagnosis of APD. Use of a low-linguistically loaded core battery with the addition of more language-based tests, when language abilities are known, can provide a well-rounded picture of a child's auditory processing abilities. Screening for language, phonological processing, attention, and cognitive level can provide more information regarding a diagnosis of APD, determine appropriateness of the test battery for the individual child, and may assist with making recommendations or referrals. It is important to use a multidisciplinary approach in the diagnosis and treatment of APD due to the high likelihood of comorbidity with other language, learning, or attention deficits. Although children with other diagnoses may be tested for APD, it is important to establish previously made diagnoses before testing to aid in appropriate test selection and recommendations. American Academy of Audiology.
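
    The normative-table and factor-effect analyses described above can be sketched in a few lines with SciPy. This is a minimal illustration using the study's three age bands and sample sizes; all scores are hypothetical placeholders, not the study's data.

```python
"""Illustrative sketch: per-age-band norms plus a one-way ANOVA,
mirroring the analysis style described above. All scores are
hypothetical placeholders, not the study's data."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Dichotic Digits scores (% correct) for the study's
# three age bands and sample sizes.
groups = {
    "7-8 yr": rng.normal(85, 6, 49),
    "9-10 yr": rng.normal(90, 5, 40),
    "11-12 yr": rng.normal(93, 4, 48),
}

# Normative table: mean and standard deviation per age band.
for band, scores in groups.items():
    print(f"{band}: mean={scores.mean():.1f}, SD={scores.std(ddof=1):.1f}")

# One-way ANOVA across the bands (the study used analysis of variance
# to test factors such as gender, handedness, and birth history).
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F={f_stat:.2f}, p={p_val:.4g}")
```

    In a clinical battery, each test would get its own table of this kind, and a child's raw score would be flagged when it falls more than a chosen number of standard deviations below the band mean.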

  2. Auditory Processing Disorder (For Parents)

    MedlinePlus

    ... or other speech-language difficulties? Are verbal (word) math problems difficult for your child? Is your child ... inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  3. ERP Correlates of Language-Specific Processing of Auditory Pitch Feedback during Self-Vocalization

    ERIC Educational Resources Information Center

    Chen, Zhaocong; Liu, Peng; Wang, Emily Q.; Larson, Charles R.; Huang, Dongfeng; Liu, Hanjun

    2012-01-01

    The present study investigated whether the neural correlates for auditory feedback control of vocal pitch can be shaped by tone language experience. Event-related potentials (P2/N1) were recorded from adult native speakers of Mandarin and Cantonese who heard their voice auditory feedback shifted in pitch by -50, -100, -200, or -500 cents when they…

  4. Auditory access, language access, and implicit sequence learning in deaf children.

    PubMed

    Hall, Matthew L; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2018-05-01

    Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.]. © 2017 John Wiley & Sons Ltd.

  5. School performance and wellbeing of children with CI in different communicative-educational environments.

    PubMed

    Langereis, Margreet; Vermeulen, Anneke

    2015-06-01

    This study aimed to evaluate the long-term effects of CI on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition were analyzed after 60 months of CI use. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception of children with CI in mainstream education enables them to acquire language and educational levels that are comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, a greater proportion of children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. Can You Play with Fire and Not Hurt Yourself? A Comparative Study in Figurative Language Comprehension between Individuals with and without Autism Spectrum Disorder

    PubMed Central

    Chahboun, Sobh; Vulchanov, Valentin; Saldaña, David; Eshuis, Hendrik

    2016-01-01

    Individuals with High functioning autism (HFA) are distinguished by relative preservation of linguistic and cognitive skills. However, problems with pragmatic language skills have been consistently reported across the autistic spectrum, even when structural language is intact. Our main goal was to investigate how highly verbal individuals with autism process figurative language and whether manipulation of the stimuli presentation modality had an impact on the processing. We were interested in the extent to which visual context, e.g., an image corresponding either to the literal meaning or the figurative meaning of the expression may facilitate responses to such expressions. Participants with HFA and their typically developing peers (matched on intelligence and language level) completed a cross-modal sentence-picture matching task for figurative expressions and their target figurative meaning represented in images. We expected that the individuals with autism would have difficulties in appreciating the non-literal nature of idioms and metaphors, despite intact structural language skills. Analyses of accuracy and reaction times showed clearly that the participants with autism performed at a lower level than their typically developing peers. Moreover, the modality in which the stimuli were presented was an important variable in task performance for the more transparent expressions. The individuals with autism displayed higher error rates and greater reaction latencies in the auditory modality compared to the visual stimulus presentation modality, implying more difficulty. Performance differed depending on type of expression. Participants had more difficulty understanding the culturally-based expressions, but not expressions grounded in human experience (biological idioms). 
This research highlights the importance of stimulus presentation modality and that this can lead to differences in figurative language comprehension between typically and atypically developing individuals. The current study also contributes to current debates on the role of structural language in figurative language comprehension in autism. PMID:28036344

  7. Can You Play with Fire and Not Hurt Yourself? A Comparative Study in Figurative Language Comprehension between Individuals with and without Autism Spectrum Disorder.

    PubMed

    Chahboun, Sobh; Vulchanov, Valentin; Saldaña, David; Eshuis, Hendrik; Vulchanova, Mila

    2016-01-01

    Individuals with High functioning autism (HFA) are distinguished by relative preservation of linguistic and cognitive skills. However, problems with pragmatic language skills have been consistently reported across the autistic spectrum, even when structural language is intact. Our main goal was to investigate how highly verbal individuals with autism process figurative language and whether manipulation of the stimuli presentation modality had an impact on the processing. We were interested in the extent to which visual context, e.g., an image corresponding either to the literal meaning or the figurative meaning of the expression may facilitate responses to such expressions. Participants with HFA and their typically developing peers (matched on intelligence and language level) completed a cross-modal sentence-picture matching task for figurative expressions and their target figurative meaning represented in images. We expected that the individuals with autism would have difficulties in appreciating the non-literal nature of idioms and metaphors, despite intact structural language skills. Analyses of accuracy and reaction times showed clearly that the participants with autism performed at a lower level than their typically developing peers. Moreover, the modality in which the stimuli were presented was an important variable in task performance for the more transparent expressions. The individuals with autism displayed higher error rates and greater reaction latencies in the auditory modality compared to the visual stimulus presentation modality, implying more difficulty. Performance differed depending on type of expression. Participants had more difficulty understanding the culturally-based expressions, but not expressions grounded in human experience (biological idioms). 
This research highlights the importance of stimulus presentation modality and that this can lead to differences in figurative language comprehension between typically and atypically developing individuals. The current study also contributes to current debates on the role of structural language in figurative language comprehension in autism.

  8. Central Auditory Processing through the Looking Glass: A Critical Look at Diagnosis and Management.

    ERIC Educational Resources Information Center

    Young, Maxine L.

    1985-01-01

    The article examines the contributions of both audiologists and speech-language pathologists to the diagnosis and management of students with central auditory processing disorders and language impairments. (CL)

  9. Auditory Deprivation Does Not Impair Executive Function, But Language Deprivation Might: Evidence From a Parent-Report Measure in Deaf Native Signing Children

    PubMed Central

    Hall, Matthew L.; Eigsti, Inge-Marie; Bortfeld, Heather; Lillo-Martin, Diane

    2017-01-01

    Deaf children are often described as having difficulty with executive function (EF), often manifesting in behavioral problems. Some researchers view these problems as a consequence of auditory deprivation; however, the behavioral problems observed in previous studies may not be due to deafness but to some other factor, such as lack of early language exposure. Here, we distinguish these accounts by using the BRIEF EF parent report questionnaire to test for behavioral problems in a group of Deaf children from Deaf families, who have a history of auditory but not language deprivation. For these children, the auditory deprivation hypothesis predicts behavioral impairments; the language deprivation hypothesis predicts no group differences in behavioral control. Results indicated that scores among the Deaf native signers (n = 42) were age-appropriate and similar to scores among the typically developing hearing sample (n = 45). These findings are most consistent with the language deprivation hypothesis, and provide a foundation for continued research on outcomes of children with early exposure to sign language. PMID:27624307

  10. Comparison of auditory comprehension skills in children with cochlear implant and typically developing children.

    PubMed

    Mandal, Joyanta Chandra; Kumar, Suman; Roy, Sumit

    2016-12-01

    The main goal of this study was to obtain the auditory comprehension skills of native Hindi-speaking children with cochlear implants and typically developing children aged 3-7 years, and to compare the scores between the two groups. A total of sixty Hindi-speaking participants were selected for the study. They were divided into two groups: Group A consisted of thirty children with normal hearing, and Group B of thirty children using cochlear implants. To assess auditory comprehension skills, the Test of Auditory Comprehension in Hindi (TACH) was used. Participants were required to point to the one of three pictures that best corresponded to the stimulus presented. Correct answers were scored as 1 and incorrect answers as 0. The TACH was administered to both groups. An independent t-test was applied, and the auditory comprehension scores of children using cochlear implants were found to be significantly poorer than the scores of children with normal hearing for all three subtests. Pearson's correlation coefficient revealed a poor correlation between the scores of children with normal hearing and children using cochlear implants. The results of this study suggest that children using cochlear implants have poorer auditory comprehension skills than children with normal hearing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
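
    The independent t-test comparison described above can be sketched as follows. The subtest scores are hypothetical placeholders, not the study's data.

```python
"""Minimal sketch of a two-group comparison with an independent
t-test, as used in the study above. Scores are hypothetical."""
from scipy import stats

# Hypothetical TACH subtest scores (items correct) for ten children
# per group; the real study had thirty per group.
normal_hearing = [18, 20, 19, 17, 20, 19, 18, 20, 19, 18]
cochlear_implant = [14, 15, 13, 16, 12, 15, 14, 13, 15, 14]

t_stat, p_val = stats.ttest_ind(normal_hearing, cochlear_implant)
print(f"t={t_stat:.2f}, p={p_val:.4g}")
```

    A positive t statistic with a small p value would indicate that the normal-hearing group scored significantly higher, which is the pattern the study reports for all three subtests.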

  11. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    PubMed

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second-level analyses were then used to investigate how story activation varied with the patients' auditory sentence comprehension skills and with their performance on surprise story recognition memory tests administered post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with activation in the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. This double dissociation between the effects of auditory sentence comprehension and story recognition memory implies that the left frontal and left temporal activations are functionally dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex, where the response was dissociated from that in the left posterior temporal lobe.

  12. Temporal lobe networks supporting the comprehension of spoken words.

    PubMed

    Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius

    2017-09-01

    Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors, and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural-connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). 
Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    PubMed

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. 
When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to facilitate lexical access, making it difficult for them to fully engage higher-order cognitive abilities in support of listening comprehension.

  14. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied with a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  15. Developmental trends in auditory processing can provide early predictions of language acquisition in young infants.

    PubMed

    Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy

    2013-03-01

    Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children. © 2012 Blackwell Publishing Ltd.
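
    The latency-to-language relationship reported above can be sketched as a z-scoring step followed by a correlation. The arrays below are hypothetical placeholders for a handful of infants, not the study's data.

```python
"""Illustrative sketch: relating the decrease in ABR Wave V latency
between two ages to a language composite via Pearson correlation.
All values are hypothetical placeholders, not the study's data."""
import numpy as np
from scipy import stats

# Hypothetical Wave V latencies (ms) at 6 weeks and 9 months.
latency_6wk = np.array([6.9, 7.1, 7.0, 7.3, 6.8, 7.2, 7.0, 7.4])
latency_9mo = np.array([6.3, 6.8, 6.4, 7.1, 6.1, 6.9, 6.5, 7.2])
latency_decrease = latency_6wk - latency_9mo  # larger = faster maturation

# Hypothetical CCDI-P composite scores, z-scored against the sample.
ccdi = np.array([1.1, 0.2, 0.9, -0.6, 1.4, 0.0, 0.7, -0.9])
ccdi_z = (ccdi - ccdi.mean()) / ccdi.std(ddof=1)

r, p = stats.pearsonr(latency_decrease, ccdi_z)
print(f"r={r:.2f}, p={p:.4g}")
```

    A positive r here would mirror the study's finding that infants with greater latency decreases from 6 weeks to 9 months had higher CCDI-P composite scores.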

  16. Level of emotion comprehension in children with mid to long term cochlear implant use: How basic and more complex emotion recognition relates to language and age at implantation.

    PubMed

    Mancini, Patrizia; Giallini, Ilaria; Prosperini, Luca; D'alessandro, Hilal Dincer; Guerzoni, Letizia; Murri, Alessandra; Cuda, Domenico; Ruoppolo, Giovanni; De Vincentiis, Marco; Nicastri, Maria

    2016-08-01

    The current study was designed with three main aims: to document the level of emotional comprehension skills, from basic to more complex, reached by a wide sample of cochlear implant (CI) deaf children with at least 36 months of device use; to investigate subjective and audiological factors that can affect their emotional development; and to identify, if present, a "critical age" at which early intervention might positively affect the development of adequate emotional competence. This is an observational cohort study. Children with congenital severe/profound deafness were selected based on: age 4-11 years, a minimum of 36 months of CI use, Italian as the primary language in the family, normal cognitive level, and absence of associated disorders or socio-economic difficulties. Audiological characteristics and language development were assessed through standardized tests measuring speech perception in quiet, lexical comprehension and production. The development of emotion understanding was assessed using the Test of Emotion Comprehension (TEC) of Pons and Harris, a hierarchical developmental model in which emotion comprehension is organized in 3 stages (external, mental and reflective). Statistical analysis used the Spearman rank correlation coefficient to study the relationship between personal and audiological characteristics; a multivariate linear regression analysis was carried out to find which variables were best associated with the standardized TEC values; and a chi-squared test with Yates' continuity correction and a Mann-Whitney U test were used to account for differences between continuous variables and proportions. 72 children (40 females, 32 males) with a mean age of 8.1 years were included. On the TEC, 57 children (79.17% of recipients) performed within the normal range and 15 (20.83% of recipients) fell below average. 
Among the older subjects (age range 8-12 years), 16.63% did not master Stage 3 (reflective), which is normally acquired by 8 years of age, failing 2 or all 3 items of this component. Subjects implanted within 18 months of age had better emotion comprehension skills. TEC results were also positively correlated with an early diagnosis, longer implant use, better auditory skills and higher scores on lexical and morphosyntactic tests. Conversely, they were negatively correlated with the presence of siblings and birth order. Gender, side and severity of deafness, and type of implant and strategy were not correlated. Early implanted children have a greater chance of developing adequate emotion comprehension, especially when the complex aspects are included, due to the very strong link between listening and language skills and emotional development. Furthermore, longer CI auditory experience along with early intervention allows adequate communication development, which positively influences the acquisition of such competencies. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
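
    The Spearman rank correlation used in the analysis above can be sketched as follows. Age at implantation and TEC scores are hypothetical placeholders, not the study's data.

```python
"""Minimal sketch of a Spearman rank correlation between age at
implantation and emotion-comprehension score. Values hypothetical."""
from scipy import stats

# Hypothetical age at cochlear implantation (months) and TEC total
# score (number of components passed, 0-9) for eight children.
age_at_ci = [12, 14, 16, 18, 22, 26, 30, 36]
tec_score = [9, 9, 8, 8, 7, 6, 6, 5]

rho, p = stats.spearmanr(age_at_ci, tec_score)
print(f"rho={rho:.2f}, p={p:.4g}")
```

    A negative rho with a small p value would reflect the study's pattern: the later the implantation, the lower the emotion comprehension score. Spearman's method is a natural choice here because TEC scores are ordinal and the relationship need not be linear.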

  17. Tone Language Speakers and Musicians Share Enhanced Perceptual and Cognitive Abilities for Musical Pitch: Evidence for Bidirectionality between the Domains of Language and Music

    PubMed Central

    Bidelman, Gavin M.; Hutka, Stefanie; Moreno, Sylvain

    2013-01-01

    Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals outperform English-speaking listeners in the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language. PMID:23565267

  18. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.

    2014-01-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment, and it also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA-exposed rats and the effect of extensive speech training on their auditory cortex responses. VPA-exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in stronger and faster anterior auditory field (AAF) responses compared to untrained VPA-exposed rats, restoring responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133

  19. Sensory Processing of Backward-Masking Signals in Children with Language-Learning Impairment as Assessed with the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Marler, Jeffrey A.; Champlin, Craig A.

    2005-01-01

    The purpose of this study was to examine the possible contribution of sensory mechanisms to an auditory processing deficit shown by some children with language-learning impairment (LLI). Auditory brainstem responses (ABRs) were measured from 2 groups of school-aged (8-10 years) children. One group consisted of 10 children with LLI, and the other…

  20. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not, and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide the first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  1. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension

    PubMed Central

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-01-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not, and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide the first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. PMID:29380945

  2. Spoken language skills and educational placement in Finnish children with cochlear implants.

    PubMed

    Lonka, Eila; Hasan, Marja; Komulainen, Erkki

    2011-01-01

    This study reports the demographics, the auditory and spoken language development, and the educational settings of a total of 164 Finnish children with cochlear implants. Two questionnaires were employed: the first, concerning day care and educational placement, was filled in by professionals providing rehabilitation guidance; the second, evaluating language development (categories of auditory performance, spoken language skills, and main mode of communication), by speech and language therapists in audiology departments. Nearly half of the children were enrolled in mainstream kindergartens and 43% of school-aged children in mainstream schools. Categories-of-auditory-performance ratings were observed to improve in relation to age at cochlear implantation (p < 0.001) as well as in relation to proportional hearing age (p < 0.001). The composite scores for language development became more diversified in relation to increasing age at cochlear implantation and proportional hearing age (p < 0.001). Children without additional disorders outperformed those with additional disorders. The results indicate that the most favorable age for cochlear implantation could be earlier than 2 years. Compared to other children, the spoken language evaluation scores of those with additional disabilities were significantly lower; however, these children showed gradual improvements in their auditory perception and language scores. Copyright © 2011 S. Karger AG, Basel.

  3. The Role of the Auditory Brainstem in Processing Linguistically-Relevant Pitch Patterns

    ERIC Educational Resources Information Center

    Krishnan, Ananthanarayan; Gandour, Jackson T.

    2009-01-01

    Historically, the brainstem has been neglected as a part of the brain involved in language processing. We review recent evidence of language-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem. We argue that there is enhancing…

  4. Towards an Auditory Account of Speech Rhythm: Application of a Model of the Auditory "Primal Sketch" to Two Multi-Language Corpora

    ERIC Educational Resources Information Center

    Lee, Christopher S.; Todd, Neil P. McAngus

    2004-01-01

    The world's languages display important differences in their rhythmic organization; most particularly, different languages seem to privilege different phonological units (mora, syllable, or stress foot) as their basic rhythmic unit. There is now considerable evidence that such differences have important consequences for crucial aspects of language…

  5. Backward and Simultaneous Masking in Children with Grammatical Specific Language Impairment: No Simple Link between Auditory and Language Abilities

    ERIC Educational Resources Information Center

    Rosen, Stuart; Adlard, Alan; van der Lely, Heather K. J.

    2009-01-01

    Purpose: We investigated claims that specific language impairment (SLI) typically arises from nonspeech auditory deficits by measuring tone-in-noise thresholds in a relatively homogeneous SLI subgroup exhibiting a primary deficit restricted to grammar (Grammatical[G]-SLI). Method: Fourteen children (mostly teenagers) with G-SLI were compared to…

  6. Linguistic Profiles of Children with CI as Compared with Children with Hearing or Specific Language Impairment

    ERIC Educational Resources Information Center

    Hoog, Brigitte E.; Langereis, Margreet C.; Weerdenburg, Marjolijn; Knoors, Harry E. T.; Verhoeven, Ludo

    2016-01-01

    Background: The spoken language difficulties of children with moderate or severe to profound hearing loss are mainly related to limited auditory speech perception. However, degraded or filtered auditory input as evidenced in children with cochlear implants (CIs) may result in less efficient or slower language processing as well. To provide insight…

  7. Picture naming in typically developing and language-impaired children: the role of sustained attention.

    PubMed

    Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S

    2017-05-01

    Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired for stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. This study addressed the outstanding issue of whether the impairment in sustained attention is limited to the auditory domain or is domain-general, and tested whether children's sustained attention ability relates to their word-production skills. Groups of 7-9-year-olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI, and they indicate a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal: if attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.

  8. Assessment of cortical auditory evoked potentials in children with specific language impairment.

    PubMed

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Pilka, Adam; Skarżyński, Henryk

    2018-02-28

    The proper course of speech development heavily influences the cognitive and personal development of children. It is a condition for achieving preschool and school success: it facilitates socializing and expressing feelings and needs. Impairment of language and its development in children represents a major diagnostic and therapeutic challenge for physicians and therapists. Early diagnosis of coexisting deficits and early initiation of therapy influence therapeutic success. One of the basic diagnostic tests for children with specific language impairment (SLI) is audiometry, commonly referred to as a hearing test. Auditory processing, however, is just as important as a proper hearing threshold. Therefore, diagnosis of central auditory disorder may be a valuable supplement to the diagnosis of language impairment. Early diagnosis and implementation of appropriate treatment may contribute to an effective language therapy.

  9. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  10. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
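The Gaussian-mixture approach described in these two records can be sketched in a minimal form: a GMM fit to joint auditory-visual cue distributions learns category means, covariances, and graded (posterior) category membership. The cue names (voice-onset time, lip aperture) and all values below are hypothetical illustrations, not the paper's simulations.

```python
# Minimal sketch, under invented assumptions, of learning two phonological
# categories from joint auditory-visual cue statistics with a GMM. The cue
# dimensions (VOT in ms, lip aperture in mm) and distributions are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Category A: short VOT, small lip aperture; Category B: long VOT, large aperture
cat_a = rng.normal(loc=[10.0, 2.0], scale=[3.0, 0.5], size=(200, 2))
cat_b = rng.normal(loc=[60.0, 6.0], scale=[5.0, 0.8], size=(200, 2))
cues = np.vstack([cat_a, cat_b])          # unlabeled audiovisual tokens

# Fit a two-component mixture: each component's mean/covariance is a learned
# audiovisual category, and predict_proba gives graded category membership.
gmm = GaussianMixture(n_components=2, random_state=0).fit(cues)
posteriors = gmm.predict_proba(cues)      # soft (graded) categorization
labels = gmm.predict(cues)                # hard category assignments
```

Because learning here is purely distributional (no labels), the sketch matches the records' point that domain-general statistical learning can in principle acquire audiovisual correspondences; mismatched auditory/visual input corresponds to querying the fitted model with a cue vector lying between the two component means.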

  11. Visual cortex entrains to sign language.

    PubMed

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language below 5 Hz, peaking at ~1 Hz. Coherence to sign is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
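The record's "metric to quantify visual change over time" can be sketched, under invented assumptions, as the mean absolute pixel difference between consecutive frames; the spectrum of that time series is what would then be compared against EEG for coherence. The synthetic "video" below is an 8x8 patch whose brightness follows a 1 Hz sinusoid (note that absolute differencing rectifies the derivative, so the change signal peaks near 2 Hz, twice the brightness frequency).

```python
# Hypothetical sketch of a frame-to-frame visual-change metric; the "video"
# is a synthetic 8x8 patch with sinusoidal brightness, not real ASL footage.
import numpy as np

fps = 30
t = np.arange(0, 10, 1 / fps)                       # 10 s of frame times
# Frames: brightness oscillates at 1 Hz, uniform over an 8x8 pixel patch
frames = np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((1, 8, 8))

# Visual change: mean absolute difference between consecutive frames
visual_change = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Spectrum of the (mean-removed) change signal; its peak frequency is the
# dominant rhythm that EEG coherence would be tested against
freqs = np.fft.rfftfreq(len(visual_change), d=1 / fps)
spectrum = np.abs(np.fft.rfft(visual_change - visual_change.mean()))
peak_freq = freqs[spectrum.argmax()]
```

On real footage the change signal is quasiperiodic rather than sinusoidal, and the record reports its energy concentrated below 5 Hz, lower than the speech envelope.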

  12. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success.

    PubMed

    Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie

    2014-01-01

    Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. 
Given the paucity of such relevant findings, we suggest that the best guarantee of a good language outcome after CI is the establishment of a secure first language pre-implant, however that may be achieved, and whatever the success of auditory restoration.

  13. Cochlear implantation (CI) for prelingual deafness: the relevance of studies of brain organization and the role of first language acquisition in considering outcome success

    PubMed Central

    Campbell, Ruth; MacSweeney, Mairéad; Woll, Bencie

    2014-01-01

    Cochlear implantation (CI) for profound congenital hearing impairment, while often successful in restoring hearing to the deaf child, does not always result in effective speech processing. Exposure to non-auditory signals during the pre-implantation period is widely held to be responsible for such failures. Here, we question the inference that such exposure irreparably distorts the function of auditory cortex, negatively impacting the efficacy of CI. Animal studies suggest that in congenital early deafness there is a disconnection between (disordered) activation in primary auditory cortex (A1) and activation in secondary auditory cortex (A2). In humans, one factor contributing to this functional decoupling is assumed to be abnormal activation of A1 by visual projections, including exposure to sign language. In this paper we show that this abnormal activation of A1 does not routinely occur, while A2 functions effectively supramodally and multimodally to deliver spoken language irrespective of hearing status. What, then, is responsible for poor outcomes for some individuals with CI and for apparent abnormalities in cortical organization in these people? Since infancy is a critical period for the acquisition of language, deaf children born to hearing parents are at risk of developing inefficient neural structures to support skilled language processing. A sign language, acquired by a deaf child as a first language in a signing environment, is cortically organized like a heard spoken language in terms of specialization of the dominant perisylvian system. However, very few deaf children are exposed to sign language in early infancy. Moreover, no studies to date have examined sign language proficiency in relation to cortical organization in individuals with CI. 
Given the paucity of such relevant findings, we suggest that the best guarantee of good language outcome after CI is the establishment of a secure first language pre-implant—however that may be achieved, and whatever the success of auditory restoration. PMID:25368567

  14. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment

    PubMed Central

    Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.

    2016-01-01

    Background Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children’s auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis Behavioral data collected using headphone delivery were analyzed using the sensitivity index d′, calculated for three Δf magnitudes: 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. 
Results TD children and children with APD and/or SLI differed in the detection of small-tone Δf. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation based on the magnitudes of the Δf. Auditory processing scores showed stronger correlation to the sensitivity index d′ for the small Δf, while language scores showed stronger correlation to the sensitivity index d′ for the large Δf. Conclusion Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. PMID:27310407

  15. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment.

    PubMed

    Rota-Donahue, Christine; Schwartz, Richard G; Shafer, Valerie; Sussman, Elyse S

    2016-06-01

    Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Behavioral data collected using headphone delivery were analyzed using the sensitivity index d', calculated for three Δf magnitudes: 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d' and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. TD children and children with APD and/or SLI differed in the detection of small-tone Δf. 
In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d' showed different strengths of correlation based on the magnitudes of the Δf. Auditory processing scores showed stronger correlation to the sensitivity index d' for the small Δf, while language scores showed stronger correlation to the sensitivity index d' for the large Δf. Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. American Academy of Audiology.
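The sensitivity index d′ used in these two records is standardly computed as z(hit rate) minus z(false-alarm rate). A minimal sketch follows; the response counts are invented for illustration, and the log-linear correction is one common convention for avoiding infinite z-scores, not necessarily the one the authors used.

```python
# Hedged sketch of the sensitivity index d' for a same/different frequency-
# discrimination task: d' = z(hit rate) - z(false-alarm rate). Counts are
# hypothetical; the +0.5/+1 (log-linear) correction is an assumption.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from response counts, with a log-linear correction so that
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., a listener detecting a small (2%: 20 Hz) vs. a large (15%: 150 Hz)
# frequency change from the 1000-Hz base; larger Δf -> easier -> higher d'
d_small = d_prime(hits=30, misses=20, false_alarms=10, correct_rejections=40)
d_large = d_prime(hits=48, misses=2, false_alarms=5, correct_rejections=45)
```

A d′ of 0 means performance at chance (hit rate equals false-alarm rate); the records' correlational analyses then relate per-child d′ values at each Δf to auditory processing and language test scores.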

  16. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  17. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.
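Both records above report predictors "accounting for 16% and 25% of the variance" in reading comprehension; for a single predictor this is the squared Pearson correlation (r²) between predictor and outcome. A minimal stdlib sketch (the function name is my own; the data passed in would be per-participant scores):

```python
from math import sqrt

def variance_explained(x, y):
    """r^2: proportion of variance in y accounted for by a linear predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    r = cov / (sx * sy)  # Pearson correlation coefficient
    return r * r
```

A value of 0.16 thus corresponds to a correlation of about 0.4 between synchrony-judgment ability and reading comprehension.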

  18. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie

    PubMed Central

    Hanke, Michael; Baumgartner, Florian J.; Ibe, Pierre; Kaule, Falko R.; Pollmann, Stefan; Speck, Oliver; Zinke, Wolf; Stadler, Jörg

    2014-01-01

Here we present a high-resolution functional magnetic resonance imaging (fMRI) dataset – 20 participants recorded at high field strength (7 Tesla) during prolonged stimulation with an auditory feature film (“Forrest Gump”). In addition, a comprehensive set of auxiliary data (T1w, T2w, DTI, susceptibility-weighted image, angiography) as well as measurements to assess technical and physiological noise components have been acquired. An initial analysis confirms that these data can be used to study common and idiosyncratic brain response patterns to complex auditory stimulation. Among the potential uses of this dataset are the study of auditory attention and cognition, language and music perception, and social perception. The auxiliary measurements enable a large variety of additional analysis strategies that relate functional response patterns to structural properties of the brain. Alongside the acquired data, we provide source code and detailed information on all employed procedures – from stimulus creation to data analysis. In order to facilitate replicative and derived works, only free and open-source software was utilized. PMID:25977761

  19. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  20. Relationships between Visual and Auditory Perceptual Skills and Comprehension in Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Weaver, Phyllis A.; Rosner, Jerome

    1979-01-01

    Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…

  1. Production and Comprehension of Time Reference in Korean Nonfluent Aphasia

    PubMed Central

    Lee, Jiyeon; Kwon, Miseon; Na, Hae Ri; Bastiaanse, Roelien; Thompson, Cynthia K.

    2015-01-01

Objectives Individuals with nonfluent agrammatic aphasia show impaired production and comprehension of time reference via verbal morphology. However, cross-linguistic findings to date provide inconsistent evidence as to whether tense processing in general is impaired or time reference to the past is selectively difficult in this population. This study examined production and comprehension of time reference via verb morphology in Korean-speaking individuals with nonfluent aphasia. Methods A group of 9 healthy controls and 8 individuals with nonfluent aphasia (5 for the production task) participated in the study. Sentence-priming production and auditory sentence-to-picture matching tasks were used, parallel with the previous cross-linguistic experiments in English, Chinese, Turkish, and others. Results The participants with nonfluent aphasia showed different patterns of impairment in production and comprehension. In production, they were impaired in all time references, with errors dominated by substitution of incorrect time references and other morpho-phonologically well-formed errors, indicating a largely intact morphological affixation process. In comprehension, they showed selective impairment of the past, consistent with the cross-linguistic evidence from English, Chinese, Turkish, and others. Conclusion The findings suggest that interpretation of past time reference poses particular difficulty in nonfluent aphasia irrespective of typological characteristics of languages; however, in production, language-specific morpho-semantic functions of verbal morphology may play a significant role in selective breakdowns of time reference. PMID:26290861

  2. Are deaf students' reading challenges really about reading?

    PubMed

    Marschark, Marc; Sapere, Patricia; Convertino, Carol M; Mayer, Connie; Wauters, Loes; Sarchet, Thomastine

    2009-01-01

    Reading achievement among deaf students typically lags significantly behind hearing peers, a situation that has changed little despite decades of research. This lack of progress and recent findings indicating that deaf students face many of the same challenges in comprehending sign language as they do in comprehending text suggest that difficulties frequently observed in their learning from text may involve more than just reading. Two experiments examined college students' learning of material from science texts. Passages were presented to deaf (signing) students in print or American Sign Language and to hearing students in print or auditorially. Several measures of learning indicated that the deaf students learned as much or more from print as they did from sign language, but less than hearing students in both cases. These and other results suggest that challenges to deaf students' reading comprehension may be more complex than is generally assumed.

  3. Teaching for Different Learning Styles.

    ERIC Educational Resources Information Center

    Cropper, Carolyn

    1994-01-01

    This study examined learning styles in 137 high ability fourth-grade students. All students were administered two learning styles inventories. Characteristics of students with the following learning styles are summarized: auditory language, visual language, auditory numerical, visual numerical, tactile concrete, individual learning, group…

  4. fMRI as a Preimplant Objective Tool to Predict Children's Postimplant Auditory and Language Outcomes as Measured by Parental Observations.

    PubMed

    Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K

    2018-05-01

The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for a clinical magnetic resonance imaging scan to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus® Freedom cochlear implants). Three participants received an implant in the right ear and one in the left ear, whereas eight participants received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on the ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. These correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports.

  5. Can You Hear Me Now? Musical Training Shapes Functional Brain Networks for Selective Auditory Attention and Hearing Speech in Noise

    PubMed Central

    Strait, Dana L.; Kraus, Nina

    2011-01-01

    Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636

  6. White matter anisotropy in the ventral language pathway predicts sound-to-word learning success

    PubMed Central

    Wong, Francis C. K.; Chandrasekaran, Bharath; Garibaldi, Kyla; Wong, Patrick C. M.

    2011-01-01

According to the dual stream model of auditory language processing, the dorsal stream is responsible for mapping sound to articulation while the ventral stream plays the role of mapping sound to meaning. Most researchers agree that the arcuate fasciculus (AF) is the neuroanatomical correlate of the dorsal stream; however, less is known about what constitutes the ventral one. Nevertheless, two hypotheses exist: one suggests that the segment of the AF that terminates in the middle temporal gyrus corresponds to the ventral stream, and the other suggests that it is the extreme capsule that underlies this sound-to-meaning pathway. The goal of this study is to evaluate these two competing hypotheses. We trained participants with a sound-to-word learning paradigm in which they learned to use a foreign phonetic contrast for signaling word meaning. Using diffusion tensor imaging (DTI), a brain imaging tool to investigate white matter connectivity in humans, we found that fractional anisotropy in the left parietal-temporal region positively correlated with performance in sound-to-word learning. In addition, fiber tracking revealed a ventral pathway, composed of the extreme capsule and the inferior longitudinal fasciculus, that mediated auditory comprehension. Our findings provide converging evidence supporting the importance of the ventral stream, an extreme capsule system, in the frontal-temporal language network. Implications for current models of speech processing are also discussed. PMID:21677162

  7. Simulating single word processing in the classic aphasia syndromes based on the Wernicke-Lichtheim-Geschwind theory.

    PubMed

    Weems, Scott A; Reggia, James A

    2006-09-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.

  8. Cross-Domain Effects of Music and Language Experience on the Representation of Pitch in the Human Auditory Brainstem

    ERIC Educational Resources Information Center

    Bidelman, Gavin M.; Gandour, Jackson T.; Krishnan, Ananthanarayan

    2011-01-01

    Neural encoding of pitch in the auditory brainstem is known to be shaped by long-term experience with language or music, implying that early sensory processing is subject to experience-dependent neural plasticity. In language, pitch patterns consist of sequences of continuous, curvilinear contours; in music, pitch patterns consist of relatively…

  9. Different Origin of Auditory and Phonological Processing Problems in Children with Language Impairment: Evidence from a Twin Study.

    ERIC Educational Resources Information Center

    Bishop, D. V. M.; Bishop, Sonia J.; Bright, Peter; James, Cheryl; Delaney, Tom; Tallal, Paula

    1999-01-01

    A study involving 55 children with a language impairment and 76 with normal language investigated the heritability of auditory processing impairment in same-sex twins (ages 7 to 13, selected from a sample of 37 pairs). Although correlations between co-twins were high, lack of significant difference between monozygotic and dizygotic twins suggested…

  10. How Does the Linguistic Distance between Spoken and Standard Language in Arabic Affect Recall and Recognition Performances during Verbal Memory Examination

    ERIC Educational Resources Information Center

    Taha, Haitham

    2017-01-01

    The current research examined how Arabic diglossia affects verbal learning memory. Thirty native Arab college students were tested using auditory verbal memory test that was adapted according to the Rey Auditory Verbal Learning Test and developed in three versions: Pure spoken language version (SL), pure standard language version (SA), and…

  11. Auditory Processing and Speech Perception in Children with Specific Language Impairment: Relations with Oral Language and Literacy Skills

    ERIC Educational Resources Information Center

    Vandewalle, Ellen; Boets, Bart; Ghesquiere, Pol; Zink, Inge

    2012-01-01

    This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay…

  12. Infant Information Processing and Family History of Specific Language Impairment: Converging Evidence for RAP Deficits from Two Paradigms

    ERIC Educational Resources Information Center

    Choudhury, Naseem; Leppanen, Paavo H. T.; Leevers, Hilary J.; Benasich, April A.

    2007-01-01

    An infant's ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this…

  13. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  14. [Receptive and expressive speech development in children with cochlear implant].

    PubMed

    Streicher, B; Kral, K; Hahn, M; Lang-Roth, R

    2015-04-01

The aim of this study was to assess the language development of children with a cochlear implant (CI), focusing on receptive and expressive language development as well as auditory memory skills. Grimm's language development test (SETK 3-5) evaluates receptive and expressive language development and auditory memory. Data from 49 children who received their implant within their first 3 years of life were compared to the norms of hearing children aged 3.0-3.5 years. According to age at implantation, the cohort was subdivided into 3 groups: cochlear implantation within the first 12 months of life (group 1), during the 13th to 24th month of life (group 2), and at 25 months of life or later (group 3). Complete data for all SETK 3-5 subtests could be collected for 63% of the participants. A homogeneous profile across all subtests indicates a balanced receptive and expressive language development, which reduces the gap between hearing/language age and chronological age. Receptive and expressive language and auditory memory milestones are achieved earlier by children implanted within their first year of life than by later implanted children. The Language Test for Children (SETK 3-5) is an appropriate procedure for the language assessment of children who have received a CI. It can be used from age 3 on to collect data on receptive and expressive language development and auditory memory.

  15. Auditory skills, language development, and adaptive behavior of children with cochlear implants and additional disabilities

    PubMed Central

    Beer, Jessica; Harris, Michael S.; Kronenberger, William G.; Holt, Rachael Frush; Pisoni, David B.

    2012-01-01

Objective The objective of this study was to evaluate the development of functional auditory skills, language, and adaptive behavior in deaf children with cochlear implants (CI) who also have additional disabilities (AD). Design A two-group, pre-test versus post-test design was used. Study sample Comparisons were made between 23 children with CIs and ADs and an age-matched comparison group of 23 children with CIs without ADs (No-AD). Assessments were obtained pre-CI and within 12 months post-CI. Results All but two deaf children with ADs improved in auditory skills on the IT-MAIS. Most deaf children in the AD group made progress in receptive but not expressive language on the Preschool Language Scale, but their language quotients were lower than those of the No-AD group. Five of eight children with ADs made progress in daily living skills and socialization skills; two made progress in motor skills. Children with ADs who did not make progress in language did show progress in adaptive behavior. Conclusions Children with deafness and ADs made progress in functional auditory skills, receptive language, and adaptive behavior. Expanded assessment that includes adaptive functioning, together with multi-center collaboration, is recommended to best determine the benefits of implantation in areas of expected growth in this clinical population. PMID:22509948

  16. Speech and language disorders in children from public schools in Belo Horizonte

    PubMed Central

    Rabelo, Alessandra Terra Vasconcelos; Campos, Fernanda Rodrigues; Friche, Clarice Passos; da Silva, Bárbara Suelen Vasconcelos; Friche, Amélia Augusta de Lima; Alves, Claudia Regina Lindgren; Goulart, Lúcia Maria Horta de Figueiredo

    2015-01-01

    Objective: To investigate the prevalence of oral language, orofacial motor skill and auditory processing disorders in children aged 4-10 years and verify their association with age and gender. Methods: Cross-sectional study with stratified, random sample consisting of 539 students. The evaluation consisted of three protocols: orofacial motor skill protocol, adapted from the Myofunctional Evaluation Guidelines; the Child Language Test ABFW - Phonology; and a simplified auditory processing evaluation. Descriptive and associative statistical analyses were performed using Epi Info software, release 6.04. Chi-square test was applied to compare proportion of events and analysis of variance was used to compare mean values. Significance was set at p≤0.05. Results: Of the studied subjects, 50.1% had at least one of the assessed disorders; of those, 33.6% had oral language disorder, 17.1% had orofacial motor skill impairment, and 27.3% had auditory processing disorder. There were significant associations between auditory processing skills’ impairment, oral language impairment and age, suggesting a decrease in the number of disorders with increasing age. Similarly, the variable "one or more speech, language and hearing disorders" was also associated with age. Conclusions: The prevalence of speech, language and hearing disorders in children was high, indicating the need for research and public health efforts to cope with this problem. PMID:26300524
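The chi-square comparison of proportions used in the study above can be illustrated for a 2×2 contingency table; a minimal stdlib sketch with made-up counts (not the study's data), where 3.841 is the df = 1 critical value at the study's p ≤ 0.05 threshold:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical example: disorder present/absent in two age bands.
statistic = chi_square_2x2(30, 10, 10, 30)
significant = statistic > 3.841  # critical value for df = 1 at p = 0.05
```

When the row proportions are identical the statistic is 0; the larger the imbalance between groups, the larger the statistic.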

  17. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    PONS, FERRAN; ANDREU, LLORENC.; SANZ-TORRENT, MONICA; BUIL-LEGAZ, LUCIA; LEWKOWICZ, DAVID J.

    2014-01-01

Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception, but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  18. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  19. Atypical brain lateralisation in the auditory cortex and language performance in 3- to 7-year-old children with high-functioning autism spectrum disorder: a child-customised magnetoencephalography (MEG) study.

    PubMed

    Yoshimura, Yuko; Kikuchi, Mitsuru; Shitamichi, Kiyomi; Ueno, Sanae; Munesue, Toshio; Ono, Yasuki; Tsubokawa, Tsunehisa; Haruta, Yasuhiro; Oi, Manabu; Niida, Yo; Remijn, Gerard B; Takahashi, Tsutomu; Suzuki, Michio; Higashida, Haruhiro; Minabe, Yoshio

    2013-10-08

Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, the simultaneous quantification of the bilateral auditory-evoked response during binaural hearing is difficult using conventional adult-sized MEG systems. Recently, a child-customised MEG device has facilitated the acquisition of bi-hemispheric recordings, even in young children. Using the child-customised MEG device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the AEF in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere is correlated with language performance in autism spectrum disorder (ASD) and TD children. We investigated the P50m evoked by voice stimuli (/ne/) bilaterally in 33 young children (3 to 7 years old) with ASD and in 30 young children who were typically developing. The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning subjects. The results showed that the children with ASD exhibited significantly less leftward lateralisation in their P50m intensity compared with the TD children. Furthermore, the results of a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age.
The children with ASD did not show any correlation between P50m latency and language-related performance; instead, increasing chronological age was a significant predictor of shorter P50m latency in the right hemisphere. Using a child-customised MEG device, we studied the P50m component that was evoked through binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function that are associated with language development. Our results suggest that there is atypical brain function in the auditory cortex in young children with ASD, regardless of language development.

  20. How does visual language affect crossmodal plasticity and cochlear implant success?

    PubMed Central

    Lyness, C.R.; Woll, B.; Campbell, R.; Cardin, V.

    2013-01-01

    Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, educational performance in children with CI continues to lag behind that of their hearing peers. From animal models and human neuroimaging studies it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that ‘cochlear implant sensitive periods’ comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence to link the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation. PMID:23999083

  1. Clinical significance and developmental changes of auditory-language-related gamma activity

    PubMed Central

    Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Fuerst, Darren; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Basha, Maysaa; Mittal, Sandeep; Sood, Sandeep; Asano, Eishi

    2012-01-01

    OBJECTIVE We determined the clinical impact and developmental changes of auditory-language-related augmentation of gamma activity at 50–120 Hz recorded on electrocorticography (ECoG). METHODS We analyzed data from 77 epileptic patients ranging from 4 to 56 years in age. We determined the effects of seizure-onset zone, electrode location, and patient age upon gamma-augmentation elicited by an auditory-naming task. RESULTS Gamma-augmentation was less frequently elicited within seizure-onset sites compared to other sites. Regardless of age, gamma-augmentation most often involved the 80–100 Hz frequency band. Gamma-augmentation initially involved bilateral superior-temporal regions, followed by left-side dominant involvement in the middle-temporal, medial-temporal, inferior-frontal, dorsolateral-premotor, and medial-frontal regions and concluded with bilateral inferior-Rolandic involvement. Compared to younger patients, those older than 10 years had a larger proportion of left dorsolateral-premotor and right inferior-frontal sites showing gamma-augmentation. The incidence of a post-operative language deficit requiring speech therapy was predicted by the number of resected sites with gamma-augmentation in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions of the left hemisphere assumed to contain essential language function (r2=0.59; p=0.001; odds ratio=6.04 [95% confidence-interval: 2.26 to 16.15]). CONCLUSIONS Auditory-language-related gamma-augmentation can provide additional information useful to localize the primary language areas. SIGNIFICANCE These results derived from a large sample of patients support the utility of auditory-language-related gamma-augmentation in presurgical evaluation. PMID:23141882

  2. Processing Problems and Language Impairment in Children.

    ERIC Educational Resources Information Center

    Watkins, Ruth V.

    1990-01-01

    The article reviews studies on the assessment of rapid auditory processing abilities. Issues in auditory processing research are identified, including a link between otitis media with effusion and language learning problems. A theory that linguistically impaired children experience difficulty in perceiving and processing low phonetic substance…

  3. The effects of early auditory-based intervention on adult bilateral cochlear implant outcomes.

    PubMed

    Lim, Stacey R

    2017-09-01

    The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss received in bilateral listening conditions, compared to their best unilateral listening condition. Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers. BKB-SIN scores in bilateral and the best unilateral listening conditions were compared to determine the amount of improvement gained. As a group, the participants had improved speech understanding scores in the bilateral listening condition. Although not statistically significant, the auditory-verbal group tended to have greater speech understanding with greater levels of competing background noise, compared to the auditory-oral participants. Bilateral cochlear implantation provides individuals with prelingual and childhood hearing loss with improved speech understanding in noise. A higher emphasis on auditory development during the critical language development years may add to increased speech understanding in adulthood. However, other demographic factors such as age or device characteristics must also be considered. Although both auditory-verbal and auditory-oral approaches emphasize spoken language development, they emphasize auditory development to different degrees. This may affect cochlear implant (CI) outcomes. Further consideration should be made in future auditory research to determine whether these differences contribute to performance outcomes. 
Additional investigation with a larger participant pool, controlled for effects of age and CI devices and processing strategies, would be necessary to determine whether language learning approaches are associated with different levels of speech understanding performance.
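The comparison this record describes, bilateral condition versus best unilateral condition on BKB-SIN sentences, can be sketched as a simple score difference. BKB-SIN results are SNR-50 values in dB, where lower is better; the function name and the sample scores below are illustrative assumptions, not data from the study, and real scoring follows the test's own manual.

```python
def bilateral_benefit(left_alone: float, right_alone: float, bilateral: float) -> float:
    """BKB-SIN scores are SNR-50 values in dB (lower = better performance).
    Benefit = best (lowest) unilateral score minus bilateral score,
    so a positive result means the bilateral condition improved
    speech understanding in noise."""
    best_unilateral = min(left_alone, right_alone)
    return best_unilateral - bilateral

# Illustrative scores in dB SNR-50 (not data from the study):
benefit = bilateral_benefit(left_alone=8.5, right_alone=6.0, bilateral=4.5)  # 1.5 dB
```

A group-level improvement of this kind is what "improved speech understanding scores in the bilateral listening condition" summarises.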

  4. Bilingualism increases neural response consistency and attentional control: Evidence for sensory and cognitive coupling

    PubMed Central

    Krizman, Jennifer; Skoe, Erika; Marian, Viorica; Kraus, Nina

    2014-01-01

    Auditory processing is presumed to be influenced by cognitive processes – including attentional control – in a top-down manner. In bilinguals, activation of both languages during daily communication hones inhibitory skills, which subsequently bolster attentional control. We hypothesize that the heightened attentional demands of bilingual communication strengthen connections between cognitive (i.e., attentional control) and auditory processing, leading to greater across-trial consistency in the auditory evoked response (i.e., neural consistency) in bilinguals. To assess this, we collected passively-elicited auditory evoked responses to the syllable [da] and separately obtained measures of attentional control and language ability in adolescent Spanish-English bilinguals and English monolinguals. Bilinguals demonstrated enhanced attentional control and more consistent brainstem and cortical responses. In bilinguals, but not monolinguals, brainstem consistency tracked with language proficiency and attentional control. We interpret these enhancements in neural consistency as the outcome of strengthened attentional control that emerged from experience communicating in two languages. PMID:24413593

  5. Gestures, vocalizations, and memory in language origins.

    PubMed

    Aboitiz, Francisco

    2012-01-01

    This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  6. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  7. The Effects of Aircraft Noise on the Auditory Language Processing Abilities of English First Language Primary School Learners in Durban, South Africa

    ERIC Educational Resources Information Center

    Hollander, Cara; de Andrade, Victor Manuel

    2014-01-01

    Schools located near to airports are exposed to high levels of noise which can cause cognitive, health, and hearing problems. Therefore, this study sought to explore whether this noise may cause auditory language processing (ALP) problems in primary school learners. Sixty-one children attending schools exposed to high levels of noise were matched…

  8. "When Music Speaks": Auditory Cortex Morphology as a Neuroanatomical Marker of Language Aptitude and Musicality.

    PubMed

    Turker, Sabrina; Reiterer, Susanne M; Seither-Preisler, Annemarie; Schneider, Peter

    2017-01-01

    Recent research has shown that the morphology of certain brain regions may indeed correlate with a number of cognitive skills such as musicality or language ability. The main aim of the present study was to explore the extent to which foreign language aptitude, in particular phonetic coding ability, is influenced by the morphology of Heschl's gyrus (HG; auditory cortex), working memory capacity, and musical ability. In this study, the auditory cortices of German-speaking individuals (N = 30; 13 males/17 females; aged 20-40 years) with high and low scores in a number of language aptitude tests were compared. The subjects' language aptitude was measured by three different tests, namely a Hindi speech imitation task (phonetic coding ability), an English pronunciation assessment, and the Modern Language Aptitude Test (MLAT). Furthermore, working memory capacity and musical ability were assessed to reveal their relationship with foreign language aptitude. On the behavioral level, significant correlations were found between phonetic coding ability, English pronunciation skills, musical experience, and language aptitude as measured by the MLAT. Parts of all three tests measuring language aptitude correlated positively and significantly with each other, supporting their validity for measuring components of language aptitude. Remarkably, the number of instruments played by subjects showed significant correlations with all language aptitude measures and musicality, whereas the number of foreign languages did not show any correlations. With regard to the neuroanatomy of auditory cortex, adults with very high scores in the Hindi testing and the musicality test (AMMA) demonstrated a clear predominance of complete posterior HG duplications in the right hemisphere. 
This may reignite the discussion of the importance of the right hemisphere for language processing, especially when linked or common resources are involved, such as the inter-dependency between phonetic and musical aptitude.

  9. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    PubMed

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  10. The role of the speech-language pathologist in identifying and treating children with auditory processing disorder.

    PubMed

    Richard, Gail J

    2011-07-01

    A summary of issues regarding auditory processing disorder (APD) is presented, including some of the remaining questions and challenges raised by the articles included in the clinical forum. Evolution of APD as a diagnostic entity within audiology and speech-language pathology is reviewed. A summary of treatment efficacy results and issues is provided, as well as the continuing dilemma for speech-language pathologists (SLPs) charged with providing treatment for referred APD clients. The role of the SLP in diagnosing and treating APD remains under discussion, despite lack of efficacy data supporting auditory intervention and questions regarding the clinical relevance and validity of APD.

  11. The contribution of visual areas to speech comprehension: a PET study in cochlear implant patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  12. Fluent aphasia in children: definition and natural history.

    PubMed

    Klein, S K; Masur, D; Farber, K; Shinnar, S; Rapin, I

    1992-01-01

    We compared the course of a preschool child we followed for 4 years with published reports of 24 children with fluent aphasia. Our patient spoke fluently within 3 weeks of the injury. She was severely anomic and made many semantic paraphasic errors. Unlike other children with fluent aphasia, her prosody of speech was impaired initially, and her spontaneous language was dominated by stock phrases. Residual deficits include chronic impairment of auditory comprehension, repetition, and word retrieval. She has more disfluencies in spontaneous speech 4 years after her head injury than acutely. School achievement in reading and mathematics remains below age level. Attention to the timing of recovery of fluent speech and to the characteristics of receptive and expressive language over time will permit more accurate description of fluent aphasia in childhood.

  13. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The aim was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  14. Slipped Lips: Onset Asynchrony Detection of Auditory-Visual Language in Autism

    ERIC Educational Resources Information Center

    Grossman, Ruth B.; Schneps, Matthew H.; Tager-Flusberg, Helen

    2009-01-01

    Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals…

  15. [Speech and language disorders in children from public schools in Belo Horizonte].

    PubMed

    Rabelo, Alessandra Terra Vasconcelos; Campos, Fernanda Rodrigues; Friche, Clarice Passos; da Silva, Bárbara Suelen Vasconcelos; de Lima Friche, Amélia Augusta; Alves, Claudia Regina Lindgren; de Figueiredo Goulart, Lúcia Maria Horta

    2015-12-01

    To investigate the prevalence of oral language, orofacial motor skill and auditory processing disorders in children aged 4-10 years old and verify their association with age and gender. Cross-sectional study with stratified, random sample consisting of 539 students. The evaluation consisted of three protocols: orofacial motor skill protocol, adapted from the Myofunctional Evaluation Guidelines; the Child Language Test ABFW – Phonology; and a simplified auditory processing evaluation. Descriptive and associative statistical analyses were performed using Epi Info software, release 6.04. Chi-square test was applied to compare proportion of events and analysis of variance was used to compare mean values. Significance was set at p≤0.05. Of the studied subjects, 50.1% had at least one of the assessed disorders; of those, 33.6% had oral language disorder, 17.1% had orofacial motor skill impairment, and 27.3% had auditory processing disorder. There were significant associations between auditory processing skills' impairment, oral language impairment and age, suggesting a decrease in the number of disorders with increasing age. Similarly, the variable "one or more speech, language and hearing disorders" was also associated with age. The prevalence of speech, language and hearing disorders in children was high, indicating the need for research and public health efforts to cope with this problem. Copyright © 2015 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.

  16. Surgical factors in pediatric cochlear implantation and their early effects on electrode activation and functional outcomes.

    PubMed

    Francis, Howard W; Buchman, Craig A; Visaya, Jiovani M; Wang, Nae-Yuh; Zwolan, Teresa A; Fink, Nancy E; Niparko, John K

    2008-06-01

    To assess the impact of surgical factors on electrode status and early communication outcomes in young children in the first 2 years of cochlear implantation. Prospective multicenter cohort study. Six tertiary referral centers. Children 5 years or younger before implantation with normal nonverbal intelligence. Cochlear implant operations in 209 ears of 188 children. Percent active channels, auditory behavior as measured by the Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale and Reynell receptive language scores. Stable insertion of the full electrode array was accomplished in 96.2% of ears. At least 75% of electrode channels were active in 88% of ears. Electrode deactivation had a significant negative effect on Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale scores at 24 months but no effect on receptive language scores. Significantly fewer active electrodes were associated with a history of meningitis. Surgical complications requiring additional hospitalization and/or revision surgery occurred in 6.7% of patients but had no measurable effect on the development of auditory behavior within the first 2 years. Negative, although insignificant, associations were observed between the need for perioperative revision of the device and 1) the percent of active electrodes and 2) the receptive language level at 2-year follow-up. Activation of the entire electrode array is associated with better early auditory outcomes. Decrements in the number of active electrodes and lower gains of receptive language after manipulation of the newly implanted device were not statistically significant but may be clinically relevant, underscoring the importance of surgical technique and the effective placement of the electrode array.

  17. Low-level neural auditory discrimination dysfunctions in specific language impairment-A review on mismatch negativity findings.

    PubMed

    Kujala, Teija; Leminen, Miika

    2017-12-01

    In specific language impairment (SLI), there is a delay in the child's oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies which have used mismatch negativity (MMN) in investigating low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, it was found that language intervention can influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception. Copyright © 2017. Published by Elsevier Ltd.

  18. Utilizing Oral-Motor Feedback in Auditory Conceptualization.

    ERIC Educational Resources Information Center

    Howard, Marilyn

    The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…

  19. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  20. General Auditory Processing, Speech Perception and Phonological Awareness Skills in Chinese-English Biliteracy

    ERIC Educational Resources Information Center

    Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.

    2013-01-01

    This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2-4 (N = 133) participated and were administered…

  1. Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language

    ERIC Educational Resources Information Center

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-01-01

    Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…

  2. Auditory Frequency Discrimination in Children with Specific Language Impairment: A Longitudinal Study

    ERIC Educational Resources Information Center

    Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.

    2005-01-01

    It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…

  3. Effects of Lips and Hands on Auditory Learning of Second-Language Speech Sounds

    ERIC Educational Resources Information Center

    Hirata, Yukari; Kelly, Spencer D.

    2010-01-01

    Purpose: Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the…

  4. "Can You Touch Your Imagination?" A Case Study of Schizophrenese.

    ERIC Educational Resources Information Center

    Greenday, Laura A.; Bennett, Clinton W.

    The study evaluated the effects of an auditory monitoring and feedback approach on an adolescent boy's schizophrenic language patterns. The approach attempted to increase the subject's auditory awareness and to train him to identify and correct the linguistic errors of others and, eventually, of himself. Language samples were analyzed at baseline…

  5. Electrophysiological Correlates of Rapid Auditory and Linguistic Processing in Adolescents with Specific Language Impairment

    ERIC Educational Resources Information Center

    Weber-Fox, Christine; Leonard, Laurence B.; Wray, Amanda Hampton; Tomblin, J. Bruce

    2010-01-01

    Brief tonal stimuli and spoken sentences were utilized to examine whether adolescents (aged 14;3-18;1) with specific language impairments (SLI) exhibit atypical neural activity for rapid auditory processing of non-linguistic stimuli and linguistic processing of verb-agreement and semantic constraints. Further, we examined whether the behavioral…

  6. The Development of Spoken Language in Deaf Children: Explaining the Unexplained Variance.

    ERIC Educational Resources Information Center

    Musselman, Carol; Kircaali-Iftar, Gonul

    1996-01-01

    This study compared 20 young deaf children with either exceptionally good or exceptionally poor spoken language for their hearing loss, age, and intelligence. Factors associated with high performance included earlier use of binaural ear-level aids, better educated mothers, auditory/verbal or auditory/oral instruction, reliance on spoken language…

  7. Auditory and Visual Sustained Attention in Children with Speech Sound Disorder

    PubMed Central

    Murphy, Cristina F. B.; Pagan-Neves, Luciana O.; Wertzner, Haydée F.; Schochat, Eliane

    2014-01-01

    Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD (8.11±1.231) and 37 typically developing children (8.76±1.461)) were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications. PMID:24675815

  8. Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.

    ERIC Educational Resources Information Center

    Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.

    1999-01-01

A study used positron emission tomography (PET) to examine patterns of brain activation during auditory processing in five high-functioning adults with autism. Results showed that participants had reversed hemispheric dominance during verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)

  9. Maturation of Rapid Auditory Temporal Processing and Subsequent Nonword Repetition Performance in Children

    ERIC Educational Resources Information Center

    Fox, Allison M.; Reid, Corinne L.; Anderson, Mike; Richardson, Cassandra; Bishop, Dorothy V. M.

    2012-01-01

    According to the rapid auditory processing theory, the ability to parse incoming auditory information underpins learning of oral and written language. There is wide variation in this low-level perceptual ability, which appears to follow a protracted developmental course. We studied the development of rapid auditory processing using event-related…

  10. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    PubMed

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  11. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    PubMed

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominantly involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency; auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and scrambled pictures/blurred faces, respectively) resulted in left-lateralised activations for patients and controls, which were more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. 
Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  13. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  14. [Test set for the evaluation of hearing and speech development after cochlear implantation in children].

    PubMed

    Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T

    2002-10-01

Since autumn 1998, the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for evaluating speech and hearing development after cochlear implantation in childhood. After a review of the relevant literature, suitable materials were checked for practical applicability, modified, and supplied with criteria for administration and discontinuation. For data acquisition, observation forms were developed in preparation for a PC version. The evaluation set contains forms for master data, with supplements relating to postoperative processes. The hearing tests check supra-threshold hearing with loudness scaling for children, speech comprehension in silence (Mainz and Göttingen Test for Speech Comprehension in Childhood), phonemic differentiation (Oldenburg Rhyme Test for Children), the central auditory processes of detection, discrimination, identification, and recognition (a modification of the "Frankfurt Functional Hearing Test for Children"), and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development cover phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), and communication and pragmatics (observation forms). The modified MAIS and MUSS questionnaires are integrated. The evaluation set serves quality assurance and permits factor analysis as well as checks for regularity through multicenter comparison of long-term developmental trends after cochlear implantation.

  15. Comparison of functional network connectivity for passive-listening and active-response narrative comprehension in adolescents.

    PubMed

    Wang, Yingying; Holland, Scott K

    2014-05-01

    Comprehension of narrative stories plays an important role in the development of language skills. In this study, we compared brain activity elicited by a passive-listening version and an active-response (AR) version of a narrative comprehension task by using independent component (IC) analysis on functional magnetic resonance imaging data from 21 adolescents (ages 14-18 years). Furthermore, we explored differences in functional network connectivity engaged by two versions of the task and investigated the relationship between the online response time and the strength of connectivity between each pair of ICs. Despite similar brain region involvements in auditory, temporoparietal, and frontoparietal language networks for both versions, the AR version engages some additional network elements including the left dorsolateral prefrontal, anterior cingulate, and sensorimotor networks. These additional involvements are likely associated with working memory and maintenance of attention, which can be attributed to the differences in cognitive strategic aspects of the two versions. We found significant positive correlation between the online response time and the strength of connectivity between an IC in left inferior frontal region and an IC in sensorimotor region. An explanation for this finding is that longer reaction time indicates stronger connection between the frontal and sensorimotor networks caused by increased activation in adolescents who require more effort to complete the task.

  16. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment

    PubMed Central

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A.

    2013-01-01

Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1 listeners, as well as young L2 listeners, listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location) or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real ones, whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension. 
PMID:24578684

  17. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment.

    PubMed

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A

    2014-01-01

Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1 listeners, as well as young L2 listeners, listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location) or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real ones, whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.

  18. Auditory Exposure in the Neonatal Intensive Care Unit: Room Type and Other Predictors.

    PubMed

    Pineda, Roberta; Durant, Polly; Mathur, Amit; Inder, Terrie; Wallendorf, Michael; Schlaggar, Bradley L

    2017-04-01

To quantify early auditory exposures in the neonatal intensive care unit (NICU) and evaluate how these are related to medical and environmental factors. We hypothesized that there would be less auditory exposure in the NICU private room, compared with the open ward. Preterm infants born at ≤ 28 weeks gestation (33 in the open ward, 25 in private rooms) had auditory exposure quantified at birth, 30 and 34 weeks postmenstrual age (PMA), and term equivalent age using the Language Environmental Acquisition device. Meaningful language (P < .0001), the number of adult words (P < .0001), and electronic noise (P < .0001) increased across PMA. Silence increased (P = .0007) and noise decreased (P < .0001) across PMA. There was more silence in the private room (P = .02) than the open ward, with an average of 1.9 hours more silence in a 16-hour period. There was an interaction between PMA and room type for distant words (P = .01) and average decibels (P = .04), indicating that changes in auditory exposure across PMA were different for infants in private rooms compared with infants in the open ward. Medical interventions were related to more noise in the environment, although parent presence (P = .009) and engagement (P = .002) were related to greater language exposure. Average sound levels in the NICU were 58.9 ± 3.6 decibels, with an average peak level of 86.9 ± 1.4 decibels. Understanding the NICU auditory environment paves the way for interventions that reduce high levels of adverse sound and enhance positive forms of auditory exposure, such as language. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Effects of prematurity on language acquisition and auditory maturation: a systematic review.

    PubMed

    Rechia, Inaê Costa; Oliveira, Luciéle Dias; Crestani, Anelise Henrich; Biaggio, Eliara Pinto Vieira; Souza, Ana Paula Ramos de

    2016-01-01

To determine what damage prematurity causes to hearing and language. We used the descriptors language/linguagem, hearing/audição, and prematurity/prematuridade in the LILACS, MEDLINE, Cochrane Library, and SciELO databases, including randomized controlled trials, non-randomized intervention studies, and descriptive studies (cross-sectional, cohort, and case-control designs). The articles were assessed independently by two authors according to the selection criteria. Twenty-six studies were selected, of which seven were published in Brazil and 19 in the international literature. Nineteen studies compared full-term and preterm infants. Two of the studies compared premature infants small for gestational age with those appropriate for gestational age. In four studies, the sample consisted of children with extreme prematurity, while the other studies were conducted in children with severe and moderate prematurity. To assess hearing, these studies used otoacoustic emissions, brainstem evoked potentials, tympanometry, auditory steady-state responses, and visual reinforcement audiometry. For language assessment, most of the articles used the Bayley Scales of Infant and Toddler Development. Most studies reviewed observed that prematurity is directly or indirectly related to the acquisition of auditory and language abilities early in life. Thus, prematurity, as well as aspects related to it (gestational age, low birth weight, and complications at birth), affects maturation of the central auditory pathway and may have negative effects on language acquisition.

  20. “When Music Speaks”: Auditory Cortex Morphology as a Neuroanatomical Marker of Language Aptitude and Musicality

    PubMed Central

    Turker, Sabrina; Reiterer, Susanne M.; Seither-Preisler, Annemarie; Schneider, Peter

    2017-01-01

Recent research has shown that the morphology of certain brain regions may indeed correlate with a number of cognitive skills such as musicality or language ability. The main aim of the present study was to explore the extent to which foreign language aptitude, in particular phonetic coding ability, is influenced by the morphology of Heschl’s gyrus (HG; auditory cortex), working memory capacity, and musical ability. In this study, the auditory cortices of German-speaking individuals (N = 30; 13 males/17 females; aged 20–40 years) with high and low scores in a number of language aptitude tests were compared. The subjects’ language aptitude was measured by three different tests, namely a Hindi speech imitation task (phonetic coding ability), an English pronunciation assessment, and the Modern Language Aptitude Test (MLAT). Furthermore, working memory capacity and musical ability were assessed to reveal their relationship with foreign language aptitude. On the behavioral level, significant correlations were found between phonetic coding ability, English pronunciation skills, musical experience, and language aptitude as measured by the MLAT. Parts of all three tests measuring language aptitude correlated positively and significantly with each other, supporting their validity for measuring components of language aptitude. Remarkably, the number of instruments played by subjects showed significant correlations with all language aptitude measures and musicality, whereas the number of foreign languages did not show any correlations. With regard to the neuroanatomy of the auditory cortex, adults with very high scores in the Hindi testing and the musicality test (AMMA) demonstrated a clear predominance of complete posterior HG duplications in the right hemisphere. 
This may reignite the discussion of the importance of the right hemisphere for language processing, especially when linked or common resources are involved, such as the inter-dependency between phonetic and musical aptitude. PMID:29250017

  1. It's the deceiver, not the receiver: No individual differences when detecting deception in a foreign and a native language.

    PubMed

    Law, Marvin K H; Jackson, Simon A; Aidman, Eugene; Geiger, Mattis; Olderbak, Sally; Kleitman, Sabina

    2018-01-01

Individual differences in lie detection remain poorly understood. Bond and DePaulo's meta-analysis examined judges (receivers) who were ascertaining lies from truths and senders (deceivers) who told these lies and truths. Bond and DePaulo found that the accuracy of detecting deception depended more on the characteristics of senders than on the judges' ability to detect lies/truths. However, in many studies in this meta-analysis, judges could hear and understand senders, making language comprehension a potential confound. This paper presents the results of two studies. Extending previous work, in Study 1 we removed language comprehension as a potential confound by having English speakers (N = 126, mean age = 19.86) judge the veracity of German speakers (n = 12) in a lie detection task. The twelve lie-detection stimuli included emotional and non-emotional content and were presented in three modalities: audio only, video only, and audio and video together. The intelligence (general, auditory, emotional) and personality (Dark Triad and Big 6) of participants were also assessed. In Study 2, a native German-speaking sample (N = 117, mean age = 29.10) was tested on a similar lie detection task to provide a control condition. Despite significantly extending the research design and the selection of constructs employed to capture individual differences, both studies replicated Bond and DePaulo's findings. The results of Study 1 indicated that removing language comprehension did not amplify individual differences in judges' ability to ascertain lies from truths. Study 2 replicated these results, confirming a lack of individual differences in judges' ability to detect lies. The results of both studies suggest that sender (deceiver) characteristics exert a stronger influence on the outcome of lie detection than the judges' attributes.

  2. A Hierarchical Generative Framework of Language Processing: Linking Language Perception, Interpretation, and Production Abnormalities in Schizophrenia

    PubMed Central

    Brown, Meredith; Kuperberg, Gina R.

    2015-01-01

    Language and thought dysfunction are central to the schizophrenia syndrome. They are evident in the major symptoms of psychosis itself, particularly as disorganized language output (positive thought disorder) and auditory verbal hallucinations (AVHs), and they also manifest as abnormalities in both high-level semantic and contextual processing and low-level perception. However, the literatures characterizing these abnormalities have largely been separate and have sometimes provided mutually exclusive accounts of aberrant language in schizophrenia. In this review, we propose that recent generative probabilistic frameworks of language processing can provide crucial insights that link these four lines of research. We first outline neural and cognitive evidence that real-time language comprehension and production normally involve internal generative circuits that propagate probabilistic predictions to perceptual cortices — predictions that are incrementally updated based on prediction error signals as new inputs are encountered. We then explain how disruptions to these circuits may compromise communicative abilities in schizophrenia by reducing the efficiency and robustness of both high-level language processing and low-level speech perception. We also argue that such disruptions may contribute to the phenomenology of thought-disordered speech and false perceptual inferences in the language system (i.e., AVHs). This perspective suggests a number of productive avenues for future research that may elucidate not only the mechanisms of language abnormalities in schizophrenia, but also promising directions for cognitive rehabilitation. PMID:26640435

  3. Does dynamic information about the speaker's face contribute to semantic speech processing? ERP evidence.

    PubMed

    Hernández-Gutiérrez, David; Abdel Rahman, Rasha; Martín-Loeches, Manuel; Muñoz, Francisco; Schacht, Annekathrin; Sommer, Werner

    2018-07-01

Face-to-face interactions characterize communication in social contexts. These situations are typically multimodal, requiring the integration of linguistic auditory input with facial information from the speaker. In particular, eye gaze and visual speech provide the listener with social and linguistic information, respectively. Despite the importance of this context for an ecological study of language, research on audiovisual integration has mainly focused on the phonological level, leaving aside effects on semantic comprehension. Here we used event-related potentials (ERPs) to investigate the influence of dynamic facial information on semantic processing of connected speech. Participants were presented with either a video or a still picture of the speaker, concomitant to auditory sentences. Across three experiments, we manipulated the presence or absence of the speaker's dynamic facial features (mouth and eyes) and compared the amplitudes of the semantic N400 elicited by unexpected words. Contrary to our predictions, the N400 was not modulated by dynamic facial information; therefore, semantic processing seems to be unaffected by the speaker's gaze and visual speech. However, during the processing of expected words, dynamic faces elicited a long-lasting late posterior positivity compared to the static condition. This effect was significantly reduced when the mouth of the speaker was covered. Our findings may indicate increased attentional processing in richer communicative contexts. The present findings also demonstrate that in natural communicative face-to-face encounters, perceiving the face of a speaker in motion provides supplementary information that is taken into account by the listener, especially when auditory comprehension is non-demanding. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Improvement of white matter and functional connectivity abnormalities by repetitive transcranial magnetic stimulation in crossed aphasia in dextral.

    PubMed

    Lu, Haitao; Wu, Haiyan; Cheng, Hewei; Wei, Dongjie; Wang, Xiaoyan; Fan, Yong; Zhang, Hao; Zhang, Tong

    2014-01-01

    Crossed aphasia in dextrals (CAD) is an unusual form of aphasia. This study aimed to improve language ability by applying 1 Hz repetitive transcranial magnetic stimulation (rTMS). We performed multimodal imaging of structural connectivity (diffusion tensor imaging, DTI), functional connectivity (resting-state fMRI), and PET, together with neurolinguistic analysis, in a patient with CAD. We then applied 40 sessions of 1 Hz rTMS and observed the improvement in language function. The DTI and fMRI data showed significantly reduced structural and functional connectivity compared with the control. PET imaging showed hypometabolism in the right hemisphere and left cerebellum. In conclusion, one mechanism of CAD is right-hemisphere language dominance. Stimulating the left Wernicke's area improved auditory comprehension, stimulating the left Broca's area enhanced expression, and the gains from 1 Hz rTMS, which rebalanced interhemispheric excitability in CAD, outlasted 6 months.

  5. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  6. An Evaluation of Auditory Verbal Therapy Using the Rate of Early Language Development as an Outcome Measure

    ERIC Educational Resources Information Center

    Hogan, Sarah; Stokes, Jacqueline; White, Catherine; Tyszkiewicz, Elizabeth; Woolgar, Alexandra

    2008-01-01

    Providing unbiased data concerning the outcomes of particular intervention methods is imperative if professionals and parents are to assimilate information which could contribute to an "informed choice". An evaluation of Auditory Verbal Therapy (AVT) was conducted using a formal assessment of spoken language as an outcome measure. Spoken…

  7. Auditory Training for Experienced and Inexperienced Second-Language Learners: Native French Speakers Learning English Vowels

    ERIC Educational Resources Information Center

    Iverson, Paul; Pinet, Melanie; Evans, Bronwen G.

    2012-01-01

    This study examined whether high-variability auditory training on natural speech can benefit experienced second-language English speakers who already are exposed to natural variability in their daily use of English. The subjects were native French speakers who had learned English in school; experienced listeners were tested in England and the less…

  8. Clinical Use of AEVP- and AERP-Measures in Childhood Speech Disorders

    ERIC Educational Resources Information Center

    Maassen, Ben; Pasman, Jaco; Nijland, Lian; Rotteveel, Jan

    2006-01-01

    It has long been recognized that from the first months of life auditory perception plays a crucial role in speech and language development. Only in recent years, however, is the precise mechanism of auditory development and its interaction with the acquisition of speech and language beginning to be systematically revealed. This paper presents the…

  9. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study.

    PubMed

    Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

  10. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study

    PubMed Central

    Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between ‘l’ and ‘r’ sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words ‘light’ and ‘right’. There was also improvement in the recognition of other words containing ‘l’ and ‘r’ (e.g., ‘blight’ and ‘bright’), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses. PMID:28617861

  11. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds on the temporal bisection task were a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
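
    The "auditory weighting of multisensory signals" reported above is conventionally derived from the maximum-likelihood cue-combination model, in which each modality's weight is proportional to its reliability (the inverse of its variance). A minimal sketch, with illustrative threshold values rather than the study's data:

```python
def mle_cue_weights(sigma_a, sigma_v):
    """Optimal auditory/visual weights under maximum-likelihood cue
    combination: each weight is proportional to that cue's
    reliability, i.e. the inverse of its variance."""
    rel_a, rel_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    w_a = rel_a / (rel_a + rel_v)
    return w_a, 1.0 - w_a

# Illustrative values: audition three times more precise than vision,
# as is typical for temporal tasks in hearing children.
w_a, w_v = mle_cue_weights(sigma_a=0.2, sigma_v=0.6)
print(round(w_a, 2))  # 0.9
```

    A child who weights audition near this optimum would, on the account above, be expected to show better language outcomes than one who underweights it.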

  12. Pure word deafness following left temporal damage: Behavioral and neuroanatomical evidence from a new case.

    PubMed

    Maffei, Chiara; Capasso, Rita; Cazzolli, Giulia; Colosimo, Cesare; Dell'Acqua, Flavio; Piludu, Francesca; Catani, Marco; Miceli, Gabriele

    2017-12-01

    Pure Word Deafness (PWD) is a rare disorder characterized by selective loss of speech input processing. Its most common cause is temporal damage to the primary auditory cortex of both hemispheres, but it has also been reported following unilateral lesions. In unilateral cases, PWD has been attributed to the disconnection of Wernicke's area from both the right and left primary auditory cortex. Here we report behavioral and neuroimaging evidence from a new case of left unilateral PWD with both cortical and white matter damage due to a relatively small stroke lesion in the left temporal lobe. Selective impairment in auditory language processing was accompanied by intact processing of nonspeech sounds and normal speech, reading and writing. Performance on dichotic listening was characterized by a reversal of the right-ear advantage typically observed in healthy subjects. Cortical thickness and gyral volume were severely reduced in the left superior temporal gyrus (STG), although abnormalities were not uniformly distributed and residual intact cortical areas were detected, for example in the medial portion of Heschl's gyrus. Diffusion tractography documented partial damage to the acoustic radiations (AR), callosal temporal connections and intralobar tracts dedicated to single-word comprehension. Behavioral and neuroimaging results in this case are difficult to integrate in a purely cortical or disconnection framework, as damage to primary auditory cortex in the left STG was only partial and Wernicke's area was not completely isolated from left- or right-hemisphere input. On the basis of our findings we suggest that in this case of PWD, concurrent partial topological (cortical) and disconnection mechanisms contributed to a selective impairment of speech sound processing. The discrepancy between speech and non-speech sounds suggests selective damage to a language-specific, left-lateralized network involved in phoneme processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Dissociations and Associations of Performance in Syntactic Comprehension in Aphasia and their Implications for the Nature of Aphasic Deficits

    PubMed Central

    Caplan, David; Michaud, Jennifer; Hufford, Rebecca

    2013-01-01

    Sixty-one persons with aphasia (PWA) were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving-window presentation, and object manipulation. Performance on sentences correlated significantly across tasks. In each task, the first factor of an unrotated factor analysis accounted for most of the variance, and all sentence types loaded on it. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources, and of the interaction of task demands with parsing and interpretive abilities, in the genesis of patient performance. PMID:24061104
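
    A dominant first factor of the kind reported here can be quantified as the largest eigenvalue of the between-sentence-type correlation matrix divided by its trace. A sketch using power iteration on a made-up 3x3 correlation matrix (not the study's data):

```python
def first_factor_share(R, iters=200):
    """Share of total variance captured by the first (unrotated)
    factor: the largest eigenvalue of correlation matrix R, found
    by power iteration, divided by the trace (= len(R))."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient with the converged (normalized) eigenvector
    lam = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam / n

# Hypothetical correlations among three tasks' sentence scores
R = [[1.00, 0.80, 0.70],
     [0.80, 1.00, 0.75],
     [0.70, 0.75, 1.00]]
print(first_factor_share(R))  # > 0.8: one factor dominates
```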

  14. Using the preschool language scale, fourth edition to characterize language in preschoolers with autism spectrum disorders.

    PubMed

    Volden, Joanne; Smith, Isabel M; Szatmari, Peter; Bryson, Susan; Fombonne, Eric; Mirenda, Pat; Roberts, Wendy; Vaillancourt, Tracy; Waddell, Charlotte; Zwaigenbaum, Lonnie; Georgiades, Stelios; Duku, Eric; Thompson, Ann

    2011-08-01

    The Preschool Language Scale, Fourth Edition (PLS-4; Zimmerman, Steiner, & Pond, 2002) was used to examine syntactic and semantic language skills in preschool children with autism spectrum disorders (ASD) to determine its suitability for use with this population. We expected that PLS-4 performance would be better in more intellectually able children and that receptive skills would be relatively more impaired than expressive abilities, consistent with previous findings in the area of vocabulary. Our sample consisted of 294 newly diagnosed preschool children with ASD. Children were assessed via a battery of developmental measures, including the PLS-4. As expected, PLS-4 scores were higher in more intellectually able children with ASD, and overall, expressive communication was higher than auditory comprehension. However, this overall advantage was not stable across nonverbal developmental levels. Expressive skills were significantly better than receptive skills at the youngest developmental levels, whereas the converse applied in children with more advanced development. The PLS-4 can be used to obtain a general index of early syntax and semantic skill in young children with ASD. Longitudinal data will be necessary to determine how the developmental relationship between receptive and expressive language skills unfolds in children with ASD.

  15. It Is Time to Rethink Central Auditory Processing Disorder Protocols for School-Aged Children.

    PubMed

    DeBonis, David A

    2015-06-01

    The purpose of this article is to review the literature that pertains to ongoing concerns regarding the central auditory processing construct among school-aged children and to assess whether the degree of uncertainty surrounding central auditory processing disorder (CAPD) warrants a change in current protocols. Methodology on this topic included a review of relevant and recent literature through electronic search tools (e.g., ComDisDome, PsycINFO, Medline, and Cochrane databases); published texts; as well as published articles from the Journal of the American Academy of Audiology; the American Journal of Audiology; the Journal of Speech, Language, and Hearing Research; and Language, Speech, and Hearing Services in Schools. This review revealed strong support for the following: (a) Current testing of CAPD is highly influenced by nonauditory factors, including memory, attention, language, and executive function; (b) the lack of agreement regarding the performance criteria for diagnosis is concerning; (c) the contribution of auditory processing abilities to language, reading, and academic and listening abilities, as assessed by current measures, is not significant; and (d) the effectiveness of auditory interventions for improving communication abilities has not been established. Routine use of CAPD test protocols cannot be supported, and strong consideration should be given to redirecting focus on assessing overall listening abilities. Also, intervention needs to be contextualized and functional. A suggested protocol is provided for consideration. All of these issues warrant ongoing research.

  16. Audiovisual spoken word recognition as a clinical criterion for sensory aids efficiency in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar

    2015-12-01

    The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant (CI) or hearing aid (HA) efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample comprised 60 Persian-speaking children aged 5-7 years. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two conditions: auditory-only and audiovisual presentation. The test was a closed set of 30 words presented orally by a speech-language pathologist. Audiovisual word perception scores were significantly higher than auditory-only scores in the children with normal hearing (P<0.01) and cochlear implants (P<0.05); however, in the children with hearing aids, there was no significant difference between the two conditions (P>0.05). Audiovisual spoken word recognition can thus be applied as a clinical criterion for assessing whether a cochlear implant or hearing aid has been effective for a child with severe-to-profound hearing loss: if a child using a CI or HA obtains higher scores in the audiovisual condition than in the auditory-only condition, his or her auditory skills have developed appropriately, indicating an effective CI or HA as one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Listening comprehension across the adult lifespan

    PubMed Central

    Sommers, Mitchell S.; Hale, Sandra; Myerson, Joel; Rose, Nathan; Tye-Murray, Nancy; Spehar, Brent

    2011-01-01

    The current study provides the first systematic assessment of listening comprehension across the adult lifespan. A total of 433 participants ranging in age from 20 to 90 listened to spoken passages and answered comprehension questions following each passage. In addition, measures of auditory sensitivity were obtained from all participants to determine whether hearing loss and listening comprehension changed similarly across the adult lifespan. As expected, auditory sensitivity declined from age 20 to age 90. In contrast, listening comprehension remained relatively unchanged until approximately age 65-70, with declines evident only for the oldest participants. PMID:21716112

  18. P300 as a measure of processing capacity in auditory and visual domains in Specific Language Impairment

    PubMed Central

    Evans, Julia L.; Pollak, Seth D.

    2011-01-01

    This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairments (SLI). Children with SLI and age-matched controls (11;9 – 14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands for auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI as compared to CA groups. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. PMID:21316354
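
    The 1-back and 2-back memory loads described above can be made concrete with a small scorer. A hypothetical sketch (stimulus sequence and response indices are invented for illustration):

```python
def nback_targets(stim, n):
    """Indices whose stimulus matches the one presented n trials back."""
    return {i for i in range(n, len(stim)) if stim[i] == stim[i - n]}

def nback_rates(stim, responses, n):
    """Hit rate and false-alarm rate for one n-back run; `responses`
    is the set of trial indices where the participant pressed 'match'."""
    targets = nback_targets(stim, n)
    nontargets = set(range(n, len(stim))) - targets
    hits = len(targets & responses) / len(targets)
    false_alarms = len(nontargets & responses) / len(nontargets)
    return hits, false_alarms

seq = list("ABABCACC")
print(nback_targets(seq, 1))  # {7}: only the final C repeats its predecessor
print(nback_targets(seq, 2))  # {2, 3, 6}
```

    Raising n from 1 to 2 increases both the number of targets and the memory span the child must maintain, which is the load manipulation whose P3b correlate the study measured.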

  19. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding of pairs of auditory spatial stimuli is critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not the accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processing in left PSR but is critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
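
    A near-threshold temporal-order task like this one yields accuracy as a function of stimulus onset asynchrony (SOA), from which a threshold is read off the psychometric function. A minimal interpolation-based estimator, with invented data points for illustration:

```python
def toj_threshold(soas, accuracy, criterion=0.75):
    """Estimate the temporal-order threshold as the SOA (ms) at which
    accuracy crosses the criterion, by linear interpolation between
    tested SOAs (sorted ascending, accuracy roughly increasing)."""
    points = list(zip(soas, accuracy))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 < criterion <= y1:
            return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
    return None  # criterion never reached in the tested range

# Hypothetical left-right / right-left discrimination accuracies
soas = [20, 40, 60, 80, 120]
acc = [0.52, 0.60, 0.71, 0.83, 0.95]
print(round(toj_threshold(soas, acc), 1))  # 66.7
```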

  20. Paediatric Cochlear Implantation in Patients with Waardenburg Syndrome

    PubMed Central

    van Nierop, Josephine W.I.; Snabel, Rebecca R.; Langereis, Margreet; Pennings, Ronald J.E.; Admiraal, Ronald J.C.; Mylanus, Emmanuel A.M.; Kunst, Henricus P.M.

    2016-01-01

    Objective: To analyse the benefit of cochlear implantation in young deaf children with Waardenburg syndrome (WS) compared to a reference group of young deaf children without additional disabilities. Method: A retrospective study was conducted on children with WS who underwent cochlear implantation at the age of 2 years or younger. The post-operative results for speech perception (phonetically balanced standard Dutch consonant-vowel-consonant word lists) and language comprehension (the Reynell Developmental Language Scales, RDLS), expressed as a language quotient (LQ), were compared between the WS group and the reference group by using multiple linear regression analysis. Results: A total of 14 children were diagnosed with WS, and 6 of them had additional disabilities. The WS children were implanted at a mean age of 1.6 years and the 48 children of the reference group at a mean age of 1.3 years. The WS children had a mean phoneme score of 80% and a mean LQ of 0.74 at 3 years post-implantation, and these results were comparable to those of the reference group. Only the factor additional disabilities had a significant negative influence on auditory perception and language comprehension. Conclusions: Children with WS performed similarly to the reference group in the present study, and these outcomes are in line with the previous literature. Although good counselling about additional disabilities concomitant to the syndrome is relevant, cochlear implantation is a good rehabilitation method for children with WS. PMID:27245679

  1. Paediatric Cochlear Implantation in Patients with Waardenburg Syndrome.

    PubMed

    van Nierop, Josephine W I; Snabel, Rebecca R; Langereis, Margreet; Pennings, Ronald J E; Admiraal, Ronald J C; Mylanus, Emmanuel A M; Kunst, Henricus P M

    2016-01-01

    To analyse the benefit of cochlear implantation in young deaf children with Waardenburg syndrome (WS) compared to a reference group of young deaf children without additional disabilities. A retrospective study was conducted on children with WS who underwent cochlear implantation at the age of 2 years or younger. The post-operative results for speech perception (phonetically balanced standard Dutch consonant-vowel-consonant word lists) and language comprehension (the Reynell Developmental Language Scales, RDLS), expressed as a language quotient (LQ), were compared between the WS group and the reference group by using multiple linear regression analysis. A total of 14 children were diagnosed with WS, and 6 of them had additional disabilities. The WS children were implanted at a mean age of 1.6 years and the 48 children of the reference group at a mean age of 1.3 years. The WS children had a mean phoneme score of 80% and a mean LQ of 0.74 at 3 years post-implantation, and these results were comparable to those of the reference group. Only the factor additional disabilities had a significant negative influence on auditory perception and language comprehension. Children with WS performed similarly to the reference group in the present study, and these outcomes are in line with the previous literature. Although good counselling about additional disabilities concomitant to the syndrome is relevant, cochlear implantation is a good rehabilitation method for children with WS. © 2016 S. Karger AG, Basel.

  2. Where Is the Beat? The Neural Correlates of Lexical Stress and Rhythmical Well-formedness in Auditory Story Comprehension.

    PubMed

    Kandylaki, Katerina D; Henrich, Karen; Nagels, Arne; Kircher, Tilo; Domahs, Ulrike; Schlesewsky, Matthias; Bornkessel-Schlesewsky, Ina; Wiese, Richard

    2017-07-01

    While listening to continuous speech, humans process beat information to correctly identify word boundaries. The beats of language are stress patterns that are created by combining lexical (word-specific) stress patterns and the rhythm of a specific language. Sometimes, the lexical stress pattern needs to be altered to obey the rhythm of the language. This study investigated the interplay of lexical stress patterns and rhythmical well-formedness in natural speech with fMRI. Previous electrophysiological studies on cases in which a regular lexical stress pattern may be altered to obtain rhythmical well-formedness showed that even subtle rhythmic deviations are detected by the brain if attention is directed toward prosody. Here, we present a new approach to this phenomenon by having participants listen to contextually rich stories in the absence of a task targeting the manipulation. For the interaction of lexical stress and rhythmical well-formedness, we found one suprathreshold cluster localized between the cerebellum and the brain stem. For the main effect of lexical stress, we found higher BOLD responses to the retained lexical stress pattern in the bilateral SMA, bilateral postcentral gyrus, bilateral middle frontal gyrus, bilateral inferior and right superior parietal lobule, and right precuneus. These results support the view that lexical stress is processed as part of a sensorimotor network of speech comprehension. Moreover, our results connect beat processing in language to domain-independent timing perception.

  3. Poor Auditory Task Scores in Children with Specific Reading and Language Difficulties: Some Poor Scores Are More Equal than Others

    ERIC Educational Resources Information Center

    McArthur, Genevieve M.; Hogben, John H.

    2012-01-01

    Children with specific reading disability (SRD) or specific language impairment (SLI), who scored poorly on an auditory discrimination task, did up to 140 runs on the failed task. Forty-one percent of the children produced widely fluctuating scores that did not improve across runs (untrainable errant performance), 23% produced widely fluctuating…

  4. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    ERIC Educational Resources Information Center

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  5. Auditory-Verbal Therapy as an Intervention Approach for Children Who Are Deaf: A Review of the Evidence. EBP Briefs. Volume 11, Issue 6

    ERIC Educational Resources Information Center

    Bowers, Lisa M.

    2017-01-01

    Clinical Question: Would young deaf children who participate in Auditory-Verbal Therapy (AVT) provided by a Listening and Spoken Language Specialist (LSLS) certified in AVT demonstrate gains in receptive and expressive language skills similar to their typical hearing peers? Method: Systematic Review. Study Sources: EBSCOhost databases: Academic…

  6. Language Outcomes for Children of Low-Income Families Enrolled in Auditory Verbal Therapy

    ERIC Educational Resources Information Center

    Hogan, Sarah; Stokes, Jacqueline; Weller, Isobel

    2010-01-01

    A common misconception about families in the UK who choose to participate in an Auditory Verbal (AV) approach for their child with hearing impairment, is that they are uniformly from affluent backgrounds. It is asserted that the good spoken language outcomes in these children are a product of the child's social background and family's values…

  7. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  8. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  9. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    PubMed

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.
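
    The within-category gradiency at issue can be pictured against a logistic categorization function over the VOT continuum. A sketch with an assumed /b/-/p/ boundary at 30 ms and an illustrative slope (parameters invented, not fitted to the study):

```python
import math

def p_voiceless(vot_ms, boundary=30.0, slope=0.25):
    """Logistic psychometric function: probability of a voiceless
    ('peach'/'peak') response at a given voice onset time."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary)))

continuum = [i * 10 for i in range(9)]  # a 9-step VOT continuum, 0-80 ms
curve = [p_voiceless(v) for v in continuum]
# Competitor fixations grow as VOT nears the boundary, i.e. as
# p_voiceless approaches 0.5; at the boundary the response is 50/50.
print(p_voiceless(30.0))  # 0.5
```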

  10. Auditory verbal habilitation is associated with improved outcome for children with cochlear implant.

    PubMed

    Percy-Smith, Lone; Tønning, Tenna Lindbjerg; Josvassen, Jane Lignel; Mikkelsen, Jeanette Hølledig; Nissen, Lena; Dieleman, Eveline; Hallstrøm, Maria; Cayé-Thomasen, Per

    2018-01-01

    To study the impact of (re)habilitation strategy on speech-language outcomes for early cochlear-implanted children enrolled in different intervention programmes post implant. Data relate to a total of 130 children representing two pediatric cohorts of 94 and 36 subjects, respectively. The two cohorts received different speech and language intervention following cochlear implantation, i.e. standard habilitation vs. auditory verbal (AV) intervention. Three tests of speech and language were applied, covering receptive and productive vocabulary and language understanding. Children in AV intervention outperformed children in standard habilitation on all three tests. When the effect of intervention was adjusted for other covariates, children in AV intervention still had higher odds of performing at age-equivalent speech and language levels. Compared to standard intervention, AV intervention is associated with improved outcomes for children with cochlear implants (CI). Based on this finding, we recommend that all children with hearing impairment (HI) should be offered this intervention; it is therefore highly relevant when national boards of health and social affairs recommend basing habilitation on principles from AV practice. It should be noted that a minority of children use spoken language with sign support. For this group it is, however, still important that educational services provide auditory skills training.

  11. Auditory processing efficiency deficits in children with developmental language impairments

    NASA Astrophysics Data System (ADS)

    Hartley, Douglas E. H.; Moore, David R.

    2002-12-01

    The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
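
    The "auditory efficiency" argument can be made concrete with a deliberately minimal toy model (the function and all parameter values are invented for illustration, not the paper's model): if detection requires the compressed internal signal level to reach a criterion k times the masker energy admitted by the temporal window, then a modest loss of efficiency (larger k) translates, through the compressive nonlinearity, into a large change in the physical signal level at threshold.

```python
def threshold_intensity(masker, window_weight, k, c=0.2):
    """Signal intensity at threshold in a toy window-plus-criterion model.

    Internal level is intensity ** c (compressive, c < 1); the signal is
    detected when its internal level reaches k times the windowed masker's
    internal level, so threshold = (k * window_weight) ** (1 / c) * masker.
    """
    return (k * window_weight) ** (1.0 / c) * masker

normal = threshold_intensity(1.0, 1.0, 1.0)    # efficient listener
impaired = threshold_intensity(1.0, 1.0, 2.0)  # criterion merely doubled
# With c = 0.2, doubling k raises the threshold intensity 32-fold:
# a small efficiency deficit can masquerade as a large masking deficit.
```

    The sketch only illustrates the compression argument; the differential pattern across simultaneous, forward, and backward masking in the paper additionally depends on where the signal and masker levels fall on the basilar-membrane input-output function.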

  12. Improving language mapping in clinical fMRI through assessment of grammar.

    PubMed

    Połczyńska, Monika; Japardi, Kevin; Curtiss, Susan; Moody, Teena; Benjamin, Christopher; Cho, Andrew; Vigil, Celia; Kuhn, Taylor; Jones, Michael; Bookheimer, Susan

    2017-01-01

    Brain surgery in the language dominant hemisphere remains challenging due to unintended post-surgical language deficits, despite using pre-surgical functional magnetic resonance imaging (fMRI) and intraoperative cortical stimulation. Moreover, patients are often recommended not to undergo surgery if the accompanying risk to language appears to be too high. While standard fMRI language mapping protocols may have relatively good predictive value at the group level, they remain sub-optimal on an individual level. The standard tests used typically assess lexico-semantic aspects of language, and they do not accurately reflect the complexity of language either in comprehension or production at the sentence level. Among patients who had left hemisphere language dominance we assessed which tests are best at activating language areas in the brain. We compared grammar tests (items testing word order in actives and passives, wh-subject and object questions, relativized subject and object clauses and past tense marking) with standard tests (object naming, auditory and visual responsive naming), using pre-operative fMRI. Twenty-five surgical candidates (13 females) participated in this study. Sixteen patients presented with a brain tumor, and nine with epilepsy. All participants underwent two pre-operative fMRI protocols: one including CYCLE-N grammar tests (items testing word order in actives and passives, wh-subject and object questions, relativized subject and object clauses and past tense marking); and a second one with standard fMRI tests (object naming, auditory and visual responsive naming). fMRI activations during performance in both protocols were compared at the group level, as well as in individual candidates. 
The grammar tests generated more volume of activation in the left hemisphere (left/right angular gyrus, right anterior/posterior superior temporal gyrus) and identified additional language regions not shown by the standard tests (e.g., left anterior/posterior supramarginal gyrus). The standard tests produced more activation in left BA 47. Ten participants had more robust activations in the left hemisphere in the grammar tests and two in the standard tests. The grammar tests also elicited substantial activations in the right hemisphere and thus turned out to be superior at identifying both right and left hemisphere contribution to language processing. The grammar tests may be an important addition to the standard pre-operative fMRI testing.

  13. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents' Evaluation of Aural/Oral Performance of Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. 
Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  14. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    NASA Astrophysics Data System (ADS)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  15. Building Languages

    MedlinePlus


  16. Learning, neural plasticity and sensitive periods: implications for language acquisition, music training and transfer across the lifespan

    PubMed Central

    White, Erin J.; Hutka, Stefanie A.; Williams, Lynne J.; Moreno, Sylvain

    2013-01-01

    Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain’s ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research. PMID:24312022

  17. Using Music as a Background for Reading: An Exploratory Study.

    ERIC Educational Resources Information Center

    Mulliken, Colleen N.; Henk, William A.

    1985-01-01

    Reports on a study during which intermediate level students were exposed to three auditory backgrounds while reading (no music, classical music, and rock music), and their subsequent comprehension performance was measured. Concludes that the auditory background during reading may affect comprehension and that, for most students, rock music should…

  18. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    PubMed

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech, and language skills in children using hearing aids and/or cochlear implants compared to their peers with normal hearing. The aim of this study was therefore to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample comprised 92 Persian-speaking 5-to-7-year-old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All the children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based conditions: visual-only, auditory-only, and audiovisual. Scores were compared within and among the three groups with statistical tests at α = 0.05. Sentence repetition scores differed significantly across the V-only, A-only, and AV presentations in all three groups; the highest to lowest scores belonged, respectively, to the audiovisual, auditory-only, and visual-only formats in the children with normal hearing (P < 0.01), cochlear implants (P < 0.01), and hearing aids (P < 0.01). In addition, there was no significant correlation between the visual-only and audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were strongly correlated with auditory-only scores in all the 5-to-7-year-old children (r = 0.943, n = 92, P = 0.000). 
According to the study's findings, audiovisual integration occurs in the 5-to-7-year-old Persian children using hearing aid or cochlear implant during sentence repetition similar to their peers with normal hearing. Therefore, it is recommended that audiovisual sentence repetition should be used as a clinical criterion for auditory development in Persian-language children with hearing loss. Copyright © 2016. Published by Elsevier B.V.
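
    The correlation analysis reported above (e.g. r = 0.943 between audiovisual and auditory-only scores) is a standard Pearson correlation; a self-contained sketch with invented scores, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented sentence-repetition scores: when AV scores track A-only scores
# almost linearly, r approaches 1, as in the strong AV/A-only correlation
# the study reports.
av_scores = [10, 12, 15, 18, 20]
a_scores = [9, 11, 14, 17, 19]
r = pearson_r(av_scores, a_scores)
```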

  19. "Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach".

    PubMed

    Van de Putte, Eowyn; De Baene, Wouter; Price, Cathy J; Duyck, Wouter

    2018-05-01

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
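
    The cross-language decoding logic — train on patterns evoked in one language, test on the other — can be sketched with a nearest-centroid classifier standing in for the study's MVPA method (the patterns, voxel counts, and trial data here are invented):

```python
def nearest_centroid(train, test_pattern):
    """Predict a concept label for test_pattern.

    train: dict mapping concept -> list of voxel-pattern lists.
    """
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # Average the training patterns for each concept into a centroid.
    centroids = {
        concept: [sum(col) / len(pats) for col in zip(*pats)]
        for concept, pats in train.items()
    }
    return min(centroids, key=lambda c: sq_dist(centroids[c], test_pattern))

# Toy 3-voxel patterns from Dutch trials for two concepts...
dutch = {"maan": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
         "zon": [[0.1, 1.0, 0.9], [0.0, 0.8, 1.0]]}
# ...used to classify a French "lune" trial: above-chance prediction
# across languages is the signature of shared semantic representations.
french_lune_trial = [0.8, 0.15, 0.05]
pred = nearest_centroid(dutch, french_lune_trial)
```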

  20. Auditory Neuropathy Spectrum Disorder (ANSD) (For Parents)

    MedlinePlus


  1. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    PubMed

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent awake surgery for a left temporoinsular low-grade glioma, with intraoperative electrical stimulation mapping allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Assessment and Management of Unusual Auditory Behavior in Infants and Toddlers.

    ERIC Educational Resources Information Center

    Kile, Jack E.; And Others

    1994-01-01

    This article describes assessment and management strategies for infants and toddlers with normal hearing or fluctuating conductive hearing loss, who are identified as having central auditory impairment and/or judged to have abnormal auditory behavior. Management strategies include audiologic, medical, and speech and language management. Three case…

  3. Auditory Backward Masking Deficits in Children with Reading Disabilities

    ERIC Educational Resources Information Center

    Montgomery, Christine R.; Morris, Robin D.; Sevcik, Rose A.; Clarkson, Marsha G.

    2005-01-01

    Studies evaluating temporal auditory processing among individuals with reading and other language deficits have yielded inconsistent findings due to methodological problems (Studdert-Kennedy & Mody, 1995) and sample differences. In the current study, seven auditory masking thresholds were measured in fifty-two 7- to 10-year-old children (26…

  4. Language Proficiency and Sustained Attention in Monolingual and Bilingual Children with and without Language Impairment

    PubMed Central

    Boerma, Tessel; Leseman, Paul; Wijnen, Frank; Blom, Elma

    2017-01-01

    Background: The language profiles of children with language impairment (LI) and bilingual children can show partial, and possibly temporary, overlap. The current study examined the persistence of this overlap over time. Furthermore, we aimed to better understand why the language profiles of these two groups show resemblance, testing the hypothesis that the language difficulties of children with LI reflect a weakened ability to maintain attention to the stream of linguistic information. Consequent incomplete processing of language input may lead to delays that are similar to those originating from reductions in input frequency. Methods: Monolingual and bilingual children with and without LI (N = 128), aged 5–8 years old, participated in this study. Dutch receptive vocabulary and grammatical morphology were assessed at three waves. In addition, auditory and visual sustained attention were tested at wave 1. Mediation analyses were performed to examine relationships between LI, sustained attention, and language skills. Results: Children with LI and bilingual children were outperformed by their typically developing (TD) and monolingual peers, respectively, on vocabulary and morphology at all three waves. The vocabulary difference between monolinguals and bilinguals decreased over time. In addition, children with LI had weaker auditory and visual sustained attention skills relative to TD children, while no differences between monolinguals and bilinguals emerged. Auditory sustained attention mediated the effect of LI on vocabulary and morphology in both the monolingual and bilingual groups of children. Visual sustained attention only acted as a mediator in the bilingual group. Conclusion: The findings from the present study indicate that the overlap between the language profiles of children with LI and bilingual children is particularly large for vocabulary in early (pre)school years and reduces over time. 
Results furthermore suggest that the overlap may be explained by the weakened ability of children with LI to sustain their attention to auditory stimuli, interfering with how well incoming language is processed. PMID:28785235
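
    The mediation analyses described in this record have a simple core: path a (group predicts attention), path b (attention predicts language with group partialled out), and their product as the indirect effect. A minimal sketch with invented numbers (a real analysis would use the full dataset and bootstrap confidence intervals):

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residuals(x, y):
    """Residuals of y after regressing out x."""
    b = slope(x, y)
    intercept = sum(y) / len(y) - b * (sum(x) / len(x))
    return [yi - (intercept + b * xi) for xi, yi in zip(x, y)]

# Invented data: X = group (0 = TD, 1 = LI), M = auditory sustained
# attention score, Y = vocabulary score.
X = [0, 0, 0, 0, 1, 1, 1, 1]
M = [10, 11, 9, 10, 6, 7, 5, 6]
Y = [50, 52, 49, 51, 40, 43, 38, 41]

a = slope(X, M)                              # path a: LI lowers attention
b = slope(residuals(X, M), residuals(X, Y))  # path b, X partialled out
indirect = a * b                             # indirect (mediated) effect
```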

  5. Auditory Support in Linguistically Diverse Classrooms: Factors Related to Bilingual Text-to-Speech Use

    ERIC Educational Resources Information Center

    Van Laere, E.; Braak, J.

    2017-01-01

    Text-to-speech technology can act as an important support tool in computer-based learning environments (CBLEs) as it provides auditory input, next to on-screen text. Particularly for students who use a language at home other than the language of instruction (LOI) applied at school, text-to-speech can be useful. The CBLE E-Validiv offers content in…

  6. Auditory neuropathy spectrum disorder in late preterm and term infants with severe jaundice.

    PubMed

    Saluja, Satish; Agarwal, Asha; Kler, Neelam; Amin, Sanjiv

    2010-11-01

    To evaluate whether severe jaundice is associated with acute auditory neuropathy spectrum disorder in otherwise healthy late preterm and term neonates. In a prospective observational study, all neonates admitted with severe jaundice at levels at which exchange transfusion may be indicated per American Academy of Pediatrics guidelines had a comprehensive auditory evaluation performed before discharge home. Neonates with infection, perinatal asphyxia, chromosomal disorders, cranio-facial malformations, or family history of childhood hearing loss were excluded. Comprehensive auditory evaluations (tympanometry, oto-acoustic emission tests, and auditory brainstem evoked responses) were performed by an audiologist unaware of the severity of jaundice. Total serum bilirubin and serum albumin were measured at the institutional chemistry laboratory using the Diazo and Bromocresol purple method, respectively. A total of 13 neonates with total serum bilirubin concentrations at which exchange transfusion is indicated per American Academy of Pediatrics guidelines were admitted to the Neonatal Intensive Care Unit over a 3-month period. Six out of 13 neonates (46%) had audiological findings of acute auditory neuropathy spectrum disorder. There was no significant difference in gestational age, birth weight, hemolysis, serum albumin concentration, peak total serum bilirubin concentrations, and peak bilirubin:albumin molar ratio between the six neonates who developed acute auditory neuropathy and the seven neonates who had normal audiological findings. Only two out of six infants with auditory neuropathy spectrum disorder had clinical signs and symptoms of acute bilirubin encephalopathy. Our findings strongly suggest that auditory neuropathy spectrum disorder is a common manifestation of acute bilirubin-induced neurotoxicity in late preterm and term infants with severe jaundice. 
Our findings also suggest that comprehensive auditory evaluations should be routinely performed in neonates with severe jaundice irrespective of the presence of clinical findings of acute bilirubin encephalopathy. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT).

    PubMed

    Mohammad Esmaeilzadeh, Sahar; Sharifi, Shahla; Tayarani Niknezhad, Hamid

    2013-09-01

    Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and key words in this article used to find appropriate references. Those articles which are related to ours in content were selected. AVT, MT, and PT enhance children's communication and language skills from an early age. Each method has a meaningful impact on hearing loss, so by integrating them we have a comprehensive method in order to facilitate communication and language learning. To achieve this goal, the article offers methods and techniques to perform AVT and MT integrated with PT leading to an approach which offers all advantages of these three types of therapy.

  8. Auditory-Verbal Music Play Therapy: An Integrated Approach (AVMPT)

    PubMed Central

    Mohammad Esmaeilzadeh, Sahar; Sharifi, Shahla; Tayarani Niknezhad, Hamid

    2013-01-01

    Introduction: Hearing loss occurs when there is a problem with one or more parts of the ear or ears and causes children to have a delay in the language-learning process. Hearing loss affects children's lives and their development. Several approaches have been developed over recent decades to help hearing-impaired children develop language skills. Auditory-verbal therapy (AVT) is one such approach. Recently, researchers have found that music and play have a considerable effect on the communication skills of children, leading to the development of music therapy (MT) and play therapy (PT). There have been several studies which focus on the impact of music on hearing-impaired children. The aim of this article is to review studies conducted in AVT, MT, and PT and their efficacy in hearing-impaired children. Furthermore, the authors aim to introduce an integrated approach of AVT, MT, and PT which facilitates language and communication skills in hearing-impaired children. Materials and Methods: In this article we review studies of AVT, MT, and PT and their impact on hearing-impaired children. To achieve this goal, we searched databases and journals including Elsevier, Chor Teach, and Military Psychology, for example. We also used reliable websites such as American Choral Directors Association and Joint Committee on Infant Hearing websites. The websites were reviewed and key words in this article used to find appropriate references. Those articles which are related to ours in content were selected. Conclusion: AVT, MT, and PT enhance children’s communication and language skills from an early age. Each method has a meaningful impact on hearing loss, so by integrating them we have a comprehensive method in order to facilitate communication and language learning. To achieve this goal, the article offers methods and techniques to perform AVT and MT integrated with PT leading to an approach which offers all advantages of these three types of therapy. PMID:24303441

  9. INDIVIDUAL DIFFERENCES IN AUDITORY PROCESSING IN SPECIFIC LANGUAGE IMPAIRMENT: A FOLLOW-UP STUDY USING EVENT-RELATED POTENTIALS AND BEHAVIOURAL THRESHOLDS

    PubMed Central

    Bishop, Dorothy V.M.; McArthur, Genevieve M.

    2005-01-01

    It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory processing. We conducted a study of 16 young people with specific language impairment (SLI) and 16 control participants, 24 of whom had had auditory event-related potentials (ERPs) and frequency discrimination thresholds assessed 18 months previously. When originally assessed, around one third of the listeners with SLI had poor behavioural frequency discrimination thresholds, and these tended to be the younger participants. However, most of the SLI group had age-inappropriate late components of the auditory ERP, regardless of their frequency discrimination. At follow-up, the behavioural thresholds of those with poor frequency discrimination improved, though some remained outside the control range. At follow-up, ERPs for many of the individuals in the SLI group were still not age-appropriate. In several cases, waveforms of individuals in the SLI group resembled those of younger typically-developing children, though in other cases the waveform was unlike that of control cases at any age. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. This study emphasises the variability seen in SLI, and the importance of studying individual cases rather than focusing on group means. PMID:15871598

  10. Early neural disruption and auditory processing outcomes in rodent models: implications for developmental language disability

    PubMed Central

    Fitch, R. Holly; Alexander, Michelle L.; Threlkeld, Steven W.

    2013-01-01

    Most researchers in the field of neural plasticity are familiar with the “Kennard Principle,” which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate—both developmentally and functionally—with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic (HI) injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human “term,” but only transient deficits (undetectable in adulthood) when induced in a “preterm” window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing (RAP) outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in human populations. PMID:24155699

  11. Language-Specific Attention Treatment for Aphasia: Description and Preliminary Findings.

    PubMed

    Peach, Richard K; Nathan, Meghana R; Beck, Katherine M

    2017-02-01

    The need for a specific, language-based treatment approach to aphasic impairments associated with attentional deficits is well documented. We describe language-specific attention treatment, a specific skill-based approach for aphasia that exploits increasingly complex linguistic tasks that focus attention. The program consists of eight tasks, some with multiple phases, to assess and treat lexical and sentence processing. Validation results demonstrate that these tasks load on six attentional domains: (1) executive attention; (2) attentional switching; (3) visual selective attention/processing speed; (4) sustained attention; (5) auditory-verbal working memory; and (6) auditory processing speed. The program demonstrates excellent inter- and intrarater reliability and adequate test-retest reliability. Two of four people with aphasia exposed to this program demonstrated good language recovery whereas three of the four participants showed improvements in auditory-verbal working memory. The results provide support for this treatment program in patients with aphasia having no greater than a moderate degree of attentional impairment.

  12. Musical Experience Influences Statistical Learning of a Novel Language

    PubMed Central

    Shook, Anthony; Marian, Viorica; Bartolotti, James; Schroeder, Scott R.

    2014-01-01

    Musical experience may benefit learning a new language by enhancing the fidelity with which the auditory system encodes sound. In the current study, participants with varying degrees of musical experience were exposed to two statistically-defined languages consisting of auditory Morse-code sequences which varied in difficulty. We found an advantage for highly-skilled musicians, relative to less-skilled musicians, in learning novel Morse-code based words. Furthermore, in the more difficult learning condition, performance of lower-skilled musicians was mediated by their general cognitive abilities. We suggest that musical experience may lead to enhanced processing of statistical information and that musicians’ enhanced ability to learn statistical probabilities in a novel Morse-code language may extend to natural language learning. PMID:23505962

  13. Effects of Asymmetric Cultural Experiences on the Auditory Pathway: Evidence from Music

    PubMed Central

    Wong, Patrick C. M.; Perrachione, Tyler K.; Margulis, Elizabeth Hellmuth

    2009-01-01

    Cultural experiences come in many different forms, such as immersion in a particular linguistic community, exposure to faces of people with different racial backgrounds, or repeated encounters with music of a particular tradition. In most circumstances, these cultural experiences are asymmetric, meaning one type of experience occurs more frequently than other types (e.g., a person raised in India will likely encounter the Indian todi scale more so than a Westerner). In this paper, we will discuss recent findings from our laboratories that reveal the impact of short- and long-term asymmetric musical experiences on how the nervous system responds to complex sounds. We will discuss experiments examining how musical experience may facilitate the learning of a tone language, how musicians develop neural circuitries that are sensitive to musical melodies played on their instrument of expertise, and how even everyday listeners who have little formal training are particularly sensitive to music of their own culture(s). An understanding of these cultural asymmetries is useful in formulating a more comprehensive model of auditory perceptual expertise that considers how experiences shape auditory skill levels. Such a model has the potential to aid in the development of rehabilitation programs for the efficacious treatment of neurologic impairments. PMID:19673772

  14. Reading sentences describing high- or low-pitched auditory events: only pianists show evidence for a horizontal space-pitch association.

    PubMed

    Wolter, Sibylla; Dudschig, Carolin; Kaup, Barbara

    2017-11-01

    This study explored differences between pianists and non-musicians during reading of sentences describing high- or low-pitched auditory events. Based on the embodied model of language comprehension, it was hypothesized that the experience of playing the piano encourages an association between high-pitched sounds and the right side of space, and between low-pitched sounds and the left. This pitch-space association is assumed to be elicited during understanding of sentences describing either a high- or low-pitched auditory event. In this study, pianists and non-musicians were tested based on the hypothesis that only pianists show a compatibility effect between implied pitch height and horizontal space, because only pianists have the corresponding experience with the piano keyboard. Participants read pitch-related sentences (e.g., the bear growls deeply, the soprano singer sings an aria) and judged whether the sentence was sensible or not by pressing either a left or right response key. The results indicated that only the pianists showed the predicted compatibility effect between implied pitch height and response location. Based on the results, it can be inferred that the experience of playing the piano led to an association between horizontal space and pitch height in pianists, while no such spatial association was elicited in non-musicians.

  15. Comparison of Functional Network Connectivity for Passive-Listening and Active-Response Narrative Comprehension in Adolescents

    PubMed Central

    Holland, Scott K.

    2014-01-01

    Comprehension of narrative stories plays an important role in the development of language skills. In this study, we compared brain activity elicited by a passive-listening version and an active-response (AR) version of a narrative comprehension task by using independent component (IC) analysis on functional magnetic resonance imaging data from 21 adolescents (ages 14–18 years). Furthermore, we explored differences in the functional network connectivity engaged by the two versions of the task and investigated the relationship between online response time and the strength of connectivity between each pair of ICs. Despite similar involvement of auditory, temporoparietal, and frontoparietal language networks in both versions, the AR version engages additional network elements including the left dorsolateral prefrontal, anterior cingulate, and sensorimotor networks. These additional involvements are likely associated with working memory and maintenance of attention, attributable to differences in the cognitive strategies required by the two versions. We found a significant positive correlation between online response time and the strength of connectivity between an IC in the left inferior frontal region and an IC in the sensorimotor region. One explanation for this finding is that longer reaction times reflect stronger connections between the frontal and sensorimotor networks, driven by increased activation in adolescents who require more effort to complete the task. PMID:24689887

  16. Dissociations and associations of performance in syntactic comprehension in aphasia and their implications for the nature of aphasic deficits.

    PubMed

    Caplan, David; Michaud, Jennifer; Hufford, Rebecca

    2013-10-01

    Sixty-one persons with aphasia were tested on syntactic comprehension in three tasks: sentence-picture matching, sentence-picture matching with auditory moving window presentation, and object manipulation. There were significant correlations of performances on sentences across tasks. First factors on which all sentence types loaded in unrotated factor analyses accounted for most of the variance in each task. Dissociations in performance between sentence types that differed minimally in their syntactic structures were not consistent across tasks. These results replicate previous results with smaller samples and provide important validation of basic aspects of aphasic performance in this area of language processing. They point to the role of a reduction in processing resources and of the interaction of task demands and parsing and interpretive abilities in the genesis of patient performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Working memory predicts semantic comprehension in dichotic listening in older adults.

    PubMed

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. A Model of Auditory-Cognitive Processing and Relevance to Clinical Applicability.

    PubMed

    Edwards, Brent

    2016-01-01

    Hearing loss and cognitive function interact in both a bottom-up and top-down relationship. Listening effort is tied to these interactions, and models have been developed to explain their relationship. The Ease of Language Understanding model in particular has gained considerable attention in its explanation of the effect of signal distortion on speech understanding. Signal distortion can also affect auditory scene analysis ability, however, resulting in a distorted auditory scene that can affect cognitive function, listening effort, and the allocation of cognitive resources. These effects are explained through an addition to the Ease of Language Understanding model. This model can be generalized to apply to all sounds, not only speech, representing the increased effort required for auditory environmental awareness and other nonspeech auditory tasks. While the authors have measures of speech understanding and cognitive load to quantify these interactions, they are lacking measures of the effect of hearing aid technology on auditory scene analysis ability and how effort and attention varies with the quality of an auditory scene. Additionally, the clinical relevance of hearing aid technology on cognitive function and the application of cognitive measures in hearing aid fittings will be limited until effectiveness is demonstrated in real-world situations.

  19. Auditory processing theories of language disorders: past, present, and future.

    PubMed

    Miller, Carol A

    2011-07-01

    The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory processing disorder (APD), specific language impairment (SLI), and dyslexia. The history of auditory processing theories of these 3 disorders is described, points of convergence and controversy within and among the different branches of research literature are considered, and the influence of research on practice is discussed. The theoretical and clinical contributions of neurophysiological methods are also reviewed, and suggested approaches for critical reading of the research literature are provided. Research on the role of auditory processing in communication disorders springs from a variety of theoretical perspectives and assumptions, and this variety, combined with controversies over the interpretation of research results, makes it difficult to draw clinical implications from the literature. Neurophysiological research methods are a promising route to better understanding of auditory processing. Progress in theory development and its clinical application is most likely to be made when researchers from different disciplines and theoretical perspectives communicate clearly and combine the strengths of their approaches.

  20. Statistical learning of music- and language-like sequences and tolerance for spectral shifts.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2015-02-01

    In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response can serve as a marker of the statistical learning of a pitch sequence in which each tone was ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in N1m responses, on the assumption that language and music share domain-general processing. Using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-order transitional rules were embedded according to a Markov stochastic model by controlling fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, the N1m responses to tones that appeared with higher transitional probability were significantly decreased compared with responses to tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, the amplitude difference was retained within the last one-third of the sequence, after the spectral shifts. However, in the language-like sequence without pitch change, no significant difference could be detected. Pitch change may thus facilitate statistical learning in language and music, and statistically acquired knowledge may be applied to process altered auditory sequences with spectral shifts. The relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans. Copyright © 2014 Elsevier Inc. All rights reserved.
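
    As an illustration of the kind of Markov stochastic model referred to above, the following sketch generates a tone sequence from a hypothetical first-order transition matrix and then recovers the transitional probabilities from the observed sequence. The five-tone alphabet and the 0.8/0.05/0.1 weights are invented for illustration and are not this study's stimuli.

```python
import random
from collections import Counter

# Hypothetical first-order Markov model over five tones: each row gives the
# transition probabilities from the current tone to the next (frequent
# transitions at 0.8, rare ones at 0.05-0.1).
P = {
    "A": {"B": 0.8, "C": 0.05, "D": 0.05, "E": 0.1},
    "B": {"C": 0.8, "A": 0.05, "D": 0.05, "E": 0.1},
    "C": {"D": 0.8, "A": 0.05, "B": 0.05, "E": 0.1},
    "D": {"E": 0.8, "A": 0.05, "B": 0.05, "C": 0.1},
    "E": {"A": 0.8, "B": 0.05, "C": 0.05, "D": 0.1},
}

def generate_sequence(length, seed=0):
    """Sample a tone sequence from the transition matrix P."""
    rng = random.Random(seed)
    seq = ["A"]
    while len(seq) < length:
        row = P[seq[-1]]
        seq.append(rng.choices(list(row), weights=list(row.values()))[0])
    return seq

def transitional_probabilities(seq):
    """Estimate P(next | current) from the observed sequence."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

seq = generate_sequence(2000)
tp = transitional_probabilities(seq)
print(tp[("A", "B")])   # should be close to the generating value of 0.8
```

    A learner (human or model) that tracks these conditional probabilities can distinguish high- from low-probability transitions, which is the contrast the N1m amplitude difference indexes.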

  1. The Diagnosis and Management of Auditory Processing Disorder

    ERIC Educational Resources Information Center

    Moore, David R.

    2011-01-01

    Purpose: To provide a personal perspective on auditory processing disorder (APD), with reference to the recent clinical forum on APD and the needs of clinical speech-language pathologists and audiologists. Method: The Medical Research Council-Institute of Hearing Research (MRC-IHR) has been engaged in research into APD and auditory learning for 8…

  2. Auditory Processing Disorder and Auditory/Language Interventions: An Evidence-Based Systematic Review

    ERIC Educational Resources Information Center

    Fey, Marc E.; Richard, Gail J.; Geffner, Donna; Kamhi, Alan G.; Medwetsky, Larry; Paul, Diane; Ross-Swain, Deborah; Wallach, Geraldine P.; Frymark, Tobi; Schooling, Tracy

    2011-01-01

    Purpose: In this systematic review, the peer-reviewed literature on the efficacy of interventions for school-age children with auditory processing disorder (APD) is critically evaluated. Method: Searches of 28 electronic databases yielded 25 studies for analysis. These studies were categorized by research phase (e.g., exploratory, efficacy) and…

  3. [Children with specific language impairment: electrophysiological and pedaudiological findings].

    PubMed

    Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S

    2014-08-01

    Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. 14 children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological Mismatch Negativity (MMN) recording employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition and dichotic listening tasks, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in complex auditory tasks, such as repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Suppression and Working Memory in Auditory Comprehension of L2 Narratives: Evidence from Cross-Modal Priming

    ERIC Educational Resources Information Center

    Wu, Shiyu; Ma, Zheng

    2016-01-01

    Using a cross-modal priming task, the present study explores whether Chinese-English bilinguals process goal related information during auditory comprehension of English narratives like native speakers. Results indicate that English native speakers adopted both mechanisms of suppression and enhancement to modulate the activation of goals and keep…

  5. The contribution of short-term memory capacity to reading ability in adolescents with cochlear implants.

    PubMed

    Edwards, Lindsey; Aitkenhead, Lynne; Langdon, Dawn

    2016-11-01

    This study aimed to establish the relationship between short-term memory capacity and reading skills in adolescents with cochlear implants. A between-groups design compared a group of young people with cochlear implants with a group of hearing peers on measures of reading, and auditory and visual short-term memory capacity. The groups were matched for non-verbal IQ and age. The adolescents with cochlear implants were recruited from the Cochlear Implant Programme at a specialist children's hospital. The hearing participants were recruited from the same schools as those attended by the implanted adolescents. Participants were 18 cochlear implant users and 14 hearing controls, aged between 12 and 18 years. All used English as their main language and had no significant learning disability or neuro-developmental disorder. Short-term memory capacity was assessed in the auditory modality using Forward and Reverse Digit Span from the WISC IV UK, and visually using Forward and Reverse Memory from the Leiter-R. Individual word reading, reading comprehension and pseudoword decoding were assessed using the WIAT II UK. A series of ANOVAs revealed that the adolescents with cochlear implants had significantly poorer auditory short-term memory capacity and reading skills (on all measures) compared with their hearing peers. However, when Forward Digit Span was entered into the analyses as a covariate, none of the differences remained statistically significant. Deficits in immediate auditory memory persist into adolescence in deaf children with cochlear implants. Short-term auditory memory capacity is an important neurocognitive process in the development of reading skills after cochlear implantation in childhood that remains evident in later adolescence. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Implicit semantic priming in Spanish-speaking children and adults: an auditory lexical decision task.

    PubMed

    Girbau, Dolors; Schwartz, Richard G

    2011-05-01

    Although receptive priming has long been used as a way to examine lexical access in adults, few studies have applied this method to children, and rarely in the auditory modality. We compared auditory associative priming in children and adults. A testing battery and a Lexical Decision (LD) task were administered to 42 adults and 27 children (8;1–10;11 years old) from Spain. They listened to Spanish word pairs (semantically related/unrelated word pairs and word-pseudoword pairs) and tone pairs. Participants pressed one key for word pairs and another for pairs with a word and a pseudoword; they also pressed the two keys alternately for tone pairs as a basic auditory control. Both groups, children and adults, exhibited semantic priming, with significantly faster Reaction Times (RTs) to semantically related word pairs than to unrelated pairs and to the two word-pseudoword sets. The priming effect was twice as large in adults as in children, and the children (not the adults) were significantly slower in their responses to word-pseudoword pairs than to unrelated word pairs. Moreover, accuracy was somewhat higher in adults than in children for each word pair type, especially for the word-pseudoword pairs. As expected, children were significantly slower than adults in RTs for all stimulus types; their RTs decreased significantly from 8 to 10 years of age and also in relation to the development of some language abilities (e.g., relative clause comprehension). In both age groups, the mean Reaction Time for tone pairs was lower than for speech pairs, but only adults reached 100% accuracy (children were slightly lower). Auditory processing and semantic networks are still developing in 8-10 year old children.

  7. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.

    PubMed

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks; however, musically sophisticated speakers do show enhanced pitch discrimination compared to Finnish speakers with less musical experience, and greater duration modulation in a complex task. These results are consistent with a ceiling effect for certain sound features that corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in real-world musical situations. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the interaction of specific language features with musical experiences.

  9. Language development in Japanese children who receive cochlear implant and/or hearing aid.

    PubMed

    Iwasaki, Satoshi; Nishio, Shinya; Moteki, Hideaki; Takumi, Yutaka; Fukushima, Kunihiro; Kasai, Norio; Usami, Shin-Ichi

    2012-03-01

    This study aimed to investigate a wide variety of factors that influence auditory, speech, and language development following pediatric cochlear implantation (CI). Language test data were collected prospectively in profoundly hearing-impaired children. Pediatric CI can be effective for the development of practical communication skills, and early implantation is more effective. We proposed a set of language tests (assessment package of the language development for Japanese hearing-impaired children; ALADJIN) consisting of communication skills testing (test for question-answer interaction development; TQAID), receptive vocabulary (Peabody Picture Vocabulary Test-Revised; PVT-R, and Standardized Comprehension Test for Abstract Words; SCTAW), productive vocabulary (Word Fluency Test; WFT), and syntax comprehension and production (Syntactic processing Test for Aphasia; STA). Of 638 hearing-impaired children recruited for this study, 282 (44.2%) with >70 dB hearing impairment had undergone CI. After excluding children with low birth weight (<1800 g), those with >11 points on the Pervasive Developmental Disorder ASJ Rating Scale (a test of autistic tendency), and those <2 SD on Raven's Colored Progressive Matrices (a test of non-verbal intelligence), 190 children were subjected to this set of language tests. Sixty children (31.6%) were unilateral CI-only users, 128 (67.4%) were CI-hearing aid (HA) users, and 2 (1.1%) were bilateral CI users. The hearing loss level of CI users was significantly (p<0.01) worse than that of HA-only users. However, the threshold level, maximum speech discrimination score, and speech intelligibility rating in CI users were significantly (p<0.01) better than those in HA-only users. The scores for PVT-R (p<0.01), SCTAW, and WFT in CI users were better than those in HA-only users. STA and TQAID scores in CI-HA users were significantly (p<0.05) better than those in unilateral CI-only users. A high correlation (r=0.52) was found between age at CI and maximum speech discrimination score, and scores on the speech and language tests were better in children implanted before 24 months of age than in those implanted later. These results indicate that CI was effective for language development in Japanese hearing-impaired children and that early CI was more effective for productive vocabulary and syntax. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Sensitivity to audio-visual synchrony and its relation to language abilities in children with and without ASD.

    PubMed

    Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J

    2018-04-01

    Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3 s, 0.6 s, or 1.0 s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children who were language-matched to the ASD group were sensitive to both the 0.6 s and 1.0 s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. Lay summary: The goal of the present study was to explore whether children with ASD process audio-visual synchrony in ways comparable to their typically developing peers, and the relationship between preference for synchrony and language ability. Results showed differences in attention to audiovisual synchrony between typically developing children and children with ASD, and preference for synchrony was related to children's language abilities across groups. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.

  11. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    ERIC Educational Resources Information Center

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  12. From CNTNAP2 to Early Expressive Language in Infancy: The Mediation Role of Rapid Auditory Processing.

    PubMed

    Riva, Valentina; Cantiani, Chiara; Benasich, April A; Molteni, Massimo; Piazza, Caterina; Giorda, Roberto; Dionne, Ginette; Marino, Cecilia

    2018-06-01

    Although it is clear that early language acquisition can be a target of CNTNAP2, the pathway between gene and language is still largely unknown. This research focused on the mediation role of rapid auditory processing (RAP). We tested RAP at 6 months of age by the use of event-related potentials, as a mediator between common variants of the CNTNAP2 gene (rs7794745 and rs2710102) and 20-month-old language outcome in a prospective longitudinal study of 96 Italian infants. The mediation model examines the hypothesis that language outcome is explained by a sequence of effects involving RAP and CNTNAP2. The ability to discriminate spectrotemporally complex auditory frequency changes at 6 months of age mediates the contribution of rs2710102 to expressive vocabulary at 20 months. The indirect effect revealed that rs2710102 C/C was associated with lower P3 amplitude in the right hemisphere, which, in turn, predicted poorer expressive vocabulary at 20 months of age. These findings add to a growing body of literature implicating RAP as a viable marker in genetic studies of language development. The results demonstrate a potential developmental cascade of effects, whereby CNTNAP2 drives RAP functioning that, in turn, contributes to early expressive outcome.
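    The single-mediator logic this abstract describes (gene variant → RAP marker → language outcome) can be illustrated numerically: the indirect effect is the product of the gene→mediator path and the mediator→outcome path. The sketch below is a minimal, hedged illustration with made-up toy data and unadjusted ordinary-least-squares slopes; it is not the study's actual model, which used formal mediation analysis on real genotype, ERP, and vocabulary data.

```python
def ols_slope(x, y):
    """Simple ordinary-least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical toy data (NOT the study's): genotype coded 0/1/2,
# a P3-amplitude-like RAP measure, and an expressive vocabulary score.
genotype = [0, 0, 1, 1, 2, 2]
rap      = [1.0, 1.2, 2.1, 1.9, 3.0, 3.2]
vocab    = [10, 12, 21, 19, 30, 33]

a = ols_slope(genotype, rap)   # path: gene -> mediator (RAP)
b = ols_slope(rap, vocab)      # path: mediator -> outcome (unadjusted)
indirect = a * b               # the mediated (indirect) effect
```

In a real analysis the b path is estimated while controlling for genotype, and the indirect effect is tested with bootstrapped confidence intervals; the product-of-paths idea above is the core of the approach.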

  13. Early electrophysiological markers of atypical language processing in prematurely born infants.

    PubMed

    Paquette, Natacha; Vannasing, Phetsamone; Tremblay, Julie; Lefebvre, Francine; Roy, Marie-Sylvie; McKerral, Michelle; Lepore, Franco; Lassonde, Maryse; Gallagher, Anne

    2015-12-01

    Because nervous system development may be affected by prematurity, many prematurely born children present language or cognitive disorders at school age. The goal of this study is to investigate whether these impairments can be identified early in life using electrophysiological auditory event-related potentials (AERPs) and mismatch negativity (MMN). Brain responses to speech and non-speech stimuli were assessed in prematurely born children to identify early electrophysiological markers of language and cognitive impairments. Participants were 74 children (41 full-term, 33 preterm) aged 3, 12, and 36 months. Pre-attentional auditory responses (MMN and AERPs) were assessed using an oddball paradigm, with speech and non-speech stimuli presented in counterbalanced order between participants. Language and cognitive development were assessed using the Bayley Scale of Infant Development, Third Edition (BSID-III). Results show that preterms as young as 3 months old had delayed MMN response to speech stimuli compared to full-terms. A significant negative correlation was also found between MMN latency to speech sounds and the BSID-III expressive language subscale. However, no significant differences between full-terms and preterms were found for the MMN to non-speech stimuli, suggesting preserved pre-attentional auditory discrimination abilities in these children. Identification of early electrophysiological markers for delayed language development could facilitate timely interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  15. Computer-based auditory training (CBAT): benefits for children with language- and reading-related learning difficulties.

    PubMed

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M

    2010-08-01

    This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent to which it can benefit children with auditory processing disorder (APD). Searches were confined to studies published between 2000 and 2008, rated according to the level-of-evidence hierarchy proposed by the American Speech-Language-Hearing Association (ASHA) in 2004. We identified 16 studies of two commercially available CBAT programs (13 studies of Fast ForWord (FFW) and three studies of Earobics) and five further outcome studies of other non-speech and simple speech sounds training, available for children with language, learning, and reading difficulties. The results suggest that, apart from phonological awareness skills, the FFW and Earobics programs seem to have little effect on children's language, spelling, and reading skills. Non-speech and simple speech sounds training may be effective in improving children's reading skills, but only if it is delivered by an audio-visual method. There is some initial evidence to suggest that CBAT may benefit children with APD. Further research is necessary, however, to substantiate these preliminary findings.

  16. The perception of FM sweeps by Chinese and English listeners.

    PubMed

    Luo, Huan; Boemio, Anthony; Gordon, Michael; Poeppel, David

    2007-02-01

    Frequency-modulated (FM) signals are an integral acoustic component of ecologically natural sounds and are analyzed effectively in the auditory systems of humans and animals. Linearly frequency-modulated tone sweeps were used here to evaluate two questions. First, how rapid a sweep can listeners accurately perceive? Second, is there an effect of native language insofar as the language (phonology) is differentially associated with processing of FM signals? Speakers of English and Mandarin Chinese were tested to evaluate whether being a speaker of a tone language altered the perceptual identification of non-speech tone sweeps. In two psychophysical studies, we demonstrate that Chinese subjects perform better than English subjects in FM direction identification, but not in an FM discrimination task, in which English and Chinese speakers show similar detection thresholds of approximately 20 ms duration. We suggest that the better FM direction identification in Chinese subjects is related to their experience with FM direction analysis in the tone-language environment, even though supra-segmental tonal variation occurs over a longer time scale. Furthermore, the observed common discrimination temporal threshold across two language groups supports the conjecture that processing auditory signals at durations of approximately 20 ms constitutes a fundamental auditory perceptual threshold.
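    The linearly frequency-modulated sweeps used as stimuli in studies like this one are straightforward to synthesize: the instantaneous frequency changes linearly over the sweep, and the tone's phase is the time-integral of that frequency. A minimal sketch (the parameters below are illustrative, not the study's exact stimuli):

```python
import math

def linear_fm_sweep(f_start, f_end, duration, sr=44100):
    """Synthesize a linear FM tone sweep.

    Instantaneous frequency: f(t) = f_start + (f_end - f_start) * t / duration.
    Phase is 2*pi times the integral of f(t), giving a quadratic phase term.
    """
    n = int(duration * sr)
    samples = []
    for i in range(n):
        t = i / sr
        phase = 2 * math.pi * (f_start * t
                               + (f_end - f_start) * t * t / (2 * duration))
        samples.append(math.sin(phase))
    return samples

# A 20 ms upward sweep from 1 to 2 kHz -- near the duration threshold
# reported for both listener groups.
sweep = linear_fm_sweep(1000, 2000, 0.020)
```

A downward sweep is obtained simply by setting f_start greater than f_end; direction-identification tasks contrast the two.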

  17. Assessment of Auditory Functioning of Deaf-Blind Multihandicapped Children.

    ERIC Educational Resources Information Center

    Kukla, Deborah; Connolly, Theresa Thomas

    The manual describes a procedure to assess to what extent a deaf-blind multiply handicapped student uses his residual hearing in the classroom. Six levels of auditory functioning (awareness/reflexive, attention/alerting, localization, auditory discrimination, recognition, and comprehension) are analyzed, and assessment activities are detailed for…

  18. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information have been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic, factors “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  19. Attentional but not pre-attentive neural measures of auditory discrimination are atypical in children with developmental language disorder.

    PubMed

    Kornilov, Sergey A; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L; Magnuson, James S

    2014-01-01

    We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.

  20. [In Process Citation]

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by impaired auditory comprehension, repetition of verbal material, and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and of suprasegmental features of verbal utterances (e.g., intonation contours) seems less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Lip reading and/or a slowed speaking rate often allow patients to compensate, within limits, for speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness document a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues, and/or spatial localization of acoustic events were compromised as well. The variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving the encoding of specific stimulus parameters, as documented in a variety of non-human species. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of a proposed "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere.
    Only a few instances of a relatively isolated disruption of the discrimination/identification of nonverbal sound sources, with uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  1. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    PubMed

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  2. Co-occurring motor, language and emotional-behavioral problems in children 3-6 years of age.

    PubMed

    King-Dowling, Sara; Missiuna, Cheryl; Rodriguez, M Christine; Greenway, Matt; Cairney, John

    2015-02-01

    Developmental Coordination Disorder (DCD) has been shown to co-occur with behavioral and language problems in school-aged children, but little is known as to when these problems begin to emerge, or if they are inherent in children with DCD. The purpose of this study was to determine if deficits in language and emotional-behavioral problems are apparent in preschool-aged children with movement difficulties. Two hundred and fourteen children (mean age 4 years 11 months, SD 9.8 months, 103 male) performed the Movement Assessment Battery for Children, 2nd Edition (MABC-2). Children falling at or below the 16th percentile were classified as being at risk for movement difficulties (MD risk). Auditory comprehension and expressive communication were examined using the Preschool Language Scales, 4th Edition (PLS-4). Parent-reported emotional and behavioral problems were assessed using the Child Behavior Checklist (CBCL). Preschool children with diminished motor coordination (n=37) were found to have lower language scores, higher externalizing behaviors in the form of increased aggression, as well as increased withdrawn and other behavior symptoms compared with their typically developing peers. Motor coordination, language and emotional-behavioral difficulties tend to co-occur in young children aged 3-6 years. These results highlight the need for early intervention. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Laterality and unilateral deafness: Patients with congenital right ear deafness do not develop atypical language dominance.

    PubMed

    Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc

    2016-12-01

    Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left-lateralized in all but one patient, contradicting previous small-scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Language impairment is reflected in auditory evoked fields.

    PubMed

    Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit

    2008-05-01

    Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.

  5. [Auditory event-related potentials in children with functional articulation disorders].

    PubMed

    Gao, Yan; Zheng, Xi-Fu; Hong, Qi; Luo, Xiao-Xing; Jiang, Tao-Tao

    2013-08-01

    This study investigated central auditory processing function in children with functional articulation disorders (FAD) and possible causes of FAD. Twenty-seven children with FAD were selected as the case group and 50 age-matched normal children were selected as the control group. The two groups were compared with respect to the following factors: percentage of individuals with a positive history of language development disorder, and the form, peak latency and peak amplitude of mismatch negativity (MMN) on auditory event-related potentials. Compared with the control group, the case group had a significantly higher percentage of individuals with a positive history of language development disorder (70% vs 8%; P<0.01), a significantly prolonged peak latency of MMN (209 ± 31 ms vs 175 ± 32 ms; P<0.01), and a lower peak amplitude of MMN that did not reach significance (P>0.05). Prolonged central auditory processing may be one of the causes of FAD in children.

  6. Perceptual elements in brain mechanisms of acoustic communication in humans and nonhuman primates.

    PubMed

    Reser, David H; Rosa, Marcello

    2014-12-01

    Ackermann et al. outline a model for elaboration of subcortical motor outputs as a driving force for the development of the apparently unique behaviour of language in humans. They emphasize circuits in the striatum and midbrain, and acknowledge, but do not explore, the importance of the auditory perceptual pathway for evolution of verbal communication. We suggest that understanding the evolution of language will also require understanding of vocalization perception, especially in the auditory cortex.

  7. Auditory Magnetic Mismatch Field Latency: A Biomarker for Language Impairment in Autism

    PubMed Central

    Roberts, Timothy P.L.; Cannon, Katelyn M.; Tavabi, Kambiz; Blaskey, Lisa; Khan, Sarah Y.; Monroe, Justin F.; Qasmieh, Saba; Levy, Susan E.; Edgar, J. Christopher

    2011-01-01

    Background Auditory processing abnormalities are frequently observed in Autism Spectrum Disorders (ASD), and these abnormalities may have sequelae in terms of clinical language impairment (LI). The present study assessed associations between language impairment and the amplitude and latency of the superior temporal gyrus magnetic mismatch field (MMF) in response to changes in an auditory stream of tones or vowels. Methods 51 children with ASD and 27 neurotypical controls, all aged 6-15 years, underwent neuropsychological evaluation, including tests of language function, as well as magnetoencephalographic (MEG) recording during presentation of tones and vowels. The MMF was identified in the difference waveform obtained by subtracting responses to standard stimuli from responses to deviant stimuli. Results MMF latency was significantly prolonged (p<0.001) in children with ASD compared to neurotypical controls. Furthermore, this delay was most pronounced (∼50 ms) in children with concomitant LI, with significant differences in latency between children with ASD with LI and those without (p<0.01). Receiver operating characteristic analysis indicated a sensitivity of 82.4% and specificity of 71.2% for diagnosing LI based on MMF latency. Conclusion Neural correlates of auditory change detection (the MMF) are significantly delayed in children with ASD, especially in those with concomitant LI, suggesting both a neurobiological basis for LI and a clinical biomarker for LI in ASD. PMID:21392733
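    The sensitivity and specificity figures reported above come from treating MMF latency as a single-cutoff classifier for LI. A hedged sketch of that computation; the cutoff and data below are hypothetical toy values, not the study's:

```python
def sens_spec(latencies_ms, has_li, cutoff_ms):
    """Sensitivity/specificity of the rule 'latency >= cutoff -> classify as LI'."""
    tp = sum(1 for l, li in zip(latencies_ms, has_li) if li and l >= cutoff_ms)
    fn = sum(1 for l, li in zip(latencies_ms, has_li) if li and l < cutoff_ms)
    tn = sum(1 for l, li in zip(latencies_ms, has_li) if not li and l < cutoff_ms)
    fp = sum(1 for l, li in zip(latencies_ms, has_li) if not li and l >= cutoff_ms)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical MMF latencies (ms), LI status, and a 200 ms cutoff.
lat = [210, 225, 195, 230, 180, 170, 205, 160]
li  = [True, True, True, True, False, False, False, False]
sens, spec = sens_spec(lat, li, 200)
```

Sweeping the cutoff and plotting sensitivity against 1 - specificity at each value traces the full ROC curve from which an operating point is chosen.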

  8. Patterns of Auditory Perception Skills in Children with Learning Disabilities: A Computer-Assisted Approach.

    ERIC Educational Resources Information Center

    Pressman, E.; And Others

    1986-01-01

    The auditory receptive language skills of 40 learning disabled (LD) and 40 non-disabled boys (all 7-11 years old) were assessed via computerized versions of subtests of the Goldman-Fristoe-Woodcock Auditory Skills Test Battery. The computerized assessment correctly identified 92.5% of the LD group and 65% of the normal control children. (DB)

  9. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    ERIC Educational Resources Information Center

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  10. Scanning silence: mental imagery of complex sounds.

    PubMed

    Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz

    2005-07-15

    In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.

  11. Ira Hirsh and oral deaf education: The role of audition in language development

    NASA Astrophysics Data System (ADS)

    Geers, Ann

    2002-05-01

    Prior to the 1960s, the teaching of speech to deaf children consisted primarily of instruction in lip reading and tactile perception accompanied by imitative exercises in speech sound production. Hirsh came to Central Institute for the Deaf with an interest in discovering the auditory capabilities of normal-hearing listeners. This interest led him to speculate that more normal speech development could be encouraged in deaf children by maximizing use of their limited residual hearing. Following the tradition of Max Goldstein, Edith Whetnall, and Dennis Fry, Hirsh gave scientific validity to the use of amplified speech as the primary avenue to oral language development in prelingually deaf children. This "auditory approach," combined with an emphasis on early intervention, formed the basis for auditory-oral education as we know it today. This presentation will examine how the speech perception, language, and reading skills of prelingually deaf children have changed as a result of improvements in auditory technology that have occurred over the past 30 years. Current data from children using cochlear implants will be compared with data collected earlier from children with profound hearing loss who used hearing aids. [Work supported by NIH.]

  12. A Meta-Analytic Study of the Neural Systems for Auditory Processing of Lexical Tones.

    PubMed

    Kwok, Veronica P Y; Dan, Guo; Yakpo, Kofi; Matthews, Stephen; Fox, Peter T; Li, Ping; Tan, Li-Hai

    2017-01-01

    The neural systems of lexical tone processing have been studied for many years. However, previous findings have been mixed with regard to the hemispheric specialization for the perception of linguistic pitch patterns in native speakers of tonal languages. In this study, we performed two activation likelihood estimation (ALE) meta-analyses, one on neuroimaging studies of auditory processing of lexical tones in tonal languages (17 studies), and the other on auditory processing of lexical information in non-tonal languages as a control analysis for comparison (15 studies). The lexical tone ALE analysis showed significant brain activations in bilateral inferior prefrontal regions, bilateral superior temporal regions and the right caudate, while the control ALE analysis showed significant cortical activity in the left inferior frontal gyrus and left temporo-parietal regions. However, we failed to obtain significant differences from the contrast analysis between the two auditory conditions, which might be caused by the limited number of studies available for comparison. Although the current study lacks evidence to argue for a lexical tone-specific activation pattern, our results provide clues and directions for future investigations on this topic; more sophisticated methods are needed to explore this question in more depth.

  14. Serial auditory-evoked potentials in the diagnosis and monitoring of a child with Landau-Kleffner syndrome.

    PubMed

    Plyler, Erin; Harkrider, Ashley W

    2013-01-01

    A boy, aged 2 1/2 yr, experienced sudden deterioration of speech and language abilities. He saw multiple medical professionals across 2 yr. By almost 5 yr, his vocabulary diminished from 50 words to 4, and he was referred to our speech and hearing center. The purpose of this study was to heighten awareness of Landau-Kleffner syndrome (LKS) and emphasize the importance of an objective test battery that includes serial auditory-evoked potentials (AEPs) to audiologists who often are on the front lines of diagnosis and treatment delivery when faced with a child experiencing unexplained loss of the use of speech and language. Clinical report. Interview revealed a family history of seizure disorder. Normal social behaviors were observed. Acoustic reflexes and otoacoustic emissions were consistent with normal peripheral auditory function. The child could not complete behavioral audiometric testing or auditory processing tests, so serial AEPs were used to examine central nervous system function. Normal auditory brainstem responses, a replicable Na and absent Pa of the middle latency responses, and abnormal slow cortical potentials suggested dysfunction of auditory processing at the cortical level. The child was referred to a neurologist, who confirmed LKS. At age 7 1/2 yr, after 2 1/2 yr of antiepileptic medications, electroencephalographic (EEG) and audiometric measures normalized. Presently, the child communicates manually with limited use of oral information. Audiologists often are one of the first professionals to assess children with loss of speech and language of unknown origin. Objective, noninvasive, serial AEPs are a simple and valuable addition to the central audiometric test battery when evaluating a child with speech and language regression. The inclusion of these tests will markedly increase the chance for early and accurate referral, diagnosis, and monitoring of a child with LKS which is imperative for a positive prognosis. American Academy of Audiology.

  15. Hallucination- and speech-specific hypercoupling in frontotemporal auditory and language networks in schizophrenia using combined task-based fMRI data: An fBIRN study.

    PubMed

    Lavigne, Katie M; Woodward, Todd S

    2018-04-01

    Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivation for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
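The group fMRI-CPCA method named in this record constrains a PCA of the BOLD data to the portion predictable from the task design, so the extracted components reflect only task-related variance. A minimal numpy sketch of that two-step idea on synthetic data (the matrices G and Z and all values here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def cpca(Z, G):
    """Constrained PCA: PCA restricted to the part of Z predictable from G.

    Z : (time x voxels) BOLD data, G : (time x conditions) design matrix.
    Returns singular values and spatial loadings of the predicted scores.
    """
    B, *_ = np.linalg.lstsq(G, Z, rcond=None)    # regression weights
    GC = G @ B                                   # task-predictable part of Z
    U, s, Vt = np.linalg.svd(GC, full_matrices=False)
    return s, Vt

# synthetic data: 40 time points, 6 voxels, 2 task conditions
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 2))
W = np.array([[1.0, 0.5, 0.0, 0.2, 0.0, 0.3],
              [0.0, 0.0, 1.0, 0.4, 0.6, 0.0]])
Z = G @ W                                        # fully task-driven data
s, Vt = cpca(Z, G)
# the predicted scores cannot have rank above the number of design columns
print(int(np.sum(s > 1e-8)))                     # -> 2
```

Because the SVD is taken of the regression-predicted scores rather than of Z itself, any variance unrelated to the task design is excluded before components are extracted.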

  16. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    PubMed

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Effect of Auditory Training on Reading Comprehension of Children with Hearing Impairment in Enugu State

    ERIC Educational Resources Information Center

    Ugwuanyi, L. T.; Adaka, T. A.

    2015-01-01

    The paper focused on the effect of auditory training on reading comprehension of children with hearing impairment in Enugu State. A total of 33 children with conductive, sensory-neural and mixed hearing loss were sampled for the study in the two schools for the Deaf in Enugu State. The design employed for the study was a quasi-experiment (pre-test…

  18. Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar – a functional magnetic resonance imaging (fMRI) study

    PubMed Central

    2013-01-01

    Background Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory. PMID:23879896

  19. The influence of gender on auditory and language cortical activation patterns: preliminary data.

    PubMed

    Kocak, Mehmet; Ulmer, John L; Biswal, Bharat B; Aralasmak, Ayse; Daniels, David L; Mark, Leighton P

    2005-10-01

    Between-sex cortical and functional asymmetry is an ongoing topic of investigation. In this pilot study, we sought to determine the influence of acoustic scanner noise and sex on auditory and language cortical activation patterns of the dominant hemisphere. Echoplanar functional MR imaging (fMRI; 1.5T) was performed on 12 healthy right-handed subjects (6 men and 6 women). Passive text listening tasks were employed in 2 different background acoustic scanner noise conditions (12 sections/2 seconds TR [6 Hz] and 4 sections/2 seconds TR [2 Hz]), with the first 4 sections in identical locations in the left hemisphere. Cross-correlation analysis was used to construct activation maps in subregions of auditory and language relevant cortex of the dominant (left) hemisphere, and activation areas were calculated by using coefficient thresholds of 0.5, 0.6, and 0.7. Text listening caused robust activation in anatomically defined auditory cortex, and weaker activation in language relevant cortex of all 12 individuals. As a whole, there was no significant difference in regional cortical activation between the 2 background acoustic scanner noise conditions. When sex was considered, men showed a significantly (P < .01) greater change in left hemisphere activation during the high scanner noise rate condition than did women. This effect was significant (P < .05) in the left superior temporal gyrus, the posterior aspect of the left middle temporal gyrus and superior temporal sulcus, and the left inferior frontal gyrus. An increase in the rate of background acoustic scanner noise caused increased activation in auditory and language relevant cortex of the dominant hemisphere in men, whereas no such change in activation was observed in women. Our preliminary data suggest possible methodologic confounds of fMRI research and call for larger investigations to substantiate our findings and further characterize sex-based influences on hemispheric activation patterns.

  20. Linguistic Input, Electronic Media, and Communication Outcomes of Toddlers with Hearing Loss

    PubMed Central

    Ambrose, Sophie E.; VanDam, Mark; Moeller, Mary Pat

    2013-01-01

    Objectives The objectives of this study were to examine the quantity of adult words, adult-child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these variables contributed to variability in children’s communication outcomes. Design Participants were 28 children with mild to severe hearing loss. Full-day recordings of children’s auditory environments were collected within 6 months of their 2nd birthdays by utilizing LENA (Language ENvironment Analysis) technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children’s communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. Results On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children’s communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. Conclusions HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. 
Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children’s exposure to electronic media. PMID:24441740
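The mediation result reported above (electronic media predicts poorer receptive language, but the effect disappears once conversational turns are controlled) can be checked with three ordinary regressions in the classic Baron-Kenny style. A minimal numpy sketch on synthetic, noise-free data constructed to show full mediation (all variable names and values are illustrative, not the study's data):

```python
import numpy as np

def ols(y, *xs):
    """OLS with intercept; returns the coefficient for each predictor in xs."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                       # drop the intercept

# synthetic children: media exposure (% of day) drives turns down, and turns
# alone drive language scores, so mediation is complete by construction
media = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0])
extra = np.array([1.0, -2.0, 1.0, 1.0, -2.0, 1.0])   # turn variation not due to media
turns = 90.0 - 3.0 * media + extra                   # conversational turns per hour
language = 40.0 + 0.5 * turns                        # receptive language score

c = ols(language, media)[0]               # total effect of media on language
a = ols(turns, media)[0]                  # media -> mediator path
cp, b = ols(language, media, turns)       # direct effect and mediator effect
# decomposition: c = a*b + cp; full mediation means cp ~ 0
print(round(c, 3), round(a * b + cp, 3))  # -> -1.5 -1.5
```

With real (noisy) data the indirect effect a*b would be tested with bootstrap or Sobel-type inference rather than read off exactly as here.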

  1. Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss.

    PubMed

    Ambrose, Sophie E; VanDam, Mark; Moeller, Mary Pat

    2014-01-01

    The objectives of this study were to examine the quantity of adult words, adult-child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these factors contributed to variability in children's communication outcomes. Participants were 28 children with mild to severe hearing loss. Full-day recordings of children's auditory environments were collected within 6 months of their second birthdays by using Language ENvironment Analysis technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children's communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children's communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. 
Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children's exposure to electronic media.

  2. fMRI as a Preimplant Objective Tool to Predict Postimplant Oral Language Outcomes in Children with Cochlear Implants.

    PubMed

    Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K

    2016-01-01

    Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were <24 months at fMRI scanning and <36 months at first implantation. A silent background fMRI acquisition method was performed to acquire fMRI during auditory stimulation. A voxel-based analysis technique was utilized to generate z maps showing significant contrast in brain activation between auditory stimulation conditions (spoken narratives and narrow band noise). CELF-P2 and ESP were administered 2 years after implantation. Because most participants reached a ceiling on ESP, a voxel-wise regression analysis was performed between preimplant fMRI activation and postimplant CELF-P2 scores alone. Age at implantation and preimplant hearing thresholds were controlled in this regression analysis. Four brain regions were found to be significantly correlated with CELF-P2 scores. These clusters of positive correlation encompassed the temporo-parieto-occipital junction, areas in the prefrontal cortex and the cingulate gyrus. 
    For the story versus silence contrast, CELF-P2 core language score demonstrated significant positive correlation with activation in the right angular gyrus (r = 0.95), left medial frontal gyrus (r = 0.94), and left cingulate gyrus (r = 0.96). For the narrow band noise versus silence contrast, the CELF-P2 core language score exhibited significant positive correlation with activation in the left angular gyrus (r = 0.89; for all clusters, corrected p < 0.05). Four brain regions related to language function and attention were identified that correlated with CELF-P2. Children with better oral language performance postimplant displayed greater activation in these regions preimplant. The results suggest that despite auditory deprivation, these regions are more receptive to gains in oral language development in children with hearing loss who receive early intervention via cochlear implantation. The present study suggests that oral language outcome following cochlear implantation may be predicted by preimplant fMRI with auditory stimulation using natural speech.

  3. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
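Functional connectivity of the kind analyzed above quantifies coupling between regional time series; in its simplest form it is the Pearson correlation between two ROI signals, compared across conditions or groups. A minimal numpy sketch on synthetic time series (the ROI names and data are illustrative assumptions; the published analysis used model-based fMRI connectivity estimates, not this raw correlation):

```python
import numpy as np

def connectivity(ts_a, ts_b):
    """Pearson correlation between two ROI time series."""
    a = ts_a - ts_a.mean()
    b = ts_b - ts_b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# synthetic BOLD: two STS regions share a common signal plus independent noise
rng = np.random.default_rng(1)
shared = rng.standard_normal(200)
posterior_sts = shared + 0.5 * rng.standard_normal(200)
anterior_sts = shared + 0.5 * rng.standard_normal(200)

r = connectivity(posterior_sts, anterior_sts)
print(r > 0.5)   # strong coupling driven by the shared component
```

A familiarity effect like the one reported would then be tested as a difference in r between the face-learned and occupation-learned speaker conditions.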

  4. Compilation and Clinical Applicability of an Early Auditory Processing Assessment Battery for Young Children.

    ERIC Educational Resources Information Center

    Fair, Lisl; Louw, Brenda; Hugo, Rene

    2001-01-01

    This study compiled a comprehensive early auditory processing skills assessment battery and evaluated it with toddlers with (n=8) and without (n=9) early recurrent otitis media. The assessment battery successfully distinguished between normal and deficient early auditory processing development in the subjects. The study also found parents…

  5. Morphological Effects in Auditory Word Recognition: Evidence from Danish

    ERIC Educational Resources Information Center

    Balling, Laura Winther; Baayen, R. Harald

    2008-01-01

    In this study, we investigate the processing of morphologically complex words in Danish using auditory lexical decision. We document a second critical point in auditory comprehension in addition to the Uniqueness Point (UP), namely the point at which competing morphological continuation forms of the base cease to be compatible with the input,…

  6. Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers

    PubMed Central

    Tervaniemi, Mari; Aalto, Daniel

    2018-01-01

    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756

  7. Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment

    PubMed Central

    Bishop, Dorothy VM; Hardiman, Mervyn; Uwer, Ruth; von Suchodoletz, Waldemar

    2007-01-01

    It has been proposed that specific language impairment (SLI) is the consequence of low-level abnormalities in auditory perception. However, studies of long-latency auditory ERPs in children with SLI have generated inconsistent findings. A possible reason for this inconsistency is the heterogeneity of SLI. The intraclass correlation (ICC) has been proposed as a useful statistic for evaluating heterogeneity because it allows one to compare an individual's auditory ERP with the grand average waveform from a typically developing reference group. We used this method to reanalyse auditory ERPs from a sample previously described by Uwer, Albrecht and von Suchodoletz (2002). In a subset of children with receptive SLI, there was less correspondence (i.e. lower ICC) with the normative waveform (based on the control grand average) than for typically developing children. This poorer correspondence was seen in responses to both tone and speech stimuli for the period 100–228 ms post stimulus onset. The effect was lateralized and seen at right- but not left-sided electrodes. PMID:17683344
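The ICC statistic used above compares an individual child's ERP waveform, time point by time point, against the control grand average. A consistency-type ICC(3,1) for two waveforms can be computed from two-way ANOVA mean squares, treating time points as targets and the two waveforms as "raters". A minimal numpy sketch (a hedged reimplementation of the general statistic, not the authors' exact code; the waveforms are synthetic):

```python
import numpy as np

def icc_3_1(wave, reference):
    """Consistency ICC(3,1) between an individual waveform and a reference."""
    x = np.column_stack([wave, reference])        # n time points x k=2 raters
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # rows (time)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # columns
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

t = np.linspace(0.1, 0.228, 64)                   # the 100-228 ms window
grand_avg = np.sin(2 * np.pi * 8 * t)             # stand-in reference waveform
print(round(icc_3_1(grand_avg, grand_avg), 3))    # identical waveform -> 1.0
print(round(icc_3_1(-grand_avg, grand_avg), 3))   # inverted waveform -> -1.0
```

A child whose ERP tracks the normative morphology scores near 1; atypical morphology, as in the receptive-SLI subset, lowers the ICC.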

  8. The right posterior paravermis and the control of language interference.

    PubMed

    Filippi, Roberto; Richardson, Fiona M; Dick, Frederic; Leech, Robert; Green, David W; Thomas, Michael S C; Price, Cathy J

    2011-07-20

    Comprehension of auditory and written language in humans necessitates attention to the message of interest and suppression of interference from distracting sources. Investigating the brain areas associated with the control of interference is challenging because activation of the brain regions that control interference inevitably co-occurs with activation related to interference per se. To isolate the mechanisms that control verbal interference, we used a combination of structural and functional imaging techniques in Italian and German participants who spoke English as a second language. First, we searched structural MRI images of Italian participants for brain regions in which brain structure correlated with the ability to suppress interference from the unattended dominant language (Italian) while processing heard sentences in their weaker language (English). This revealed an area in the posterior paravermis of the right cerebellum in which gray matter density was higher in individuals who were better at controlling verbal interference. Second, we found functional activation in the same region when our German participants made semantic decisions on written English words in the presence of interference from unrelated words in their dominant language (German). This combination of structural and functional imaging therefore highlights the contribution of the right posterior paravermis to the control of verbal interference. We suggest that the importance of this region for language processing has previously been missed because most fMRI studies limit the field of view to increase sensitivity, with the lower part of the cerebellum being the region most likely to be excluded.

  9. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the time course and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single-trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200 ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex is a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
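The analysis above fits a linear mixed-effects model to single-trial ERP amplitudes with predictors such as feature counts. As a simplified, numpy-only illustration of the core idea, one can absorb per-subject intercepts by within-subject centering and then estimate a feature's fixed effect by OLS; this approximates a random-intercept-only mixed model, not the full model in the study, and all names and values below are synthetic:

```python
import numpy as np

def within_subject_effect(amplitude, feature, subject):
    """Estimate a feature's effect after removing per-subject mean levels
    (within-subject centering, approximating a random-intercept model)."""
    amp_c = amplitude.astype(float).copy()
    feat_c = feature.astype(float).copy()
    for s in np.unique(subject):
        m = subject == s
        amp_c[m] -= amp_c[m].mean()
        feat_c[m] -= feat_c[m].mean()
    return float(feat_c @ amp_c / (feat_c @ feat_c))   # OLS slope, no intercept

# synthetic single-trial data: 3 subjects with different baseline amplitudes,
# and amplitude rising 2 microvolts per visual-motion feature
subject = np.repeat([0, 1, 2], 4)
motion = np.tile([0.0, 1.0, 2.0, 3.0], 3)
baseline = np.array([5.0, -1.0, 3.0])[subject]
amplitude = baseline + 2.0 * motion

print(round(within_subject_effect(amplitude, motion, subject), 3))  # -> 2.0
```

The slope is recovered exactly here because the synthetic data are noise-free; in practice the full mixed model additionally yields standard errors and can include random slopes.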

  10. Multiple Causal Links Between Magnocellular-Dorsal Pathway Deficit and Developmental Dyslexia.

    PubMed

    Gori, Simone; Seitz, Aaron R; Ronconi, Luca; Franceschini, Sandro; Facoetti, Andrea

    2016-10-17

    Although impaired auditory-phonological processing is the most popular explanation of developmental dyslexia (DD), the literature shows that the combination of several causes rather than a single factor contributes to DD. Functioning of the visual magnocellular-dorsal (MD) pathway, which plays a key role in motion perception, is a much debated, but heavily suspected factor contributing to DD. Here, we employ a comprehensive approach that incorporates all the accepted methods required to test the relationship between MD pathway dysfunction and DD. The results of 4 experiments show that (1) motion perception is impaired in children with dyslexia in comparison both with age-matched and with reading-level controls; (2) pre-reading visual motion perception, independently of auditory-phonological skill, predicts future reading development; and (3) targeted MD training, not involving any auditory-phonological stimulation, leads to improved reading skill in children and adults with DD. Our findings demonstrate, for the first time, a causal relationship between MD deficits and DD, virtually closing a 30-year-long debate. Since MD dysfunction can be diagnosed much earlier than reading and language disorders, our findings pave the way for low resource-intensive, early prevention programs that could drastically reduce the incidence of DD. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. 
Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects for auditory linguistic experience as well. Overall, results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  12. Operator Performance Measures for Assessing Voice Communication Effectiveness

    DTIC Science & Technology

    1989-07-01

    performance and workload assessment techniques have been based. Broadbent (1958) described a limited-capacity filter model of human information...AUDITORY INFORMATION PROCESSING: 3.1.1 Auditory Attention; 3.1.2 Auditory Memory; 3.2 Models of Information Processing; 3.2.1 Capacity Theories...Learning; Attention; Language Specialization; Decision Making; Problem Solving; Auditory Information Processing; Models of Processing; Operator...

  13. Attentional but not Pre-Attentive Neural Measures of Auditory Discrimination are Atypical in Children with Developmental Language Disorder

    PubMed Central

    Kornilov, Sergey A.; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L.; Magnuson, James S.

    2015-01-01

    We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n=23) and typically developing (n=16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder. PMID:25350759

  14. Application of a model of the auditory primal sketch to cross-linguistic differences in speech rhythm: Implications for the acquisition and recognition of speech

    NASA Astrophysics Data System (ADS)

    Todd, Neil P. M.; Lee, Christopher S.

    2002-05-01

    It has long been noted that the world's languages vary considerably in their rhythmic organization. Different languages seem to privilege different phonological units as their basic rhythmic unit, and there is now a large body of evidence that such differences have important consequences for crucial aspects of language acquisition and processing. The most fundamental finding is that the rhythmic structure of a language strongly influences the process of spoken-word recognition. This finding, together with evidence that infants are sensitive from birth to rhythmic differences between languages and exploit rhythmic cues to segmentation at an earlier developmental stage than other cues, prompted the claim that rhythm is the key that allows infants to begin building a lexicon and then go on to acquire syntax. It is therefore of interest to determine how differences in rhythmic organization arise at the acoustic/auditory level. In this paper, it is shown how an auditory model of the primitive representation of sound provides just such an account of rhythmic differences. Its performance is evaluated on a data set of French and English sentences and compared with the results yielded by the phonetic accounts of Frank Ramus and his colleagues and Esther Grabe and her colleagues.
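The phonetic account of Ramus and colleagues referenced above summarizes a language's rhythm with interval measures: %V (the proportion of utterance duration that is vocalic) and the standard deviations of vocalic and consonantal interval durations (ΔV, ΔC). A minimal sketch computing them from a segmented utterance (the segment durations below are invented for illustration, in milliseconds):

```python
import statistics

def rhythm_metrics(intervals):
    """Ramus-style rhythm metrics from (label, duration) segment intervals.

    intervals: list of ('V' or 'C', duration) pairs for one utterance.
    Returns (%V, deltaV, deltaC).
    """
    v = [d for label, d in intervals if label == "V"]
    c = [d for label, d in intervals if label == "C"]
    pct_v = 100.0 * sum(v) / (sum(v) + sum(c))
    # population standard deviations of the vocalic and consonantal intervals
    return pct_v, statistics.pstdev(v), statistics.pstdev(c)

# toy utterance: alternating consonantal and vocalic intervals (ms)
utterance = [("C", 80), ("V", 120), ("C", 60), ("V", 100),
             ("C", 100), ("V", 80), ("C", 60)]
pct_v, delta_v, delta_c = rhythm_metrics(utterance)
print(round(pct_v, 1))   # vocalic proportion of the utterance -> 50.0
```

Languages then cluster in the (%V, ΔC) plane, with stress-timed languages like English showing lower %V and higher ΔC than syllable-timed languages like French.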

  15. The function of the left anterior temporal pole: evidence from acute stroke and infarct volume

    PubMed Central

    Tsapkini, Kyrana; Frangakis, Constantine E.

    2011-01-01

    The role of the anterior temporal lobes in cognition and language has been much debated in the literature over the last few years. Most prevailing theories argue for an important role of the anterior temporal lobe as a semantic hub or a place for the representation of unique entities such as proper names of people and places. Lately, a few studies have investigated the role of the most anterior part of the left anterior temporal lobe, the left temporal pole in particular, and argued that the left anterior temporal pole is the area responsible for mapping meaning on to sound, through evidence from tasks such as object naming. However, another recent study indicates that bilateral anterior temporal damage is required to cause a clinically significant semantic impairment. In the present study, we tested these hypotheses by evaluating patients with acute stroke before reorganization of structure–function relationships. We compared a group of 20 patients with acute stroke with anterior temporal pole damage to a group of 28 without anterior temporal pole damage matched for infarct volume. We calculated the average percent error in auditory comprehension and naming tasks as a function of infarct volume using a non-parametric regression method. We found that infarct volume was the only predictive variable in the production of semantic errors in both auditory comprehension and object naming tasks. This finding favours the hypothesis that left unilateral anterior temporal pole lesions, even acutely, are unlikely to cause significant deficits in mapping meaning to sound by themselves, although they contribute to networks underlying both naming and comprehension of objects. Therefore, the anterior temporal lobe may be a semantic hub for object meaning, but its role must be represented bilaterally and perhaps redundantly. PMID:21685458

  16. Computational modeling of the human auditory periphery: Auditory-nerve responses, evoked potentials and hearing loss.

    PubMed

    Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav

    2018-03-01

    Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters as biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  17. A locus for an auditory processing deficit and language impairment in an extended pedigree maps to 12p13.31-q14.3

    PubMed Central

    Addis, L; Friederici, A D; Kotz, S A; Sabisch, B; Barry, J; Richter, N; Ludwig, A A; Rübsamen, R; Albert, F W; Pääbo, S; Newbury, D F; Monaco, A P

    2010-01-01

    Despite the apparent robustness of language learning in humans, a large number of children fail to develop appropriate language skills even with adequate means and opportunity. Most cases of language impairment have a complex etiology, with genetic and environmental influences. In contrast, we describe a three-generation German family who present with an apparently simple segregation of language impairment. Investigations of the family indicate auditory processing difficulties as a core deficit. Affected members performed poorly on a nonword repetition task and present with communication impairments. The brain activation pattern for syllable duration as measured by event-related brain potentials showed clear differences between affected family members and controls, with only affected members displaying a late discrimination negativity. In conjunction with psychoacoustic data showing deficiencies in auditory duration discrimination, the present results indicate increased processing demands in discriminating syllables of different duration. This, we argue, forms the cognitive basis of the observed language impairment in this family. Genome-wide linkage analysis showed a haplotype in the central region of chromosome 12 which reaches the maximum possible logarithm of odds ratio (LOD) score and fully co-segregates with the language impairment, consistent with an autosomal dominant, fully penetrant mode of inheritance. Whole genome analysis yielded no novel inherited copy number variants, strengthening the case for a simple inheritance pattern. Several genes in this region of chromosome 12 which are potentially implicated in language impairment did not contain polymorphisms likely to be the causative mutation, which is as yet unknown. PMID:20345892

  18. Speech rhythm facilitates syntactic ambiguity resolution: ERP evidence.

    PubMed

    Roncaglia-Denissen, Maria Paula; Schmidt-Kassow, Maren; Kotz, Sonja A

    2013-01-01

    In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing.

  19. Speech Rhythm Facilitates Syntactic Ambiguity Resolution: ERP Evidence

    PubMed Central

    Roncaglia-Denissen, Maria Paula; Schmidt-Kassow, Maren; Kotz, Sonja A.

    2013-01-01

    In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing. PMID:23409109

  20. Different Cognitive Profiles of Patients with Severe Aphasia.

    PubMed

    Marinelli, Chiara Valeria; Spaccavento, Simona; Craca, Angela; Marangolo, Paola; Angelelli, Paola

    2017-01-01

    Cognitive dysfunction frequently occurs in aphasic patients and primarily compromises linguistic skills. However, patients suffering from severe aphasia show heterogeneous performance in basic cognition. Our aim was to characterize the cognitive profiles of patients with severe aphasia and to determine whether they also differ as to residual linguistic abilities. We examined 189 patients with severe aphasia with standard language tests and with the CoBaGA (Cognitive Test Battery for Global Aphasia), a battery of nonverbal tests that assesses a wide range of cognitive domains such as attention, executive functions, intelligence, memory, visual-auditory recognition, and visual-spatial abilities. Twenty patients were also followed longitudinally in order to assess their improvement in cognitive skills after speech therapy. Three different subgroups of patients with different types and severity of cognitive impairment were identified. Subgroups differed as to residual linguistic skills, in particular comprehension and reading-writing abilities. Attention, reasoning, and executive functions improved after language rehabilitation. This study highlights the importance of an extensive evaluation of cognitive functions in patients with severe aphasia.

  1. The anterior temporal lobes support residual comprehension in Wernicke’s aphasia

    PubMed Central

    Robson, Holly; Zahn, Roland; Keidel, James L.; Binney, Richard J.; Sage, Karen; Lambon Ralph, Matthew A.

    2014-01-01

    Wernicke’s aphasia occurs after a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory–verbal comprehension is significantly impaired in Wernicke’s aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used functional magnetic resonance imaging to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered after damage to the classical comprehension regions. Twelve participants with chronic Wernicke’s aphasia and 12 control participants performed semantic animate–inanimate judgements and a visual height judgement baseline task. Whole brain and region of interest analysis in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an ‘over-activation’ in comparison with control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results provide support for models in which the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions. PMID:24519979

  2. Effect of education on listening comprehension of sentences on healthy elderly: analysis of number of correct responses and task execution time.

    PubMed

    Silagi, Marcela Lima; Rabelo, Camila Maia; Schochat, Eliane; Mansur, Letícia Lessa

    2017-11-13

    To analyze the effect of education on sentence listening comprehension in cognitively healthy elderly. A total of 111 healthy elderly of both genders, aged 60-80 years, were divided into two groups according to educational level: low education (0-8 years of formal education) and high education (≥9 years of formal education). The participants were assessed using the Revised Token Test, an instrument that supports the evaluation of auditory comprehension of commands with different working memory and syntactic complexity demands. The indicators used for performance analysis were the number of correct responses (accuracy analysis) and task execution time (temporal analysis) in the different blocks. The low educated group had a lower number of correct responses than the high educated group on all blocks of the test. In the temporal analysis, participants with low education had longer execution time for commands on the first four blocks related to working memory. However, the two groups had similar execution time for blocks more related to syntactic comprehension. Education influenced sentence listening comprehension in the elderly. Temporal analysis allowed us to infer the relationship between comprehension and other cognitive abilities, and to observe that the low educated elderly did not use effective compensation strategies to improve their performances on the task. Therefore, low educational level, associated with aging, may potentiate the risks for language decline.

  3. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE PAGES

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...

    2017-06-30

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  4. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  5. Central Auditory Nervous System Dysfunction in Echolalic Autistic Individuals.

    ERIC Educational Resources Information Center

    Wetherby, Amy Miller; And Others

    1981-01-01

    The results showed that all the Ss had normal hearing on the monaural speech tests; however, there was indication of central auditory nervous system dysfunction in the language dominant hemisphere, inferred from the dichotic tests, for those Ss displaying echolalia. (Author)

  6. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory

    PubMed Central

    Bigand, Emmanuel; Delbé, Charles; Poulin-Charronnat, Bénédicte; Leman, Marc; Tillmann, Barbara

    2014-01-01

    During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulation also raises methodological and theoretical challenges to study musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds. PMID:24936174

  7. Grammatical Processing of Spoken Language in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Felser, Claudia; Clahsen, Harald

    2009-01-01

    This article presents a selective overview of studies that have investigated auditory language processing in children and late second-language (L2) learners using online methods such as event-related potentials (ERPs), eye-movement monitoring, or the cross-modal priming paradigm. Two grammatical phenomena are examined in detail, children's and…

  8. Language experience changes subsequent learning

    PubMed Central

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
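    The forward and backward probabilities mentioned above are transitional probabilities between adjacent elements. As a minimal sketch (using a hypothetical syllable stream, not the authors' stimuli), forward probability is P(B|A) and backward probability is P(A|B) for each adjacent pair (A, B):

```python
from collections import Counter

def transitional_probabilities(seq):
    """Forward P(B|A) and backward P(A|B) for each adjacent pair (A, B)."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])   # how often each element opens a pair
    second_counts = Counter(seq[1:])   # how often each element closes a pair
    forward = {p: c / first_counts[p[0]] for p, c in pair_counts.items()}
    backward = {p: c / second_counts[p[1]] for p, c in pair_counts.items()}
    return forward, backward

# Hypothetical stream in which "ba" is always followed by "da",
# while "da" is followed by "ko" only two times out of three
stream = ["ba", "da", "ko", "ba", "da", "tu", "ba", "da", "ko"]
fwd, bwd = transitional_probabilities(stream)
```

    A stream can thus make two parses equally probable under one statistic but not the other, which is how the study separated general biases from language-induced ones.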

  9. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores in the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.

  10. Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: a new look at a perceptual hypothesis

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286
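    Amplitude rise time (ART) is how quickly a sound's envelope climbs to its peak. As a rough illustration only (a linear ramp and these parameter values are assumptions; the study's actual stimuli and envelopes may have differed), a tone with a controllable rise time can be synthesized like this:

```python
import math

def tone_with_rise(freq_hz, dur_s, rise_ms, sr=44100):
    """Sine tone whose amplitude ramps linearly from 0 to 1
    over the first rise_ms milliseconds (linear ART envelope)."""
    n = int(dur_s * sr)
    rise_n = int(rise_ms / 1000.0 * sr)
    samples = []
    for i in range(n):
        env = min(1.0, i / rise_n) if rise_n > 0 else 1.0
        samples.append(env * math.sin(2 * math.pi * freq_hz * i / sr))
    return samples

# e.g. a 500 Hz tone, 100 ms long, with a 15 ms rise time
s = tone_with_rise(500, 0.1, 15)
```

    Varying only `rise_ms` between otherwise identical stimuli is the kind of manipulation an ART sensitivity task relies on.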

  11. Neural preservation underlies speech improvement from auditory deprivation in young cochlear implant recipients.

    PubMed

    Feng, Gangyi; Ingvalson, Erin M; Grieco-Calub, Tina M; Roberts, Megan Y; Ryan, Maura E; Birmingham, Patrick; Burrowes, Delilah; Young, Nancy M; Wong, Patrick C M

    2018-01-30

    Although cochlear implantation enables some children to attain age-appropriate speech and language development, communicative delays persist in others, and outcomes are quite variable and difficult to predict, even for children implanted early in life. To understand the neurobiological basis of this variability, we used presurgical neural morphological data obtained from MRI of individual pediatric cochlear implant (CI) candidates implanted younger than 3.5 years to predict variability of their speech-perception improvement after surgery. We first compared neuroanatomical density and spatial pattern similarity of CI candidates to that of age-matched children with normal hearing, which allowed us to detail neuroanatomical networks that were either affected or unaffected by auditory deprivation. This information enables us to build machine-learning models to predict the individual children's speech development following CI. We found that regions of the brain that were unaffected by auditory deprivation, in particular the auditory association and cognitive brain regions, produced the highest accuracy, specificity, and sensitivity in patient classification and the most precise prediction results. These findings suggest that brain areas unaffected by auditory deprivation are critical to developing closer to typical speech outcomes. Moreover, the findings suggest that determination of the type of neural reorganization caused by auditory deprivation before implantation is valuable for predicting post-CI language outcomes for young children.

  12. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Don't believe everything you hear: Routine validation of audiovisual information in children and adults.

    PubMed

    Piest, Benjamin A; Isberner, Maj-Britt; Richter, Tobias

    2018-04-05

    Previous research has shown that the validation of incoming information during language comprehension is a fast, efficient, and routine process (epistemic monitoring). Previous research on this topic has focused on epistemic monitoring during reading. The present study extended this research by investigating epistemic monitoring of audiovisual information. In a Stroop-like paradigm, participants (Experiment 1: adults; Experiment 2: 10-year-old children) responded to the probe words correct and false by keypress after the presentation of auditory assertions that could be either true or false with respect to concurrently presented pictures. Results provide evidence for routine validation of audiovisual information. Moreover, the results show a stronger and more stable interference effect for children compared with adults.

  14. Poor readers' retrieval mechanism: efficient access is not dependent on reading skill

    PubMed Central

    Johns, Clinton L.; Matsuki, Kazunaga; Van Dyke, Julie A.

    2015-01-01

    A substantial body of evidence points to a cue-based direct-access retrieval mechanism as a crucial component of skilled adult reading. We report two experiments aimed at examining whether poor readers are able to make use of the same retrieval mechanism. This is significant in light of findings that poor readers have difficulty retrieving linguistic information (e.g., Perfetti, 1985). Our experiments are based on a previous demonstration of direct-access retrieval in language processing, presented in McElree et al. (2003). Experiment 1 replicates the original result using an auditory implementation of the Speed-Accuracy Tradeoff (SAT) method. This finding represents a significant methodological advance, as it opens up the possibility of exploring retrieval speeds in non-reading populations. Experiment 2 provides evidence that poor readers do use a direct-access retrieval mechanism during listening comprehension, despite overall poorer accuracy and slower retrieval speeds relative to skilled readers. The findings are discussed with respect to hypotheses about the source of poor reading comprehension. PMID:26528212

  15. Using Films in Vocabulary Teaching of Turkish as a Foreign Language

    ERIC Educational Resources Information Center

    Iscan, Adem

    2017-01-01

    The use and utility of auditory and visual tools in language teaching is a common practice. Films constitute one of the tools. It has been found that using films in language teaching is also effective in the development of vocabulary of foreign language learners. The literature review reveals that while films are used in foreign language teaching…

  16. The Efficacy of Fast ForWord Language Intervention in School-Age Children with Language Impairment: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Gillam, Ronald B.; Loeb, Diane Frome; Hoffman, LaVae M.; Bohman, Thomas; Champlin, Craig A.; Thibodeau, Linda; Widen, Judith; Brandel, Jayne; Friel-Patti, Sandy

    2008-01-01

    Purpose: A randomized controlled trial was conducted to compare the language and auditory processing outcomes of children assigned to receive the Fast ForWord Language intervention (FFW-L) with the outcomes of children assigned to nonspecific or specific language intervention comparison treatments that did not contain modified speech. Method: Two…

  17. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
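    The d′ statistic used above is the standard signal-detection sensitivity index, z(hit rate) − z(false-alarm rate), expressed in standard-deviation units. A minimal sketch of the computation (the rates below are hypothetical illustrations, not values from the study):

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate: float, fa_rate: float) -> float:
        """Signal-detection sensitivity: d' = z(hits) - z(false alarms).
        Rates of exactly 0 or 1 need a correction (e.g. log-linear)
        before use, since the inverse normal CDF diverges there."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical same-different performance:
    print(round(d_prime(0.95, 0.10), 2))  # → 2.93
    ```

    Chance performance (hit rate equal to false-alarm rate) yields d′ = 0, so "four standard deviations above chance" corresponds to d′ ≈ 4.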

  18. Reading and language in 9- to 12-year olds prenatally exposed to cigarettes and marijuana.

    PubMed

    Fried, P A; Watkinson, B; Siegel, L S

    1997-01-01

    Facets of reading and language were examined in 131 9- to 12-year-old children for whom prenatal exposure to marijuana and cigarettes had been ascertained. The subjects were from a low-risk, predominantly middle class sample who are participants in an ongoing longitudinal study. Discriminant Function Analysis revealed a dose-dependent association that remained after controlling for potential confounds, between prenatal cigarette exposure and lower language and lower reading scores, particularly on auditory-related aspects of this latter measure. The findings are interpreted as consistent with earlier observations of an association between cigarette smoking during pregnancy and altered auditory functioning in the offspring. Similarities and differences between the reading observations and dyslexia are discussed. Maternal prenatal passive smoke exposure did not appear to contribute to either the language or reading outcomes at this age but postnatal secondhand smoke exposure by the child was associated with poorer language scores. Prenatal marijuana exposure was not significantly related to either the reading or language outcomes.

  19. Speech, language, and cognitive dysfunction in children with focal epileptiform activity: A follow-up study.

    PubMed

    Rejnö-Habte Selassie, Gunilla; Hedström, Anders; Viggedal, Gerd; Jennische, Margareta; Kyllerman, Mårten

    2010-07-01

    We reviewed the medical history, EEG recordings, and developmental milestones of 19 children with speech and language dysfunction and focal epileptiform activity. Speech, language, and neuropsychological assessments and EEG recordings were performed at follow-up, and prognostic indicators were analyzed. Three patterns of language development were observed: late start and slow development, late start and deterioration/regression, and normal start and later regression/deterioration. No differences in test results among these groups were seen, indicating a spectrum of related conditions including Landau-Kleffner syndrome and epileptic language disorder. More than half of the participants had speech and language dysfunction at follow-up. IQ levels, working memory, and processing speed were also affected. Dysfunction of auditory perception in noise was found in more than half of the participants, and dysfunction of auditory attention in all. Dysfunction of communication, oral motor ability, and stuttering were noted in a few. Family history of seizures and abundant epileptiform activity indicated a worse prognosis. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Speech and language development in cognitively delayed children with cochlear implants.

    PubMed

    Holt, Rachael Frush; Kirk, Karen Iler

    2005-04-01

    The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. 
However, their progress may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.

  1. Important considerations in lesion-symptom mapping: Illustrations from studies of word comprehension.

    PubMed

    Shahid, Hinna; Sebastian, Rajani; Schnur, Tatiana T; Hanayik, Taylor; Wright, Amy; Tippett, Donna C; Fridriksson, Julius; Rorden, Chris; Hillis, Argye E

    2017-06-01

    Lesion-symptom mapping is an important method of identifying networks of brain regions critical for functions. However, results might be influenced substantially by the imaging modality and timing of assessment. We tested the hypothesis that brain regions found to be associated with acute language deficits depend on (1) timing of behavioral measurement, (2) imaging sequences utilized to define the "lesion" (structural abnormality only or structural plus perfusion abnormality), and (3) power of the study. We studied 191 individuals with acute left hemisphere stroke with MRI and language testing to identify areas critical for spoken word comprehension. We used the data from this study to examine the potential impact of these three variables on lesion-symptom mapping. We found that only the combination of structural and perfusion imaging within 48 h of onset identified areas where more abnormal voxels were associated with more severe acute deficits, after controlling for lesion volume and multiple comparisons. The critical area identified with this methodology was the left posterior superior temporal gyrus, consistent with other methods that have identified an important role of this area in spoken word comprehension. Results have implications for interpretation of other lesion-symptom mapping studies, as well as for understanding areas critical for auditory word comprehension in the healthy brain. We propose that lesion-symptom mapping at the acute stage of stroke addresses a different sort of question about brain-behavior relationships than lesion-symptom mapping at the chronic stage, but that timing of behavioral measurement and imaging modalities should be considered in either case. Hum Brain Mapp 38:2990-3000, 2017. © 2017 Wiley Periodicals, Inc.

  2. Sequencing Stories in Spanish and English.

    ERIC Educational Resources Information Center

    Steckbeck, Pamela Meza

    The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…

  3. The organization and reorganization of audiovisual speech perception in the first year of life.

    PubMed

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  4. The organization and reorganization of audiovisual speech perception in the first year of life

    PubMed Central

    Danielson, D. Kyle; Bruderer, Alison G.; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F.

    2017-01-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone. PMID:28970650

  5. Web-based auditory self-training system for adult and elderly users of hearing aids.

    PubMed

    Vitti, Simone Virginia; Blasca, Wanderléia Quinhoneiro; Sigulem, Daniel; Torres Pisa, Ivan

    2015-01-01

    Adults and elderly users of hearing aids suffer psychosocial reactions as a result of hearing loss. Auditory rehabilitation is typically carried out with support from a speech therapist, usually in a clinical center. For these cases, there is a lack of computer-based self-training tools for minimizing the psychosocial impact of hearing deficiency. To develop and evaluate a web-based auditory self-training system for adult and elderly users of hearing aids. Two modules were developed for the web system: an information module based on guidelines for using hearing aids; and an auditory training module presenting a sequence of training exercises for auditory abilities along the lines of the auditory skill steps within auditory processing. We built the web system using the PHP programming language and a MySQL database, from requirements surveyed through focus groups conducted by healthcare information technology experts. The web system was evaluated by speech therapists and hearing aid users. An initial sample of 150 patients at DSA/HRAC/USP was defined for applying the system, with the inclusion criteria that individuals be over 25 years of age, have a hearing impairment, use a hearing aid, own a computer, and have internet experience. They were divided into two groups: a control group (G1) and an experimental group (G2). These patients were evaluated clinically using the HHIA for adults and the HHIE for elderly people, before and after system implementation. A third web group was formed with users who were invited through social networks for their opinions on using the system. A questionnaire evaluating hearing complaints was given to all three groups. The study hypothesis considered that G2 would present greater auditory perception, higher satisfaction and fewer complaints than G1 after the auditory training. It was expected that G3 would have fewer complaints regarding use and acceptance of the system. 
The web system, which was named SisTHA portal, was finalized, rated by experts and hearing aid users and approved for use. The system comprised auditory skills training along five lines: discrimination; recognition; comprehension and temporal sequencing; auditory closure; and cognitive-linguistic and communication strategies. Users needed to undergo auditory training over a minimum period of 1 month: 5 times a week for 30 minutes a day. Comparisons were made between G1 and G2 and web system use by G3. The web system developed was approved for release to hearing aid users. It is expected that the self-training will help improve effective use of hearing aids, thereby decreasing their rejection.

  6. A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

    PubMed Central

    Weis, Susanne; Kircher, Tilo

    2012-01-01

    In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or on different brain networks is largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network to be the brain's core language system takes too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. PMID:23226488

  7. Volitional control of attention and brain activation in dual task performance.

    PubMed

    Newman, Sharlene D; Keller, Timothy A; Just, Marcel Adam

    2007-02-01

    This study used functional MRI (fMRI) to examine the neural effects of willfully allocating one's attention to one of two ongoing tasks. In a dual task paradigm, participants were instructed to focus either on auditory sentence comprehension, mental rotation, or both. One of the major findings is that the distribution of brain activation was amenable to strategic control, such that the amount of activation per task was systematically related to the attention-dividing instructions. The activation in language processing regions was lower when attending to mental rotation than when attending to the sentences, and the activation in visuospatial processing regions was lower when attending to sentences than when attending to mental rotations. Additionally, the activation was found to be underadditive, with the dual-task condition eliciting less activation than the sum of the attend sentence and attend rotation conditions. We also observed a laterality shift across conditions within language-processing regions, with the attend sentence condition showing bilateral activation, while the dual task condition showed a left hemispheric dominance. This shift suggests multiple language-processing modes and may explain the underadditivity in activation observed in the current and previous studies.

  8. Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain.

    PubMed

    Arbib, Michael A

    2016-03-01

    We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: models of (i) the auditory dorsal and ventral streams in sentence comprehension and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Where language meets meaningful action: a combined behavior and lesion analysis of aphasia and apraxia.

    PubMed

    Weiss, Peter H; Ubben, Simon D; Kaesberg, Stephanie; Kalbe, Elke; Kessler, Josef; Liebig, Thomas; Fink, Gereon R

    2016-01-01

    It is debated how language and praxis are co-represented in the left hemisphere (LH). As voxel-based lesion-symptom mapping in LH stroke patients with aphasia and/or apraxia may contribute to this debate, we here investigated the relationship between language and praxis deficits at the behavioral and lesion levels in 50 sub-acute stroke patients. We hypothesized that language and (meaningful) action are linked via semantic processing in Broca's region. Behaviorally, half of the patients suffered from co-morbid aphasia and apraxia. While 24% (n = 12) of all patients exhibited aphasia without apraxia, apraxia without aphasia was rare (n = 2, 4%). Left inferior frontal, insular, inferior parietal, and superior temporal lesions were specifically associated with deficits in naming, reading, writing, or auditory comprehension. In contrast, lesions affecting the left inferior frontal gyrus, premotor cortex, and the central region as well as the inferior parietal lobe were associated with apraxic deficits (i.e., pantomime, imitation of meaningful and meaningless gestures). Thus, contrary to the predictions of the embodied cognition theory, lesions to sensorimotor and premotor areas were associated with the severity of praxis but not language deficits. Lesions of Brodmann area (BA) 44 led to combined apraxic and aphasic deficits. Data suggest that BA 44 acts as an interface between language and (meaningful) action thereby supporting parcellation schemes (based on connectivity and receptor mapping) which revealed a BA 44 sub-area involved in semantic processing.

  10. Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain

    NASA Astrophysics Data System (ADS)

    Arbib, Michael A.

    2016-03-01

    We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: models of (i) the auditory dorsal and ventral streams in sentence comprehension and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research.

  11. Two Tongues, One Brain: Imaging Bilingual Speech Production

    PubMed Central

    Simmonds, Anna J.; Wise, Richard J. S.; Leech, Robert

    2011-01-01

    This review considers speaking in a second language from the perspective of motor–sensory control. Previous studies relating brain function to the prior acquisition of two or more languages (neurobilingualism) have investigated the differential demands made on linguistic representations and processes, and the role of domain-general cognitive control systems when speakers switch between languages. In contrast to the detailed discussions on these higher functions, typically articulation is considered only as an underspecified stage of simple motor output. The present review considers speaking in a second language in terms of the accompanying foreign accent, which places demands on the integration of motor and sensory discharges not encountered when articulating in the most fluent language. We consider why there has been so little emphasis on this aspect of bilingualism to date, before turning to the motor and sensory complexities involved in learning to speak a second language as an adult. This must involve retuning the neural circuits involved in the motor control of articulation, to enable rapid unfamiliar sequences of movements to be performed with the goal of approximating, as closely as possible, the speech of a native speaker. Accompanying changes in motor networks is experience-dependent plasticity in auditory and somatosensory cortices to integrate auditory memories of the target sounds, copies of feedforward commands from premotor and primary motor cortex and post-articulatory auditory and somatosensory feedback. Finally, we consider the implications of taking a motor–sensory perspective on speaking a second language, both pedagogical regarding non-native learners and clinical regarding speakers with neurological conditions such as dysarthria. PMID:21811481

  12. Pitch expertise is not created equal: Cross-domain effects of musicianship and tone language experience on neural and behavioural discrimination of speech and music.

    PubMed

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2015-05-01

    Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements on neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers farther-reaching enhancements to auditory function, tuning both pitch- and timbre-related brain processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Dynamics of hemispheric dominance for language assessed by magnetoencephalographic imaging.

    PubMed

    Findlay, Anne M; Ambrose, Josiah B; Cahn-Weiner, Deborah A; Houde, John F; Honma, Susanne; Hinkley, Leighton B N; Berger, Mitchel S; Nagarajan, Srikantan S; Kirsch, Heidi E

    2012-05-01

    The goal of the current study was to examine the dynamics of language lateralization using magnetoencephalographic (MEG) imaging, to determine the sensitivity and specificity of MEG imaging, and to determine whether MEG imaging can become a viable alternative to the intracarotid amobarbital procedure (IAP), the current gold standard for preoperative language lateralization in neurosurgical candidates. MEG was recorded during an auditory verb generation task and imaging analysis of oscillatory activity was initially performed in 21 subjects with epilepsy, brain tumor, or arteriovenous malformation who had undergone IAP and MEG. Time windows and brain regions of interest that best discriminated between IAP-determined left or right dominance for language were identified. Parameters derived in the retrospective analysis were applied to a prospective cohort of 14 patients and healthy controls. Power decreases in the beta frequency band were consistently observed following auditory stimulation in inferior frontal, superior temporal, and parietal cortices; similar power decreases were also seen in inferior frontal cortex prior to and during overt verb generation. Language lateralization was clearly observed to be a dynamic process that is bilateral for several hundred milliseconds during periods of auditory perception and overt speech production. Correlation with the IAP was seen in 13 of 14 (93%) prospective patients, with the test demonstrating a sensitivity of 100% and specificity of 92%. Our results demonstrate excellent correlation between MEG imaging findings and the IAP for language lateralization, and provide new insights into the spatiotemporal dynamics of cortical speech processing. Copyright © 2012 American Neurological Association.
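    The sensitivity and specificity reported above are the usual true-positive and true-negative rates of a binary classification against the IAP gold standard. A minimal sketch with hypothetical counts chosen only to be consistent with the reported 13/14 agreement, 100% sensitivity, and 92% specificity (the abstract does not give the underlying confusion table, and which dominance category counts as "positive" is an assumption here):

    ```python
    def sensitivity(tp: int, fn: int) -> float:
        """True-positive rate: correctly identified positives / all positives."""
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        """True-negative rate: correctly identified negatives / all negatives."""
        return tn / (tn + fp)

    # Hypothetical confusion counts for the 14 prospective patients:
    # 1 of 1 "positive" case called correctly, 12 of 13 "negative" cases correct.
    print(f"{sensitivity(1, 0):.0%}")   # → 100%
    print(f"{specificity(12, 1):.0%}")  # → 92%
    ```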

  14. Perceptual context and individual differences in the language proficiency of preschool children.

    PubMed

    Banai, Karen; Yifat, Rachel

    2016-02-01

    Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
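    The "7% to 15% of the variance" figure above is a proportion of variance explained, i.e. the squared correlation between predictor and outcome. A minimal sketch with made-up scores (not the study's data; `statistics.correlation` requires Python 3.10+):

    ```python
    from statistics import correlation

    # Hypothetical paired scores: perceptual-context sensitivity vs. language score
    context_sensitivity = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.35, 0.55]
    language_score = [88, 95, 90, 104, 99, 101, 96, 103]

    r = correlation(context_sensitivity, language_score)  # Pearson r
    print(f"variance explained (r^2): {r * r:.0%}")  # → 90% for these made-up scores
    ```

    A reported range of 7% to 15% therefore corresponds to correlations of roughly 0.26 to 0.39.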

  15. Interconnected growing self-organizing maps for auditory and semantic acquisition modeling.

    PubMed

    Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J

    2014-01-01

Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm that takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM learns the auditory and semantic categories presented in the training data well; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to detailed categorization and clustering, while keeping stable the clusters that have already been learned and the network structure that has already been developed; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically inspired neurocomputational model.
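The I-GSOM algorithm itself is not reproduced in the abstract, so the following is only a minimal sketch of the competitive-learning core that growing SOM variants build on: a plain 1-D self-organizing map in Python. Node growth, the reinforcing/reviewing schedule, and the auditory-semantic association links are deliberately omitted, and the function name and parameters are assumptions.

```python
import numpy as np

# Minimal 1-D self-organizing map (SOM). This is NOT the authors' I-GSOM:
# node growth, the reinforcing/reviewing schedule, and the auditory-semantic
# association links are omitted. It only illustrates the competitive-learning
# update that growing SOM variants extend.

rng = np.random.default_rng(0)

def train_som(data, n_nodes=4, epochs=50, lr=0.1, sigma=1.0):
    dim = data.shape[1]
    weights = rng.normal(size=(n_nodes, dim))   # random initial prototypes
    positions = np.arange(n_nodes)              # node coordinates on the 1-D map
    for _ in range(epochs):
        for x in data:
            # best-matching unit: node whose prototype is closest to the input
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # neighborhood function: nodes near the BMU on the map move more
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights
```

Training on two well-separated clusters drives different map nodes to win for each cluster, which is, at a much smaller scale, the category-boundary behavior the abstract describes.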

  16. Auditory Brainstem Responses in Childhood Psychosis.

    ERIC Educational Resources Information Center

    Gillberg, Christopher; And Others

    1983-01-01

    Auditory brainstem responses (ABR) were compared in 24 autistic children, seven children with other childhood psychoses, and 31 normal children. One-third of the autistic Ss showed abnormal ABR indicating brainstem dysfunction and correlating with muscular hypotonia and severe language impairment. Ss with other psychoses and normal Ss showed…

  17. Speech-Sound Duration Processing in a Second Language is Specific to Phonetic Categories

    ERIC Educational Resources Information Center

    Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Naatanen, Risto

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can…

  18. A Systematic Meta-Analytic Review of Evidence for the Effectiveness of the "Fast ForWord" Language Intervention Program

    ERIC Educational Resources Information Center

    Strong, Gemma K.; Torgerson, Carole J.; Torgerson, David; Hulme, Charles

    2011-01-01

    Background: Fast ForWord is a suite of computer-based language intervention programs designed to improve children's reading and oral language skills. The programs are based on the hypothesis that oral language difficulties often arise from a rapid auditory temporal processing deficit that compromises the development of phonological…

  19. Is Word-Problem Solving a Form of Text Comprehension?

    PubMed Central

    Fuchs, Lynn S.; Fuchs, Douglas; Compton, Donald L.; Hamlett, Carol L.; Wang, Amber Y.

    2015-01-01

    This study’s hypotheses were that (a) word-problem (WP) solving is a form of text comprehension that involves language comprehension processes, working memory, and reasoning, but (b) WP solving differs from other forms of text comprehension by requiring WP-specific language comprehension as well as general language comprehension. At the start of the 2nd grade, children (n = 206; on average, 7 years, 6 months) were assessed on general language comprehension, working memory, nonlinguistic reasoning, processing speed (a control variable), and foundational skill (arithmetic for WPs; word reading for text comprehension). In spring, they were assessed on WP-specific language comprehension, WPs, and text comprehension. Path analytic mediation analysis indicated that effects of general language comprehension on text comprehension were entirely direct, whereas effects of general language comprehension on WPs were partially mediated by WP-specific language. By contrast, effects of working memory and reasoning operated in parallel ways for both outcomes. PMID:25866461

  20. Auditory training changes temporal lobe connectivity in 'Wernicke's aphasia': a randomised trial.

    PubMed

    Woodhead, Zoe Vj; Crinion, Jennifer; Teki, Sundeep; Penny, Will; Price, Cathy J; Leff, Alexander P

    2017-07-01

Aphasia is one of the most disabling sequelae after stroke, occurring in 25%-40% of stroke survivors. However, there remains a lack of good evidence for the efficacy or mechanisms of speech comprehension rehabilitation. This within-subjects trial tested two concurrent interventions in 20 patients with chronic aphasia and speech comprehension impairment following left hemisphere stroke: (1) phonological training using 'Earobics' software and (2) a pharmacological intervention using donepezil, an acetylcholinesterase inhibitor. Donepezil was tested in a double-blind, placebo-controlled, cross-over design using block randomisation with bias minimisation. The primary outcome measure was speech comprehension score on the Comprehensive Aphasia Test. Magnetoencephalography (MEG) with an established index of auditory perception, the mismatch negativity response, tested whether the therapies altered effective connectivity at the lower (primary) or higher (secondary) level of the auditory network. Phonological training improved speech comprehension abilities and was particularly effective for patients with severe deficits. No major adverse effects of donepezil were observed, but it had an unpredicted negative effect on speech comprehension. The MEG analysis demonstrated that phonological training increased synaptic gain in the left superior temporal gyrus (STG). Patients with more severe speech comprehension impairments also showed strengthening of bidirectional connections between the left and right STG. Phonological training resulted in a small but significant improvement in speech comprehension, whereas donepezil had a negative effect. The connectivity results indicated that training reshaped higher-order phonological representations in the left STG and (in more severe patients) induced stronger interhemispheric transfer of information between higher levels of auditory cortex. Clinical trial registration: this trial was registered with EudraCT (2005-004215-30, https://eudract.ema.europa.eu/) and ISRCTN (68939136, http://www.isrctn.com/). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Overlapping neural circuitry for narrative comprehension and proficient reading in children and adolescents.

    PubMed

    Horowitz-Kraus, Tzipi; Vannest, Jennifer J; Holland, Scott K

    2013-11-01

Narrative comprehension is an early-acquired linguistic ability that is more intuitive than reading. The question of the current study is whether there are specific shared brain regions for narrative comprehension and reading that are tuned to reading proficiency, even before reading is acquired. We acquired fMRI data during a narrative comprehension task at two age points, when children were age 5-7 (K-2nd grade) and later when the same children were age 11 (5th-7th grade). We then examined correlations between these fMRI data and reading and reading comprehension scores from the same children at age 11. We found that greater frontal and supramarginal gyrus (BA 40) activation during narrative comprehension at the age of 5-7 years was associated with better word reading and reading comprehension scores at the age of 11. A shift towards temporal and occipital activation was found when correlating the narrative comprehension functional data at age 11 with reading scores at the same age point. We suggest that increased reliance on executive functions and auditory-visual networks when listening to stories before reading is acquired facilitates later reading proficiency and may be a biomarker for future reading ability. Children who rely on imagination/visualization as well as auditory processing for narrative comprehension when they reach age 11 also show greater reading abilities. Understanding the concordant neural pathways supporting auditory narrative and reading comprehension might guide the development of effective tools for reading intervention programs. Published by Elsevier Ltd.

  2. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    PubMed

    Ptok, M; Meisen, R

    2008-01-01

The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise due to sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in school children. In this prospective study of school children attending third and fourth grade, we measured just-noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), and monaural and binaural temporal order judgement (TOJm and TOJb), and correlated these with grades in writing, language and mathematics. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  3. Language Impairments in the Development of Sign: Do They Reside in a Specific Modality or Are They Modality-Independent Deficits?

    ERIC Educational Resources Information Center

    Woll, Bencie; Morgan, Gary

    2012-01-01

    Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways--for example, that the language deficits in SLI or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a…

  4. Language Use, Language Ability, and Language Development: Abstracts of Doctoral Dissertations Published in "Dissertation Abstracts International," July through December 1977 (Vol. 38 No. 1 through 6).

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.

    This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 27 titles deal with a variety of topics, including the following: facilitation of language development in disadvantaged preschool children; auditory-visual discrimination skills, language performance, and development of manual…

  5. [Effect of sound amplification on parent's communicative modalities].

    PubMed

    Couto, Maria Inês Vieira; Lichtig, Ida

    2007-01-01

This study addresses auditory rehabilitation in deaf children who use sign language. Its aim was to verify the effects of sound amplification on parents' communicative modalities when interacting with their deaf children. Participants were twelve deaf children, aged 50 to 80 months, and their hearing parents. Children had severe or profound hearing loss in their better ear and were fitted with hearing aids in both ears. Children communicated preferably through sign language. The cause-effect relation between the children's auditory skills profile (insertion gain, functional gain and the Meaningful Auditory Integration Scale--MAIS) and the communicative modalities (auditory-oral, visuo-spatial, bimodal) used by parents was analyzed. Communicative modalities were compared in two different experimental situations during a structured interaction between parents and children, i.e. when children were not fitted with their hearing aids (Situation 1) and when children were fitted with them (Situation 2). Data were analyzed using descriptive statistics. The profile of the deaf children's auditory skills was below 53% (unsatisfactory). Parents used predominantly the bimodal modality to gain children's attention, to transmit and to end tasks. A slight positive effect of sound amplification on the communicative modalities was observed, since parents presented more turn-takings during communication when using the auditory-oral modality in Situation 2. Hearing parents tend to use more turn-takings during communication in the auditory-oral modality to gain children's attention, to transmit and to end tasks, since they observe an improvement in the auditory skills of their children.

  6. Early preschool processing abilities predict subsequent reading outcomes in bilingual Spanish-Catalan children with Specific Language Impairment (SLI).

    PubMed

    Aguilar-Mediavilla, Eva; Buil-Legaz, Lucía; Pérez-Castelló, Josep A; Rigo-Carratalà, Eduard; Adrover-Roig, Daniel

    2014-01-01

    Children with Specific Language Impairment (SLI) have severe language difficulties without showing hearing impairments, cognitive deficits, neurological damage or socio-emotional deprivation. However, previous studies have shown that children with SLI show some cognitive and literacy problems. Our study analyses the relationship between preschool cognitive and linguistic abilities and the later development of reading abilities in Spanish-Catalan bilingual children with SLI. The sample consisted of 17 bilingual Spanish-Catalan children with SLI and 17 age-matched controls. We tested eight distinct processes related to phonological, attention, and language processing at the age of 6 years and reading at 8 years of age. Results show that bilingual Spanish-Catalan children with SLI show significantly lower scores, as compared to typically developing peers, in phonological awareness, phonological memory, and rapid automatized naming (RAN), together with a lower outcome in tasks measuring sentence repetition and verbal fluency. Regarding attentional processes, bilingual Spanish-Catalan children with SLI obtained lower scores in auditory attention, but not in visual attention. At the age of 8 years Spanish-Catalan children with SLI had lower scores than their age-matched controls in total reading score, letter identification (decoding), and in semantic task (comprehension). Regression analyses identified both phonological awareness and verbal fluency at the age of 6 years to be the best predictors of subsequent reading performance at the age of 8 years. Our data suggest that language acquisition problems and difficulties in reading acquisition in bilingual children with SLI might be related to the close interdependence between a limitation in cognitive processing and a deficit at the linguistic level. 
After reading this article, readers will be able to: identify their understanding of the relation between language difficulties and reading outcomes; explain how processing abilities influence reading performance in bilingual Spanish-Catalan children with SLI; and recognize the relation between language and reading via a developmental model in which the phonological system is considered central for the development of decoding abilities and comprehension. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Effects of task complexity on activation of language areas in a semantic decision fMRI protocol.

    PubMed

    Lopes, Tátila Martins; Yasuda, Clarissa Lin; de Campos, Brunno Machado; Balthazar, Marcio L F; Binder, Jeffrey R; Cendes, Fernando

    2016-01-29

Language tasks used for clinical fMRI studies may be too complex for some patients with cognitive impairments, and "easier" versions are sometimes substituted, though the effects of such changes in task complexity on brain activity are largely unknown. To investigate these differences, we compared two versions of an fMRI language comprehension protocol, with different levels of difficulty, in 24 healthy right-handed adults. The protocol contrasted an auditory word comprehension task (semantic decision) with a nonspeech control task using tone sequences (tone decision). In the "complex" version (CV), the semantic decision task required two complex semantic decisions for each word, and the tone decision task required the participant to count the number of target tones in each sequence. In the "easy" version (EV), the semantic task required only a single, easier decision, and the tone task required only detection of the presence or absence of a target tone in each sequence. The protocols were adapted for a Brazilian population. Typical left hemisphere language lateralization was observed in 92% of participants for both the CV and EV using the whole-brain lateralization index, and typical language lateralization was also observed for other regions of interest. Task performance was superior on the EV compared to the CV (p = 0.014). There were many common areas of activation across the two versions; however, the CV produced greater activation in the left superior and middle frontal gyri, angular gyrus, and left posterior cingulate gyrus compared to the EV, the majority of which are areas previously identified with language and semantic processing. The EV produced stronger activation only in a small area in the posterior middle temporal gyrus. These results reveal differences between the two versions of the protocol and provide evidence that both are useful for language lateralization and worked well for a Brazilian population. The complex version produces stronger activation in several nodes of the semantic network and is therefore preferable for participants who can perform these tasks well. Copyright © 2015 Elsevier Ltd. All rights reserved.
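The whole-brain lateralization index referred to above is conventionally a normalized left-right difference over activation in homologous regions. A hedged sketch follows; the ±0.2 cutoff for calling a participant left-dominant, right-dominant, or bilateral is a common convention, not a value taken from this study.

```python
# Sketch of a laterality index (LI) over left- and right-hemisphere activation,
# as commonly used in fMRI language mapping. Inputs could be suprathreshold
# voxel counts or summed activation in homologous regions of interest. The
# +/-0.2 cutoff is a common convention, not a value taken from this study.

def laterality_index(left, right, cutoff=0.2):
    li = (left - right) / (left + right)   # ranges from -1 (right) to +1 (left)
    if li > cutoff:
        category = "left-dominant"
    elif li < -cutoff:
        category = "right-dominant"
    else:
        category = "bilateral"
    return li, category
```

For example, 800 suprathreshold voxels on the left and 200 on the right give LI = 0.6, a left-dominant call, while near-equal counts fall in the bilateral band.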

  8. Strategies for Analyzing Tone Languages

    ERIC Educational Resources Information Center

    Coupe, Alexander R.

    2014-01-01

    This paper outlines a method of auditory and acoustic analysis for determining the tonemes of a language starting from scratch, drawing on the author's experience of recording and analyzing tone languages of north-east India. The methodology is applied to a preliminary analysis of tone in the Thang dialect of Khiamniungan, a virtually undocumented…

  9. Improving Memory Span in Children with Down Syndrome

    ERIC Educational Resources Information Center

    Conners, F. A.; Rosenquist, C. J.; Arnett, L.; Moore, M. S.; Hume, L. E.

    2008-01-01

    Background: Down syndrome (DS) is characterized by impaired memory span, particularly auditory verbal memory span. Memory span is linked developmentally to several language capabilities, and may be a basic capacity that enables language learning. If children with DS had better memory span, they might benefit more from language intervention. The…

  10. The Prediction of Success in Intensive Foreign Language Training.

    ERIC Educational Resources Information Center

    Carroll, John B.

    After a review of the problem of predicting foreign language success, this booklet describes the development, refinement, and validation of a battery of psychological tests, some involving tape-recorded auditory stimuli, for predicting rate of progress in learning a foreign language. Although the battery was developed for more general application…

  11. Narrative abilities, memory and attention in children with a specific language impairment.

    PubMed

    Duinmeijer, Iris; de Jong, Jan; Scheper, Annette

    2012-01-01

While narrative tasks have proven to be valid measures for detecting language disorders, measuring communicative skills and predicting future academic performance, research into the comparability of different narrative tasks has shown that outcomes are dependent on the type of task used. Although many of the studies detecting task differences touch upon the fact that tasks place differential demands on cognitive abilities like auditory attention and memory, few studies have related specific narrative tasks to these cognitive abilities. Examining this relation is especially warranted for children with specific language impairment (SLI), who are characterized by language problems, but often have problems in other cognitive domains as well. In the current research, a comparison was made between a story retelling task (The Bus Story) and a story generation task (The Frog Story) in a group of children with SLI (n = 34) and a typically developing group (n = 38) from the same age range. In addition to the two narrative tasks, sustained auditory attention (TEA-Ch) and verbal working memory (WISC digit span and the Dutch version of the CVLT-C word list recall) were measured. Correlations were computed between the narrative, the memory and the attention scores. A group comparison showed that the children with SLI scored significantly worse than the typically developing children on several narrative measures as well as on sustained auditory attention and verbal working memory. A within-subjects comparison of the scores on the two narrative tasks showed a contrast between the tasks on several narrative measures. Furthermore, correlational analyses showed that, on the level of plot structure, the story generation task correlated with sustained auditory attention, while the story retelling task correlated with word list recall. Mean length of utterance (MLU), on the other hand, correlated with digit span but not with sustained auditory attention. While children with SLI have problems with narratives in general, their performance is also dependent on the specific elicitation task used for research or diagnostics. Various narrative tasks generate different scores and are differentially correlated with cognitive skills like attention and memory, making the selection of a given task crucial in the clinical setting. © 2012 Royal College of Speech and Language Therapists.

  12. Selective auditory attention in adults: effects of rhythmic structure of the competing language.

    PubMed

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-02-01

    The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Reception thresholds for English sentences were measured for 50 monolingual English-speaking adults in conditions with 2-talker (male-female) competing speech spoken in a stress-based (English, German), syllable-based (Spanish, French), or mora-based (Japanese) language. Two different masking signals were created for each language (i.e., 2 different 2-talker pairs). All subjects were tested in 10 competing conditions (2 conditions for each of the 5 languages). A significant difference was noted between the 2 masking signals within each language. Across languages, significantly greater listening difficulty was observed in conditions where competing speech was spoken in English, German, or Japanese, as compared with Spanish or French. Results suggest that (a) for a particular language, masking effectiveness can vary between different male-female 2-talker maskers and (b) for stress-based vs. syllable-based languages, competing speech is more difficult to ignore when spoken in a language from the native rhythmic class as compared with a nonnative rhythmic class, regardless of whether the language is familiar or unfamiliar to the listener.

  13. Influence of family environment on language outcomes in children with myelomeningocele.

    PubMed

    Vachha, B; Adams, R

    2005-09-01

Previously, our studies demonstrated language differences impacting academic performance among children with myelomeningocele and shunted hydrocephalus (MMSH). This follow-up study considers the environmental facilitators within families (achievement orientation, intellectual-cultural orientation, active recreational orientation, independence) among a cohort of children with MMSH and their relationship to language performance. Fifty-eight monolingual, English-speaking children (36 females; mean age: 10.1 years; age range: 7-16 years) with MMSH were evaluated. Exclusionary criteria were prior shunt infection; seizure or shunt malfunction within the previous 3 months; uncorrected visual or auditory impairments; prior diagnoses of mental retardation or attention deficit disorder. The Comprehensive Assessment of Spoken Language (CASL) and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered individually to all participants. The CASL measures four subsystems: lexical, syntactic, supralinguistic and pragmatic. Parents completed the Family Environment Scale (FES) questionnaire and provided background demographic information. Spearman correlation analyses and partial correlation analyses were performed. The mean full-scale IQ for the MMSH group was 92.2 (SD = 11.9). The CASL revealed statistically significant difficulty for supralinguistic and pragmatic (or social) language tasks. FES scores fell within the average range for the group. Spearman correlation and partial correlation analyses revealed statistically significant positive relationships between the FES 'intellectual-cultural orientation' variable and performance within the four language subsystems. Socio-economic status (SES) characteristics were analyzed and did not discriminate language performance when the intellectual-cultural orientation factor was taken into account. The role of family facilitators on language skills in children with MMSH has not previously been described.
The relationship between language performance and the families' value on intellectual/cultural activities seems both statistically and intuitively sound. Focused interest in the integration of family values and practices should assist developmental specialists in supporting families and children within their most natural environment.
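The Spearman and partial correlation analyses named in the abstract can be sketched as follows, using numpy only. The implementation is illustrative (no tie correction in the ranking, residualization used for the partial correlation), and the variables are hypothetical stand-ins for the FES, CASL, and SES scores.

```python
import numpy as np

def rank(x):
    """Rank values from 1..n (no tie correction in this sketch)."""
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def spearman(x, y):
    """Spearman rho as the Pearson correlation of the ranks."""
    return np.corrcoef(rank(np.asarray(x)), rank(np.asarray(y)))[0, 1]

def partial_corr(x, y, covariate):
    """Correlation of x and y after regressing the covariate out of both."""
    Z = np.column_stack([np.ones(len(covariate)), covariate])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical scores: FES intellectual-cultural orientation, a CASL subsystem
# score, and SES as the covariate to be partialled out.
fes = np.array([3.0, 5.0, 2.0, 6.0, 4.0, 7.0, 1.0, 8.0])
casl = np.array([85.0, 95.0, 82.0, 99.0, 90.0, 104.0, 78.0, 110.0])
ses = np.array([2.0, 3.0, 1.0, 3.0, 2.0, 4.0, 1.0, 4.0])

rho = spearman(fes, casl)
pr = partial_corr(fes, casl, ses)
```

With these made-up scores the FES-CASL association stays strongly positive after SES is partialled out, mirroring the pattern the abstract reports.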

  14. Infant communication and subsequent language development in children from low-income families: the role of early cognitive stimulation.

    PubMed

    Cates, Carolyn Brockmeyer; Dreyer, Benard P; Berkule, Samantha B; White, Lisa J; Arevalo, Jenny A; Mendelsohn, Alan L

    2012-09-01

To explore the relationship between early cognitive stimulation in the home, 6-month infant communication, and 24-month toddler language in a low-socioeconomic status sample. Longitudinal analyses of mother-child dyads participating in a larger study of early child development were performed. Dyads enrolled postpartum in an urban public hospital. Cognitive stimulation in the home at 6 months was assessed using the StimQ-Infant, including provision of toys, shared reading, teaching, and verbal responsivity. Early infant communication was assessed at 6 months, including the following: (1) emotion and eye gaze (Communication and Symbolic Behavior Scales Developmental Profile; CSBS DP), (2) communicative bids (CSBS DP), and (3) expression of emotion (Short Temperament Scale for Infants). Toddler language was assessed at 24 months using the Preschool Language Scale-4, including the following: (1) expressive language and (2) auditory comprehension. Three hundred twenty families were assessed. In structural equation models, cognitive stimulation in the home was strongly associated with early infant communication (β = 0.63, p < .0001) and was predictive of 24-month language (β = 0.20, p < .05). The effect of early cognitive stimulation on 24-month language was mediated through early impacts on infant communication (indirect β = 0.28, p = .001). Reading, teaching, availability of learning materials, and other reciprocal verbal interactions were all related directly to infant communication and indirectly to language outcomes. The impact of early cognitive stimulation on toddler language is manifested through early associations with infant communication. Pediatric primary care providers should promote cognitive stimulation beginning in early infancy and support the expansion and dissemination of intervention programs such as Reach Out and Read and the Video Interaction Project.
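The mediated path described here (cognitive stimulation acting on 24-month language through 6-month infant communication) follows the standard product-of-coefficients logic, which can be sketched with simulated data. The simulated effect sizes below are illustrative assumptions, not the study's β estimates, and the regression helper is a hypothetical name.

```python
import numpy as np

# Product-of-coefficients mediation sketch with simulated data. The effect
# sizes (0.6, 0.4, 0.1) are illustrative, not the study's estimates.

rng = np.random.default_rng(1)

def ols_coefs(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

n = 320                                       # sample size matching the abstract
stim = rng.normal(size=n)                     # cognitive stimulation (StimQ-like)
comm = 0.6 * stim + rng.normal(size=n)        # 6-month infant communication
lang = 0.4 * comm + 0.1 * stim + rng.normal(size=n)  # 24-month language

a = ols_coefs(stim, comm)[1]                  # path a: stimulation -> communication
b = ols_coefs(np.column_stack([comm, stim]), lang)[1]  # path b, controlling for stim
indirect = a * b                              # indirect (mediated) effect
direct = ols_coefs(np.column_stack([stim, comm]), lang)[1]  # direct effect of stim
```

In this simulation the indirect effect recovers roughly the product of the planted path coefficients (0.6 × 0.4), the same quantity the abstract reports as the indirect β.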

  15. Spontaneous language production of Italian children with cochlear implants and their mothers in two interactive contexts.

    PubMed

    Majorano, Marinella; Guidotti, Laura; Guerzoni, Letizia; Murri, Alessandra; Morelli, Marika; Cuda, Domenico; Lavelli, Manuela

    2018-01-01

    In recent years many studies have shown that the use of cochlear implants (CIs) improves children's skills in processing the auditory signal and, consequently, the development of both language comprehension and production. Nevertheless, many authors have also reported that the development of language skills in children with CIs is variable and influenced by individual factors (e.g., age at CI activation) and contextual aspects (e.g., maternal linguistic input). To assess the characteristics of the spontaneous language production of Italian children with CIs, their mothers' input and the relationship between the two during shared book reading and semi-structured play. Twenty preschool children with CIs and 40 typically developing children, 20 matched for chronological age (CATD group) and 20 matched for hearing age (HATD group), were observed during shared book reading and semi-structured play with their mothers. Samples of spontaneous language were transcribed and analysed for each participant. The numbers of types, tokens, mean length of utterance (MLU) and grammatical categories were considered, and the familiarity of each mother's word was calculated. The children with CIs produced shorter utterances than the children in the CATD group. Their mothers produced language with lower levels of lexical variability and grammatical complexity, and higher proportions of verbs with higher familiarity than did the mothers in the other groups during shared book reading. The children's language was more strongly related to that of their mothers in the CI group than in the other groups, and it was associated with the age at CI activation. The findings suggest that the language of children with CIs is related both to their mothers' input and to age at CI activation. They might prompt suggestions for intervention programs focused on shared-book reading. © 2017 Royal College of Speech and Language Therapists.

  16. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    PubMed

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally relevant features that contribute optimally to perception. 
It is our view that long-term experience shapes this adaptive process wherein the top-down connections provide selective gating of inputs to both cortical and subcortical structures to enhance neural responses to specific behaviorally-relevant attributes of the stimulus. A theoretical framework for a neural network is proposed involving coordination between local, feedforward, and feedback components that can account for experience-dependent enhancement of pitch representations at multiple levels of the auditory pathway. The ability to record brainstem and cortical pitch relevant responses concurrently may provide a new window to evaluate the online interplay between feedback, feedforward, and local intrinsic components in the hierarchical processing of pitch relevant information.

  18. English Auditory Discrimination Skills of Spanish-Speaking Children.

    ERIC Educational Resources Information Center

    Kramer, Virginia Reyes; Schell, Leo M.

    1982-01-01

    Eighteen Mexican American pupils in grades 1-3 from two urban Kansas schools were tested, using 18 pairs of sound contrasts, for auditory discrimination problems related to their language-different background. Results showed that the v-b, ch-sh, and s-sp contrasts were the most difficult for subjects to discriminate. (LC)

  19. Suggested Outline for Auditory Perception Training.

    ERIC Educational Resources Information Center

    Kelley, Clare A.

    Presented are suggestions for speech therapists to use in auditory perception training and screening of language handicapped children in kindergarten through grade 3. Directions are given for using the program, which is based on games. Each component is presented in terms of purpose, materials, a description of the game, and directions for…

  20. Auditory Word Serial Recall Benefits from Orthographic Dissimilarity

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Lafontaine, Helene; Morais, Jose; Kolinsky, Regine

    2010-01-01

    The influence of orthographic knowledge has been consistently observed in speech recognition and metaphonological tasks. The present study provides data suggesting that such influence also pervades other cognitive domains related to language abilities, such as verbal working memory. Using serial recall of auditory seven-word lists, we observed…

  1. High-Risk Infants: Auditory Processing Deficits in Later Childhood.

    ERIC Educational Resources Information Center

    Gilbride, Kathleen E.; And Others

    To determine whether deficits warranting intervention are present in the later functioning of high-risk infants, 22 premature infants who experienced asphyxia or chronic lung disease (CLD) but who had no gross developmental abnormalities were evaluated. Assessments of auditory perception and receptive language ability were made during later…

  2. Readability of Questionnaires Assessing Listening Difficulties Associated with (Central) Auditory Processing Disorders

    ERIC Educational Resources Information Center

    Atcherson, Samuel R.; Richburg, Cynthia M.; Zraick, Richard I.; George, Cassandra M.

    2013-01-01

    Purpose: Eight English-language, student- or parent proxy-administered questionnaires for (central) auditory processing disorders, or (C)APD, were analyzed for readability. For student questionnaires, readability levels were checked against the approximate reading grade levels by intended administration age per the questionnaires' developers. For…

  3. The Goldilocks Effect in Infant Auditory Attention

    ERIC Educational Resources Information Center

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  4. Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.

    ERIC Educational Resources Information Center

    Horowitz, Frances D.

    This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…

  5. Riddle appreciation and reading comprehension in Cantonese-speaking children.

    PubMed

    Tang, Ivy N Y; To, Carol K S; Weekes, Brendan S

    2013-10-01

    Inference-making skills are necessary for reading comprehension. Training in riddle appreciation is an effective way to improve reading comprehension among English-speaking children. However, it is not clear whether these methods generalize to other writing systems. The goal of the present study was to investigate the relationship between inference-making skills, as measured by riddle appreciation ability, and reading comprehension performance in typically developing Cantonese-speaking children in the 4th grade. Forty Cantonese-speaking children between the ages of 9;1 (years;months) and 11;0 were given tests of riddle appreciation ability and reading comprehension. Chinese character reading and auditory comprehension abilities were also assessed using tests that had been standardized in Hong Kong. Regression analyses revealed that riddle appreciation ability explained a significant amount of variance in reading comprehension after variance due to character reading skills and auditory comprehension skills was first considered. Orthographic, lexical, morphological, and syntactic riddles were also significantly correlated with reading comprehension. Riddle appreciation ability predicts reading comprehension in Cantonese-speaking 4th-grade children. Therefore, training Cantonese speakers in riddle appreciation should improve their reading comprehension.

  6. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment

    PubMed Central

    Leybaert, Jacqueline; Macchi, Lucie; Huyse, Aurélie; Champoux, François; Bayard, Clémence; Colin, Cécile; Berthommier, Frédéric

    2014-01-01

    Audiovisual speech perception of children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory-alone, visual-alone (speechreading), audiovisual congruent, and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude-modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating an impairment in phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting that they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percentage of information transmitted revealed a deficit in the children with SLI, particularly for the place-of-articulation feature. Taken together, the data support the hypothesis of intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed. PMID:24904454

  8. Impact of auditory training for perceptual assessment of voice executed by undergraduate students in Speech-Language Pathology.

    PubMed

    Silva, Regiane Serafim Abreu; Simões-Zenari, Marcia; Nemr, Nair Kátia

    2012-01-01

    To analyze the impact of auditory training on the auditory-perceptual assessment carried out by Speech-Language Pathology undergraduate students. During two semesters, 17 undergraduate students enrolled in theoretical subjects regarding phonation (Phonation/Phonation Disorders) analyzed samples of altered and unaltered voices (selected for this purpose) using the GRBAS scale. All subjects received auditory training during nine 15-minute meetings. In each meeting, a different parameter was presented using the voice samples, with predominance of the trained aspect in each session. Sample assessment using the scale was carried out before and after training, and on four other occasions throughout the meetings. Students' assessments were compared to an assessment carried out by three speech-language pathologists who were voice experts and served as judges. To verify training effectiveness, Friedman's test and the Kappa index were used. The rate of correct answers before training was considered between regular and good. Maintenance of the number of correct answers throughout the assessments was observed for most of the scale parameters. After training, the students showed improvements in the analysis of asthenia, a parameter that was emphasized during training after the students reported difficulties analyzing it. There was a decrease in the number of correct answers for the roughness parameter after it was approached as segmented into hoarseness and harshness, and observed in association with different diagnoses and acoustic parameters. Auditory training enhances students' initial abilities to perform the evaluation, aside from guiding adjustments in the dynamics of the university subject.
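The Kappa index used in this study to compare student and judge ratings can be sketched in a few lines. The GRBAS-style ratings below are hypothetical, chosen only to illustrate the chance-corrected computation, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    to the same items (chance-corrected agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labelled independently.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical "grade" ratings (0-3) from a student and an expert judge.
student = [0, 1, 2, 2, 3, 1, 0, 2]
judge   = [0, 1, 2, 3, 3, 1, 1, 2]
print(round(cohens_kappa(student, judge), 3))  # → 0.667
```

Because kappa discounts the agreement expected by chance, it is more conservative than raw percent agreement (0.75 for these ratings).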

  9. Linking prenatal experience to the emerging musical mind.

    PubMed

    Ullal-Gupta, Sangeeta; Vanden Bosch der Nederlanden, Christina M; Tichko, Parker; Lahav, Amir; Hannon, Erin E

    2013-09-03

    The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture begins already within the mother's womb during the third trimester of human development. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

  10. Reading comprehension and its underlying components in second-language learners: A meta-analysis of studies comparing first- and second-language learners.

    PubMed

    Melby-Lervåg, Monica; Lervåg, Arne

    2014-03-01

    We report a systematic meta-analytic review of studies comparing reading comprehension and its underlying components (language comprehension, decoding, and phonological awareness) in first- and second-language learners. The review included 82 studies, and 576 effect sizes were calculated for reading comprehension and underlying components. Key findings were that, compared to first-language learners, second-language learners display a medium-sized deficit in reading comprehension (pooled effect size d = -0.62), a large deficit in language comprehension (pooled effect size d = -1.12), but only small differences in phonological awareness (pooled effect size d = -0.08) and decoding (pooled effect size d = -0.12). A moderator analysis showed that characteristics related to the type of reading comprehension test reliably explained the variation in the differences in reading comprehension between first- and second-language learners. For language comprehension, studies of samples from low socioeconomic backgrounds and samples where only the first language was used at home generated the largest group differences in favor of first-language learners. Test characteristics and study origin reliably contributed to the variations between the studies of language comprehension. For decoding, Canadian studies showed group differences in favor of second-language learners, whereas the opposite was the case for U.S. studies. Regarding implications, unless specific decoding problems are detected, interventions that aim to ameliorate reading comprehension problems among second-language learners should focus on language comprehension skills.
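As a rough sketch of how such pooled effect sizes arise, the following computes Cohen's d from group summaries and a fixed-effect, inverse-variance pooled estimate across studies. The per-study numbers are invented for illustration and are not drawn from the meta-analysis:

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with pooled standard deviation."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

def pooled_effect(effects):
    """Fixed-effect pooling of (d, n1, n2) tuples, weighting each
    study by the inverse of the sampling variance of its d."""
    num = den = 0.0
    for d, n1, n2 in effects:
        # Standard large-sample approximation to var(d).
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        w = 1 / var
        num += w * d
        den += w
    return num / den

# Hypothetical per-study effects (d, n_L2, n_L1) comparing second- vs
# first-language learners on reading comprehension.
studies = [(-0.70, 40, 45), (-0.55, 60, 58), (-0.62, 30, 32)]
print(round(pooled_effect(studies), 2))
```

Larger, more precise studies get larger weights, so the pooled estimate sits closest to the d values of the biggest samples.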

  11. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    PubMed

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and a positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  13. Strength of German accent under altered auditory feedback

    PubMed Central

    HOWELL, PETER; DWORZYNSKI, KATHARINA

    2007-01-01

    Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137

  14. A basic study on universal design of auditory signals in automobiles.

    PubMed

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions made by various kinds of auditory signals currently used in automobiles, together with a comprehensive evaluation, were measured by the semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is expedient for the aged, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of sounds suitable for various auditory signals indicates that the suitable sounds are not easily masked. Providing suitable auditory signals for various purposes is a good solution from the viewpoint of universal design.

  15. Language Outcomes in Children Who Are Deaf and Hard of Hearing: The Role of Language Ability before Hearing Aid Intervention

    ERIC Educational Resources Information Center

    Daub, Olivia; Bagatto, Marlene P.; Johnson, Andrew M.; Cardy, Janis Oram

    2017-01-01

    Purpose: Early auditory experiences are fundamental in infant language acquisition. Research consistently demonstrates the benefits of early intervention (i.e., hearing aids) to language outcomes in children who are deaf and hard of hearing. The nature of these benefits and their relation with prefitting development are, however, not well…

  16. Language experience changes subsequent learning.

    PubMed

    Onnis, Luca; Thiessen, Erik

    2013-02-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.
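The forward and backward probabilities between adjacent elements that define the two competing perceptual parses can be sketched as follows. The syllable stream here is hypothetical, constructed only to show the asymmetry between the two directions:

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Forward and backward transitional probabilities between
    adjacent elements:
      forward  P(y|x) = count(x,y) / count(x as first element)
      backward P(x|y) = count(x,y) / count(y as second element)
    """
    pairs = list(zip(sequence, sequence[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(x for x, _ in pairs)
    second_counts = Counter(y for _, y in pairs)
    forward = {p: c / first_counts[p[0]] for p, c in pair_counts.items()}
    backward = {p: c / second_counts[p[1]] for p, c in pair_counts.items()}
    return forward, backward

# Hypothetical syllable stream: "ba" always predicts "du" (forward
# probability 1.0), but "du" is preceded by several different
# syllables, so the backward probability of the same pair is lower.
stream = ["ba", "du", "ko", "ba", "du", "gi", "du", "ko"]
fwd, bwd = transitional_probabilities(stream)
print(fwd[("ba", "du")], bwd[("ba", "du")])
```

A learner biased toward forward probabilities and one biased toward backward probabilities would thus segment the same stream differently, which is what lets the test phase reveal language-induced parsing preferences.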

  17. Individual differences in adult foreign language learning: the mediating effect of metalinguistic awareness.

    PubMed

    Brooks, Patricia J; Kempe, Vera

    2013-02-01

    In this study, we sought to identify cognitive predictors of individual differences in adult foreign-language learning and to test whether metalinguistic awareness mediated the observed relationships. Using a miniature language-learning paradigm, adults (N = 77) learned Russian vocabulary and grammar (gender agreement and case marking) over six 1-h sessions, completing tasks that encouraged attention to phrases without explicitly teaching grammatical rules. The participants' ability to describe the Russian gender and case-marking patterns mediated the effects of nonverbal intelligence and auditory sequence learning on grammar learning and generalization. Hence, even under implicit-learning conditions, individual differences stemmed from explicit metalinguistic awareness of the underlying grammar, which, in turn, was linked to nonverbal intelligence and auditory sequence learning. Prior knowledge of languages with grammatical gender (predominantly Spanish) predicted learning of gender agreement. Transfer of knowledge of gender from other languages to Russian was not mediated by awareness, which suggests that transfer operates through an implicit process akin to structural priming.

  18. Learning to match auditory and visual speech cues: social influences on acquisition of phonological categories.

    PubMed

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential looking paradigm, 44 German 6-month-olds' ability to detect mismatches between concurrently presented auditory and visual native vowels was tested. Outcomes were related to mothers' speech style and interactive behavior assessed during free play with their infant, and to infant-specific factors assessed through a questionnaire. Results show that mothers' and infants' social behavior modulated infants' preference for matching audiovisual speech. Moreover, infants' audiovisual speech perception correlated with later vocabulary size, suggesting a lasting effect on language development. © 2014 The Authors. Child Development © 2014 Society for Research in Child Development, Inc.

  19. The speech naturalness of people who stutter speaking under delayed auditory feedback as perceived by different groups of listeners.

    PubMed

    Van Borsel, John; Eeckhout, Hannelore

    2008-09-01

    This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF), with particular attention to possible listener differences. Three panels of judges, consisting of 14 stuttering individuals, 14 speech-language pathologists, and 14 naive listeners, rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgements than speech-language pathologists and stuttering listeners, and speech-language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural the speech was judged to sound. The three listener groups also showed similar trends with regard to the naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

  20. Infants’ brain responses to speech suggest Analysis by Synthesis

    PubMed Central

    Kuhl, Patricia K.; Ramírez, Rey R.; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-01-01

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners’ knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Experiment 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca’s area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of “motherese” on early language learning, and (iii) the “social-gating” hypothesis and humans’ development of social understanding. PMID:25024207
