Sample records for speech comprehension linked

  1. A music perception disorder (congenital amusia) influences speech comprehension.

    PubMed

    Liu, Fang; Jiang, Cunmei; Wang, Bei; Xu, Yi; Patel, Aniruddh D

    2015-01-01

    This study investigated the underlying link between speech and music by examining whether and to what extent congenital amusia, a musical disorder characterized by degraded pitch processing, would impact spoken sentence comprehension for speakers of Mandarin, a tone language. Sixteen Mandarin-speaking amusics and 16 matched controls were tested on the intelligibility of news-like Mandarin sentences with natural and flat fundamental frequency (F0) contours (created via speech resynthesis) under four signal-to-noise (SNR) conditions (no noise, +5, 0, and -5dB SNR). While speech intelligibility in quiet and extremely noisy conditions (SNR=-5dB) was not significantly compromised by flattened F0, both amusic and control groups achieved better performance with natural-F0 sentences than flat-F0 sentences under moderately noisy conditions (SNR=+5 and 0dB). Relative to normal listeners, amusics demonstrated reduced speech intelligibility in both quiet and noise, regardless of whether the F0 contours of the sentences were natural or flattened. This deficit in speech intelligibility was not associated with impaired pitch perception in amusia. These findings provide evidence for impaired speech comprehension in congenital amusia, suggesting that the deficit of amusics extends beyond pitch processing and includes segmental processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
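
    (Not part of the record above.) For readers unfamiliar with how signal-to-noise ratio conditions such as +5, 0, and -5 dB SNR are constructed, the following is a minimal NumPy sketch of mixing a speech signal with noise at a target SNR; the arrays and lengths are placeholders, not the study's actual stimuli.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then return the mixture. Illustrative only; real stimuli also need
    level calibration and matched durations."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR_dB = 10 * log10(P_speech / P_noise)  ->  required noise power:
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / p_noise)

# Placeholder signals standing in for a resynthesized sentence and a masker
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
noisy_conditions = {snr: mix_at_snr(speech, noise, snr) for snr in (5, 0, -5)}
```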

  2. Content validity of the Comprehensive ICF Core Set for multiple sclerosis from the perspective of speech and language therapists.

    PubMed

    Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela

    2014-11-01

    The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the International Classification of Functioning, Disability and Health (ICF) and to guide the treatment and rehabilitation process accordingly. It is now undergoing validation from the user perspective for which it has been developed in the first place. To validate the content of the Comprehensive ICF Core Set for MS from the perspective of speech and language therapists (SLTs) involved in the treatment of persons with MS (PwMS). Within a three-round e-mail-based Delphi Study 34 SLTs were asked about PwMS' problems, resources and aspects of the environment treated by SLTs. Responses were linked to ICF categories. Identified ICF categories were compared with those included in the Comprehensive ICF Core Set for MS to examine its content validity. Thirty-four SLTs named 524 problems and resources, as well as aspects of environment. Statements were linked to 129 ICF categories (60 Body-functions categories, two Body-structures categories, 42 Activities-&-participation categories, and 25 Environmental-factors categories). SLTs confirmed 46 categories in the Comprehensive ICF Core Set. Twenty-one ICF categories were identified as not-yet-included categories. This study contributes to the content validity of the Comprehensive ICF Core Set for MS from the perspective of SLTs. Study participants agreed on a few not-yet-included categories that should be further discussed for inclusion in a revised version of the Comprehensive ICF Core Set to strengthen SLTs' perspective in PwMS' neurorehabilitation. © 2014 Royal College of Speech and Language Therapists.

  3. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis

    PubMed Central

    Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments (“How would you rate your ability to understand speech?”; “How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?”). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both for general and in noisy environment) were correlated with hearing level and with audiologically-assessed speech comprehension ability. In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209

  4. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis.

    PubMed

    Vielsmeier, Veronika; Kreuzer, Peter M; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R O; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both for general and in noisy environment) were correlated with hearing level and with audiologically-assessed speech comprehension ability. In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role.

  5. Does input influence uptake? Links between maternal talk, processing speed and vocabulary size in Spanish-learning children

    PubMed Central

    Hurtado, Nereyda; Marchman, Virginia A.; Fernald, Anne

    2010-01-01

    It is well established that variation in caregivers' speech is associated with language outcomes, yet little is known about the learning principles that mediate these effects. This longitudinal study (n = 27) explores whether Spanish-learning children's early experiences with language predict efficiency in real-time comprehension and vocabulary learning. Measures of mothers' speech at 18 months were examined in relation to children's speech processing efficiency and reported vocabulary at 18 and 24 months. Children of mothers who provided more input at 18 months knew more words and were faster in word recognition at 24 months. Moreover, multiple regression analyses indicated that the influences of caregiver speech on speed of word recognition and vocabulary were largely overlapping. This study provides the first evidence that input shapes children's lexical processing efficiency and that vocabulary growth and increasing facility in spoken word comprehension work together to support the uptake of the information that rich input affords the young language learner. PMID:19046145

  6. Measuring Speech Comprehensibility in Students with Down Syndrome

    PubMed Central

    Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method: Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results: Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion: Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989
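
    A brief illustration (not from the study) of how the two measures described above can be compared: an averaged Likert rating per student against the proportion of utterance attempts fully glossed, with hypothetical values and a Pearson correlation as the validity index.

```python
import numpy as np

# Hypothetical scores for 10 students (not the study's data):
# mean Likert comprehensibility rating, averaged over 4 raters x 4 segments
ratings_based = np.array([4.1, 3.2, 5.0, 2.8, 4.6, 3.9, 5.3, 3.0, 4.4, 3.7])
# proportion of utterance attempts a transcriber could fully gloss
orthography_based = np.array([0.72, 0.55, 0.88, 0.40, 0.80, 0.66, 0.91, 0.48, 0.75, 0.61])

# Criterion-related validity: correlation between the two measures
r = np.corrcoef(ratings_based, orthography_based)[0, 1]
print(f"r = {r:.2f}")  # the study reports r > .80 for comparisons of this kind
```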

  7. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    ERIC Educational Resources Information Center

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  8. Prediction is Production: The missing link between language production and comprehension.

    PubMed

    Martin, Clara D; Branzi, Francesca M; Bar, Moshe

    2018-01-18

    Language comprehension often involves the generation of predictions. It has been hypothesized that such prediction-for-comprehension entails actual language production. Recent studies provided evidence that the production system is recruited during language comprehension, but the link between production and prediction during comprehension remains hypothetical. Here, we tested this hypothesis by comparing prediction during sentence comprehension (primary task) in participants having the production system either available or not (non-verbal versus verbal secondary task). In the primary task, sentences containing an expected or unexpected target noun-phrase were presented during electroencephalography recording. Prediction, measured as the magnitude of the N400 effect elicited by the article (expected versus unexpected), was hindered only when the production system was taxed during sentence context reading. The present study provides the first direct evidence that the availability of the speech production system is necessary for generating lexical prediction during sentence comprehension. Furthermore, these important results provide an explanation for the recruitment of language production during comprehension.

  9. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech.

    PubMed

    Poole, Matthew L; Brodtmann, Amy; Darby, David; Vogel, Adam P

    2017-04-14

    Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Speech and neuroimaging data described in studies of FTD and PPA were systematically reviewed. A meta-analysis was conducted for speech measures that were used consistently in multiple studies. The methods and nomenclature used to describe speech in these disorders varied between studies. Our meta-analysis identified 3 speech measures which differentiate variants or healthy control-group participants (e.g., nonfluent and logopenic variants of PPA from all other groups, behavioral-variant FTD from a control group). Deficits within the frontal-lobe speech networks are linked to motor speech profiles of the nonfluent variant of PPA and progressive apraxia of speech. Motor speech impairment is rarely reported in semantic and logopenic variants of PPA. Limited data are available on motor speech impairment in the behavioral variant of FTD. Our review identified several measures of speech which may assist with diagnosis and classification, and consolidated the brain-behavior associations relating to speech in FTD, PPA, and progressive apraxia of speech.

  10. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    ERIC Educational Resources Information Center

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  11. Network Modeling for Functional Magnetic Resonance Imaging (fMRI) Signals during Ultra-Fast Speech Comprehension in Late-Blind Listeners

    PubMed Central

    Dietrich, Susanne; Hertrich, Ingo; Ackermann, Hermann

    2015-01-01

    In many functional magnetic resonance imaging (fMRI) studies blind humans were found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can manage to understand (synthetic) spoken language at very high speaking rates up to ca. 20 syllables/s (syl/s). FMRI data showed that hemodynamic activation within right-hemispheric primary visual cortex (V1), bilateral pulvinar (Pv), and left-hemispheric supplementary motor area (pre-SMA) covaried with their capability of ultra-fast speech (16 syllables/s) comprehension. It has been suggested that right V1 plays an important role with respect to the perception of ultra-fast speech features, particularly the detection of syllable onsets. Furthermore, left pre-SMA seems to be an interface between these syllabic representations and the frontal speech processing and working memory network. So far, little is known about the networks linking V1 to Pv, auditory cortex (A1), and (mesio-) frontal areas. Dynamic causal modeling (DCM) was applied to investigate (i) the input structure from A1 and Pv toward right V1 and (ii) output from right V1 and A1 to left pre-SMA. As concerns the input Pv was significantly connected to V1, in addition to A1, in blind participants, but not in sighted controls. Regarding the output V1 was significantly connected to pre-SMA in blind individuals, and the strength of V1-SMA connectivity correlated with the performance of ultra-fast speech comprehension. By contrast, in sighted controls, not understanding ultra-fast speech, pre-SMA did neither receive input from A1 nor V1. Taken together, right V1 might facilitate the “parsing” of the ultra-fast speech stream in blind subjects by receiving subcortical auditory input via the Pv (= secondary visual pathway) and transmitting this information toward contralateral pre-SMA. PMID:26148062

  12. Network Modeling for Functional Magnetic Resonance Imaging (fMRI) Signals during Ultra-Fast Speech Comprehension in Late-Blind Listeners.

    PubMed

    Dietrich, Susanne; Hertrich, Ingo; Ackermann, Hermann

    2015-01-01

    In many functional magnetic resonance imaging (fMRI) studies blind humans were found to show cross-modal reorganization engaging the visual system in non-visual tasks. For example, blind people can manage to understand (synthetic) spoken language at very high speaking rates up to ca. 20 syllables/s (syl/s). FMRI data showed that hemodynamic activation within right-hemispheric primary visual cortex (V1), bilateral pulvinar (Pv), and left-hemispheric supplementary motor area (pre-SMA) covaried with their capability of ultra-fast speech (16 syllables/s) comprehension. It has been suggested that right V1 plays an important role with respect to the perception of ultra-fast speech features, particularly the detection of syllable onsets. Furthermore, left pre-SMA seems to be an interface between these syllabic representations and the frontal speech processing and working memory network. So far, little is known about the networks linking V1 to Pv, auditory cortex (A1), and (mesio-) frontal areas. Dynamic causal modeling (DCM) was applied to investigate (i) the input structure from A1 and Pv toward right V1 and (ii) output from right V1 and A1 to left pre-SMA. As concerns the input Pv was significantly connected to V1, in addition to A1, in blind participants, but not in sighted controls. Regarding the output V1 was significantly connected to pre-SMA in blind individuals, and the strength of V1-SMA connectivity correlated with the performance of ultra-fast speech comprehension. By contrast, in sighted controls, not understanding ultra-fast speech, pre-SMA did neither receive input from A1 nor V1. Taken together, right V1 might facilitate the "parsing" of the ultra-fast speech stream in blind subjects by receiving subcortical auditory input via the Pv (= secondary visual pathway) and transmitting this information toward contralateral pre-SMA.

  13. The Comprehension of Rapid Speech by the Blind, Part III.

    ERIC Educational Resources Information Center

    Foulke, Emerson

    A review of the research on the comprehension of rapid speech by the blind identifies five methods of speech compression--speech changing, electromechanical sampling, computer sampling, speech synthesis, and frequency dividing with the harmonic compressor. The speech changing and electromechanical sampling methods and the necessary apparatus have…

  14. Working Memory and Speech Comprehension in Older Adults With Hearing Impairment.

    PubMed

    Nagaraj, Naveen K

    2017-10-17

    This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB). Twenty-four older (59-73 years) adults with sensorineural HI participated. WM capacity (WMC) was measured using 3 complex span tasks. Speech comprehension was assessed using multiple passages, and speech identification ability was measured using recall of sentence final-word and key words. Speech measures were performed in quiet and in the presence of MTB at + 5 dB signal-to-noise ratio. Results suggested that participants' speech identification was poorer in MTB, but their ability to comprehend discourse in MTB was at least as good as in quiet. WMC did not explain significant variance in speech comprehension before and after controlling for age and audibility. However, WMC explained significant variance in low-context sentence key words identification in MTB. These results suggest that WMC plays an important role in identifying low-context sentences in MTB, but not when comprehending semantically rich discourse passages. In general, data did not support individual variability in WMC as a factor that predicts speech comprehension ability in older adults with HI.
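
    A sketch (with simulated, hypothetical data and names) of the hierarchical regression logic described above: testing whether working memory capacity explains additional variance in comprehension after age and audibility are controlled.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data for illustration only (n matches the study's 24 participants)
rng = np.random.default_rng(1)
n = 24
age = rng.uniform(59, 73, n)
audibility = rng.normal(0, 1, n)          # e.g., standardized pure-tone average
wmc = rng.normal(0, 1, n)                 # composite of the 3 complex span tasks
comprehension = 0.4 * audibility + rng.normal(0, 1, n)

# Step 1: control variables only
m1 = sm.OLS(comprehension, sm.add_constant(np.column_stack([age, audibility]))).fit()
# Step 2: add working-memory capacity
m2 = sm.OLS(comprehension, sm.add_constant(np.column_stack([age, audibility, wmc]))).fit()

# Variance uniquely attributable to WMC after controlling for age and audibility
delta_r2 = m2.rsquared - m1.rsquared
```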

  15. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    ERIC Educational Resources Information Center

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  16. Processing changes when listening to foreign-accented speech

    PubMed Central

    Romero-Rivas, Carlos; Martin, Clara D.; Costa, Albert

    2015-01-01

    This study investigates the mechanisms responsible for fast changes in processing foreign-accented speech. Event Related brain Potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity. During foreign-accented speech comprehension, semantic violations only elicited an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker. PMID:25859209

  17. [Improving speech comprehension using a new cochlear implant speech processor].

    PubMed

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg sentences in the clinical setting S(0)N(CI), with speech signal at 0 degrees and noise lateral to the CI at 90 degrees. With the convincing findings from our evaluations of this multicenter study cohort, a trial with the Freedom 24 sound processor for all suitable CI users is recommended. For evaluating the benefits of a new processor, the comparative assessment paradigm used in our study design would be considered ideal for use with individual patients.
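
    The adaptive threshold procedure mentioned above can be illustrated with a generic up-down track that converges on the SNR for roughly 50% correct. The sketch below is a simplification and does not reproduce the OLSA's actual word-scoring and step-size rules; the callback name is hypothetical.

```python
def adaptive_snr_track(score_sentence, start_snr_db=0.0, step_db=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive track converging on roughly 50% correct.
    `score_sentence(snr_db)` should return True if the sentence was repeated
    correctly at that SNR. The clinical OLSA procedure scores individual words
    and adapts its step size; this is only an illustrative simplification."""
    snr = start_snr_db
    track = []
    for _ in range(n_trials):
        track.append(snr)
        correct = score_sentence(snr)
        snr += -step_db if correct else step_db  # harder after a hit, easier after a miss
    return sum(track[-10:]) / 10  # SRT estimate: mean SNR over the last trials
```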

  18. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snježana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver Teacher's Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than peers. In children with SLI, speech comprehension significantly correlated with scores on Attention Deficit/Hyperactivity Problems (CBCL and C-TRF), and Pervasive Developmental Problems scales (CBCL) (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of variance in speech comprehension is saturated with 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and children with normal language development that exhibit ADHD symptoms.

  19. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, consonant categorical perception task and lexical tone discrimination task were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorized speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for reading learning difficulties in Mandarin-speaking children. PMID:29312031
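
    The "slope of consonant identification curve" mentioned above is typically obtained by fitting a psychometric function to identification responses along a stimulus continuum. A SciPy sketch with hypothetical data (not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric identification function along a consonant continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical proportions of one category response across an 8-step continuum
steps = np.arange(1, 9)
prop_b = np.array([0.02, 0.05, 0.10, 0.30, 0.75, 0.92, 0.97, 0.99])

(x0, k), _ = curve_fit(logistic, steps, prop_b, p0=[4.5, 1.0])
# k is the identification-curve slope: steeper values indicate more categorical
# perception; the study reports shallower slopes for poor comprehenders.
print(f"boundary = {x0:.2f}, slope = {k:.2f}")
```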

  20. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    PubMed

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues.

    PubMed

    Peeters, David; Snijders, Tineke M; Hagoort, Peter; Özyürek, Aslı

    2017-01-27

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    PubMed

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained from the subtitles. Speech comprehension improved even for relatively low ASR accuracy levels; for example, participants obtained about 2 dB SNR audiovisual benefit for ASR accuracies around 74%. Delaying the presentation of the text reduced the benefit and increased the listening effort. Participants with relatively low unimodal speech comprehension obtained greater benefit from the subtitles than participants with better unimodal speech comprehension. We observed an age-related decline in the working-memory capacity of the listeners with normal hearing. A higher age and a lower working memory capacity were associated with increased effort required to use the subtitles to improve speech comprehension. Participants were able to use partly incorrect and delayed subtitles to increase their comprehension of speech in noise, regardless of age and hearing loss. This supports the further development and evaluation of an assistive listening system that displays automatically recognized speech to aid speech comprehension by listeners with hearing impairment.
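
    The audiovisual benefit defined above (the difference between the auditory-only and audiovisual Speech Reception Thresholds) is a simple difference in dB SNR; a toy example with hypothetical thresholds roughly matching the ~2 dB benefit reported:

```python
def audiovisual_benefit(srt_auditory_db, srt_audiovisual_db):
    """Benefit from subtitles as defined above: auditory-only SRT minus
    audiovisual SRT (lower SRTs are better, so a positive value means the
    subtitles helped)."""
    return srt_auditory_db - srt_audiovisual_db

# Hypothetical thresholds, illustrative only
print(audiovisual_benefit(-3.0, -5.0))  # 2.0 dB SNR
```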

  3. Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting.

    PubMed

    Klatte, Maria; Lachmann, Thomas; Meis, Markus

    2010-01-01

    The effects of classroom noise and background speech on speech perception, measured by word-to-picture matching, and listening comprehension, measured by execution of oral instructions, were assessed in first- and third-grade children and adults in a classroom-like setting. For speech perception, in addition to noise, reverberation time (RT) was varied by conducting the experiment in two virtual classrooms with mean RT = 0.47 versus RT = 1.1 s. Children were more impaired than adults by background sounds in both speech perception and listening comprehension. Classroom noise evoked a reliable disruption in children's speech perception even under conditions of short reverberation. RT had no effect on speech perception in silence, but evoked a severe increase in the impairments due to background sounds in all age groups. For listening comprehension, impairments due to background sounds were found in the children, stronger for first- than for third-graders, whereas adults were unaffected. Compared to classroom noise, background speech had a smaller effect on speech perception, but a stronger effect on listening comprehension, remaining significant when speech perception was controlled. This indicates that background speech affects higher-order cognitive processes involved in children's comprehension. Children's ratings of the sound-induced disturbance were low overall and uncorrelated to the actual disruption, indicating that the children did not consciously realize the detrimental effects. The present results confirm earlier findings on the substantial impact of noise and reverberation on children's speech perception, and extend these to classroom-like environmental settings and listening demands closely resembling those faced by children at school.

  4. Neural Development of Networks for Audiovisual Speech Comprehension

    ERIC Educational Resources Information Center

    Dick, Anthony Steven; Solodkin, Ana; Small, Steven L.

    2010-01-01

    Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the…

  5. Working Memory and Speech Comprehension in Older Adults with Hearing Impairment

    ERIC Educational Resources Information Center

    Nagaraj, Naveen K.

    2017-01-01

    Purpose: This study examined the relationship between working memory (WM) and speech comprehension in older adults with hearing impairment (HI). It was hypothesized that WM would explain significant variance in speech comprehension measured in multitalker babble (MTB). Method: Twenty-four older (59-73 years) adults with sensorineural HI…

  6. Measuring Speech Comprehensibility in Students with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based…

  7. Linguistic Complexity, Speech Production, and Comprehension in Parkinson's Disease: Behavioral and Physiological Indices

    ERIC Educational Resources Information Center

    Walsh, Bridget; Smith, Anne

    2011-01-01

    Purpose: To investigate the effects of increased syntactic complexity and utterance length demands on speech production and comprehension in individuals with Parkinson's disease (PD) using behavioral and physiological measures. Method: Speech response latency, interarticulatory coordinative consistency, accuracy of speech production, and response…

  8. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  9. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension

    PubMed Central

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-01-01

    During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued‐recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand‐area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low‐ and high‐frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low‐ and high‐frequency oscillations in predicting the integration of auditory and visual information at a semantic level. PMID:29380945

  10. Is Comprehension Necessary for Error Detection? A Conflict-Based Account of Monitoring in Speech Production

    ERIC Educational Resources Information Center

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the…

  11. Simultaneous Treatment of Grammatical and Speech-Comprehensibility Deficits in Children with Down Syndrome

    ERIC Educational Resources Information Center

    Camarata, Stephen; Yoder, Paul; Camarata, Mary

    2006-01-01

    Children with Down syndrome often display speech-comprehensibility and grammatical deficits beyond what would be predicted based upon general mental age. Historically, speech-comprehensibility has often been treated using traditional articulation therapy and oral-motor training so there may be little or no coordination of grammatical and…

  12. Processing and Comprehension of Accented Speech by Monolingual and Bilingual Children

    ERIC Educational Resources Information Center

    McDonald, Margarethe; Gross, Megan; Buac, Milijana; Batko, Michelle; Kaushanskaya, Margarita

    2018-01-01

    This study tested the effect of Spanish-accented speech on sentence comprehension in children with different degrees of Spanish experience. The hypothesis was that earlier acquisition of Spanish would be associated with enhanced comprehension of Spanish-accented speech. Three groups of 5-6-year-old children were tested: monolingual…

  13. Negative blood oxygen level dependent signals during speech comprehension.

    PubMed

    Rodriguez Moreno, Diana; Schiff, Nicholas D; Hirsch, Joy

    2015-05-01

    Speech comprehension studies have generally focused on the isolation and function of regions with positive blood oxygen level dependent (BOLD) signals with respect to a resting baseline. Although regions with negative BOLD signals in comparison to a resting baseline have been reported in language-related tasks, their relationship to regions of positive signals is not fully appreciated. Based on the emerging notion that the negative signals may represent an active function in language tasks, the authors test the hypothesis that negative BOLD signals during receptive language are more associated with comprehension than content-free versions of the same stimuli. Regions associated with comprehension of speech were isolated by comparing responses to passive listening to natural speech to two incomprehensible versions of the same speech: one that was digitally time reversed and one that was muffled by removal of high frequencies. The signal polarity was determined by comparing the BOLD signal during each speech condition to the BOLD signal during a resting baseline. As expected, stimulation-induced positive signals relative to resting baseline were observed in the canonical language areas with varying signal amplitudes for each condition. Negative BOLD responses relative to resting baseline were observed primarily in frontoparietal regions and were specific to the natural speech condition. However, the BOLD signal remained indistinguishable from baseline for the unintelligible speech conditions. Variations in connectivity between brain regions with positive and negative signals were also specifically related to the comprehension of natural speech. These observations of anticorrelated signals related to speech comprehension are consistent with emerging models of cooperative roles represented by BOLD signals of opposite polarity.

  14. Negative Blood Oxygen Level Dependent Signals During Speech Comprehension

    PubMed Central

    Rodriguez Moreno, Diana; Schiff, Nicholas D.; Hirsch, Joy

    2015-01-01

    Speech comprehension studies have generally focused on the isolation and function of regions with positive blood oxygen level dependent (BOLD) signals with respect to a resting baseline. Although regions with negative BOLD signals in comparison to a resting baseline have been reported in language-related tasks, their relationship to regions of positive signals is not fully appreciated. Based on the emerging notion that the negative signals may represent an active function in language tasks, the authors test the hypothesis that negative BOLD signals during receptive language are more associated with comprehension than content-free versions of the same stimuli. Regions associated with comprehension of speech were isolated by comparing responses to passive listening to natural speech to two incomprehensible versions of the same speech: one that was digitally time reversed and one that was muffled by removal of high frequencies. The signal polarity was determined by comparing the BOLD signal during each speech condition to the BOLD signal during a resting baseline. As expected, stimulation-induced positive signals relative to resting baseline were observed in the canonical language areas with varying signal amplitudes for each condition. Negative BOLD responses relative to resting baseline were observed primarily in frontoparietal regions and were specific to the natural speech condition. However, the BOLD signal remained indistinguishable from baseline for the unintelligible speech conditions. Variations in connectivity between brain regions with positive and negative signals were also specifically related to the comprehension of natural speech. These observations of anticorrelated signals related to speech comprehension are consistent with emerging models of cooperative roles represented by BOLD signals of opposite polarity. PMID:25412406

  15. Perceptual Learning of Noise Vocoded Words: Effects of Feedback and Lexicality

    ERIC Educational Resources Information Center

    Hervais-Adelman, Alexis; Davis, Matthew H.; Johnsrude, Ingrid S.; Carlyon, Robert P.

    2008-01-01

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word…

  16. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.

    PubMed

    Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M

    2016-01-01

    Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.

  17. Effects of Culture and Gender in Comprehension of Speech Acts of Indirect Request

    ERIC Educational Resources Information Center

    Shams, Rabe'a; Afghari, Akbar

    2011-01-01

    This study investigates the comprehension of indirect request speech act used by Iranian people in daily communication. The study is an attempt to find out whether different cultural backgrounds and the gender of the speakers affect the comprehension of the indirect request of speech act. The sample includes thirty males and females in Gachsaran…

  18. θ-Band and β-Band Neural Activity Reflects Independent Syllable Tracking and Comprehension of Time-Compressed Speech.

    PubMed

    Pefkou, Maria; Arnal, Luc H; Fontolan, Lorenzo; Giraud, Anne-Lise

    2017-08-16

    Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm. SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process. Copyright © 2017 the authors 0270-6474/17/377930-09$15.00/0.
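
    Band-limited power of the kind contrasted above (θ at roughly 4-8 Hz versus low β at 14-21 Hz) can be estimated from a single channel with a zero-phase band-pass filter. The sketch below uses a synthetic signal and is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(signal, fs, low_hz, high_hz, order=4):
    """Mean power of a 1-D signal within a frequency band, using a zero-phase
    Butterworth band-pass filter. A minimal sketch, not the published method."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return float(np.mean(filtfilt(b, a, signal) ** 2))

# Synthetic single-channel example: a 6 Hz (theta-range) plus 18 Hz (beta-range) mix
fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.sin(2 * np.pi * 18 * t)
theta_power = band_power(eeg, fs, 4, 8)
beta_power = band_power(eeg, fs, 14, 21)
```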

  19. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
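
    For readers unfamiliar with noise-vocoding, the sketch below illustrates the general technique only (band edges, filter settings, and the toy input are arbitrary assumptions, not this study's stimuli): the signal is split into a small number of frequency bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise carriers.

        # General noise-vocoding sketch: envelope-modulated noise carriers per band.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(speech, fs, n_bands=6, f_lo=100.0, f_hi=7000.0):
            rng = np.random.default_rng(0)
            edges = np.geomspace(f_lo, f_hi, n_bands + 1)      # logarithmically spaced band edges
            out = np.zeros_like(speech)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
                band = sosfiltfilt(sos, speech)
                envelope = np.abs(hilbert(band))                # amplitude envelope of this band
                carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
                out += envelope * carrier                       # envelope-modulated noise carrier
            return out / np.max(np.abs(out))

        fs = 16000
        t = np.arange(fs) / fs
        speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # toy stand-in signal
        vocoded_2band = noise_vocode(speech, fs, n_bands=2)
        vocoded_6band = noise_vocode(speech, fs, n_bands=6)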

  20. Relationship Between Speech Intelligibility and Speech Comprehension in Babble Noise.

    PubMed

    Fontan, Lionel; Tardieu, Julien; Gaillard, Pascal; Woisard, Virginie; Ruiz, Robert

    2015-06-01

    The authors investigated the relationship between the intelligibility and comprehension of speech presented in babble noise. Forty participants listened to French imperative sentences (commands for moving objects) in a multitalker babble background for which intensity was experimentally controlled. Participants were instructed to transcribe what they heard and obey the commands in an interactive environment set up for this purpose. The former test provided intelligibility scores and the latter provided comprehension scores. Collected data revealed a globally weak correlation between intelligibility and comprehension scores (r = .35, p < .001). The discrepancy tended to grow as noise level increased. An analysis of standard deviations showed that variability in comprehension scores increased linearly with noise level, whereas higher variability in intelligibility scores was found for moderate noise level conditions. These results support the hypothesis that intelligibility scores are poor predictors of listeners' comprehension in real communication situations. Intelligibility and comprehension scores appear to provide different insights, the first measure being centered on speech signal transfer and the second on communicative performance. Both theoretical and practical implications for the use of speech intelligibility tests as indicators of speakers' performances are discussed.

  1. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    PubMed

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to facilitate lexical access, making it difficult for them to fully engage higher-order cognitive abilities in support of listening comprehension.

  2. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study

    PubMed Central

    Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.

    2016-01-01

    Background: Co-speech gestures are omnipresent and a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Method: Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Conclusion: Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures cause significant interference and impair patients’ comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917

  3. The Comprehension of Rapid Speech by the Blind: Part III. Final Report.

    ERIC Educational Resources Information Center

    Foulke, Emerson

    Accounts of completed and ongoing research conducted from 1964 to 1968 are presented on the subject of accelerated speech as a substitute for the written word. Included are a review of the research on intelligibility and comprehension of accelerated speech, some methods for controlling the word rate of recorded speech, and a comparison of…

  4. Designing acoustics for linguistically diverse classrooms: Effects of background noise, reverberation and talker foreign accent on speech comprehension by native and non-native English-speaking listeners

    NASA Astrophysics Data System (ADS)

    Peng, Zhao Ellen

    The current classroom acoustics standard (ANSI S12.60-2010) recommends core learning spaces not to exceed background noise level (BNL) of 35 dBA and reverberation time (RT) of 0.6 second, based on speech intelligibility performance mainly by the native English-speaking population. Existing literature has not correlated these recommended values well with student learning outcomes. With a growing population of non-native English speakers in American classrooms, the special needs for perceiving degraded speech among non-native listeners, either due to realistic room acoustics or talker foreign accent, have not been addressed in the current standard. This research seeks to investigate the effects of BNL and RT on the comprehension of English speech from native English and native Mandarin Chinese talkers as perceived by native and non-native English listeners, and to provide acoustic design guidelines to supplement the existing standard. This dissertation presents two studies on the effects of RT and BNL on more realistic classroom learning experiences. How do native and non-native English-speaking listeners perform on speech comprehension tasks under adverse acoustic conditions, if the English speech is produced by talkers of native English (Study 1) versus native Mandarin Chinese (Study 2)? Speech comprehension materials were played back in a listening chamber to individual listeners: native and non-native English-speaking in Study 1; native English, native Mandarin Chinese, and other non-native English-speaking in Study 2. Each listener was screened for baseline English proficiency level, and completed dual tasks simultaneously involving speech comprehension and adaptive dot-tracing under 15 acoustic conditions, comprised of three BNL conditions (RC-30, 40, and 50) and five RT scenarios (0.4 to 1.2 seconds). The results show that BNL and RT negatively affect both objective performance and subjective perception of speech comprehension, more severely for non-native listeners than for native listeners. While the presence of foreign accent is generally detrimental, an interlanguage benefit was identified on both speech comprehension and the self-report frustration and perceived performance ratings, specifically for non-native listeners with matched foreign accent as the talker. Suggested design guidelines for BNL and RT are identified for attaining optimal speech comprehension performance to improve classroom acoustics for the non-native English-speaking population.
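
    As a crude, self-contained illustration of how one reverberation condition might be simulated for listening experiments like this (this is not the study's listening-chamber setup; the impulse-response model and parameters are assumptions), a dry signal can be convolved with a synthetic exponentially decaying impulse response matched to a target RT60:

        # Crude reverberation sketch: exponentially decaying noise as a synthetic RIR.
        import numpy as np

        def synthetic_rir(rt60, fs, length_s=1.5, seed=0):
            """Exponentially decaying noise burst that falls ~60 dB after rt60 seconds."""
            rng = np.random.default_rng(seed)
            t = np.arange(int(length_s * fs)) / fs
            return rng.standard_normal(len(t)) * np.exp(-6.91 * t / rt60)

        def add_reverb(speech, fs, rt60):
            rir = synthetic_rir(rt60, fs)
            wet = np.convolve(speech, rir)[: len(speech)]
            return wet / np.max(np.abs(wet))

        fs = 16000
        speech = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)    # placeholder for a dry recording
        for rt60 in (0.4, 0.8, 1.2):                              # RT values spanning the study's range
            stimulus = add_reverb(speech, fs, rt60)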

  5. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design.

    PubMed

    Higgins, Meaghan C; Penney, Sarah B; Robertson, Erin K

    2017-10-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control, simulated pSTM deficit, simulated speech perception deficit, or simulated double deficit condition. On long sentences, the double deficit group had lower scores than the control and speech perception deficit groups, and the pSTM deficit group had lower scores than the control group and marginally lower scores than the speech perception deficit group. The pSTM and speech perception groups performed similarly to groups with real deficits in these areas, who completed the control condition. Overall, scores were lowest on noncanonical long sentences. Results show pSTM has a greater effect than speech perception on sentence comprehension, at least in the tasks employed here.

  6. Relationship between Speech Intelligibility and Speech Comprehension in Babble Noise

    ERIC Educational Resources Information Center

    Fontan, Lionel; Tardieu, Julien; Gaillard, Pascal; Woisard, Virginie; Ruiz, Robert

    2015-01-01

    Purpose: The authors investigated the relationship between the intelligibility and comprehension of speech presented in babble noise. Method: Forty participants listened to French imperative sentences (commands for moving objects) in a multitalker babble background for which intensity was experimentally controlled. Participants were instructed to…

  7. Upregulation of cognitive control networks in older adults’ speech comprehension

    PubMed Central

    Erb, Julia; Obleser, Jonas

    2013-01-01

    Speech comprehension abilities decline with age and with age-related hearing loss, but it is unclear how this decline expresses in terms of central neural mechanisms. The current study examined neural speech processing in a group of older adults (aged 56–77, n = 16, with varying degrees of sensorineural hearing loss), and compared them to a cohort of young adults (aged 22–31, n = 30, self-reported normal hearing). In a functional MRI experiment, listeners heard and repeated back degraded sentences (4-band vocoded, where the temporal envelope of the acoustic signal is preserved, while the spectral information is substantially degraded). Behaviorally, older adults adapted to degraded speech at the same rate as young listeners, although their overall comprehension of degraded speech was lower. Neurally, both older and young adults relied on the left anterior insula for degraded more than clear speech perception. However, anterior insula engagement in older adults was dependent on hearing acuity. Young adults additionally employed the anterior cingulate cortex (ACC). Interestingly, this age group × degradation interaction was driven by a reduced dynamic range in older adults who displayed elevated levels of ACC activity for both degraded and clear speech, consistent with a persistent upregulation in cognitive control irrespective of task difficulty. For correct speech comprehension, older adults relied on the middle frontal gyrus in addition to a core speech comprehension network recruited by younger adults suggestive of a compensatory mechanism. Taken together, the results indicate that older adults increasingly recruit cognitive control networks, even under optimal listening conditions, at the expense of these systems’ dynamic range. PMID:24399939

  8. An Intentional Stance Modulates the Integration of Gesture and Speech during Comprehension

    ERIC Educational Resources Information Center

    Kelly, Spencer D.; Ward, Sarah; Creigh, Peter; Bartolotti, James

    2007-01-01

    The present study investigates whether knowledge about the intentional relationship between gesture and speech influences controlled processes when integrating the two modalities at comprehension. Thirty-five adults watched short videos of gesture and speech that conveyed semantically congruous and incongruous information. In half of the videos,…

  9. Seeing a singer helps comprehension of the song's lyrics.

    PubMed

    Jesse, Alexandra; Massaro, Dominic W

    2010-06-01

    When listening to speech, we often benefit when also seeing the speaker's face. If this advantage is not domain specific for speech, the recognition of sung lyrics should also benefit from seeing the singer's face. By independently varying the sight and sound of the lyrics, we found a substantial comprehension benefit of seeing a singer. This benefit was robust across participants, lyrics, and repetition of the test materials. This benefit was much larger than the benefit for sung lyrics obtained in previous research, which had not provided the visual information normally present in singing. Given that the comprehension of sung lyrics benefits from seeing the singer, just like speech comprehension benefits from seeing the speaker, both speech and music perception appear to be multisensory processes.

  10. Genetics and language: a neurobiological perspective on the missing link (-ing hypotheses).

    PubMed

    Poeppel, David

    2011-12-01

    The paper argues that both evolutionary and genetic approaches to studying the biological foundations of speech and language could benefit from fractionating the problem at a finer grain, aiming not to map genetics to "language"-or even subdomains of language such as "phonology" or "syntax"-but rather to link genetic results to component formal operations that underlie processing the comprehension and production of linguistic representations. Neuroanatomic and neurophysiological research suggests that language processing is broken down in space (distributed functional anatomy along concurrent pathways) and time (concurrent processing on multiple time scales). These parallel neuronal pathways and their local circuits form the infrastructure of speech and language and are the actual targets of evolution/genetics. Therefore, investigating the mapping from gene to brain circuit to linguistic phenotype at the level of generic computational operations (subroutines actually executable in these circuits) stands to provide a new perspective on the biological foundations in the healthy and challenged brain.

  11. Accent, intelligibility, and comprehensibility in the perception of foreign-accented Lombard speech

    NASA Astrophysics Data System (ADS)

    Li, Chi-Nin

    2003-10-01

    Speech produced in noise (Lombard speech) has been reported to be more intelligible than speech produced in quiet (normal speech). This study examined the perception of non-native Lombard speech in terms of intelligibility, comprehensibility, and degree of foreign accent. Twelve Cantonese speakers and a comparison group of English speakers read simple true and false English statements in quiet and in 70 dB of masking noise. Lombard and normal utterances were mixed with noise at a constant signal-to-noise ratio, and presented along with noise-free stimuli to eight new English listeners who provided transcription scores, comprehensibility ratings, and accent ratings. Analyses showed that, as expected, utterances presented in noise were less well perceived than were noise-free sentences, and that the Cantonese speakers' productions were more accented, but less intelligible and less comprehensible than those of the English speakers. For both groups of speakers, the Lombard sentences were correctly transcribed more often than their normal utterances in noisy conditions. However, the Cantonese-accented Lombard sentences were not rated as easier to understand than was the normal speech in all conditions. The assigned accent ratings were similar throughout all listening conditions. Implications of these findings will be discussed.
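
    Mixing utterances with masking noise at a constant signal-to-noise ratio, as described above, amounts to scaling the noise relative to the speech power. A minimal sketch (the study's exact stimulus preparation is not specified in the abstract; the signals here are placeholders):

        # Mix a signal with noise at a fixed SNR by scaling the noise power.
        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            noise = noise[: len(speech)]
            p_speech = np.mean(speech ** 2)
            p_noise = np.mean(noise ** 2)
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
            return speech + gain * noise

        rng = np.random.default_rng(0)
        speech = rng.standard_normal(16000)   # stand-in for a recorded sentence
        noise = rng.standard_normal(16000)    # stand-in for masking noise
        mixture = mix_at_snr(speech, noise, snr_db=0.0)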

  12. The Bilingual Language Interaction Network for Comprehension of Speech

    PubMed Central

    Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension. PMID:24363602
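
    The abstract notes that BLINCS builds its interconnected processing levels from dynamic self-organizing maps. The toy fragment below illustrates only the self-organizing-map building block on random feature vectors; it is not a reimplementation of BLINCS, and all dimensions and parameters are arbitrary.

        # Toy self-organizing map: an 8x8 grid of units clusters random feature vectors.
        import numpy as np

        rng = np.random.default_rng(0)
        grid_h, grid_w, dim = 8, 8, 12          # map size and input dimensionality (arbitrary)
        weights = rng.random((grid_h, grid_w, dim))
        coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

        def train_som(data, weights, epochs=20, lr0=0.5, sigma0=3.0):
            for epoch in range(epochs):
                lr = lr0 * (1 - epoch / epochs)                # decaying learning rate
                sigma = sigma0 * (1 - epoch / epochs) + 0.5    # shrinking neighborhood
                for x in data:
                    dists = np.linalg.norm(weights - x, axis=-1)
                    bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
                    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
                    neighborhood = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
                    weights += lr * neighborhood[..., None] * (x - weights)
            return weights

        data = rng.random((200, dim))            # stand-in for phonetic/lexical feature vectors
        weights = train_som(data, weights)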

  13. Treating Speech Comprehensibility in Students with Down Syndrome

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Camarata, Stephen; Woynaroski, Tiffany

    2016-01-01

    Purpose: This study examined whether a particular type of therapy (Broad Target Speech Recasts, BTSR) was superior to a contrast treatment in facilitating speech comprehensibility in conversations of students with Down syndrome who began treatment with initially high verbal imitation. Method: We randomly assigned 51 5- to 12-year-old students to…

  14. Introduction to the Clinical Forum: Reading Comprehension Is Not a Single Ability.

    PubMed

    Gray, Shelley

    2017-04-20

    In this introduction to the clinical forum on reading comprehension, the Editor-in-Chief of Language, Speech, and Hearing Services in Schools provides data on our national reading comprehension problem, resources for increasing our understanding of reading comprehension, and a call to action for speech-language pathologists to work with educational teams to address poor reading comprehension in school-age children.

  15. Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech.

    PubMed

    Broderick, Michael P; Anderson, Andrew J; Di Liberto, Giovanni M; Crosse, Michael J; Lalor, Edmund C

    2018-03-05

    People routinely hear and understand speech at rates of 120-200 words per minute [1, 2]. Thus, speech comprehension must involve rapid, online neural mechanisms that process words' meanings in an approximately time-locked fashion. However, electrophysiological evidence for such time-locked processing has been lacking for continuous speech. Although valuable insights into semantic processing have been provided by the "N400 component" of the event-related potential [3-6], this literature has been dominated by paradigms using incongruous words within specially constructed sentences, with less emphasis on natural, narrative speech comprehension. Building on the discovery that cortical activity "tracks" the dynamics of running speech [7-9] and psycholinguistic work demonstrating [10-12] and modeling [13-15] how context impacts on word processing, we describe a new approach for deriving an electrophysiological correlate of natural speech comprehension. We used a computational model [16] to quantify the meaning carried by words based on how semantically dissimilar they were to their preceding context and then regressed this measure against electroencephalographic (EEG) data recorded from subjects as they listened to narrative speech. This produced a prominent negativity at a time lag of 200-600 ms on centro-parietal EEG channels, characteristics common to the N400. Applying this approach to EEG datasets involving time-reversed speech, cocktail party attention, and audiovisual speech-in-noise demonstrated that this response was very sensitive to whether or not subjects understood the speech they heard. These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion. Copyright © 2018 Elsevier Ltd. All rights reserved.
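
    A hedged sketch of the dissimilarity measure described here: each word's semantic dissimilarity is taken as 1 minus the cosine similarity between its vector and the average vector of the preceding context words. Randomly generated placeholder vectors stand in for the specific computational model the study used, and the word list is invented.

        # Per-word semantic dissimilarity relative to the preceding context.
        import numpy as np

        rng = np.random.default_rng(0)
        vocab = ["the", "speech", "was", "easy", "to", "understand"]
        vectors = {w: rng.standard_normal(50) for w in vocab}   # placeholder word embeddings

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def semantic_dissimilarity(words, vectors):
            scores = [0.0]                                      # first word has no preceding context
            for i in range(1, len(words)):
                context = np.mean([vectors[w] for w in words[:i]], axis=0)
                scores.append(1.0 - cosine(vectors[words[i]], context))
            return np.array(scores)

        story = ["the", "speech", "was", "easy", "to", "understand"]
        dissim = semantic_dissimilarity(story, vectors)
        # In the study, this per-word measure (aligned to word onsets) was regressed
        # against continuous EEG, yielding a negativity at lags of ~200-600 ms.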

  16. Does Use of Text-to-Speech and Related Read-Aloud Tools Improve Reading Comprehension for Students with Reading Disabilities? A Meta-Analysis

    ERIC Educational Resources Information Center

    Wood, Sarah G.; Moxley, Jerad H.; Tighe, Elizabeth L.; Wagner, Richard K.

    2018-01-01

    Text-to-speech and related read-aloud tools are being widely implemented in an attempt to assist students' reading comprehension skills. Read-aloud software, including text-to-speech, is used to translate written text into spoken text, enabling one to listen to written text while reading along. It is not clear how effective text-to-speech is at…

  17. Neural dynamics of speech act comprehension: an MEG study of naming and requesting.

    PubMed

    Egorova, Natalia; Pulvermüller, Friedemann; Shtyrov, Yury

    2014-05-01

    The neurobiological basis and temporal dynamics of communicative language processing pose important yet unresolved questions. It has previously been suggested that comprehension of the communicative function of an utterance, i.e. the so-called speech act, is supported by an ensemble of neural networks, comprising lexico-semantic, action and mirror neuron as well as theory of mind circuits, all activated in concert. It has also been demonstrated that recognition of the speech act type occurs extremely rapidly. These findings however, were obtained in experiments with insufficient spatio-temporal resolution, thus possibly concealing important facets of the neural dynamics of the speech act comprehension process. Here, we used magnetoencephalography to investigate the comprehension of Naming and Request actions performed with utterances controlled for physical features, psycholinguistic properties and the probability of occurrence in variable contexts. The results show that different communicative actions are underpinned by a dynamic neural network, which differentiates between speech act types very early after the speech act onset. Within 50-90 ms, Requests engaged mirror-neuron action-comprehension systems in sensorimotor cortex, possibly for processing action knowledge and intentions. Still, within the first 200 ms of stimulus onset (100-150 ms), Naming activated brain areas involved in referential semantic retrieval. Subsequently (200-300 ms), theory of mind and mentalising circuits were activated in medial prefrontal and temporo-parietal areas, possibly indexing processing of intentions and assumptions of both communication partners. This cascade of stages of processing information about actions and intentions, referential semantics, and theory of mind may underlie dynamic and interactive speech act comprehension.

  18. The Impact of Dysphonic Voices on Healthy Listeners: Listener Reaction Times, Speech Intelligibility, and Listener Comprehension.

    PubMed

    Evitts, Paul M; Starmer, Heather; Teets, Kristine; Montgomery, Christen; Calhoun, Lauren; Schulze, Allison; MacKenzie, Jenna; Adams, Lauren

    2016-11-01

    There is currently minimal information on the impact of dysphonia secondary to phonotrauma on listeners. Considering the high incidence of voice disorders with professional voice users, it is important to understand the impact of a dysphonic voice on their audiences. Ninety-one healthy listeners (39 men, 52 women; mean age = 23.62 years) were presented with speech stimuli from 5 healthy speakers and 5 speakers diagnosed with dysphonia secondary to phonotrauma. Dependent variables included processing speed (reaction time [RT] ratio), speech intelligibility, and listener comprehension. Voice quality ratings were also obtained for all speakers by 3 expert listeners. Statistical results showed significant differences between RT ratio and number of speech intelligibility errors between healthy and dysphonic voices. There was not a significant difference in listener comprehension errors. Multiple regression analyses showed that voice quality ratings from the Consensus Assessment Perceptual Evaluation of Voice (Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kraemer, & Hillman, 2009) were able to predict RT ratio and speech intelligibility but not listener comprehension. Results of the study suggest that although listeners require more time to process and have more intelligibility errors when presented with speech stimuli from speakers with dysphonia secondary to phonotrauma, listener comprehension may not be affected.

  19. Auditory training changes temporal lobe connectivity in 'Wernicke's aphasia': a randomised trial.

    PubMed

    Woodhead, Zoe Vj; Crinion, Jennifer; Teki, Sundeep; Penny, Will; Price, Cathy J; Leff, Alexander P

    2017-07-01

    Aphasia is one of the most disabling sequelae after stroke, occurring in 25%-40% of stroke survivors. However, there remains a lack of good evidence for the efficacy or mechanisms of speech comprehension rehabilitation. This within-subjects trial tested two concurrent interventions in 20 patients with chronic aphasia with speech comprehension impairment following left hemisphere stroke: (1) phonological training using 'Earobics' software and (2) a pharmacological intervention using donepezil, an acetylcholinesterase inhibitor. Donepezil was tested in a double-blind, placebo-controlled, cross-over design using block randomisation with bias minimisation. The primary outcome measure was speech comprehension score on the comprehensive aphasia test. Magnetoencephalography (MEG) with an established index of auditory perception, the mismatch negativity response, tested whether the therapies altered effective connectivity at the lower (primary) or higher (secondary) level of the auditory network. Phonological training improved speech comprehension abilities and was particularly effective for patients with severe deficits. No major adverse effects of donepezil were observed, but it had an unpredicted negative effect on speech comprehension. The MEG analysis demonstrated that phonological training increased synaptic gain in the left superior temporal gyrus (STG). Patients with more severe speech comprehension impairments also showed strengthening of bidirectional connections between the left and right STG. Phonological training resulted in a small but significant improvement in speech comprehension, whereas donepezil had a negative effect. The connectivity results indicated that training reshaped higher order phonological representations in the left STG and (in more severe patients) induced stronger interhemispheric transfer of information between higher levels of auditory cortex. Clinical trial registration: This trial was registered with EudraCT (2005-004215-30, https://eudract.ema.europa.eu/) and ISRCTN (68939136, http://www.isrctn.com/). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Halting in Single Word Production: A Test of the Perceptual Loop Theory of Speech Monitoring

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2006-01-01

    The "perceptual loop theory" of speech monitoring (Levelt, 1983) claims that inner and overt speech are monitored by the comprehension system, which detects errors by comparing the comprehension of formulated utterances to originally intended utterances. To test the perceptual loop monitor, speakers named pictures and sometimes attempted to halt…

  1. The Role of the Right Hemisphere in Speech Act Comprehension

    ERIC Educational Resources Information Center

    Holtgraves, Thomas

    2012-01-01

    In this research the role of the RH in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the…

  2. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant against backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck, seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.

  3. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects?

    PubMed Central

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2013-01-01

    In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant against backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for “reading” texts at ultra-fast speaking rates (>16 syllables/s), exceeding by far the normal range of 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability, among other brain regions, significantly covaries with BOLD responses in bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the “bottleneck” for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck, seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition. PMID:23966968
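
    Time-compressed speech of the kind discussed in this record can be generated with any off-the-shelf phase vocoder. The sketch below uses librosa purely as an illustration; the file name is hypothetical, and this is not the text-to-speech system the blind listeners trained with.

        # Phase-vocoder time compression: 3x faster playback with pitch preserved.
        import librosa
        import soundfile as sf

        y, sr = librosa.load("sentence.wav", sr=None)        # hypothetical input recording
        compressed = librosa.effects.time_stretch(y, rate=3.0)
        sf.write("sentence_3x.wav", compressed, sr)

        # Rough syllable-rate arithmetic: if the original runs at ~6 syllables/s,
        # a rate of 3.0 yields ~18 syllables/s, within the "ultra-fast" range cited above.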

  4. A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception

    NASA Astrophysics Data System (ADS)

    Scott, Sophie K.; Rosen, Stuart; Wickham, Lindsay; Wise, Richard J. S.

    2004-02-01

    Positron emission tomography (PET) was used to investigate the neural basis of the comprehension of speech in unmodulated noise ("energetic" masking, dominated by effects at the auditory periphery), and when presented with another speaker ("informational" masking, dominated by more central effects). Each type of signal was presented at four different signal-to-noise ratios (SNRs) (+3, 0, -3, -6 dB for the speech-in-speech, +6, +3, 0, -3 dB for the speech-in-noise), with listeners instructed to listen for meaning to the target speaker. Consistent with behavioral studies, there was SNR-dependent activation associated with the comprehension of speech in noise, with no SNR-dependent activity for the comprehension of speech-in-speech (at low or negative SNRs). There was, in addition, activation in bilateral superior temporal gyri which was associated with the informational masking condition. The extent to which this activation of classical "speech" areas of the temporal lobes might delineate the neural basis of the informational masking is considered, as is the relationship of these findings to the interfering effects of unattended speech and sound on more explicit working memory tasks. This study is a novel demonstration of candidate neural systems involved in the perception of speech in noisy environments, and of the processing of multiple speakers in the dorso-lateral temporal lobes.

  5. Language Comprehension in Language-Learning Impaired Children Improved with Acoustically Modified Speech

    NASA Astrophysics Data System (ADS)

    Tallal, Paula; Miller, Steve L.; Bedi, Gail; Byma, Gary; Wang, Xiaoqin; Nagarajan, Srikantan S.; Schreiner, Christoph; Jenkins, William M.; Merzenich, Michael M.

    1996-01-01

    A speech processing algorithm was developed to create more salient versions of the rapidly changing elements in the acoustic waveform of speech that have been shown to be deficiently processed by language-learning impaired (LLI) children. LLI children received extensive daily training, over a 4-week period, with listening exercises in which all speech was translated into this synthetic form. They also received daily training with computer "games" designed to adaptively drive improvements in temporal processing thresholds. Significant improvements in speech discrimination and language comprehension abilities were demonstrated in two independent groups of LLI children.

  6. Do Native Speakers of North American and Singapore English Differentially Perceive Comprehensibility in Second Language Speech?

    ERIC Educational Resources Information Center

    Saito, Kazuya; Shintani, Natsuko

    2016-01-01

    The current study examined the extent to which native speakers of North American and Singapore English differentially perceive the comprehensibility (ease of understanding) of second language (L2) speech. Spontaneous speech samples elicited from 50 Japanese learners of English with various proficiency levels were first rated by 10 Canadian and 10…

  7. Language Sampling for Preschoolers With Severe Speech Impairments

    PubMed Central

    Ragsdale, Jamie; Bustos, Aimee

    2016-01-01

    Purpose: The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method: Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results: Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion: Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110

  8. Language Sampling for Preschoolers With Severe Speech Impairments.

    PubMed

    Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-11-01

    The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.
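
    Two of the language-sample measures named above, MLU in words and percentage of comprehensible words, are straightforward to compute once utterances are transcribed. The sketch below uses invented utterances and assumes, purely for illustration, that unintelligible words are marked "xxx" (not necessarily the authors' transcription convention).

        # MLU in words and percentage of comprehensible words from toy transcripts.
        utterances = [
            "want more juice",
            "xxx ball go",          # "xxx" marks an unintelligible word (assumed convention)
            "doggie xxx xxx",
            "I see the big truck",
        ]

        def mlu_in_words(utterances):
            return sum(len(u.split()) for u in utterances) / len(utterances)

        def percent_comprehensible(utterances, unintelligible="xxx"):
            words = [w for u in utterances for w in u.split()]
            ok = sum(1 for w in words if w != unintelligible)
            return 100.0 * ok / len(words)

        print(f"MLU (words): {mlu_in_words(utterances):.2f}")              # 3.50
        print(f"Comprehensible words: {percent_comprehensible(utterances):.1f}%")  # 78.6%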

  9. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    ERIC Educational Resources Information Center

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  10. Disentangling Accent from Comprehensibility

    ERIC Educational Resources Information Center

    Trofimovich, Pavel; Isaacs, Talia

    2012-01-01

    The goal of this study was to determine which linguistic aspects of second language speech are related to accent and which to comprehensibility. To address this goal, 19 different speech measures in the oral productions of 40 native French speakers of English were examined in relation to accent and comprehensibility, as rated by 60 novice raters…

  11. The Role of Speech Prosody and Text Reading Prosody in Children's Reading Comprehension

    ERIC Educational Resources Information Center

    Veenendaal, Nathalie J.; Groen, Margriet A.; Verhoeven, Ludo

    2014-01-01

    Background: Text reading prosody has been associated with reading comprehension. However, text reading prosody is a reading-dependent measure that relies heavily on decoding skills. Investigation of the contribution of speech prosody--which is independent from reading skills--in addition to text reading prosody, to reading comprehension could…

  12. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design

    ERIC Educational Resources Information Center

    Higgins, Meaghan C.; Penney, Sarah B.; Robertson, Erin K.

    2017-01-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control,…

  13. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    ERIC Educational Resources Information Center

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  14. Lexical Profiles of Comprehensible Second Language Speech: The Role of Appropriateness, Fluency, Variation, Sophistication, Abstractness, and Sense Relations

    ERIC Educational Resources Information Center

    Saito, Kazuya; Webb, Stuart; Trofimovich, Pavel; Isaacs, Talia

    2016-01-01

    This study examined contributions of lexical factors to native-speaking raters' assessments of comprehensibility (ease of understanding) of second language (L2) speech. Extemporaneous oral narratives elicited from 40 French speakers of L2 English were transcribed and evaluated for comprehensibility by 10 raters. Subsequently, the samples were…

  15. Examining the relationship between comprehension and production processes in code-switched language

    PubMed Central

    Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.

    2016-01-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049

  16. Examining the relationship between comprehension and production processes in code-switched language.

    PubMed

    Guzzardo Tamargo, Rosa E; Valdés Kroff, Jorge R; Dussias, Paola E

    2016-08-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish-English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants' comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension.

  17. Experience-Related Structural Changes of Degenerated Occipital White Matter in Late-Blind Humans – A Diffusion Tensor Imaging Study

    PubMed Central

    Dietrich, Susanne; Hertrich, Ingo; Kumar, Vinod; Ackermann, Hermann

    2015-01-01

    Late-blind humans can learn to understand speech at ultra-fast syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. Thus, the observed functional cross-modal recruitment of occipital cortex might facilitate ultra-fast speech processing in these individuals. To further elucidate the structural prerequisites of this skill, diffusion tensor imaging (DTI) was conducted in late-blind subjects differing in their capability of understanding ultra-fast speech. Fractional anisotropy (FA) was determined as a quantitative measure of the directionality of water diffusion, indicating fiber tract characteristics that might be influenced by blindness as well as the acquired perceptual skills. Analysis of the diffusion images revealed reduced FA in late-blind individuals relative to sighted controls at the level of the optic radiations at either side and the right-hemisphere dorsal thalamus (pulvinar). Moreover, late-blind subjects showed significant positive correlations between FA and the capacity of ultra-fast speech comprehension within right-hemisphere optic radiation and thalamus. Thus, experience-related structural alterations occurred in late-blind individuals within visual pathways that, presumably, are linked to higher order frontal language areas. PMID:25830371
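
    Fractional anisotropy, the diffusion measure correlated with ultra-fast speech comprehension in this study, is a standard function of the three eigenvalues of the diffusion tensor. A minimal sketch on made-up eigenvalues:

        # Fractional anisotropy from the diffusion tensor's eigenvalues.
        import numpy as np

        def fractional_anisotropy(l1, l2, l3):
            lam = np.array([l1, l2, l3], dtype=float)
            mean = lam.mean()
            num = np.sqrt(((lam - mean) ** 2).sum())
            den = np.sqrt((lam ** 2).sum())
            return np.sqrt(1.5) * num / den          # 0 = isotropic, 1 = maximally anisotropic

        # Example: a voxel with strongly directional diffusion vs. a nearly isotropic one
        print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3))   # high FA (coherent fiber tract)
        print(fractional_anisotropy(0.8e-3, 0.7e-3, 0.75e-3))  # low FA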

  18. Visual activity predicts auditory recovery from deafness after adult cochlear implantation.

    PubMed

    Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal

    2013-12-01

    Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the high progress of speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area, which positively correlated with auditory speech recovery, was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.

  19. Compressed Speech: Potential Application for Air Force Technical Training. Final Report, August 73-November 73.

    ERIC Educational Resources Information Center

    Dailey, K. Anne

    Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…

  20. [Test set for the evaluation of hearing and speech development after cochlear implantation in children].

    PubMed

    Lamprecht-Dinnesen, A; Sick, U; Sandrieser, P; Illg, A; Lesinski-Schiedat, A; Döring, W H; Müller-Deile, J; Kiefer, J; Matthias, K; Wüst, A; Konradi, E; Riebandt, M; Matulat, P; Von Der Haar-Heise, S; Swart, J; Elixmann, K; Neumann, K; Hildmann, A; Coninx, F; Meyer, V; Gross, M; Kruse, E; Lenarz, T

    2002-10-01

    Since autumn 1998 the multicenter interdisciplinary study group "Test Materials for CI Children" has been compiling a uniform examination tool for evaluation of speech and hearing development after cochlear implantation in childhood. After studying the relevant literature, suitable materials were checked for practical applicability, modified and provided with criteria for execution and break-off. For data acquisition, observation forms for preparation of a PC-version were developed. The evaluation set contains forms for master data with supplements relating to postoperative processes. The hearing tests check supra-threshold hearing with loudness scaling for children, speech comprehension in silence (Mainz and Göttingen Test for Speech Comprehension in Childhood) and phonemic differentiation (Oldenburg Rhyme Test for Children), the central auditory processes of detection, discrimination, identification and recognition (modification of the "Frankfurt Functional Hearing Test for Children") and audiovisual speech perception (Open Paragraph Tracking, Kiel Speech Track Program). The materials for speech and language development comprise phonetics-phonology, lexicon and semantics (LOGO Pronunciation Test), syntax and morphology (analysis of spontaneous speech), language comprehension (Reynell Scales), communication and pragmatics (observation forms). The MAIS and MUSS modified questionnaires are integrated. The evaluation set serves quality assurance and permits factor analysis as well as controls for regularity through the multicenter comparison of long-term developmental trends after cochlear implantation.

  1. Neural Tuning to Low-Level Features of Speech throughout the Perisylvian Cortex.

    PubMed

    Berezutskaya, Julia; Freudenburg, Zachary V; Güçlü, Umut; van Gerven, Marcel A J; Ramsey, Nick F

    2017-08-16

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from posterior superior temporal cortex toward inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain. SIGNIFICANCE STATEMENT We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in inferior frontal gyrus typically associated with high-level language processing. These findings add to the previous work on auditory processing and underline a distinctive role of inferior frontal gyrus in natural speech comprehension. Copyright © 2017 the authors 0270-6474/17/377906-15$15.00/0.
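
    In the spirit of the encoding analysis described above (a generic sketch, not the authors' ECoG/fMRI pipeline; the audio file and the neural time series are placeholders), low-level spectral features can be extracted from the stimulus and regressed against a neural channel with ridge regression:

        # Simple encoding-model sketch: predict one neural channel from log-mel features.
        import numpy as np
        import librosa
        from sklearn.linear_model import Ridge

        y, sr = librosa.load("audio.wav", sr=16000)                      # hypothetical stimulus audio
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=32,
                                             hop_length=160)            # ~10 ms frames
        X = np.log(mel + 1e-8).T                                         # frames x features

        rng = np.random.default_rng(0)
        neural = rng.standard_normal(X.shape[0])                         # stand-in for one recorded channel

        split = int(0.8 * len(X))
        model = Ridge(alpha=1.0).fit(X[:split], neural[:split])
        r = np.corrcoef(model.predict(X[split:]), neural[split:])[0, 1]  # held-out prediction accuracy
        print(f"held-out correlation: {r:.3f}")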

  2. Comparing the Impact of Rates of Text-to-Speech Software on Reading Fluency and Comprehension for Adults with Reading Difficulties

    ERIC Educational Resources Information Center

    Coleman, Mari Beth; Killdare, Laura K.; Bell, Sherry Mee; Carter, Amanda M.

    2014-01-01

    The purpose of this study was to determine the impact of text-to-speech software on reading fluency and comprehension for four postsecondary students with below average reading fluency and comprehension including three students diagnosed with learning disabilities and concomitant conditions (e.g., attention deficit hyperactivity disorder, seizure…

  3. Right anterior superior temporal activation predicts auditory sentence comprehension following aphasic stroke.

    PubMed

    Crinion, Jenny; Price, Cathy J

    2005-12-01

    Previous studies have suggested that recovery of speech comprehension after left hemisphere infarction may depend on a mechanism in the right hemisphere. However, the role that distinct right hemisphere regions play in speech comprehension following left hemisphere stroke has not been established. Here, we used functional magnetic resonance imaging (fMRI) to investigate narrative speech activation in 18 neurologically normal subjects and 17 patients with left hemisphere stroke and a history of aphasia. Activation for listening to meaningful stories relative to meaningless reversed speech was identified in the normal subjects and in each patient. Second level analyses were then used to investigate how story activation changed with the patients' auditory sentence comprehension skills and surprise story recognition memory tests post-scanning. Irrespective of lesion site, performance on tests of auditory sentence comprehension was positively correlated with activation in the right lateral superior temporal region, anterior to primary auditory cortex. In addition, when the stroke spared the left temporal cortex, good performance on tests of auditory sentence comprehension was also correlated with the left posterior superior temporal cortex (Wernicke's area). In distinct contrast to this, good story recognition memory predicted left inferior frontal and right cerebellar activation. The implication of this double dissociation in the effects of auditory sentence comprehension and story recognition memory is that left frontal and left temporal activations are dissociable. Our findings strongly support the role of the right temporal lobe in processing narrative speech and, in particular, auditory sentence comprehension following left hemisphere aphasic stroke. In addition, they highlight the importance of the right anterior superior temporal cortex where the response was dissociated from that in the left posterior temporal lobe.

  4. Evaluating the sources and functions of gradiency in phoneme categorization: An individual differences approach.

    PubMed

    Kapnoula, Efthymia C; Winn, Matthew B; Kong, Eun Jong; Edwards, Jan; McMurray, Bob

    2017-09-01

    During spoken language comprehension listeners transform continuous acoustic cues into categories (e.g., /b/ and /p/). While long-standing research suggests that phonetic categories are activated in a gradient way, there are also clear individual differences in that more gradient categorization has been linked to various communication impairments such as dyslexia and specific language impairments (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987). Crucially, most studies have used 2-alternative forced choice (2AFC) tasks to measure the sharpness of between-category boundaries. Here we propose an alternative paradigm that allows us to measure categorization gradiency in a more direct way. Furthermore, we follow an individual differences approach to (a) link this measure of gradiency to multiple cue integration, (b) explore its relationship to a set of other cognitive processes, and (c) evaluate its role in individuals' ability to perceive speech in noise. Our results provide validation for this new method of assessing phoneme categorization gradiency and offer preliminary insights into how different aspects of speech perception may be linked to each other and to more general cognitive processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
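
    As context for the 2AFC measure the authors critique, the sharpness of a category boundary is conventionally summarized by the slope of a logistic psychometric function fitted to identification responses along an acoustic continuum. The sketch below, with invented response proportions and a hypothetical VOT continuum, shows that conventional estimate; it is not the visual-analog-scale paradigm proposed in the paper.

        # Sketch: estimate the sharpness of a phoneme category boundary from
        # 2AFC identification data by fitting a logistic psychometric function.
        # The slope parameter indexes how gradient vs. steep categorization is.
        # Data are simulated for illustration only.
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x0, k):
            """Proportion of /p/ responses as a function of VOT (ms)."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        vot_steps = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)  # VOT continuum
        prop_p = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.95, 0.99])   # simulated listener

        (x0, k), _ = curve_fit(logistic, vot_steps, prop_p, p0=[30.0, 0.2])
        print(f"category boundary ~{x0:.1f} ms VOT, slope k = {k:.3f}")
        # A smaller k (shallower slope) would indicate more gradient categorization.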

  5. Functional Overlap between Regions Involved in Speech Perception and in Monitoring One's Own Voice during Speech Production

    ERIC Educational Resources Information Center

    Zheng, Zane Z.; Munhall, Kevin G.; Johnsrude, Ingrid S.

    2010-01-01

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or…

  6. Neuronal basis of speech comprehension.

    PubMed

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, the structural and functional asymmetry of language-relevant structures is discussed. The second part of the review discusses recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning as well as for processing sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion on interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. N400 ERPs for actions: building meaning in context

    PubMed Central

    Amoruso, Lucía; Gelormini, Carlos; Aboitiz, Francisco; Alvarez González, Miguel; Manes, Facundo; Cardona, Juan F.; Ibanez, Agustín

    2013-01-01

    Converging neuroscientific evidence suggests the existence of close links between language and sensorimotor cognition. Accordingly, during the comprehension of meaningful actions, our brain would recruit semantic-related operations similar to those associated with the processing of language information. Consistent with this view, electrophysiological findings show that the N400 component, traditionally linked to the semantic processing of linguistic material, can also be elicited by action-related material. This review outlines recent data from N400 studies that examine the understanding of action events. We focus on three specific domains, including everyday action comprehension, co-speech gesture integration, and the semantics involved in motor planning and execution. Based on the reviewed findings, we suggest that both negativities (the N400 and the action-N400) reflect a common neurocognitive mechanism involved in the construction of meaning through the expectancies created by previous experiences and current contextual information. To shed light on how this process is instantiated in the brain, a testable contextual fronto-temporo-parietal model is proposed. PMID:23459873

  8. Phonological Awareness and Speech Comprehensibility: An Exploratory Study

    ERIC Educational Resources Information Center

    Venkatagiri, H. S.; Levis, John M.

    2007-01-01

    This study examined whether differences in phonological awareness were related to differences in speech comprehensibility. Seventeen adults who learned English as a foreign language (EFL) in academic settings completed 14 tests of phonological awareness that measured their explicit knowledge of English phonological structures, and three tests of…

  9. Further Research on Speeded Speech as an Educational Medium. Effects of Listening Aids and Self-Pacing on Comprehension and the Use of Compressed Speech for Review. Progress Report Number 4.

    ERIC Educational Resources Information Center

    Friedman, Herbert L.; And Others

    The studies reported here are a continuation of research into the comprehension of time-compressed speech by normal college students. In the Listening Aid Study II, an experiment was designed to retest the advantages of the precis as a listening aid when the precis expressed the overall meaning of a passage. Also, a new listening aid was…

  10. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    ERIC Educational Resources Information Center

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  11. Reduced Performance During a Sentence Repetition Task by Continuous Theta-Burst Magnetic Stimulation of the Pre-supplementary Motor Area.

    PubMed

    Dietrich, Susanne; Hertrich, Ingo; Müller-Dahlhaus, Florian; Ackermann, Hermann; Belardinelli, Paolo; Desideri, Debora; Seibold, Verena C; Ziemann, Ulf

    2018-01-01

    The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient "virtual lesion" using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified by the percentage of correctly reproduced speech material. For high speech rates, subjects showed decreased performance after cTBS of pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of pre-SMA seems to affect its inhibitory function that normally eliminates erroneous speech material prior to speaking or, in case of perception, prior to encoding into a semantically/pragmatically meaningful message.
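
    Comprehension in this task is expressed as the percentage of correctly reproduced speech material per rate condition. The sketch below scores such a task with simple word-level matching on invented trials; the study's actual scoring, including the classification of semantically or phonologically related errors, is more fine-grained.

        # Sketch: score a sentence repetition task as percent correctly
        # reproduced words per speech-rate condition. Word matching here is a
        # plain membership check; the study's scoring and error classification
        # (semantic/phonological relatedness) were more detailed.
        def percent_correct(target: str, response: str) -> float:
            target_words = target.lower().split()
            response_words = response.lower().split()
            hits = sum(1 for w in target_words if w in response_words)
            return 100.0 * hits / len(target_words)

        # Invented example trials keyed by speech rate (syllables/s)
        trials = {
            8:  [("the dog chased the cat", "the dog chased the cat")],
            16: [("the dog chased the cat", "the dog the hat")],
        }
        for rate, pairs in trials.items():
            scores = [percent_correct(t, r) for t, r in pairs]
            print(f"{rate} syllables/s: {sum(scores)/len(scores):.0f}% words correct")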

  12. Reduced Performance During a Sentence Repetition Task by Continuous Theta-Burst Magnetic Stimulation of the Pre-supplementary Motor Area

    PubMed Central

    Dietrich, Susanne; Hertrich, Ingo; Müller-Dahlhaus, Florian; Ackermann, Hermann; Belardinelli, Paolo; Desideri, Debora; Seibold, Verena C.; Ziemann, Ulf

    2018-01-01

    The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient “virtual lesion” using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified by the percentage of correctly reproduced speech material. For high speech rates, subjects showed decreased performance after cTBS of pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of pre-SMA seems to affect its inhibitory function that normally eliminates erroneous speech material prior to speaking or, in case of perception, prior to encoding into a semantically/pragmatically meaningful message. PMID:29896086

  13. Cognitive Functions in Childhood Apraxia of Speech

    ERIC Educational Resources Information Center

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  14. Timing of Gestures: Gestures Anticipating or Simultaneous with Speech as Indexes of Text Comprehension in Children and Adults

    ERIC Educational Resources Information Center

    Ianì, Francesco; Cutica, Ilaria; Bucciarelli, Monica

    2017-01-01

    The deep comprehension of a text is tantamount to the construction of an articulated mental model of that text. The number of correct recollections is an index of a learner's mental model of a text. We assume that another index of comprehension is the timing of the gestures produced during text recall; gestures are simultaneous with speech when…

  15. Why do Alzheimer patients have difficulty with pronouns? Working memory, semantics, and reference in comprehension and production in Alzheimer's disease.

    PubMed

    Almor, A; Kempler, D; MacDonald, M C; Andersen, E S; Tyler, L K

    1999-05-01

    Three experiments investigated the extent to which semantic and working-memory deficits contribute to Alzheimer patients' impairments in producing and comprehending referring expressions. In Experiment 1, the spontaneous speech of 11 patients with Alzheimer's disease (AD) contained a greater ratio of pronouns to full noun phrases than did the spontaneous speech produced by 9 healthy controls. Experiments 2 and 3 used a cross-modal naming methodology to compare reference comprehension in another group of 10 patients and 10 age-matched controls. In Experiment 2, patients were less sensitive than healthy controls to the grammatical information necessary for processing pronouns. In Experiment 3, patients were better able to remember referent information in short paragraphs when reference was maintained with full noun phrases rather than pronouns, but healthy controls showed the reverse pattern. Performance in all three experiments was linked to working memory performance but not to word finding difficulty. We discuss these findings in terms of a theory of reference processing, the Informational Load Hypothesis, which views referential impairments in AD as the consequence of normal discourse processing in the context of a working memory impairment. Copyright 1999 Academic Press.

  16. The Bilingual Language Interaction Network for Comprehension of Speech

    ERIC Educational Resources Information Center

    Shook, Anthony; Marian, Viorica

    2013-01-01

    During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can…

  17. Adults Who Read Like Children: The Psycholinguistic Bases. Final Report.

    ERIC Educational Resources Information Center

    Read, Charles

    A study examined basic reading skills among men in prison, comparing poor and adequate readers with respect to comprehension, decoding, short-term memory, and speech perception. The subjects, 88 inmates at a minimum-security prison with normal intelligence, normal hearing, and no significant speech abnormalities, were given reading comprehension tests…

  18. Speech comprehension aided by multiple modalities: behavioural and neural interactions

    PubMed Central

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K.

    2014-01-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources – e.g. voice, face, gesture, linguistic context – to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. PMID:22266262
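
    Auditory Clarity in this paradigm was manipulated with noise-vocoding, which divides speech into frequency bands, extracts each band's amplitude envelope, and uses the envelope to modulate band-limited noise. The sketch below is a generic illustration of that technique with arbitrary band edges and a synthetic signal; it does not reproduce the study's exact stimulus parameters.

        # Sketch of noise-vocoding: filter speech into frequency bands, extract
        # each band's amplitude envelope, and re-impose it on band-limited noise.
        # Band edges, filter orders and the input signal are illustrative only.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(speech, fs, band_edges_hz):
            rng = np.random.default_rng(0)
            carrier = rng.standard_normal(len(speech))
            out = np.zeros_like(speech, dtype=float)
            for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfiltfilt(sos, speech)
                envelope = np.abs(hilbert(band))          # band amplitude envelope
                noise_band = sosfiltfilt(sos, carrier)    # band-limited noise carrier
                out += envelope * noise_band
            return out / np.max(np.abs(out))              # normalise peak amplitude

        # Example with a synthetic 1-s signal at 16 kHz and 4 bands
        fs = 16000
        t = np.arange(fs) / fs
        speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
        vocoded = noise_vocode(speech, fs, band_edges_hz=[100, 500, 1000, 2000, 4000])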

  19. Speech comprehension aided by multiple modalities: behavioural and neural interactions.

    PubMed

    McGettigan, Carolyn; Faulkner, Andrew; Altarelli, Irene; Obleser, Jonas; Baverstock, Harriet; Scott, Sophie K

    2012-04-01

    Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Speech-language pathology program for reading comprehension and orthography: effects on the spelling of dyslexic individuals.

    PubMed

    Nogueira, Débora Manzano; Cárnio, Maria Silvia

    2018-01-01

    Purpose Prepare a Speech-language Pathology Program for Reading Comprehension and Orthography and verify its effects on the reading comprehension and spelling of students with Developmental Dyslexia. Methods The study sample was composed of eleven individuals (eight males) diagnosed with Developmental Dyslexia, aged 9-11 years. All participants underwent a Speech-language Pathology Program in Reading Comprehension and Orthography comprising 16 individual weekly sessions. In each session, text reading comprehension and orthography tasks were administered. At the beginning and end of the Program, the participants completed a specific assessment (pre- and post-test). Results The participants showed difficulty with reading comprehension, but the Cloze technique proved to be a useful remediation tool, and significant improvement in their performance was observed in the post-test evaluation. The dyslexic individuals showed poor performance for their educational level in the spelling assessment. At the end of the program, their performance improved but remained below the expected level, showing the same error pattern at the pre- and post-tests, with errors in both natural and arbitrary spelling. Conclusion The proposed Speech-language Pathology Program for Reading Comprehension and Orthography produced positive effects on the reading comprehension, spelling, and motivation for reading and writing of the participants. This study presents an unprecedented contribution by proposing joint stimulation of reading and writing by means of a program that is easy to apply and analyze in individuals with Developmental Dyslexia.

  1. Brain Volume Differences Associated With Hearing Impairment in Adults

    PubMed Central

    Vriend, Chris; Heslenfeld, Dirk J.; Versfeld, Niek J.; Kramer, Sophia E.

    2018-01-01

    Speech comprehension depends on the successful operation of a network of brain regions. Processing of degraded speech is associated with different patterns of brain activity in comparison with that of high-quality speech. In this exploratory study, we studied whether processing degraded auditory input in daily life because of hearing impairment is associated with differences in brain volume. We compared T1-weighted structural magnetic resonance images of 17 hearing-impaired (HI) adults with those of 17 normal-hearing (NH) controls using a voxel-based morphometry analysis. HI adults were individually matched with NH adults based on age and educational level. Gray and white matter brain volumes were compared between the groups by region-of-interest analyses in structures associated with speech processing, and by whole-brain analyses. The results suggest increased gray matter volume in the right angular gyrus and decreased white matter volume in the left fusiform gyrus in HI listeners as compared with NH ones. In the HI group, there was a significant correlation between hearing acuity and cluster volume of the gray matter cluster in the right angular gyrus. This correlation supports the link between partial hearing loss and altered brain volume. The alterations in volume may reflect the operation of compensatory mechanisms that are related to decoding meaning from degraded auditory input. PMID:29557274

  2. The cortical representation of the speech envelope is earlier for audiovisual speech than audio speech.

    PubMed

    Crosse, Michael J; Lalor, Edmund C

    2014-04-01

    Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
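
    The forward mapping from the speech envelope to EEG described above can be sketched as a lagged linear regression (a temporal response function), with the representation latency read off the lag of the largest weights. The example below uses simulated data and a plain ridge solution; it illustrates the idea rather than the authors' estimation procedure.

        # Sketch of a forward envelope-to-EEG mapping (temporal response
        # function): regress simulated EEG onto time-lagged copies of the
        # speech envelope. The lag of the largest weight gives an estimate of
        # the latency at which the envelope is represented. Simulated data.
        import numpy as np

        rng = np.random.default_rng(1)
        fs = 100                                  # sampling rate in Hz
        n = 30 * fs                               # 30 s of data
        envelope = rng.standard_normal(n)

        true_lag = 12                             # 120 ms latency in the simulation
        eeg = np.roll(envelope, true_lag) + 0.5 * rng.standard_normal(n)

        # Design matrix of lagged envelope copies (0-250 ms)
        lags = np.arange(0, 26)
        X = np.column_stack([np.roll(envelope, lag) for lag in lags])

        # Ridge regression for the TRF weights
        lam = 1.0
        w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
        print(f"estimated peak latency: {lags[np.argmax(np.abs(w))] / fs * 1000:.0f} ms")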

  3. I "hear" what you're "saying": Auditory perceptual simulation, reading speed, and reading comprehension.

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-01-01

    Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading.

  4. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Comprehension: an overlooked component in augmented language development.

    PubMed

    Sevcik, Rose A

    2006-02-15

    Despite the importance of children's receptive skills as a foundation for later productive word use, the role of receptive language traditionally has received very limited attention since the focus in linguistic development has centered on language production. For children with significant developmental disabilities and communication impairments, augmented language systems have been devised as a tool both for language input and output. The role of both speech and symbol comprehension skills is emphasized in this paper. Data collected from two longitudinal studies of children and youth with severe disabilities and limited speech serve as illustrations in this paper. The acquisition and use of the System for Augmenting Language (SAL) was studied in home and school settings. Communication behaviors of the children and youth and their communication partners were observed and language assessment measures were collected. Two patterns of symbol learning and achievement--beginning and advanced--were observed. Extant speech comprehension skills brought to the augmented language learning task impacted the participants' patterns of symbol learning and use. Though often overlooked, the importance of speech and symbol comprehension skills was underscored in the studies described. Future areas for research are identified.

  6. Interfering with Inner Speech Selectively Disrupts Problem Solving and Is Linked with Real-World Executive Functioning

    ERIC Educational Resources Information Center

    Wallace, Gregory L.; Peng, Cynthia S.; Williams, David

    2017-01-01

    Purpose: According to Vygotskian theory, verbal thinking serves to guide our behavior and underpins critical self-regulatory functions. Indeed, numerous studies now link inner speech usage with performance on tests of executive function (EF). However, the selectivity of inner speech contributions to multifactorial executive planning performance…

  7. Amusia results in abnormal brain activity following inappropriate intonation during speech comprehension.

    PubMed

    Jiang, Cunmei; Hamm, Jeff P; Lim, Vanessa K; Kirk, Ian J; Chen, Xuhai; Yang, Yufang

    2012-01-01

    Pitch processing is a critical ability on which humans' tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. When presented with inappropriate prosody stimuli, control participants showed a larger P600 and a smaller N100 relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources.

  8. Amusia Results in Abnormal Brain Activity following Inappropriate Intonation during Speech Comprehension

    PubMed Central

    Jiang, Cunmei; Hamm, Jeff P.; Lim, Vanessa K.; Kirk, Ian J.; Chen, Xuhai; Yang, Yufang

    2012-01-01

    Pitch processing is a critical ability on which humans’ tonal musical experience depends, and which is also of paramount importance for decoding prosody in speech. Congenital amusia refers to deficits in the ability to properly process musical pitch, and recent evidence has suggested that this musical pitch disorder may impact upon the processing of speech sounds. Here we present the first electrophysiological evidence demonstrating that individuals with amusia who speak Mandarin Chinese are impaired in classifying prosody as appropriate or inappropriate during a speech comprehension task. When presented with inappropriate prosody stimuli, control participants showed a larger P600 and a smaller N100 relative to the appropriate condition. In contrast, amusics did not show significant differences between the appropriate and inappropriate conditions in either the N100 or the P600 component. This provides further evidence that the pitch perception deficits associated with amusia may also affect intonation processing during speech comprehension in those who speak a tonal language such as Mandarin, and suggests music and language share some cognitive and neural resources. PMID:22859982

  9. Visually Impaired Persons' Comprehension of Text Presented with Speech Synthesis.

    ERIC Educational Resources Information Center

    Hjelmquist, E.; And Others

    1992-01-01

    This study of 48 individuals with visual impairments (16 middle-aged with experience in synthetic speech, 16 middle-aged inexperienced, and 16 older inexperienced) found that speech synthesis, compared to natural speech, generally yielded lower results with respect to memory and understanding of texts. Experience had no effect on performance.…

  10. Children's Perception of Conversational and Clear American-English Vowels in Noise

    ERIC Educational Resources Information Center

    Leone, Dorothy; Levy, Erika S.

    2015-01-01

    Purpose: Much of a child's day is spent listening to speech in the presence of background noise. Although accurate vowel perception is important for listeners' accurate speech perception and comprehension, little is known about children's vowel perception in noise. "Clear speech" is a speech style frequently used by talkers in the…

  11. Evaluation of the comprehension of noncontinuous sped-up vocoded speech - A strategy for coping with fading HF channels

    NASA Astrophysics Data System (ADS)

    Lynch, John T.

    1987-02-01

    The present technique for coping with fading and burst noise on HF channels used in digital voice communications transmits digital voice only during high S/N time intervals, and speeds up the speech when necessary to avoid conversation-hindering delays. On the basis of informal listening tests, four test conditions were selected in order to characterize those conditions of speech interruption which would render it comprehensible or incomprehensible. One of the test conditions, 2 s on and 0.5 s off, yielded test scores comparable to the reference continuous speech case and is a reasonable match to the temporal variations of a disturbed ionosphere.
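
    The interruption schedule described above (e.g., 2 s of speech followed by 0.5 s of silence) amounts to gating the waveform with a fixed duty cycle. The sketch below applies such a gate to a placeholder signal; the time compression itself, which requires a pitch-preserving algorithm, is not implemented here.

        # Sketch: gate a speech waveform with a fixed on/off schedule, e.g.
        # 2 s on and 0.5 s off, as in the interruption conditions described
        # above. The speed-up (time compression without pitch change) would
        # normally use a dedicated algorithm and is omitted from this sketch.
        import numpy as np

        def gate(signal, fs, on_s=2.0, off_s=0.5):
            period = int((on_s + off_s) * fs)
            on = int(on_s * fs)
            mask = np.zeros(len(signal))
            for start in range(0, len(signal), period):
                mask[start:start + on] = 1.0
            return signal * mask

        fs = 8000
        t = np.arange(10 * fs) / fs                # 10 s placeholder "speech"
        speech = np.sin(2 * np.pi * 150 * t)
        interrupted = gate(speech, fs)             # 2 s audible, 0.5 s silent, repeated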

  12. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    PubMed

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  13. Utility of Language Comprehension Tests for Unintelligible or Non-Speaking Children with Cerebral Palsy: A Systematic Review

    ERIC Educational Resources Information Center

    Geytenbeek, Joke; Harlaar, Laurike; Stam, Marloes; Ket, Hans; Becher, Jules G.; Oostrom, Kim; Vermeulen, Jeroen

    2010-01-01

    Aim: To identify the use and utility of language comprehension tests for unintelligible or non-speaking children with severe cerebral palsy (CP). Method: Severe CP was defined as severe dysarthria (unintelligible speech) or anarthria (absence of speech) combined with severe limited mobility, corresponding to Gross Motor Function Classification…

  14. The Role of Interaction in Native Speaker Comprehension of Nonnative Speaker Speech.

    ERIC Educational Resources Information Center

    Polio, Charlene; Gass, Susan M.

    1998-01-01

    Because interaction gives language learners an opportunity to modify their speech upon a signal of noncomprehension, it should also have a positive effect on native speakers' (NS) comprehension of nonnative speakers (NNS). This study shows that interaction does help NSs comprehend NNSs, contrasting the claims of an earlier study that found no…

  15. Semantic Comprehension of the Action-Role Relationship in Early-Linguistic Infants.

    ERIC Educational Resources Information Center

    Fritz, Janet J.; Suci, George J.

    This study attempted to determine: (1) whether lower-order units (agent or agent-action) within the agent-action-recipient relationship exist in any functional way in the 1-word infant's comprehension of speech; and (2) whether the use of repetition and/or reduced length (common modifications in adult-to-infant speech) used to focus on these…

  16. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production

    PubMed Central

    Nozari, Nazbanou; Dell, Gary S.; Schwartz, Myrna F.

    2011-01-01

    Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in the aphasic patients. We propose a new theory of speech-error detection which is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production, and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients’ error-detection ability and the model’s characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor, generally, and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error-detection in linguistic, as well as non-linguistic domains points to a domain-general monitoring system. PMID:21652015

  17. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss.

    PubMed

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-09-18

    The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and 1 comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performances. Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performances of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.
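
    The headline finding is a strong linear association between human and ASR scores across simulated hearing-loss conditions. The sketch below shows how such an association would be quantified with a Pearson correlation; the score values are invented for illustration and are not the study's data.

        # Sketch: quantify how well ASR scores track human intelligibility
        # scores across simulated hearing-loss conditions with a Pearson
        # correlation. The percent-correct values below are invented.
        from scipy.stats import pearsonr

        # Percent-correct scores per simulated ARHL severity level
        human_scores = [95, 88, 74, 52, 31]
        asr_scores   = [97, 90, 70, 48, 25]

        r, p = pearsonr(human_scores, asr_scores)
        print(f"human vs. ASR correlation: r = {r:.2f} (p = {p:.3f})")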

  18. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    PubMed

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  19. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
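
    Discrimination performance here is reported as d'. The sketch below computes d' from hit and false-alarm rates using the simple yes/no formula z(H) - z(FA); strictly, same-different designs call for a design-specific model, so this illustrates the statistic rather than the paper's exact analysis, and the counts are invented.

        # Sketch: compute a d' sensitivity index from hit and false-alarm rates
        # using the simple yes/no formula d' = z(H) - z(FA). Same-different
        # tasks strictly require a design-specific model, so treat this as an
        # illustration of the statistic, with invented trial counts.
        from scipy.stats import norm

        def d_prime(hit_rate: float, fa_rate: float) -> float:
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # "Different" trials correctly called different = hits,
        # "same" trials incorrectly called different = false alarms.
        hits, n_different = 46, 50
        fas, n_same = 5, 50
        print(f"d' = {d_prime(hits / n_different, fas / n_same):.2f}")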

  20. Lexical Effects on Second Language Acquisition

    ERIC Educational Resources Information Center

    Kemp, Renee Lorraine

    2017-01-01

    Speech production and perception are inextricably linked systems. Speakers modify their speech in response to listener characteristics, such as age, hearing ability, and language background. Listener-oriented modifications in speech production, commonly referred to as clear speech, have also been found to affect speech perception by enhancing…

  1. Cross-language differences in the brain network subserving intelligible speech.

    PubMed

    Ge, Jianqiao; Peng, Gang; Lyu, Bingjiang; Wang, Yi; Zhuo, Yan; Niu, Zhendong; Tan, Li Hai; Leff, Alexander P; Gao, Jia-Hong

    2015-03-10

    How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very diverse languages: a tonal language (Chinese) and a nontonal language (English). We used functional MRI and dynamic causal modeling analysis to compute and compare brain network models exhaustively with all possible connections among nodes of language regions in temporal and frontal cortex and found that the information flow from the posterior to anterior portions of the temporal cortex was commonly shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers and from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left hemisphere classical language areas (Broca's and Wernicke's areas) and anterior temporal cortex, speech comprehension across different language groups depends on how these brain regions interact with each other. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is equally important as its left homolog, the left anterior temporal cortex, in modulating the cortical dynamics in tone language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension that is downplayed or even ignored by popular contemporary models of speech comprehension.

  2. Cross-language differences in the brain network subserving intelligible speech

    PubMed Central

    Ge, Jianqiao; Peng, Gang; Lyu, Bingjiang; Wang, Yi; Zhuo, Yan; Niu, Zhendong; Tan, Li Hai; Leff, Alexander P.; Gao, Jia-Hong

    2015-01-01

    How is language processed in the brain by native speakers of different languages? Is there one brain system for all languages or are different languages subserved by different brain systems? The first view emphasizes commonality, whereas the second emphasizes specificity. We investigated the cortical dynamics involved in processing two very diverse languages: a tonal language (Chinese) and a nontonal language (English). We used functional MRI and dynamic causal modeling analysis to compute and compare brain network models exhaustively with all possible connections among nodes of language regions in temporal and frontal cortex and found that the information flow from the posterior to anterior portions of the temporal cortex was commonly shared by Chinese and English speakers during speech comprehension, whereas the inferior frontal gyrus received neural signals from the left posterior portion of the temporal cortex in English speakers and from the bilateral anterior portion of the temporal cortex in Chinese speakers. Our results revealed that, although speech processing is largely carried out in the common left hemisphere classical language areas (Broca’s and Wernicke’s areas) and anterior temporal cortex, speech comprehension across different language groups depends on how these brain regions interact with each other. Moreover, the right anterior temporal cortex, which is crucial for tone processing, is equally important as its left homolog, the left anterior temporal cortex, in modulating the cortical dynamics in tone language comprehension. The current study pinpoints the importance of the bilateral anterior temporal cortex in language comprehension that is downplayed or even ignored by popular contemporary models of speech comprehension. PMID:25713366

  3. A Dynamic Speech Comprehension Test for Assessing Real-World Listening Ability.

    PubMed

    Best, Virginia; Keidser, Gitte; Freeston, Katrina; Buchholz, Jörg M

    2016-07-01

    Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids. A test of ongoing speech comprehension introduced previously was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study. Thirty listeners with normal hearing participated in this study. Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR. Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score. This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit. American Academy of Audiology.

  4. Transcranial electric stimulation for the investigation of speech perception and comprehension

    PubMed Central

    Zoefel, Benedikt; Davis, Matthew H.

    2017-01-01

    ABSTRACT Transcranial electric stimulation (tES), comprising transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS), involves applying weak electrical current to the scalp, which can be used to modulate membrane potentials and thereby modify neural activity. Critically, behavioural or perceptual consequences of this modulation provide evidence for a causal role of neural activity in the stimulated brain region for the observed outcome. We present tES as a tool for the investigation of which neural responses are necessary for successful speech perception and comprehension. We summarise existing studies, along with challenges that need to be overcome, potential solutions, and future directions. We conclude that, although standardised stimulation parameters still need to be established, tES is a promising tool for revealing the neural basis of speech processing. Future research can use this method to explore the causal role of brain regions and neural processes for the perception and comprehension of speech. PMID:28670598

  5. The Influence of Child-Directed Speech on Word Learning and Comprehension.

    PubMed

    Foursha-Stevenson, Cassandra; Schembri, Taylor; Nicoladis, Elena; Eriksen, Cody

    2017-04-01

    This paper describes an investigation into the function of child-directed speech (CDS) across development. In the first experiment, 10-21-month-olds were presented with familiar words in CDS and trained on novel words in CDS or adult-directed speech (ADS). All children preferred the matching display for familiar words. However, only older toddlers in the CDS condition preferred the matching display for novel words. In Experiment 2, children 3-6 years of age were presented with a sentence comprehension task in CDS or ADS. Older children performed better overall than younger children with 5- and 6-year-olds performing above chance regardless of speech condition, while 3- and 4-year-olds only performed above chance when the sentences were presented in CDS. These findings provide support for the theory that CDS is most effective at the beginning of acquisition for particular constructions (e.g. vocabulary acquisition, syntactic comprehension) rather than at a particular age or for a particular task.

  6. Functional overlap between regions involved in speech perception and in monitoring one's own voice during speech production.

    PubMed

    Zheng, Zane Z; Munhall, Kevin G; Johnsrude, Ingrid S

    2010-08-01

    The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.

  7. Functional overlap between regions involved in speech perception and in monitoring one’s own voice during speech production

    PubMed Central

    Zheng, Zane Z.; Munhall, Kevin G; Johnsrude, Ingrid S

    2009-01-01

    The fluency and reliability of speech production suggests a mechanism that links motor commands and sensory feedback. Here, we examine the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not, and examining the overlap with the network recruited during passive listening to speech sounds. We use real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word (‘Ted’) and either heard this clearly, or heard voice-gated masking noise. We compare this to when they listened to yoked stimuli (identical recordings of ‘Ted’ or noise) without speaking. Activity along the superior temporal sulcus (STS) and superior temporal gyrus (STG) bilaterally was significantly greater if the auditory stimulus was a) processed as the auditory concomitant of speaking and b) did not match the predicted outcome (noise). The network exhibiting this Feedback type by Production/Perception interaction includes an STG/MTG region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts, and that processes an error signal in speech-sensitive regions when this and the sensory data do not match. PMID:19642886

  8. The role of accent imitation in sensorimotor integration during processing of intelligible speech

    PubMed Central

    Adank, Patti; Rueschemeyer, Shirley-Ann; Bekkering, Harold

    2013-01-01

    Recent theories on how listeners maintain perceptual invariance despite variation in the speech signal allocate a prominent role to imitation mechanisms. Notably, these simulation accounts propose that motor mechanisms support perception of ambiguous or noisy signals. Indeed, imitation of ambiguous signals, e.g., accented speech, has been found to aid effective speech comprehension. Here, we explored the possibility that imitation in speech benefits perception by increasing activation in speech perception and production areas. Participants rated the intelligibility of sentences spoken in an unfamiliar accent of Dutch in a functional Magnetic Resonance Imaging experiment. Next, participants in one group repeated the sentences in their own accent, while a second group vocally imitated the accent. Finally, both groups rated the intelligibility of accented sentences in a post-test. The neuroimaging results showed an interaction between type of training and pre- and post-test sessions in left Inferior Frontal Gyrus, Supplementary Motor Area, and left Superior Temporal Sulcus. Although alternative explanations such as task engagement and fatigue need to be considered as well, the results suggest that imitation may aid effective speech comprehension by supporting sensorimotor integration. PMID:24109447

  9. How musical expertise shapes speech perception: evidence from auditory classification images.

    PubMed

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues, at the onset of the first formant and at the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
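    The Auditory Classification Image approach named above is, at its core, a reverse-correlation analysis: listeners' trial-by-trial phoneme choices are regressed on the noise's time-frequency content, and the fitted weights show which regions drove the decisions. The sketch below illustrates only that regression step, using synthetic arrays and a plain L2-penalized logistic regression; the published method adds smoothness priors and other refinements not reproduced here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Synthetic stand-ins: flattened noise spectrograms and binary phoneme choices.
      rng = np.random.default_rng(0)
      n_trials, n_freqs, n_times = 2000, 32, 20
      noise_specs = rng.normal(size=(n_trials, n_freqs * n_times))
      responses = rng.integers(0, 2, size=n_trials)      # e.g. 0 = /aba/, 1 = /ada/

      # The fitted weight map is the classification image.
      model = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
      model.fit(noise_specs, responses)
      classification_image = model.coef_.reshape(n_freqs, n_times)
      # Large positive or negative weights mark time-frequency regions whose noise
      # energy pushed listeners toward one phoneme or the other.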

  10. The Compensatory Effectiveness of Optical Character Recognition/Speech Synthesis on Reading Comprehension of Postsecondary Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Higgins, Eleanor L.; Raskind, Marshall H.

    1997-01-01

    Thirty-seven college students with learning disabilities were given a reading comprehension task under the following conditions: (1) using an optical character recognition/speech synthesis system; (2) having the text read aloud by a human reader; or (3) reading silently without assistance. Findings indicated that the greater the disability, the…

  11. Using Text-to-Speech Reading Support for an Adult with Mild Aphasia and Cognitive Impairment

    ERIC Educational Resources Information Center

    Harvey, Judy; Hux, Karen; Snell, Jeffry

    2013-01-01

    This single case study served to examine text-to-speech (TTS) effects on reading rate and comprehension in an individual with mild aphasia and cognitive impairment. Findings showed faster reading, given TTS presented at a normal speaking rate, but no significant comprehension changes. TTS may support reading in people with aphasia when time…

  12. Neural Processing Associated with Comprehension of an Indirect Reply during a Scenario Reading Task

    ERIC Educational Resources Information Center

    Shibata, Midori; Abe, Jun-ichi; Itoh, Hiroaki; Shimada, Koji; Umeda, Satoshi

    2011-01-01

    In daily communication, we often use indirect speech to convey our intention. However, little is known about the brain mechanisms that underlie the comprehension of indirect speech. In this study, we conducted a functional MRI experiment using a scenario reading task to compare the neural activity induced by an indirect reply (a type of indirect…

  13. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners with Simulated Age-Related Hearing Loss

    ERIC Educational Resources Information Center

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-01-01

    Purpose: The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist…

  14. Response latencies in auditory sentence comprehension: effects of linguistic versus perceptual challenge.

    PubMed

    Tun, Patricia A; Benichov, Jonathan; Wingfield, Arthur

    2010-09-01

    Older adults with good hearing and with mild-to-moderate hearing loss were tested for comprehension of spoken sentences that required perceptual effort (hearing speech at lower sound levels), and two degrees of cognitive load (sentences with simpler or more complex syntax). Although comprehension accuracy was equivalent for both participant groups and for young adults with good hearing, hearing loss was associated with longer response latencies to the correct comprehension judgments, especially for complex sentences heard at relatively low amplitudes. These findings demonstrate the need to take into account both sensory and cognitive demands of speech materials in older adults' language comprehension. (c) 2010 APA, all rights reserved.

  15. Synthesized Speech Output and Children: A Scoping Review

    ERIC Educational Resources Information Center

    Drager, Kathryn D. R.; Reichle, Joe; Pinkoski, Carrie

    2010-01-01

    Purpose: Many computer-based augmentative and alternative communication systems in use by children have speech output. This article (a) provides a scoping review of the literature addressing the intelligibility and listener comprehension of synthesized speech output with children and (b) discusses future research directions. Method: Studies…

  16. Second Language Learners and Speech Act Comprehension

    ERIC Educational Resources Information Center

    Holtgraves, Thomas

    2007-01-01

    Recognizing the specific speech act (Searle, 1969) that a speaker performs with an utterance is a fundamental feature of pragmatic competence. Past research has demonstrated that native speakers of English automatically recognize speech acts when they comprehend utterances (Holtgraves & Ashley, 2001). The present research examined whether this…

  17. Investigation of the effect of cochlear implant electrode length on speech comprehension in quiet and noise compared with the results with users of electro-acoustic-stimulation, a retrospective analysis

    PubMed Central

    Majdani, Omid; Lenarz, Thomas

    2017-01-01

    Objectives This investigation evaluated the effect of cochlear implant (CI) electrode length on speech comprehension in quiet and noise and compared the results with those of EAS users. Methods 91 adults with some degree of residual hearing were implanted with a FLEX20, FLEX24, or FLEX28 electrode. Some subjects were postoperative electric-acoustic-stimulation (EAS) users; the other subjects were in the electric stimulation-only (ES-only) groups. Speech perception was tested in quiet and noise at 3 and 6 months of ES or EAS use. Speech comprehension results were analyzed and correlated with electrode length. Results While the FLEX20 ES and FLEX24 ES groups were still in their learning phase between the 3- and 6-month intervals, the FLEX28 ES group had already reached a performance plateau at the 3-month appointment, yielding remarkably high test scores. EAS subjects using FLEX20 or FLEX24 electrodes outscored ES-only subjects with the same short electrodes on all 3 tests at each interval, reaching significance relative to FLEX20 ES and FLEX24 ES subjects on all 3 tests at the 3-month interval and on 2 tests at the 6-month interval. Among ES-only subjects at the 3-month interval, FLEX28 ES subjects significantly outscored FLEX20 ES subjects on all 3 tests and FLEX24 ES subjects on 2 tests. At the 6-month interval, FLEX28 ES subjects still exceeded the other ES-only subjects, although the difference did not reach significance. Conclusions Among ES-only users, the FLEX28 ES users had the best speech comprehension scores at the 3-month appointment and showed the same tendency at the 6-month appointment. EAS users showed significantly better speech comprehension results than ES-only users with the same short electrodes. PMID:28505158

  18. Investigation of the effect of cochlear implant electrode length on speech comprehension in quiet and noise compared with the results with users of electro-acoustic-stimulation, a retrospective analysis.

    PubMed

    Büchner, Andreas; Illg, Angelika; Majdani, Omid; Lenarz, Thomas

    2017-01-01

    This investigation evaluated the effect of cochlear implant (CI) electrode length on speech comprehension in quiet and noise and compared the results with those of EAS users. 91 adults with some degree of residual hearing were implanted with a FLEX20, FLEX24, or FLEX28 electrode. Some subjects were postoperative electric-acoustic-stimulation (EAS) users; the other subjects were in the electric stimulation-only (ES-only) groups. Speech perception was tested in quiet and noise at 3 and 6 months of ES or EAS use. Speech comprehension results were analyzed and correlated with electrode length. While the FLEX20 ES and FLEX24 ES groups were still in their learning phase between the 3- and 6-month intervals, the FLEX28 ES group had already reached a performance plateau at the 3-month appointment, yielding remarkably high test scores. EAS subjects using FLEX20 or FLEX24 electrodes outscored ES-only subjects with the same short electrodes on all 3 tests at each interval, reaching significance relative to FLEX20 ES and FLEX24 ES subjects on all 3 tests at the 3-month interval and on 2 tests at the 6-month interval. Among ES-only subjects at the 3-month interval, FLEX28 ES subjects significantly outscored FLEX20 ES subjects on all 3 tests and FLEX24 ES subjects on 2 tests. At the 6-month interval, FLEX28 ES subjects still exceeded the other ES-only subjects, although the difference did not reach significance. Among ES-only users, the FLEX28 ES users had the best speech comprehension scores at the 3-month appointment and showed the same tendency at the 6-month appointment. EAS users showed significantly better speech comprehension results than ES-only users with the same short electrodes.

  19. Functional Connectivity between Face-Movement and Speech-Intelligibility Areas during Auditory-Only Speech Perception

    PubMed Central

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026

  20. Children Mix Direct and Indirect Speech: Evidence from Pronoun Comprehension

    ERIC Educational Resources Information Center

    Köder, Franziska; Maier, Emar

    2016-01-01

    This study investigates children's acquisition of the distinction between direct speech (Elephant said, "I get the football") and indirect speech ("Elephant said that he gets the football"), by measuring children's interpretation of first, second, and third person pronouns. Based on evidence from various linguistic sources, we…

  1. Developmental Differences in Speech Act Recognition: A Pragmatic Awareness Study

    ERIC Educational Resources Information Center

    Garcia, Paula

    2004-01-01

    With the growing acknowledgement of the importance of pragmatic competence in second language (L2) learning, language researchers have identified the comprehension of speech acts as they occur in natural conversation as essential to communicative competence (e.g. Bardovi-Harlig, 2001; Thomas, 1983). Nonconventional indirect speech acts are formed…

  2. Voice Modulations in German Ironic Speech

    ERIC Educational Resources Information Center

    Scharrer, Lisa; Christmann, Ursula; Knoll, Monja

    2011-01-01

    Previous research has shown that in different languages ironic speech is acoustically modulated compared to literal speech, and these modulations are assumed to aid the listener in the comprehension process by acting as cues that mark utterances as ironic. The present study was conducted to identify paraverbal features of German "ironic…

  3. Functional significance of the electrocorticographic auditory responses in the premotor cortex.

    PubMed

    Tanji, Kazuyo; Sakurada, Kaori; Funiu, Hayato; Matsuda, Kenichiro; Kayama, Takamasa; Ito, Sayuri; Suzuki, Kyoko

    2015-01-01

    Other than well-known motor activities in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus, with electrocorticographic recordings made while she performed a verb-generation task during awake craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved, and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than in the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it is proposed that sensory representations are used to guide motor-articulatory processes.

  4. A screening approach for classroom acoustics using web-based listening tests and subjective ratings.

    PubMed

    Persson Waye, Kerstin; Magnusson, Lennart; Fredriksson, Sofie; Croy, Ilona

    2015-01-01

    Perception of speech is crucial in school, where speech is the main mode of communication. The aim of the study was to evaluate whether a web-based approach including listening tests and questionnaires could be used as a screening tool for poor classroom acoustics. The prime focus was the relation between pupils' comprehension of speech, the classroom acoustics, and their description of the acoustic qualities of the classroom. In total, 1106 pupils aged 13-19, from 59 classes and 38 schools in Sweden, participated in a listening study using Hagerman's sentences administered via the Internet. Four listening conditions were applied: high and low background noise level and positions close to and far away from the loudspeaker. The pupils described the acoustic quality of the classroom and teachers provided information on the physical features of the classroom using questionnaires. In 69% of the classes, at least three pupils described the sound environment as adverse, and in 88% of the classes one or more pupils reported often having difficulties concentrating due to noise. The pupils' comprehension of speech was strongly influenced by the background noise level (p<0.001) and distance to the loudspeakers (p<0.001). Of the physical classroom features, presence of suspended acoustic panels (p<0.05) and length of the classroom (p<0.01) predicted speech comprehension. Of the pupils' descriptions of acoustic qualities, "clattery" significantly (p<0.05) predicted speech comprehension. "Clattery" was furthermore associated with difficulties understanding each other, while the description "noisy" was associated with concentration difficulties. The majority of classrooms do not seem to have an optimal sound environment. The pupils' descriptions of acoustic qualities and listening tests can be one way of predicting sound conditions in the classroom.

  5. Speech processing in children with functional articulation disorders.

    PubMed

    Gósy, Mária; Horváth, Viktória

    2015-03-01

    This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.

  6. Dissociating speech perception and comprehension at reduced levels of awareness

    PubMed Central

    Davis, Matthew H.; Coleman, Martin R.; Absalom, Anthony R.; Rodd, Jennifer M.; Johnsrude, Ingrid S.; Matta, Basil F.; Owen, Adrian M.; Menon, David K.

    2007-01-01

    We used functional MRI and the anesthetic agent propofol to assess the relationship among neural responses to speech, successful comprehension, and conscious awareness. Volunteers were scanned while listening to sentences containing ambiguous words, matched sentences without ambiguous words, and signal-correlated noise (SCN). During three scanning sessions, participants were nonsedated (awake), lightly sedated (a slowed response to conversation), and deeply sedated (no conversational response, rousable by loud command). Bilateral temporal-lobe responses for sentences compared with signal-correlated noise were observed at all three levels of sedation, although prefrontal and premotor responses to speech were absent at the deepest level of sedation. Additional inferior frontal and posterior temporal responses to ambiguous sentences provide a neural correlate of semantic processes critical for comprehending sentences containing ambiguous words. However, this additional response was absent during light sedation, suggesting a marked impairment of sentence comprehension. A significant decline in postscan recognition memory for sentences also suggests that sedation impaired encoding of sentences into memory, with left inferior frontal and temporal lobe responses during light sedation predicting subsequent recognition memory. These findings suggest a graded degradation of cognitive function in response to sedation such that “higher-level” semantic and mnemonic processes can be impaired at relatively low levels of sedation, whereas perceptual processing of speech remains resilient even during deep sedation. These results have important implications for understanding the relationship between speech comprehension and awareness in the healthy brain in patients receiving sedation and in patients with disorders of consciousness. PMID:17938125

  7. The steady-state response of the cerebral cortex to the beat of music reflects both the comprehension of music and attention

    PubMed Central

    Meltzer, Benjamin; Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2015-01-01

    The brain’s analyses of speech and music share a range of neural resources and mechanisms. Music displays a temporal structure of complexity similar to that of speech, unfolds over comparable timescales, and elicits cognitive demands in tasks involving comprehension and attention. During speech processing, synchronized neural activity of the cerebral cortex in the delta and theta frequency bands tracks the envelope of a speech signal, and this neural activity is modulated by high-level cortical functions such as speech comprehension and attention. It remains unclear, however, whether the cortex also responds to the natural rhythmic structure of music and how the response, if present, is influenced by higher cognitive processes. Here we employ electroencephalography to show that the cortex responds to the beat of music and that this steady-state response reflects musical comprehension and attention. We show that the cortical response to the beat is weaker when subjects listen to a familiar tune than when they listen to an unfamiliar, non-sensical musical piece. Furthermore, we show that in a task of intermodal attention there is a larger neural response at the beat frequency when subjects attend to a musical stimulus than when they ignore the auditory signal and instead focus on a visual one. Our findings may be applied in clinical assessments of auditory processing and music cognition as well as in the construction of auditory brain-machine interfaces. PMID:26300760
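    The steady-state response to the beat reported above is usually quantified by Fourier-transforming the EEG and reading out the amplitude at the beat frequency. The sketch below shows that readout for a single channel; the sampling rate, beat frequency, and placeholder signal are assumptions for illustration, not the study's parameters.

      import numpy as np

      fs = 250.0                              # EEG sampling rate in Hz (assumed)
      beat_hz = 2.0                           # beat frequency of the stimulus (assumed)
      eeg = np.random.randn(int(fs * 60))     # one minute of single-channel EEG (placeholder)

      spectrum = np.fft.rfft(eeg)
      freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

      # Single-sided amplitude at the bin closest to the beat frequency.
      beat_bin = np.argmin(np.abs(freqs - beat_hz))
      beat_amplitude = 2.0 * np.abs(spectrum[beat_bin]) / eeg.size
      # Comparing this amplitude across familiarity or attention conditions gives the
      # kind of contrast the abstract describes.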

  8. Children's comprehension of an unfamiliar speaker accent: a review.

    PubMed

    Harte, Jennifer; Oliveira, Ana; Frizelle, Pauline; Gibbon, Fiona

    2016-05-01

    The effect of speaker accent on listeners' comprehension has become a key focus of research given the increasing cultural diversity of society and the increased likelihood of an individual encountering a clinician with an unfamiliar accent. To review the studies exploring the effect of an unfamiliar accent on language comprehension in typically developing (TD) children and in children with speech and language difficulties. This review provides a methodological analysis of the relevant studies by exploring the challenges facing this field of research and highlighting the current gaps in the literature. A total of nine studies were identified using a systematic search and organized under studies investigating the effect of speaker accent on language comprehension in (1) TD children and (2) children with speech and/or language difficulties. This review synthesizes the evidence that an unfamiliar speaker accent may lead to a breakdown in language comprehension in TD children and in children with speech difficulties. Moreover, it exposes the inconsistencies found in this field of research and highlights the lack of studies investigating the effect of speaker accent in children with language deficits. Overall, research points towards a developmental trend in children's ability to comprehend accent-related variations in speech. Vocabulary size, language exposure, exposure to different accents and adequate processing resources (e.g. attention) seem to play a key role in children's ability to understand unfamiliar accents. This review uncovered some inconsistencies in the literature that highlight the methodological issues that must be considered when conducting research in this field. It explores how such issues may be controlled in order to increase the validity and reliability of future research. Key clinical implications are also discussed. © 2016 Royal College of Speech and Language Therapists.

  9. The Asia Pacific Rebalance: Tipping the Scale with Landpower

    DTIC Science & Technology

    2013-04-01

    Defense.gov Speech: Shangri-La Security Dialogue, as delivered by Secretary of Defense Leon E. Panetta, Shangri-La Hotel, Singapore, June 02, 2012, linked from the U.S. Department of Defense web site at: http://www.defense.gov/speeches/speech.aspx… Training and Doctrine Command, "Operational Environments to 2028: The Strategic Environment for…

  10. Asymmetric Switch Costs in Numeral Naming and Number Word Reading: Implications for Models of Bilingual Language Production.

    PubMed

    Reynolds, Michael G; Schlöffel, Sophie; Peressotti, Francesca

    2015-01-01

    One approach used to gain insight into the processes underlying bilingual language comprehension and production examines the costs that arise from switching languages. For unbalanced bilinguals, asymmetric switch costs are reported in speech production, where the switch cost for L1 is larger than the switch cost for L2, whereas, symmetric switch costs are reported in language comprehension tasks, where the cost of switching is the same for L1 and L2. Presently, it is unclear why asymmetric switch costs are observed in speech production, but not in language comprehension. Three experiments are reported that simultaneously examine methodological explanations of task related differences in the switch cost asymmetry and the predictions of three accounts of the switch cost asymmetry in speech production. The results of these experiments suggest that (1) the type of language task (comprehension vs. production) determines whether an asymmetric switch cost is observed and (2) at least some of the switch cost asymmetry arises within the language system.

  11. Asymmetric Switch Costs in Numeral Naming and Number Word Reading: Implications for Models of Bilingual Language Production

    PubMed Central

    Reynolds, Michael G.; Schlöffel, Sophie; Peressotti, Francesca

    2016-01-01

    One approach used to gain insight into the processes underlying bilingual language comprehension and production examines the costs that arise from switching languages. For unbalanced bilinguals, asymmetric switch costs are reported in speech production, where the switch cost for L1 is larger than the switch cost for L2, whereas, symmetric switch costs are reported in language comprehension tasks, where the cost of switching is the same for L1 and L2. Presently, it is unclear why asymmetric switch costs are observed in speech production, but not in language comprehension. Three experiments are reported that simultaneously examine methodological explanations of task related differences in the switch cost asymmetry and the predictions of three accounts of the switch cost asymmetry in speech production. The results of these experiments suggest that (1) the type of language task (comprehension vs. production) determines whether an asymmetric switch cost is observed and (2) at least some of the switch cost asymmetry arises within the language system. PMID:26834659

  12. Sensory-Cognitive Interaction in the Neural Encoding of Speech in Noise: A Review

    PubMed Central

    Anderson, Samira; Kraus, Nina

    2011-01-01

    Background Speech-in-noise (SIN) perception is one of the most complex tasks faced by listeners on a daily basis. Although listening in noise presents challenges for all listeners, background noise inordinately affects speech perception in older adults and in children with learning disabilities. Hearing thresholds are an important factor in SIN perception, but they are not the only factor. For successful comprehension, the listener must perceive and attend to relevant speech features, such as the pitch, timing, and timbre of the target speaker’s voice. Here, we review recent studies linking SIN and brainstem processing of speech sounds. Purpose To review recent work that has examined the ability of the auditory brainstem response to complex sounds (cABR), which reflects the nervous system’s transcription of pitch, timing, and timbre, to be used as an objective neural index for hearing-in-noise abilities. Study Sample We examined speech-evoked brainstem responses in a variety of populations, including children who are typically developing, children with language-based learning impairment, young adults, older adults, and auditory experts (i.e., musicians). Data Collection and Analysis In a number of studies, we recorded brainstem responses in quiet and babble noise conditions to the speech syllable /da/ in all age groups, as well as in a variable condition in children in which /da/ was presented in the context of seven other speech sounds. We also measured speech-in-noise perception using the Hearing-in-Noise Test (HINT) and the Quick Speech-in-Noise Test (QuickSIN). Results Children and adults with poor SIN perception have deficits in the subcortical spectrotemporal representation of speech, including low-frequency spectral magnitudes and the timing of transient response peaks. Furthermore, auditory expertise, as engendered by musical training, provides both behavioral and neural advantages for processing speech in noise. Conclusions These results have implications for future assessment and management strategies for young and old populations whose primary complaint is difficulty hearing in background noise. The cABR provides a clinically applicable metric for objective assessment of individuals with SIN deficits, for determination of the biologic nature of disorders affecting SIN perception, for evaluation of appropriate hearing aid algorithms, and for monitoring the efficacy of auditory remediation and training. PMID:21241645

  13. Post-treatment speech naturalness of comprehensive stuttering program clients and differences in ratings among listener groups.

    PubMed

    Teshima, Shelli; Langevin, Marilyn; Hagler, Paul; Kully, Deborah

    2010-03-01

    The purposes of this study were to investigate naturalness of the post-treatment speech of Comprehensive Stuttering Program (CSP) clients and differences in naturalness ratings by three listener groups. Listeners were 21 student speech-language pathologists, 9 community members, and 15 listeners who stutter. Listeners rated perceptually fluent speech samples of CSP clients obtained immediately post-treatment (Post) and at 5 years follow-up (F5), and speech samples of matched typically fluent (TF) speakers. A 9-point interval rating scale was used. A 3 (listener group) x 2 (time) x 2 (speaker) mixed ANOVA was used to test for differences among mean ratings. The difference between CSP Post and F5 mean ratings was statistically significant. The F5 mean rating was within the range reported for typically fluent speakers. Student speech-language pathologists were found to be less critical than community members and listeners who stutter in rating naturalness; however, there were no significant differences in ratings made by community members and listeners who stutter. Results indicate that the naturalness of post-treatment speech of CSP clients improves in the post-treatment period and that it is possible for clients to achieve levels of naturalness that appear to be acceptable to adults who stutter and that are within the range of naturalness ratings given to typically fluent speakers. Readers will be able to (a) summarize key findings of studies that have investigated naturalness ratings, and (b) interpret the naturalness ratings of Comprehensive Stuttering Program speaker samples and the ratings made by the three listener groups in this study.
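    The 3 (listener group) x 2 (time) x 2 (speaker) design above has one between-listener factor and two repeated factors. The study reports a conventional mixed ANOVA; the sketch below shows one way to approximate such a design in code, fitting a linear mixed-effects model with a random intercept per listener to invented long-format data. The column names and synthetic ratings are assumptions, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Invented long-format ratings: one row per listener x time x speaker cell.
      rng = np.random.default_rng(1)
      groups = ["student_slp"] * 21 + ["community"] * 9 + ["stutters"] * 15
      rows = []
      for i, group in enumerate(groups):
          for time in ("Post", "F5"):
              for speaker in ("CSP_client", "typically_fluent"):
                  rows.append({"listener": f"L{i:02d}", "group": group, "time": time,
                               "speaker": speaker, "rating": rng.integers(1, 10)})
      df = pd.DataFrame(rows)

      # Random intercept per listener stands in for the repeated-measures structure.
      model = smf.mixedlm("rating ~ group * time * speaker", data=df, groups=df["listener"])
      print(model.fit().summary())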

  14. Trajectory and outcomes of speech language therapy in the Prader-Willi syndrome (PWS): case report.

    PubMed

    Misquiatti, Andréa Regina Nunes; Cristovão, Melina Pavini; Brito, Maria Claudia

    2011-03-01

    The aim of this study was to describe the trajectory and the outcomes of speech-language therapy in Prader-Willi syndrome through a longitudinal study of the case of an 8-year-old boy, along four years of speech-language therapy follow-up. The therapy sessions were filmed, and documental analysis was carried out of information from the child's records regarding anamnesis, evaluation and speech-language therapy reports, and multidisciplinary evaluations. The child presented typical characteristics of Prader-Willi syndrome, such as obesity, hyperphagia, anxiety, behavioral problems and episodes of self-aggression. Speech-language pathology evaluation showed orofacial hypotonia, sialorrhea, hypernasal voice, cognitive deficits, oral comprehension difficulties, and communication using gestures and unintelligible isolated words. Initially, speech-language therapy aimed to promote language development, emphasizing social interaction through recreational activities. As the case evolved, the main focus became the development of conversation and narrative abilities. Improvements were observed in attention, symbolic play, social contact and behavior. Moreover, there was an increase in vocabulary, progress in oral comprehension, and development of narrative abilities. Hence, speech-language pathology intervention in the case described was effective at different linguistic levels, encompassing phonological, syntactic, lexical and pragmatic abilities.

  15. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment

    PubMed Central

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A.

    2013-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension. PMID:24578684

  16. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment.

    PubMed

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A

    2014-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.

  17. Pediatric traumatic brain injury: language outcomes and their relationship to the arcuate fasciculus.

    PubMed

    Liégeois, Frédérique J; Mahony, Kate; Connelly, Alan; Pigdon, Lauren; Tournier, Jacques-Donald; Morgan, Angela T

    2013-12-01

    Pediatric traumatic brain injury (TBI) may result in long-lasting language impairments alongside dysarthria, a motor-speech disorder. Whether this co-morbidity is due to the functional links between speech and language networks, or to widespread damage affecting both motor and language tracts, remains unknown. Here we investigated language function and diffusion metrics (using diffusion-weighted tractography) within the arcuate fasciculus, the uncinate fasciculus, and the corpus callosum in 32 young people after TBI (approximately half with dysarthria) and age-matched healthy controls (n=17). Only participants with dysarthria showed impairments in language, affecting sentence formulation and semantic association. In the whole TBI group, sentence formulation was best predicted by combined corpus callosum and left arcuate volumes, suggesting this "dual blow" seriously reduces the potential for functional reorganisation. Word comprehension was predicted by fractional anisotropy in the right arcuate. The co-morbidity between dysarthria and language deficits therefore seems to be the consequence of multiple tract damage. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. The Perception of "Sine-Wave Speech" by Adults with Developmental Dyslexia.

    ERIC Educational Resources Information Center

    Rosner, Burton S.; Talcott, Joel B.; Witton, Caroline; Hogg, James D.; Richardson, Alexandra J.; Hansen, Peter C.; Stein, John F.

    2003-01-01

    "Sine-wave speech" sentences contain only four frequency-modulated sine waves, lacking many acoustic cues present in natural speech. Adults with (n=19) and without (n=14) dyslexia were asked to reproduce orally sine-wave utterances in successive trials. Results suggest comprehension of sine-wave sentences is impaired in some adults with…

  19. Individual Differences in Premotor and Motor Recruitment during Speech Perception

    ERIC Educational Resources Information Center

    Szenkovits, Gayaneh; Peelle, Jonathan E.; Norris, Dennis; Davis, Matthew H.

    2012-01-01

    Although activity in premotor and motor cortices is commonly observed in neuroimaging studies of spoken language processing, the degree to which this activity is an obligatory part of everyday speech comprehension remains unclear. We hypothesised that rather than being a unitary phenomenon, the neural response to speech perception in motor regions…

  20. Evidence-Based Practice for Children with Speech Sound Disorders: Part 1 Narrative Review

    ERIC Educational Resources Information Center

    Baker, Elise; McLeod, Sharynne

    2011-01-01

    Purpose: This article provides a comprehensive narrative review of intervention studies for children with speech sound disorders (SSD). Its companion paper (Baker & McLeod, 2011) provides a tutorial and clinical example of how speech-language pathologists (SLPs) can engage in evidence-based practice (EBP) for this clinical population. Method:…

  1. Relationship between Speech, Oromotor, Language and Cognitive Abilities in Children with Down's Syndrome

    ERIC Educational Resources Information Center

    Cleland, Joanne; Wood, Sara; Hardcastle, William; Wishart, Jennifer; Timmins, Claire

    2010-01-01

    Background: Children and young people with Down's syndrome present with deficits in expressive speech and language, accompanied by strengths in vocabulary comprehension compared with non-verbal mental age. Intelligibility is particularly low, but whether speech is delayed or disordered is a controversial topic. Most studies suggest a delay, but no…

  2. [Attention deficit and understanding of non-literal meanings: the interpretation of indirect speech acts and idioms].

    PubMed

    Crespo, N; Manghi, D; García, G; Cáceres, P

    To report on the oral comprehension of the non-literal meanings of indirect speech acts and idioms in everyday speech by children with attention deficit hyperactivity disorder (ADHD). The subjects in this study consisted of a sample of 29 Chilean schoolchildren aged between 6 and 13 with ADHD and a control group of children without ADHD sharing similar socio-demographic characteristics. A quantitative method was utilised: comprehension was measured individually by means of an interactive instrument. The children listened to a dialogue taken from a cartoon series that included indirect speech acts and idioms and they had to choose one of the three options they were given: literal, non-literal or distracter. The children without ADHD identified the non-literal meaning more often, especially in idioms. Likewise, it should be pointed out that whereas the children without ADHD increased their scores as their ages went up, those with ADHD remained at the same point. ADHD not only interferes in the inferential comprehension of non-literal meanings but also inhibits the development of this skill in subjects affected by it.

  3. Cortical characterization of the perception of intelligible and unintelligible speech measured via high-density electroencephalography.

    PubMed

    Utianski, Rene L; Caviness, John N; Liss, Julie M

    2015-01-01

    High-density electroencephalography was used to evaluate cortical activity during speech comprehension via a sentence verification task. Twenty-four participants assigned true or false to sentences noise-vocoded at 3 channel levels (1: unintelligible; 6: decipherable; 16: intelligible) during simultaneous EEG recording. Participant data were sorted into higher- (HP) and lower-performing (LP) groups. The identification of a late event-related potential for LP listeners in the intelligible condition, and in all listeners when challenged with a 6-Ch signal, supports the notion that this induced potential may be related either to processing degraded speech or to degraded processing of intelligible speech. Different cortical locations are identified as the neural generators responsible for this activity; HP listeners engage motor aspects of their language system, utilizing an acoustic-phonetic based strategy to help resolve the sentence, while LP listeners do not. This study presents evidence for neurophysiological indices associated with more or less successful speech comprehension performance across listening conditions. Copyright © 2014 Elsevier Inc. All rights reserved.
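    Noise-vocoding, used above to create the 1-, 6-, and 16-channel conditions, splits speech into frequency bands, extracts each band's amplitude envelope, and uses those envelopes to modulate band-limited noise before summing. The function below is a minimal sketch of that pipeline with assumed corner frequencies and filter settings; it is not the vocoder used in the study.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      def noise_vocode(speech, fs, n_channels, lo=100.0, hi=8000.0):
          """Return an n-channel noise-vocoded version of `speech` (1-D array, sample rate fs)."""
          speech = np.asarray(speech, dtype=float)
          edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges (assumed)
          noise = np.random.randn(speech.size)
          vocoded = np.zeros_like(speech)
          for low, high in zip(edges[:-1], edges[1:]):
              sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
              band = sosfiltfilt(sos, speech)
              envelope = np.abs(hilbert(band))           # amplitude envelope of the speech band
              carrier = sosfiltfilt(sos, noise)          # noise restricted to the same band
              vocoded += envelope * carrier
          return vocoded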

  4. Should pantomime and gesticulation be assessed separately for their comprehensibility in aphasia? A case study.

    PubMed

    van Nispen, Karin; van de Sandt-Koenderman, Mieke; Mol, Lisette; Krahmer, Emiel

    2014-01-01

    Gesticulation (gestures accompanying speech) and pantomime (gestures in the absence of speech) can each be comprehensible. Little is known about the differences between these two gesture modes in people with aphasia. To discover whether there are differences in the communicative use of gesticulation and pantomime in QH, a person with severe fluent aphasia. QH performed two tasks: naming objects and retelling a story. He did this once in a verbal condition (enabling gesticulation) and once in a pantomime condition. For both conditions, the comprehensibility of gestures was analysed in a forced-choice task by naïve judges. Secondly, a comparison was made between QH and healthy controls for the representation techniques used. Pantomimes produced by QH for naming objects were significantly more comprehensible than chance, whereas his gesticulation was not. For retelling a story the opposite pattern was found. When naming objects QH gesticulated much more than did healthy controls. His pantomimes for this task were simpler than those used by the control group. For retelling a story no differences were found. Although QH did not make full use of each gesture mode's potential, both did contribute to QH's comprehensibility. Crucially, the benefits of each mode differed across tasks. This implies that both gesture modes should be taken into account separately in models of speech and gesture production and in clinical practice for different communicative settings. © 2013 Royal College of Speech and Language Therapists.

  5. The Influence of Child-Directed Speech on Word Learning and Comprehension

    ERIC Educational Resources Information Center

    Foursha-Stevenson, Cassandra; Schembri, Taylor; Nicoladis, Elena; Eriksen, Cody

    2017-01-01

    This paper describes an investigation into the function of child-directed speech (CDS) across development. In the first experiment, 10-21-month-olds were presented with familiar words in CDS and trained on novel words in CDS or adult-directed speech (ADS). All children preferred the matching display for familiar words. However, only older toddlers…

  6. Phonological Memory, Attention Control, and Musical Ability: Effects of Individual Differences on Rater Judgments of Second Language Speech

    ERIC Educational Resources Information Center

    Isaacs, Talia; Trofimovich, Pavel

    2011-01-01

    This study examines how listener judgments of second language speech relate to individual differences in listeners' phonological memory, attention control, and musical ability. Sixty native English listeners (30 music majors, 30 nonmusic majors) rated 40 nonnative speech samples for accentedness, comprehensibility, and fluency. The listeners were…

  7. Brief Training with Co-Speech Gesture Lends a Hand to Word Learning in a Foreign Language

    ERIC Educational Resources Information Center

    Kelly, Spencer D.; McDevitt, Tara; Esch, Megan

    2009-01-01

    Recent research in psychology and neuroscience has demonstrated that co-speech gestures are semantically integrated with speech during language comprehension and development. The present study explored whether gestures also play a role in language learning in adults. In Experiment 1, we exposed adults to a brief training session presenting novel…

  8. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    ERIC Educational Resources Information Center

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  9. How Language Is Embodied in Bilinguals and Children with Specific Language Impairment

    PubMed Central

    Adams, Ashley M.

    2016-01-01

    This manuscript explores the role of embodied views of language comprehension and production in bilingualism and specific language impairment. Reconceptualizing popular models of bilingual language processing, the embodied theory is first extended to this area. Issues such as semantic grounding in a second language and potential differences between early and late acquisition of a second language are discussed. Predictions are made about how this theory informs novel ways of thinking about teaching a second language. Secondly, the comorbidity of speech, language, and motor impairments and how embodiment theory informs the discussion of the etiology of these impairments is examined. A hypothesis is presented suggesting that what is often referred to as specific language impairment may not be so specific due to widespread subclinical motor deficits in this population. Predictions are made about how weaknesses and instabilities in speech motor control, even at a subclinical level, may disrupt the neural network that connects acoustic input, articulatory motor plans, and semantics. Finally, I make predictions about how this information informs clinical practice for professionals such as speech language pathologists and occupational and physical therapists. These new hypotheses are placed within the larger framework of the body of work pertaining to semantic grounding, action-based language acquisition, and action-perception links that underlie language learning and conceptual grounding. PMID:27582716

  10. Coupled neural systems underlie the production and comprehension of naturalistic narrative speech

    PubMed Central

    Silbert, Lauren J.; Honey, Christopher J.; Simony, Erez; Poeppel, David; Hasson, Uri

    2014-01-01

    Neuroimaging studies of language have typically focused on either production or comprehension of single speech utterances such as syllables, words, or sentences. In this study we used a new approach to functional MRI acquisition and analysis to characterize the neural responses during production and comprehension of complex real-life speech. First, using a time-warp based intrasubject correlation method, we identified all areas that are reliably activated in the brains of speakers telling a 15-min-long narrative. Next, we identified areas that are reliably activated in the brains of listeners as they comprehended that same narrative. This allowed us to identify networks of brain regions specific to production and comprehension, as well as those that are shared between the two processes. The results indicate that production of a real-life narrative is not localized to the left hemisphere but recruits an extensive bilateral network, which overlaps extensively with the comprehension system. Moreover, by directly comparing the neural activity time courses during production and comprehension of the same narrative we were able to identify not only the spatial overlap of activity but also areas in which the neural activity is coupled across the speaker’s and listener’s brains during production and comprehension of the same narrative. We demonstrate widespread bilateral coupling between production- and comprehension-related processing within both linguistic and nonlinguistic areas, exposing the surprising extent of shared processes across the two systems. PMID:25267658
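    The speaker-listener coupling described above rests on correlating the speaker's production time courses with listeners' comprehension time courses, voxel by voxel and at different temporal lags. The helper below shows only that correlation step on placeholder arrays; the time-warping of the production data and the statistical thresholding used in the study are not reproduced.

      import numpy as np

      def speaker_listener_coupling(speaker_ts, listener_ts, lag=0):
          """Pearson correlation between speaker and listener time courses, voxel by voxel.

          speaker_ts, listener_ts: arrays of shape (n_timepoints, n_voxels);
          lag > 0 shifts the listener series forward, modelling a delayed listener response.
          """
          if lag > 0:
              speaker_ts, listener_ts = speaker_ts[:-lag], listener_ts[lag:]
          zs = (speaker_ts - speaker_ts.mean(0)) / speaker_ts.std(0)
          zl = (listener_ts - listener_ts.mean(0)) / listener_ts.std(0)
          return (zs * zl).mean(axis=0)        # one coupling value per voxel

      # coupling_map = speaker_listener_coupling(speaker_bold, listener_bold, lag=2)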

  11. The McGurk effect in children with autism and Asperger syndrome.

    PubMed

    Bebko, James M; Schroeder, Jessica H; Weiss, Jonathan A

    2014-02-01

    Children with autism may have difficulties in audiovisual speech perception, which has been linked to speech perception and language development. However, little has been done to examine children with Asperger syndrome as a group on tasks assessing audiovisual speech perception, despite this group's often greater language skills. Samples of children with autism, Asperger syndrome, and Down syndrome, as well as a typically developing sample, were presented with an auditory-only condition, a speech-reading condition, and an audiovisual condition designed to elicit the McGurk effect. Children with autism demonstrated unimodal performance at the same level as the other groups, yet showed a lower rate of the McGurk effect compared with the Asperger, Down and typical samples. These results suggest that children with autism may have unique intermodal speech perception difficulties linked to their representations of speech sounds. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  12. Online collaboration environments in telemedicine applications of speech therapy.

    PubMed

    Pierrakeas, C; Georgopoulos, V; Malandraki, G

    2005-01-01

    The use of telemedicine in speech and language pathology provides patients in rural and remote areas with access to quality rehabilitation services that are sufficient, accessible, and user-friendly leading to new possibilities in comprehensive and long-term, cost-effective diagnosis and therapy. This paper discusses the use of online collaboration environments for various telemedicine applications of speech therapy which include online group speech therapy scenarios, multidisciplinary clinical consulting team, and online mentoring and continuing education.

  13. Intracranial mapping of auditory perception: event-related responses and electrocortical stimulation.

    PubMed

    Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D

    2009-01-01

    We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
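    The sensitivity, specificity, and positive predictive value figures quoted above follow from a standard 2x2 comparison of each electrode site's ESM status (critical vs. non-critical for speech) against whether it showed a significant event-related response. The following minimal Python sketch shows how such figures are computed; the counts in the example are made up for illustration and are not the study's data.

```python
# Hypothetical worked example of the diagnostic statistics reported above,
# computed from a 2x2 table of electrode sites: ESM-critical vs. not,
# crossed with responsive vs. non-responsive on a given ERP measure.
def diagnostic_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # critical sites correctly flagged as responsive
    specificity = tn / (tn + fp)   # non-critical sites correctly rejected
    ppv = tp / (tp + fp)           # responsive sites that are actually critical
    return sensitivity, specificity, ppv

# Illustrative counts only -- not the study's data.
print(diagnostic_stats(tp=6, fp=13, fn=2, tn=60))
```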

  14. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540

  15. Development and preliminary evaluation of a new test of ongoing speech comprehension.

    PubMed

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina

    2016-01-01

    The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test. Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids. Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test. This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes.

  16. Relatively effortless listening promotes understanding and recall of medical instructions in older adults

    PubMed Central

    DiDonato, Roberta M.; Surprenant, Aimée M.

    2015-01-01

    Communication success under adverse conditions requires efficient and effective recruitment of both bottom-up (sensori-perceptual) and top-down (cognitive-linguistic) resources to decode the intended auditory-verbal message. Employing these limited capacity resources has been shown to vary across the lifespan, with evidence indicating that younger adults out-perform older adults for both comprehension and memory of the message. This study examined how sources of interference arising from the speaker (message spoken with conversational vs. clear speech technique), the listener (hearing-listening and cognitive-linguistic factors), and the environment (in competing speech babble noise vs. quiet) interact and influence learning and memory performance using more ecologically valid methods than has been done previously. The results suggest that when older adults listened to complex medical prescription instructions with “clear speech,” (presented at audible levels through insertion earphones) their learning efficiency, immediate, and delayed memory performance improved relative to their performance when they listened with a normal conversational speech rate (presented at audible levels in sound field). This better learning and memory performance for clear speech listening was maintained even in the presence of speech babble noise. The finding that there was the largest learning-practice effect on 2nd trial performance in the conversational speech when the clear speech listening condition was first is suggestive of greater experience-dependent perceptual learning or adaptation to the speaker's speech and voice pattern in clear speech. This suggests that experience-dependent perceptual learning plays a role in facilitating the language processing and comprehension of a message and subsequent memory encoding. PMID:26106353

  17. New Measures of Masked Text Recognition in Relation to Speech-in-Noise Perception and Their Associations with Age and Cognitive Abilities

    ERIC Educational Resources Information Center

    Besser, Jana; Zekveld, Adriana A.; Kramer, Sophia E.; Ronnberg, Jerker; Festen, Joost M.

    2012-01-01

    Purpose: In this research, the authors aimed to increase the analogy between Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) and Speech Reception Threshold (SRT; Plomp & Mimpen, 1979) and to examine the TRT's value in estimating cognitive abilities that are important for speech comprehension in noise. Method: The…

  18. Memory Effects of Speech and Gesture Binding: Cortical and Hippocampal Activation in Relation to Subsequent Memory Performance

    ERIC Educational Resources Information Center

    Straube, Benjamin; Green, Antonia; Weis, Susanne; Chatterjee, Anjan; Kircher, Tilo

    2009-01-01

    In human face-to-face communication, the content of speech is often illustrated by coverbal gestures. Behavioral evidence suggests that gestures provide advantages in the comprehension and memory of speech. Yet, how the human brain integrates abstract auditory and visual information into a common representation is not known. Our study investigates…

  19. A Comprehensive PEL-IEP Speech Curriculum Overview and Related Carryover and Summary Forms Designed for Speech Therapy Services for the Hearing-Impaired.

    ERIC Educational Resources Information Center

    Erlbaum, Sheila Judith

    1990-01-01

    This article presents a curriculum for speech-language-communication skills which combines the Present Education Level and Individualized Education Program formats for deaf and hearing-impaired students. Information on carryover procedures, parent/teacher contact, and report card format is presented. The curriculum was designed for preschool…

  20. The Impact of Augmentative and Alternative Communication Intervention on the Speech Production of Individuals with Developmental Disabilities: A Research Review

    ERIC Educational Resources Information Center

    Millar, Diane C.; Light, Janice C.; Schlosser, Ralf W.

    2006-01-01

    Purpose: This article presents the results of a meta-analysis to determine the effect of augmentative and alternative communication (AAC) on the speech production of individuals with developmental disabilities. Method: A comprehensive search of the literature published between 1975 and 2003, which included data on speech production before, during,…

  1. Hate Speech: The History of an American Controversy.

    ERIC Educational Resources Information Center

    Walker, Samuel

    Noting that no other country in the world offers protection to offensive speech, this book provides a comprehensive account of the history of the hate speech controversy in the United States. The book examines the issue, from the conflicts over the Ku Klux Klan in the 1920s and American Nazi groups in the 1930s, to the famous Skokie, Illinois…

  2. Assessment of Individuals with Primary Progressive Aphasia.

    PubMed

    Henry, Maya L; Grasso, Stephanie M

    2018-07-01

    Speech-language pathologists play a crucial role in the assessment and treatment of individuals with primary progressive aphasia (PPA). The speech-language evaluation is a critical aspect of the diagnostic and rehabilitative process, informing differential diagnosis as well as intervention planning and monitoring of cognitive-linguistic status over time. The evaluation should include a thorough case history and interview and a detailed assessment of speech-language and cognitive functions, with tasks designed to detect core and associated deficits outlined in current diagnostic criteria. In this paper, we review assessments that can be utilized to examine communication and cognition in PPA, including general aphasia batteries designed for stroke and/or progressive aphasia as well as tests of specific cognitive-linguistic functions, including naming, object/person knowledge, single-word and sentence comprehension, repetition, spontaneous speech/language production, motor speech, written language, and nonlinguistic cognitive domains. The comprehensive evaluation can inform diagnostic decision making and facilitate planning of interventions that are tailored to the patient's current status and likely progression of deficits. As such, the speech-language evaluation allows the medical team to provide individuals with PPA and their families with appropriate recommendations for the present and the future. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  3. [Multidimensionality of inner speech and its relationship with abnormal perceptions].

    PubMed

    Tamayo-Agudelo, William; Vélez-Urrego, Juan David; Gaviria-Castaño, Gilberto; Perona-Garcelán, Salvador

    Inner speech is a common human experience. Recently, there have been studies linking this experience with cognitive functions, such as problem solving, reading, writing, autobiographical memory, and some disorders, such as anxiety and depression. In addition, inner speech is recognised as the main source of auditory hallucinations. The main purpose of this study is to establish the factor structure of Varieties of Inner Speech Questionnaire (VISQ) in a sample of the Colombian population. Furthermore, it aims at establishing a link between VISQ and abnormal perceptions. This was a cross-sectional study in which 232 college students were assessed using the VISQ and the Cardiff Anomalous Perceptions Scale (CAPS). Through an exploratory factor analysis, a structure of three factors was found: Other Voices in the Internal Speech, Condensed Inner speech, and Dialogical/Evaluative Inner speech, all of them with acceptable levels of reliability. Gender differences were found in the second and third factor, with higher averages for women. Positive correlations were found among the three VISQ and the two CAPS factors: Multimodal Perceptual Alterations and Experiences Associated with the Temporal Lobe. The results are consistent with previous findings linking the factors of inner speech with the propensity to auditory hallucination, a phenomenon widely associated with temporal lobe abnormalities. The hallucinations associated with other perceptual systems, however, are still weakly explained. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  4. The roles of family history of dyslexia, language, speech production and phonological processing in predicting literacy progress.

    PubMed

    Carroll, Julia M; Mundy, Ian R; Cunningham, Anna J

    2014-09-01

    It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines what the relationships are between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.

  5. Estrogen and Comprehension of Metaphoric Speech in Women Suffering From Schizophrenia: Results of a Double-Blind, Placebo-Controlled Trial

    PubMed Central

    Bergemann, Niels; Parzer, Peter; Jaggy, Susanne; Auler, Beatrice; Mundt, Christoph; Maier-Braunleder, Sabine

    2008-01-01

    Objective: The effects of estrogen on comprehension of metaphoric speech, word fluency, and verbal ability were investigated in women suffering from schizophrenia. The issue of estrogen-dependent neuropsychological performance could be highly relevant because women with schizophrenia frequently suffer from hypoestrogenism. Method: A placebo-controlled, double-blind, crossover study using 17β-estradiol for replacement therapy and as an adjunct to a naturalistic maintenance antipsychotic treatment was carried out over a period of 8 months. Nineteen women (mean age = 38.0 years, SD = 9.9 years) with schizophrenia were included in the study. Comprehension of metaphoric speech was measured by a lexical decision paradigm, word fluency, and verbal ability by a paper-and-pencil test. Results: Significant improvement was seen for the activation of metaphoric meaning during estrogen treatment (P = .013); in contrast, no difference was found for the activation of concrete meaning under this condition. Verbal ability and word fluency did not improve under estrogen replacement therapy either. Conclusions: This is the very first study based on estrogen intervention instead of the physiological hormone changes to examine the estrogen effects on neuropsychological performance in women with schizophrenia. In addition, it is the first time that the effect of estrogen on metaphoric speech comprehension was investigated in this context. While in a previous study estrogen therapy as adjunct to a naturalistic maintenance treatment with antipsychotics did not show an effect on psychopathology measured by a rating scale, a significant effect of estrogen on the comprehension of metaphoric speech and/or concretism, a main feature of schizophrenic thought and language disturbance, was found in the present study. Because the improvement of formal thought disorders and language disturbances is crucial for social integration of patients with schizophrenia, the results may have implications for the treatment of these individuals. PMID:18156639

  6. Augmenting Comprehension of Speech in Noise with a Facial Avatar and Its Effect on Performance

    DTIC Science & Technology

    2010-12-01

    develop some aspects of speech more slowly than sighted children. In addition to “bleeping” or blanking the sound of censored words, network...the speech. Movie files were exported at a resolution of 600 by 800 pixels at 30 frames per second and were four seconds in length. It should be...noted that the speech, and synchronized facial movements, began one second after each movie file started. This delay was designed to ensure that the

  7. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    PubMed

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.
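    The partial mediation reported above is typically estimated with regression-based mediation analysis: the total effect of musical ability on discrimination is decomposed into an indirect path through acoustic sensitivity and a residual direct path. The sketch below illustrates that decomposition in Python on simulated data; the variable names, effect sizes, and the simple ordinary-least-squares approach are assumptions for illustration, not the authors' analysis (which also controlled for covariates and tested the indirect effect formally).

```python
import numpy as np

def mediation(x, m, y):
    """Decompose the total effect of x on y into an indirect path via the
    mediator m (a * b) and the remaining direct path (c')."""
    c = np.polyfit(x, y, 1)[0]                        # total effect of x on y
    a = np.polyfit(x, m, 1)[0]                        # effect of x on the mediator
    design = np.column_stack([np.ones_like(x), x, m])
    _, c_prime, b = np.linalg.lstsq(design, y, rcond=None)[0]
    return {"total": c, "indirect": a * b, "direct": c_prime}

# Simulated data standing in for musical ability, acoustic sensitivity,
# and non-native speech-sound discrimination (names and sizes are assumptions).
rng = np.random.default_rng(1)
musical_ability = rng.standard_normal(118)
acoustic_sensitivity = 0.6 * musical_ability + rng.standard_normal(118)
discrimination = 0.5 * acoustic_sensitivity + 0.1 * musical_ability + rng.standard_normal(118)
print(mediation(musical_ability, acoustic_sensitivity, discrimination))
```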

  8. Free Speech Yearbook 1975.

    ERIC Educational Resources Information Center

    Barbour, Alton, Ed.

    This issue of the "Free Speech Yearbook" contains the following: "Between Rhetoric and Disloyalty: Free Speech Standards for the Sunshire Soldier" by Richard A. Parker; "William A. Rehnquist: Ideologist on the Bench" by Peter E. Kane; "The First Amendment's Weakest Link: Government Regulation of Controversial…

  9. Mapping the cortical representation of speech sounds in a syllable repetition task.

    PubMed

    Markiewicz, Christopher J; Bohland, Jason W

    2016-11-01

    Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remains unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory. Copyright © 2016 Elsevier Inc. All rights reserved.
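    A cortical searchlight MVPA of the kind described above trains a classifier on the multi-voxel pattern in a small neighbourhood around each voxel and maps cross-validated decoding accuracy across the brain. The Python sketch below illustrates the general idea on synthetic single-trial response estimates; the array shapes, the cube-shaped neighbourhood, and the logistic-regression classifier are assumptions for illustration and are not the authors' surface-based pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic single-trial response estimates: (n_trials, x, y, z), one label per trial.
rng = np.random.default_rng(0)
betas = rng.standard_normal((60, 10, 10, 10))
labels = rng.integers(0, 3, size=60)        # e.g. three vowel categories
radius = 1                                  # searchlight radius in voxels

accuracy = np.zeros(betas.shape[1:])
for x in range(radius, betas.shape[1] - radius):
    for y in range(radius, betas.shape[2] - radius):
        for z in range(radius, betas.shape[3] - radius):
            # cube-shaped neighbourhood centred on (x, y, z)
            patch = betas[:, x - radius:x + radius + 1,
                             y - radius:y + radius + 1,
                             z - radius:z + radius + 1]
            features = patch.reshape(len(labels), -1)
            clf = LogisticRegression(max_iter=1000)
            accuracy[x, y, z] = cross_val_score(clf, features, labels, cv=5).mean()
# Voxels whose accuracy reliably exceeds chance (1/3 here) carry pattern information.
```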

  10. Neural mechanisms of phonemic restoration for speech comprehension revealed by magnetoencephalography.

    PubMed

    Sunami, Kishiko; Ishii, Akira; Takano, Sakurako; Yamamoto, Hidefumi; Sakashita, Tetsushi; Tanaka, Masaaki; Watanabe, Yasuyoshi; Yamane, Hideo

    2013-11-06

    In daily communication, we can usually still hear the spoken words as if they had not been masked and can comprehend the speech when spoken words are masked by background noise. This phenomenon is known as phonemic restoration. Since little is known about the neural mechanisms underlying phonemic restoration for speech comprehension, we aimed to identify the neural mechanisms using magnetoencephalography (MEG). Twelve healthy male volunteers with normal hearing participated in the study. Participants were requested to carefully listen to and understand recorded spoken Japanese stories, which were either played forward (forward condition) or in reverse (reverse condition), with their eyes closed. Several syllables of spoken words were replaced by 300-ms white-noise stimuli with an inter-stimulus interval of 1.6-20.3s. We compared MEG responses to white-noise stimuli during the forward condition with those during the reverse condition using time-frequency analyses. Increased 3-5 Hz band power in the forward condition compared with the reverse condition was continuously observed in the left inferior frontal gyrus [Brodmann's areas (BAs) 45, 46, and 47] and decreased 18-22 Hz band powers caused by white-noise stimuli were seen in the left transverse temporal gyrus (BA 42) and superior temporal gyrus (BA 22). These results suggest that the left inferior frontal gyrus and left transverse and superior temporal gyri are involved in phonemic restoration for speech comprehension. Our findings may help clarify the neural mechanisms of phonemic restoration as well as develop innovative treatment methods for individuals suffering from impaired speech comprehension, particularly in noisy environments. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
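    The band-power contrasts reported above (3-5 Hz increases and 18-22 Hz decreases for forward relative to reversed speech) can be approximated, in spirit, by band-pass filtering epochs time-locked to the noise bursts and comparing Hilbert-envelope power between conditions. The Python sketch below shows that generic computation on placeholder data; the sampling rate, filter order, and epoch layout are assumptions, and the study itself used source-localized MEG time-frequency analysis rather than this simplified sensor-level calculation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0  # assumed MEG sampling rate (Hz)

def mean_band_power(epochs, low, high, fs):
    """Mean Hilbert-envelope power in a frequency band, averaged over
    epochs of shape (n_trials, n_samples) time-locked to the noise bursts."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return (np.abs(hilbert(filtered, axis=-1)) ** 2).mean()

# Placeholder epochs for the two listening conditions.
forward_epochs = np.random.randn(40, int(2 * fs))
reverse_epochs = np.random.randn(40, int(2 * fs))

theta_diff = mean_band_power(forward_epochs, 3, 5, fs) - mean_band_power(reverse_epochs, 3, 5, fs)
beta_diff = mean_band_power(forward_epochs, 18, 22, fs) - mean_band_power(reverse_epochs, 18, 22, fs)
print(theta_diff, beta_diff)   # the study reports theta increases and beta decreases
```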

  11. Discriminating between auditory and motor cortical responses to speech and non-speech mouth sounds

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Scott, S.K.

    2012-01-01

    Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond in different ways to sounds. We used fMRI to investigate cortical responses to speech and non-speech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to the acoustic/phonetic properties while motor responses may show more generalised responses to the acoustic stimuli. PMID:21812557

  12. The Effect of Listening Drills Utilizing Compressed Speech and Standard Speech upon the Listening Comprehension of Second-Grade Children.

    ERIC Educational Resources Information Center

    Ihnat, Mary Ann

    This study was designed to investigate whether the listening ability of second-grade students could be improved using compressed-speech training as compared to normal listening training. The subjects were 95 second-grade pupils in a low-to-middle class suburban community in central New Jersey. The plan was to expose an experimental group to…

  13. Intelligibility and Acceptability Testing for Speech Technology

    DTIC Science & Technology

    1992-05-22

    information in memory (Luce, Feustel, and Pisoni, 1983). In high workload or multiple task situations, the added effort of listening to degraded speech can lead...the DRT provides diagnostic feature scores on six phonemic features: voicing, nasality, sustention, sibilation, graveness, and compactness, and on a...of other speech materials (e.g., polysyllabic words, paragraphs) and methods (memory, comprehension, reaction time) have been used to evaluate the

  14. Preschool Speech, Language Skills, and Reading at 7, 9, and 10 Years: Etiology of the Relationship

    ERIC Educational Resources Information Center

    Hayiou-Thomas, Marianna E.; Harlaar, Nicole; Dale, Philip S.; Plomin, Robert

    2010-01-01

    Purpose: To examine the etiology of the relationship between preschool speech and language, and later reading skills. Method: One thousand six hundred seventy-two children from the Twins Early Development Study (B. R. Oliver & R. Plomin, 2007) were given a comprehensive speech and language assessment at 4 1/2 years. Reading was assessed at 7, 9,…

  15. A Hierarchical Generative Framework of Language Processing: Linking Language Perception, Interpretation, and Production Abnormalities in Schizophrenia

    PubMed Central

    Brown, Meredith; Kuperberg, Gina R.

    2015-01-01

    Language and thought dysfunction are central to the schizophrenia syndrome. They are evident in the major symptoms of psychosis itself, particularly as disorganized language output (positive thought disorder) and auditory verbal hallucinations (AVHs), and they also manifest as abnormalities in both high-level semantic and contextual processing and low-level perception. However, the literatures characterizing these abnormalities have largely been separate and have sometimes provided mutually exclusive accounts of aberrant language in schizophrenia. In this review, we propose that recent generative probabilistic frameworks of language processing can provide crucial insights that link these four lines of research. We first outline neural and cognitive evidence that real-time language comprehension and production normally involve internal generative circuits that propagate probabilistic predictions to perceptual cortices — predictions that are incrementally updated based on prediction error signals as new inputs are encountered. We then explain how disruptions to these circuits may compromise communicative abilities in schizophrenia by reducing the efficiency and robustness of both high-level language processing and low-level speech perception. We also argue that such disruptions may contribute to the phenomenology of thought-disordered speech and false perceptual inferences in the language system (i.e., AVHs). This perspective suggests a number of productive avenues for future research that may elucidate not only the mechanisms of language abnormalities in schizophrenia, but also promising directions for cognitive rehabilitation. PMID:26640435

  16. Phonological and semantic processing during comprehension in Wernicke's aphasia: An N400 and Phonological Mapping Negativity Study.

    PubMed

    Robson, Holly; Pilkington, Emma; Evans, Louise; DeLuca, Vincent; Keidel, James L

    2017-06-01

    Comprehension impairments in Wernicke's aphasia are thought to result from a combination of impaired phonological and semantic processes. However, the relationship between these cognitive processes and language comprehension has only been inferred through offline neuropsychological tasks. This study used ERPs to investigate phonological and semantic processing during online single word comprehension. EEG was recorded in a group of participants with Wernicke's aphasia (n = 8) and control participants (n = 10) while they performed a word-picture verification task. The N400 and Phonological Mapping Negativity/Phonological Mismatch Negativity (PMN) event-related potential components were investigated as indices of semantic and phonological processing, respectively. Individuals with Wernicke's aphasia displayed reduced and inconsistent N400 and PMN effects in comparison to control participants. Reduced N400 effects in the WA group were simulated in the control group by artificially degrading speech perception. Correlation analyses in the Wernicke's aphasia group found that PMN but not N400 amplitude was associated with behavioural word-picture verification performance. The results confirm impairments at both phonological and semantic stages of comprehension in Wernicke's aphasia. However, reduced N400 responses in Wernicke's aphasia are at least partially attributable to earlier phonological processing impairments. The results provide further support for the traditional model of Wernicke's aphasia which claims a causative link between phonological processing and language comprehension impairments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. A little more conversation, a little less action - candidate roles for motor cortex in speech perception

    PubMed Central

    Scott, Sophie K; McGettigan, Carolyn; Eisner, Frank

    2014-01-01

    The motor theory of speech perception assumes that activation of the motor system is essential in the perception of speech. However, deficits in speech perception and comprehension do not arise from damage that is restricted to the motor cortex, few functional imaging studies reveal activity in motor cortex during speech perception, and the motor cortex is strongly activated by many different sound categories. Here, we evaluate alternative roles for the motor cortex in spoken communication and suggest a specific role in sensorimotor processing in conversation. We argue that motor-cortex activation is essential in joint speech, particularly for the timing of turn-taking. PMID:19277052

  18. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem

    PubMed Central

    Liu, Xunying; Zhang, Chao; Woodland, Phil; Fonteneau, Elisabeth

    2017-01-01

    There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain. PMID:28945744

  19. Relationship between quality of life instruments and phonatory function in tracheoesophageal speech with voice prosthesis.

    PubMed

    Miyoshi, Masayuki; Fukuhara, Takahiro; Kataoka, Hideyuki; Hagino, Hiroshi

    2016-04-01

    The use of tracheoesophageal speech with voice prosthesis (T-E speech) after total laryngectomy has increased recently as a method of vocalization following laryngeal cancer. Previous research has not investigated the relationship between quality of life (QOL) and phonatory function in those using T-E speech. This study aimed to demonstrate the relationship between phonatory function and both comprehensive health-related QOL and QOL related to speech in people using T-E speech. The subjects of the study were 20 male patients using T-E speech after total laryngectomy. At a visit to our clinic, the subjects underwent a phonatory function test and completed three questionnaires: the MOS 8-Item Short-Form Health Survey (SF-8), the Voice Handicap Index-10 (VHI-10), and the Voice-Related Quality of Life (V-RQOL) Measure. A significant correlation was observed between the physical component summary (PCS), a summary score of SF-8, and VHI-10. Additionally, a significant correlation was observed between the SF-8 mental component summary (MCS) and both VHI-10 and VRQOL. Significant correlations were also observed between voice intensity in the phonatory function test and both VHI-10 and V-RQOL. Finally, voice intensity was significantly correlated with the SF-8 PCS. QOL questionnaires and phonatory function tests showed that, in people using T-E speech after total laryngectomy, voice intensity was correlated with comprehensive QOL, including physical and mental health. This finding suggests that voice intensity can be used as a performance index for speech rehabilitation.

  20. An integrated approach to improving noisy speech perception

    NASA Astrophysics Data System (ADS)

    Koval, Serguei; Stolbov, Mikhail; Smirnova, Natalia; Khitrov, Mikhail

    2002-05-01

    For a number of practical purposes and tasks, experts have to decode speech recordings of very poor quality. A combination of techniques is proposed to improve intelligibility and quality of distorted speech messages and thus facilitate their comprehension. Along with the application of noise cancellation and speech signal enhancement techniques removing and/or reducing various kinds of distortions and interference (primarily unmasking and normalization in time and frequency fields), the approach incorporates optimal listener expert tactics based on selective listening, nonstandard binaural listening, accounting for short-term and long-term human ear adaptation to noisy speech, as well as some methods of speech signal enhancement to support speech decoding during listening. The approach integrating the suggested techniques ensures high-quality ultimate results and has successfully been applied by Speech Technology Center experts and by numerous other users, mainly forensic institutions, to perform noisy speech records decoding for courts, law enforcement and emergency services, accident investigation bodies, etc.

  1. 42 CFR 485.58 - Condition of participation: Comprehensive rehabilitation program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the services on its premises. (2) Exceptions. Physical therapy, occupational therapy, and speech... rehabilitation program that includes, at a minimum, physicians' services, physical therapy services, and social... patient and the physical therapist, occupational therapist, or speech-language pathologist, as appropriate...

  2. Deconstructing Comprehensibility: Identifying the Linguistic Influences on Listeners' L2 Comprehensibility Ratings

    ERIC Educational Resources Information Center

    Isaacs, Talia; Trofimovich, Pavel

    2012-01-01

    Comprehensibility, a major concept in second language (L2) pronunciation research that denotes listeners' perceptions of how easily they understand L2 speech, is central to interlocutors' communicative success in real-world contexts. Although comprehensibility has been modeled in several L2 oral proficiency scales--for example, the Test of English…

  3. Speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions which use repeaters to compensate for the loss in signal strength on transmission links also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or equivalently the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.
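    As a concrete example of the waveform-coding family mentioned above, the sketch below implements mu-law companding of the kind used in 8-bit telephone-grade PCM (G.711-style): the waveform is compressed logarithmically before quantization so that quiet speech segments keep proportionally more resolution. This is a generic illustration of waveform coding, not a description of any particular system discussed in this record.

```python
import numpy as np

MU = 255.0  # companding parameter used in 8-bit telephone-grade PCM

def mu_law_encode(x, mu=MU):
    """Logarithmically compress a waveform in [-1, 1] and quantize to 8 bits."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.round((y + 1) / 2 * 255).astype(np.uint8)

def mu_law_decode(codes, mu=MU):
    """Expand 8-bit codes back to an approximate waveform in [-1, 1]."""
    y = codes.astype(np.float64) / 255 * 2 - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

t = np.linspace(0, 1, 8000, endpoint=False)
speech_like = 0.5 * np.sin(2 * np.pi * 220 * t)      # stand-in for one second of speech
reconstructed = mu_law_decode(mu_law_encode(speech_like))
print(np.max(np.abs(speech_like - reconstructed)))   # small residual quantization error
```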

  4. Impairments of speech fluency in Lewy body spectrum disorder.

    PubMed

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Development and preliminary evaluation of a new test of ongoing speech comprehension

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. Design We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test. Study Sample Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids. Results Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test. Conclusions This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes. PMID:26158403

  6. Telephone speech comprehension in children with multichannel cochlear implants.

    PubMed

    Aronson, L; Estienne, P; Arauz, S L; Pallante, S A

    1997-11-01

    Telephone speech comprehension is being evaluated in six prelingually deaf children implanted with the Nucleus 22 prosthesis fitted with the Speak strategy. All of them have had at least 1.5 years of experience with their implant. When the tests began, they had already had at least 2 months' experience with the same map in their speech processor. The children were trained in the use of the telephone as part of the rehabilitation program. None of them used it regularly but as a game that they found very entertaining. A special battery, the Bate-fon (batería para teléfono = telephone battery), was designed for training and evaluation purposes. It includes the five Spanish vowels in isolation, diphthongs, onomatopoetic animal voices, two-syllable, and three-syllable words. The tests were administered 1.5-2 years after the switch-on of their speech processor. Standard acoustic telephone coupling was used. The speech material was presented to the child on colored cards. Stimuli were presented twice. Children were informed when the response was incorrect. Averaged results indicated that the percentages of correct responses for all the speech material increase in the second presentation. All children have shown some degree of telephone communication abilities. As a result of the training, some of the children are using the telephone to communicate with their families.

  7. Effects of reverberation time on the cognitive load in speech communication: theoretical considerations.

    PubMed

    Kjellberg, A

    2004-01-01

    The paper presents a theoretical analysis of possible effects of reverberation time on the cognitive load in speech communication. Speech comprehension requires not only phonological processing of the spoken words; simultaneously, this information must be further processed and stored. All this processing takes place in the working memory, which has a limited processing capacity. The more resources that are allocated to word identification, the fewer resources are therefore left for the further processing and storing of the information. Reverberation conditions that allow the identification of almost all words may therefore still interfere with speech comprehension and memory storing. These problems are likely to be especially serious in situations where speech has to be followed continuously for a long time. An unfavourable reverberation time (RT) could then contribute to the development of cognitive fatigue, which means that working memory resources are gradually reduced. RT may also affect the cognitive load in two other ways: RT may change the distracting effects of a sound and a person's mood. Both effects could influence the cognitive load of a listener. It is argued that we need studies of RT effects in realistic long-lasting listening situations to better understand the effect of RT on speech communication. Furthermore, the effects of RT on distraction and mood need to be better understood.

  8. Co-speech iconic gestures and visuo-spatial working memory.

    PubMed

    Wu, Ying Choon; Coulson, Seana

    2014-11-01

    Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Phase-Locked Responses to Speech in Human Auditory Cortex are Enhanced During Comprehension

    PubMed Central

    Peelle, Jonathan E.; Gross, Joachim; Davis, Matthew H.

    2013-01-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners’ ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction. PMID:22610394
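    Cerebro-acoustic phase locking of the kind analysed above is commonly quantified by extracting the instantaneous phase of the neural signal and of the speech envelope in the 4-7 Hz band and measuring the consistency of their phase difference. The Python sketch below shows one generic way to compute such a phase-locking value on placeholder data; the sampling rate, filter settings, and use of a single sensor time course are assumptions, not the authors' MEG source-space analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0  # assumed common sampling rate for the neural signal and the envelope

def band_phase(signal, fs, low=4.0, high=7.0):
    """Instantaneous phase of a signal band-passed to the 4-7 Hz range."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, signal)))

def phase_locking_value(neural, envelope, fs):
    """Consistency of the phase difference between the neural signal and the
    acoustic envelope: 1 = perfect phase locking, 0 = no consistent relation."""
    dphi = band_phase(neural, fs) - band_phase(envelope, fs)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Placeholder data standing in for one sensor time course and one sentence envelope.
neural = np.random.randn(int(10 * fs))
envelope = np.abs(np.random.randn(int(10 * fs)))
print(phase_locking_value(neural, envelope, fs))
```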

  10. Phase-locked responses to speech in human auditory cortex are enhanced during comprehension.

    PubMed

    Peelle, Jonathan E; Gross, Joachim; Davis, Matthew H

    2013-06-01

    A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.

  11. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    PubMed Central

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language. PMID:24904497

  12. Methods of Improving Speech Intelligibility for Listeners with Hearing Resolution Deficit

    PubMed Central

    2012-01-01

    Methods developed for real-time time scale modification (TSM) of the speech signal are presented. They are based on the non-uniform, speech-rate-dependent SOLA (Synchronous Overlap and Add) algorithm. The influence of the proposed methods on the intelligibility of speech was investigated for two separate groups of listeners, i.e. hearing-impaired children and elderly listeners. It was shown that for speech with an average rate equal to or higher than 6.48 vowels/s, all of the proposed methods have a statistically significant impact on the improvement of speech intelligibility for hearing-impaired children with reduced hearing resolution, and one of the proposed methods significantly improves comprehension of speech in the group of elderly listeners with reduced hearing resolution. Virtual slides http://www.diagnosticpathology.diagnomx.eu/vs/2065486371761991 PMID:23009662
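    SOLA-style time-scale modification changes speech rate without shifting pitch by re-spacing overlapping analysis frames and aligning each frame to the existing output by cross-correlation before overlap-adding. The Python sketch below is a minimal uniform-rate illustration of that idea; the frame, overlap, and search-range parameters are arbitrary assumptions, and the paper's method is additionally non-uniform and speech-rate dependent.

```python
import numpy as np

def sola_stretch(x, rate, frame=1024, overlap=256, search=128):
    """Change speech rate without changing pitch (rate < 1 slows down,
    rate > 1 speeds up). Frames are read from the input at a spacing of
    (frame - overlap) * rate; each new frame is shifted by up to `search`
    samples to maximise correlation with the output tail (the
    "synchronous" step) before being overlap-added with a linear fade."""
    analysis_hop = int((frame - overlap) * rate)
    out = np.array(x[:frame], dtype=float)
    pos = analysis_hop
    fade = np.linspace(0.0, 1.0, overlap)
    while pos + frame + search <= len(x):
        tail = out[-overlap:]
        best_shift, best_corr = 0, -np.inf
        for shift in range(search):
            seg = x[pos + shift:pos + shift + overlap]
            corr = float(np.dot(tail, seg))
            if corr > best_corr:
                best_corr, best_shift = corr, shift
        new = x[pos + best_shift:pos + best_shift + frame].astype(float)
        out[-overlap:] = tail * (1.0 - fade) + new[:overlap] * fade
        out = np.concatenate([out, new[overlap:]])
        pos += analysis_hop
    return out

# Example: slow a (placeholder) 16 kHz signal to roughly 70% of its original rate.
signal = np.random.randn(16000)
slowed = sola_stretch(signal, rate=0.7)
print(len(signal), len(slowed))
```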

  13. Anatomy and Physiology of the Speech Mechanism.

    ERIC Educational Resources Information Center

    Sheets, Boyd V.

    This monograph on the anatomical and physiological aspects of the speech mechanism stresses the importance of a general understanding of the process of verbal communication. Contents include "Positions of the Body," "Basic Concepts Linked with the Speech Mechanism," "The Nervous System," "The Respiratory System--Sound-Power Source," "The…

  14. Effects of language experience on pre-categorical perception: Distinguishing general from specialized processes in speech perception.

    PubMed

    Iverson, Paul; Wagner, Anita; Rosen, Stuart

    2016-04-01

    Cross-language differences in speech perception have traditionally been linked to phonological categories, but it has become increasingly clear that language experience has effects beginning at early stages of perception, which blurs the accepted distinctions between general and speech-specific processing. The present experiments explored this distinction by playing stimuli to English and Japanese speakers that manipulated the acoustic form of English /r/ and /l/, in order to determine how acoustically natural and phonologically identifiable a stimulus must be for cross-language discrimination differences to emerge. Discrimination differences were found for stimuli that did not sound subjectively like speech or /r/ and /l/, but overall they were strongly linked to phonological categorization. The results thus support the view that phonological categories are an important source of cross-language differences, but also show that these differences can extend to stimuli that do not clearly sound like speech.

  15. Children and adults integrate talker and verb information in online processing.

    PubMed Central

    Borovsky, Arielle; Creel, Sarah

    2015-01-01

    Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of non-linguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (aged 3-10) and college-aged adults. Participants were initially familiarized with two talkers who identified their respective roles (e.g. PRINCESS and PIRATE) before hearing a previously-introduced talker name an action and object (“I want to hold the sword,” in the pirate's voice). As the sentence was spoken, eye-movements were recorded to four objects that varied in relationship to the sentential talker and action (Target: SWORD, Talker-Related: SHIP, Action-Related: WAND, and Unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the Target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker's role and actively use this information to interpret speech. PMID:24611671

  16. Speaking under pressure: low linguistic complexity is linked to high physiological and emotional stress reactivity.

    PubMed

    Saslow, Laura R; McCoy, Shannon; van der Löwe, Ilmo; Cosley, Brandon; Vartan, Arbi; Oveis, Christopher; Keltner, Dacher; Moskowitz, Judith T; Epel, Elissa S

    2014-03-01

    What can a speech reveal about someone's state? We tested the idea that greater stress reactivity would relate to lower linguistic cognitive complexity while speaking. In Study 1, we tested whether heart rate and emotional stress reactivity to a stressful discussion would relate to lower linguistic complexity. In Studies 2 and 3, we tested whether a greater cortisol response to a standardized stressful task including a speech (Trier Social Stress Test) would be linked to speaking with less linguistic complexity during the task. We found evidence that measures of stress responsivity (emotional and physiological) and chronic stress are tied to variability in the cognitive complexity of speech. Taken together, these results provide evidence that our individual experiences of stress or "stress signatures" - how our body and mind react to stress both in the moment and over the longer term - are linked to how complex our speech is under stress. Copyright © 2013 Society for Psychophysiological Research.

  17. Slowed Speech Input has a Differential Impact on On-line and Off-line Processing in Children’s Comprehension of Pronouns

    PubMed Central

    Walenski, Matthew; Swinney, David

    2009-01-01

    The central question underlying this study revolves around how children process co-reference relationships—such as those evidenced by pronouns (him) and reflexives (himself)—and how a slowed rate of speech input may critically affect this process. Previous studies of child language processing have demonstrated that typical language developing (TLD) children as young as 4 years of age process co-reference relations in a manner similar to adults on-line. In contrast, off-line measures of pronoun comprehension suggest a developmental delay for pronouns (relative to reflexives). The present study examines dependency relations in TLD children (ages 5–13) and investigates how a slowed rate of speech input affects the unconscious (on-line) and conscious (off-line) parsing of these constructions. For the on-line investigations (using a cross-modal picture priming paradigm), results indicate that at a normal rate of speech TLD children demonstrate adult-like syntactic reflexes. At a slowed rate of speech the typical language developing children displayed a breakdown in automatic syntactic parsing (again, similar to the pattern seen in unimpaired adults). As demonstrated in the literature, our off-line investigations (sentence/picture matching task) revealed that these children performed much better on reflexives than on pronouns at a regular speech rate. However, at the slow speech rate, performance on pronouns was substantially improved, whereas performance on reflexives was not different than at the regular speech rate. We interpret these results in light of a distinction between fast automatic processes (relied upon for on-line processing in real time) and conscious reflective processes (relied upon for off-line processing), such that slowed speech input disrupts the former, yet improves the latter. PMID:19343495

  18. Perceptual analysis of speech following traumatic brain injury in childhood.

    PubMed

    Cahill, Louise M; Murdoch, Bruce E; Theodoros, Deborah G

    2002-05-01

    To investigate perceptually the speech dimensions, oromotor function, and speech intelligibility of a group of individuals with traumatic brain injury (TBI) acquired in childhood. The speech of 24 children with TBI was analysed perceptually and compared with that of a group of non-neurologically impaired children matched for age and sex. The 16 dysarthric TBI subjects were significantly less intelligible than the control subjects, and demonstrated significant impairment in 12 of the 33 speech dimensions rated. In addition, the eight non-dysarthric TBI subjects were significantly impaired in many areas of oromotor function on the Frenchay Dysarthria Assessment, indicating some degree of pre-clinical speech impairment. The results of the perceptual analysis are discussed in terms of the possible underlying pathophysiological bases of the deviant speech features identified, and the need for a comprehensive instrumental assessment, to more accurately determine the level of breakdown in the speech production mechanism in children following TBI.

  19. Children and Adults Integrate Talker and Verb Information in Online Processing

    ERIC Educational Resources Information Center

    Borovsky, Arielle; Creel, Sarah C.

    2014-01-01

    Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a…

  20. Perceptions of University Instructors When Listening to International Student Speech

    ERIC Educational Resources Information Center

    Sheppard, Beth; Elliott, Nancy; Baese-Berk, Melissa

    2017-01-01

    Intensive English Program (IEP) Instructors and content faculty both listen to international students at the university. For these two groups of instructors, this study compared perceptions of international student speech by collecting comprehensibility ratings and transcription samples for intelligibility scores. No significant differences were…

  1. CETA Vocational Linkage.

    ERIC Educational Resources Information Center

    Campbell-Thrane, Lucille

    An overview of cooperation between CETA (Comprehensive Employment and Training Act) and vocational education is presented in this speech, including a look at data on legislation, history, and funding sources. In light of CETA legislation's specificity on how local sponsors are to work with vocational educators, the speech gives excerpts and…

  2. Direct speech quotations promote low relative-clause attachment in silent reading of English.

    PubMed

    Yao, Bo; Scheepers, Christoph

    2018-07-01

    The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, "This dress is beautiful"), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between 'default' and 'simulated' implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD).

    PubMed

    Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela

    2015-03-01

    Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.

  4. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000
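
    The phrase "after controlling for processing speed and vocabulary size in the other language" describes a partial correlation. A generic residual-based version is sketched below in Python; the variable names and toy numbers are hypothetical and do not reproduce the study's measures or analyses.

      import numpy as np
      from scipy import stats

      def partial_correlation(x, y, controls):
          # Correlate x and y after regressing the control variables out of both.
          Z = np.column_stack([np.ones(len(x))] + list(controls))
          resid = lambda v: np.asarray(v, float) - Z @ np.linalg.lstsq(Z, np.asarray(v, float), rcond=None)[0]
          return stats.pearsonr(resid(x), resid(y))

      # Toy example: online processing efficiency vs. vocabulary in one language,
      # controlling for vocabulary in the other language (all values made up).
      rng = np.random.default_rng(2)
      english_vocab = rng.normal(size=26)
      spanish_vocab = rng.normal(size=26)
      spanish_efficiency = 0.8 * spanish_vocab + rng.normal(scale=0.5, size=26)
      print(partial_correlation(spanish_efficiency, spanish_vocab, [english_vocab]))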

  5. Learner Involvement and Comprehensible Input.

    ERIC Educational Resources Information Center

    Tsui, Amy B. M.

    1991-01-01

    Studies on comprehensible input generally emphasize how input is made comprehensible to the nonnative speaker by examining native speaker speech or teacher talk in the classroom. This paper uses Hong Kong secondary school data to show that only when modification devices involve learner participation do they serve as indicators of comprehensible…

  6. [Evaluation of language at 6 years in children born prematurely without cerebral palsy: prospective study of 55 children].

    PubMed

    Charollais, A; Stumpf, M-H; Beaugrand, D; Lemarchand, M; Radi, S; Pasquet, F; Khomsi, A; Marret, S

    2010-10-01

    Very premature birth carries a high risk of neurocognitive disabilities and learning disorders. Acquiring sufficient speech skills is crucial to good school performance. A prospective study was conducted in 2006 to evaluate speech development in 55 children born very prematurely in 2000 at the Rouen Teaching Hospital (Rouen, France), free of cerebral palsy, compared to 6-year-olds born at full term. A computerized speech assessment tool was used (Bilan Informatisé du Langage Oral, BILO II). In the premature-birth group, 49 % of the 6-year-olds had at least 1 score below the 25th percentile on 1 of the 8 BILO II tests. Significant speech impairments were noted for 2 components of speech, namely, comprehension and phonology. Oral comprehension scores no higher than the 10th percentile were obtained by 23 % of prematurely born children (P<0.02 vs controls). On word repetition tasks used to test phonology, 21 % of prematurely born children obtained scores no higher than the 10th percentile (P<0.01 vs controls). An evaluation of sensorimotor language prerequisites (constraints) in 30 of the 55 prematurely born children showed significant differences from the controls for word memory, visual attention, and buccofacial praxis. The speech development impairments found in 6-year-olds born very prematurely suggest a distinctive pattern of neurodevelopmental dysfunction that is consistent with the motor theory of speech perception. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  7. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour

    PubMed Central

    Özyürek, Aslı

    2014-01-01

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. PMID:25092664

  8. Relationship between the caregiver's report on the patient's spontaneous-speech and the Brief Aphasia Evaluation.

    PubMed

    Vigliecca, Nora Silvana

    2017-11-09

    To study the relationship between the caregiver's perception of the patient's impairment in spontaneous speech, according to an item of four questions administered by semi-structured interview, and the patient's performance in the Brief Aphasia Evaluation (BAE). 102 right-handed patients with focal brain lesions of different types and locations were examined. BAE is a valid and reliable instrument to assess aphasia. The caregiver's perception was correlated with the item of spontaneous speech, the total score and the three main factors of the BAE: Expression, Comprehension and Complementary factors. The precision (sensitivity/specificity) of the caregiver's perception of the patient's spontaneous speech was analyzed with reference to the presence or absence of disorder, according to the professional, on the BAE item of spontaneous speech. The studied correlation was satisfactory, being greater (higher than 80%) for the following indicators: the item of spontaneous speech, the Expression factor and the total score of the scale; the correlation was a little smaller (higher than 70%) for the Comprehension and Complementary factors. Comparing two cut-off points that evaluated the precision of the caregiver's perception, satisfactory results were observed in terms of sensitivity and specificity (>70%) with likelihood ratios higher than three. By using the median as the cut-off point, more satisfactory diagnostic discriminations were obtained. Interviewing the caregiver specifically on the patient's spontaneous speech, in an abbreviated form, provides relevant information for the aphasia diagnosis.

  9. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour.

    PubMed

    Özyürek, Aslı

    2014-09-19

    As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  10. The Hierarchical Cortical Organization of Human Speech Processing

    PubMed Central

    de Heer, Wendy A.; Huth, Alexander G.; Griffiths, Thomas L.

    2017-01-01

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more laterality in speech processing and associated semantic processing to higher levels of cortex than reported here. PMID:28588065
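
    The variance partitioning described above can be made concrete with two feature spaces: fitting encoding models on each space alone and on their concatenation lets the explained variance be split into unique and shared parts. The Python sketch below uses ridge regression on synthetic data for a single "voxel"; it shows the partitioning logic only and is not the authors' full voxelwise modeling pipeline.

      import numpy as np
      from sklearn.linear_model import Ridge

      def cv_r2(X_train, y_train, X_test, y_test, alpha=10.0):
          # Held-out R^2 of a ridge encoding model for one voxel.
          pred = Ridge(alpha=alpha).fit(X_train, y_train).predict(X_test)
          return 1.0 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)

      # Hypothetical "spectral" (A) and "semantic" (B) feature spaces and one voxel
      # whose response depends on both; all data are synthetic.
      rng = np.random.default_rng(0)
      A_tr, A_te = rng.normal(size=(800, 20)), rng.normal(size=(200, 20))
      B_tr, B_te = rng.normal(size=(800, 30)), rng.normal(size=(200, 30))
      y_tr = A_tr[:, 0] + 0.5 * B_tr[:, 0] + rng.normal(scale=0.5, size=800)
      y_te = A_te[:, 0] + 0.5 * B_te[:, 0] + rng.normal(scale=0.5, size=200)

      r2_a = cv_r2(A_tr, y_tr, A_te, y_te)
      r2_b = cv_r2(B_tr, y_tr, B_te, y_te)
      r2_ab = cv_r2(np.hstack([A_tr, B_tr]), y_tr, np.hstack([A_te, B_te]), y_te)
      print("unique A:", r2_ab - r2_b)      # variance only the spectral space explains
      print("unique B:", r2_ab - r2_a)      # variance only the semantic space explains
      print("shared:  ", r2_a + r2_b - r2_ab)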

  11. Using Computer Technology To Monitor Student Progress and Remediate Reading Problems.

    ERIC Educational Resources Information Center

    McCullough, C. Sue

    1995-01-01

    Focuses on research about application of text-to-speech systems in diagnosing and remediating word recognition, vocabulary knowledge, and comprehension disabilities. As school psychologists move toward a consultative model of service delivery, they need to know about technology such as speech synthesizers, digitizers, optical-character-recognition…

  12. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.

    2014-01-01

    Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…

  13. Grammar without Speech Production: The Case of Labrador Inuttitut Heritage Receptive Bilinguals

    ERIC Educational Resources Information Center

    Sherkina-Lieber, Marina; Perez-Leroux, Ana T.; Johns, Alana

    2011-01-01

    We examine morphosyntactic knowledge of Labrador Inuttitut by Inuit receptive bilinguals (RBs)--heritage speakers who are capable of comprehension, but produce little or no speech. A grammaticality judgment study suggests that RBs possess sensitivity to morphosyntactic violations, though to a lesser degree than fluent bilinguals. Low-proficiency…

  14. Foreign-Accented Speech Perception Ratings: A Multifactorial Case Study

    ERIC Educational Resources Information Center

    Kraut, Rachel; Wulff, Stefanie

    2013-01-01

    Seventy-eight native English speakers rated the foreign-accented speech (FAS) of 24 international students enrolled in an Intensive English programme at a public university in Texas on degree of accent, comprehensibility and communicative ability. Variables considered to potentially impact listeners' ratings were the sex of the speaker, the first…

  15. Segregating polymorphisms of FOXP2 are associated with measures of inner speech, speech fluency and strength of handedness in a healthy population.

    PubMed

    Crespi, Bernard; Read, Silven; Hurd, Peter

    2017-10-01

    We genotyped a healthy population for three haplotype-tagging FOXP2 SNPs, and tested for associations of these SNPs with strength of handedness and questionnaire-based metrics of inner speech characteristics (ISP) and speech fluency (FLU), as derived from the Schizotypal Personality Questionnaire-BR. Levels of mixed-handedness were positively correlated with ISP and FLU, supporting prior work on these two domains. Genotype for rs7799109, a SNP previously linked with lateralization of left frontal regions underlying language, was associated with degree of mixed handedness and with scores for ISP and FLU phenotypes. Genotype of rs1456031, which has previously been linked with auditory hallucinations, was also associated with ISP phenotypes. These results provide evidence that FOXP2 SNPs influence aspects of human inner speech and fluency that are related to lateralized phenotypes, and suggest that the evolution of human language, as mediated by the adaptive evolution of FOXP2, involved features of inner speech. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. An Analysis of Individual Differences in Recognizing Monosyllabic Words Under the Speech Intelligibility Index Framework

    PubMed Central

    Shen, Yi; Kern, Allison B.

    2018-01-01

    Individual differences in the recognition of monosyllabic words, either in isolation (NU6 test) or in sentence context (SPIN test), were investigated under the theoretical framework of the speech intelligibility index (SII). An adaptive psychophysical procedure, namely the quick-band-importance-function procedure, was developed to enable the fitting of the SII model to individual listeners. Using this procedure, the band importance function (i.e., the relative weights of speech information across the spectrum) and the link function relating the SII to recognition scores can be simultaneously estimated while requiring only 200 to 300 trials of testing. Octave-frequency band importance functions and link functions were estimated separately for NU6 and SPIN materials from 30 normal-hearing listeners who were naïve to speech recognition experiments. For each type of speech material, considerable individual differences in the spectral weights were observed in some but not all frequency regions. At frequencies where the greatest intersubject variability was found, the spectral weights were correlated between the two speech materials, suggesting that the variability in spectral weights reflected listener-originated factors. PMID:29532711
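
    Under the SII framework referenced above, intelligibility is predicted from an importance-weighted sum of band audibilities, and a monotonic link function maps that index onto a recognition score. The Python sketch below shows only that structure; the octave-band weights, audibilities, and the particular link-function form are illustrative assumptions, not the values fitted in the study.

      import numpy as np

      def speech_intelligibility_index(importance, audibility):
          # SII core: importance weights (summing to 1) times band audibilities (0-1).
          importance = np.asarray(importance, dtype=float)
          audibility = np.clip(np.asarray(audibility, dtype=float), 0.0, 1.0)
          return float(np.sum(importance * audibility))

      def link_function(sii, q=0.5, n=2.0):
          # One hypothetical monotonic mapping from SII to proportion correct.
          return (1.0 - 10.0 ** (-sii / q)) ** n

      # Illustrative octave-band importance weights and audibilities in noise.
      importance = [0.10, 0.15, 0.25, 0.30, 0.20]   # e.g., 250 Hz ... 4 kHz bands
      audibility = [1.00, 0.80, 0.60, 0.30, 0.10]   # high-frequency bands masked
      sii = speech_intelligibility_index(importance, audibility)
      print(sii, link_function(sii))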

  17. A Comparison of Five FMRI Protocols for Mapping Speech Comprehension Systems

    PubMed Central

    Binder, Jeffrey R.; Swanson, Sara J.; Hammeke, Thomas A.; Sabsevitz, David S.

    2008-01-01

    Aims: Many fMRI protocols for localizing speech comprehension have been described, but there has been little quantitative comparison of these methods. We compared five such protocols in terms of areas activated, extent of activation, and lateralization. Methods: FMRI BOLD signals were measured in 26 healthy adults during passive listening and active tasks using words and tones. Contrasts were designed to identify speech perception and semantic processing systems. Activation extent and lateralization were quantified by counting activated voxels in each hemisphere for each participant. Results: Passive listening to words produced bilateral superior temporal activation. After controlling for pre-linguistic auditory processing, only a small area in the left superior temporal sulcus responded selectively to speech. Active tasks engaged an extensive, bilateral attention and executive processing network. Optimal results (consistent activation and strongly lateralized pattern) were obtained by contrasting an active semantic decision task with a tone decision task. There was striking similarity between the network of brain regions activated by the semantic task and the network of brain regions that showed task-induced deactivation, suggesting that semantic processing occurs during the resting state. Conclusions: FMRI protocols for mapping speech comprehension systems differ dramatically in pattern, extent, and lateralization of activation. Brain regions involved in semantic processing were identified only when an active, non-linguistic task was used as a baseline, supporting the notion that semantic processing occurs whenever attentional resources are not controlled. Identification of these lexical-semantic regions is particularly important for predicting language outcome in patients undergoing temporal lobe surgery. PMID:18513352

  18. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    ERIC Educational Resources Information Center

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  19. The Atlanta Motor Speech Disorders Corpus: Motivation, Development, and Utility.

    PubMed

    Laures-Gore, Jacqueline; Russell, Scott; Patel, Rupal; Frankel, Michael

    2016-01-01

    This paper describes the design and collection of a comprehensive spoken language dataset from speakers with motor speech disorders in Atlanta, Ga., USA. This collaborative project aimed to gather a spoken database consisting of nonmainstream American English speakers residing in the Southeastern US in order to provide a more diverse perspective of motor speech disorders. Ninety-nine adults with an acquired neurogenic disorder resulting in a motor speech disorder were recruited. Stimuli include isolated vowels, single words, sentences with contrastive focus, sentences with emotional content and prosody, sentences with acoustic and perceptual sensitivity to motor speech disorders, as well as 'The Caterpillar' and 'The Grandfather' passages. Utility of this data in understanding the potential interplay of dialect and dysarthria was demonstrated with a subset of the speech samples existing in the database. The Atlanta Motor Speech Disorders Corpus will enrich our understanding of motor speech disorders through the examination of speech from a diverse group of speakers. © 2016 S. Karger AG, Basel.

  20. Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech

    PubMed Central

    Van Ackeren, Markus Johannes; Barbero, Francesca M; Mattioni, Stefania; Bottini, Roberto

    2018-01-01

    The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But, at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta-range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility and altered directionality when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives. PMID:29338838

  1. Nonverbal auditory agnosia with lesion to Wernicke's area.

    PubMed

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  2. New Developments in Understanding the Complexity of Human Speech Production.

    PubMed

    Simonyan, Kristina; Ackermann, Hermann; Chang, Edward F; Greenlee, Jeremy D

    2016-11-09

    Speech is one of the most distinctive features of human communication. Our ability to articulate our thoughts by means of speech production depends critically on the integrity of the motor cortex. Although the motor cortex was long thought to be a low-order brain region, exciting work in recent years is overturning this notion. Here, we highlight some of the major experimental advances in speech motor control research and discuss the emerging findings about the complexity of speech motor cortical organization and its large-scale networks. This review summarizes the talks presented at a symposium at the Annual Meeting of the Society for Neuroscience; it does not represent a comprehensive review of contemporary literature in the broader field of speech motor control. Copyright © 2016 the authors.

  3. The development of sentence interpretation: effects of perceptual, attentional and semantic interference.

    PubMed

    Leech, Robert; Aydelott, Jennifer; Symons, Germaine; Carnevale, Julia; Dick, Frederic

    2007-11-01

    How does the development and consolidation of perceptual, attentional, and higher cognitive abilities interact with language acquisition and processing? We explored children's (ages 5-17) and adults' (ages 18-51) comprehension of morphosyntactically varied sentences under several competing speech conditions that varied in the degree of attentional demands, auditory masking, and semantic interference. We also evaluated the relationship between subjects' syntactic comprehension and their word reading efficiency and general 'speed of processing'. We found that the interactions between perceptual and attentional processes and complex sentence interpretation changed considerably over the course of development. Perceptual masking of the speech signal had an early and lasting impact on comprehension, particularly for more complex sentence structures. In contrast, increased attentional demand in the absence of energetic auditory masking primarily affected younger children's comprehension of difficult sentence types. Finally, the predictability of syntactic comprehension abilities by external measures of development and expertise is contingent upon the perceptual, attentional, and semantic milieu in which language processing takes place.

  4. 42 CFR 460.104 - Participant assessment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) Basic requirement. The interdisciplinary team must conduct an initial comprehensive assessment on each... comprehensive assessment, each of the following members of the interdisciplinary team must evaluate the... individual team members, other professional disciplines (for example, speech-language pathology, dentistry...

  5. 42 CFR 460.104 - Participant assessment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) Basic requirement. The interdisciplinary team must conduct an initial comprehensive assessment on each... comprehensive assessment, each of the following members of the interdisciplinary team must evaluate the... individual team members, other professional disciplines (for example, speech-language pathology, dentistry...

  6. 42 CFR 460.104 - Participant assessment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) Basic requirement. The interdisciplinary team must conduct an initial comprehensive assessment on each... comprehensive assessment, each of the following members of the interdisciplinary team must evaluate the... individual team members, other professional disciplines (for example, speech-language pathology, dentistry...

  7. Content Validity of the Comprehensive ICF Core Set for Multiple Sclerosis from the Perspective of Speech and Language Therapists

    ERIC Educational Resources Information Center

    Renom, Marta; Conrad, Andrea; Bascuñana, Helena; Cieza, Alarcos; Galán, Ingrid; Kesselring, Jürg; Coenen, Michaela

    2014-01-01

    Background: The Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Multiple Sclerosis (MS) is a comprehensive framework to structure the information obtained in multidisciplinary clinical settings according to the biopsychosocial perspective of the International Classification of Functioning,…

  8. Lesion localization of speech comprehension deficits in chronic aphasia

    PubMed Central

    Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.

    2017-01-01

    Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469

  9. Lesion localization of speech comprehension deficits in chronic aphasia.

    PubMed

    Pillay, Sara B; Binder, Jeffrey R; Humphries, Colin; Gross, William L; Book, Diane S

    2017-03-07

    Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. © 2017 American Academy of Neurology.
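
    The VLSM logic used in both records above can be illustrated with a voxelwise group comparison: at each voxel, patients lesioned there are compared with patients spared there on the behavioral score. The Python sketch below uses synthetic data and omits the picture-naming covariate and the permutation-based thresholding such studies typically apply; the function name and parameters are illustrative only.

      import numpy as np
      from scipy import stats

      def vlsm_t_map(lesion_masks, scores, min_patients=5):
          # lesion_masks: (n_patients, n_voxels) binary; scores: (n_patients,).
          n_voxels = lesion_masks.shape[1]
          t_map = np.full(n_voxels, np.nan)
          for v in range(n_voxels):
              lesioned = scores[lesion_masks[:, v] == 1]
              spared = scores[lesion_masks[:, v] == 0]
              # Skip voxels lesioned (or spared) in too few patients to test.
              if len(lesioned) < min_patients or len(spared) < min_patients:
                  continue
              t_map[v] = stats.ttest_ind(lesioned, spared, equal_var=False).statistic
          return t_map

      # Toy example: 51 patients, 1000 voxels, worse scores when voxel 10 is lesioned.
      rng = np.random.default_rng(1)
      masks = rng.integers(0, 2, size=(51, 1000))
      scores = rng.normal(70, 10, size=51) - 15 * masks[:, 10]
      print(vlsm_t_map(masks, scores)[10])   # strongly negative t at the critical voxel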

  10. Greater freedom of speech on Web 2.0 correlates with dominance of views linking vaccines to autism.

    PubMed

    Venkatraman, Anand; Garg, Neetika; Kumar, Nilay

    2015-03-17

    It is suspected that Web 2.0 web sites, with a lot of user-generated content, often support viewpoints that link autism to vaccines. We assessed the prevalence of the views supporting a link between vaccines and autism online by comparing YouTube, Google and Wikipedia with PubMed. Freedom of speech is highest on YouTube and progressively decreases for the others. Support for a link between vaccines and autism is most prominent on YouTube, followed by Google search results. It is far lower on Wikipedia and PubMed. Anti-vaccine activists use scientific arguments, certified physicians and official-sounding titles to gain credibility, while also leaning on celebrity endorsement and personalized stories. Online communities with greater freedom of speech lead to a dominance of anti-vaccine voices. Moderation of content by editors can offer balance between free expression and factual accuracy. Health communicators and medical institutions need to step up their activity on the Internet. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Online Lexical Competition during Spoken Word Recognition and Word Learning in Children and Adults

    ERIC Educational Resources Information Center

    Henderson, Lisa; Weighall, Anna; Brown, Helen; Gaskell, Gareth

    2013-01-01

    Lexical competition that occurs as speech unfolds is a hallmark of adult oral language comprehension crucial to rapid incremental speech processing. This study used pause detection to examine whether lexical competition operates similarly at 7-8 years and tested variables that influence "online" lexical activity in adults. Children…

  12. The Speech Behavior and Language Comprehension of Autistic Children. A Report of Research.

    ERIC Educational Resources Information Center

    Pronovost, Wilbert

    Thirteen institutionalized children from 4-1/2 to 14 years old, diagnosed as autistic, atypical, or childhood schizophrenic, were observed for three years to obtain a detailed description of their speech and language behavior. Case histories were assembled from available medical and psychological data. During a program of experimental relationship…

  13. Effects of Recurrent Otitis Media on Language, Speech, and Educational Achievement in Menominee Indian Children.

    ERIC Educational Resources Information Center

    Thielke, Helen M.; Shriberg, Lawrence D.

    1990-01-01

    Among 28 monolingual English-speaking Menominee Indian children, a history of otitis media was associated with significantly lower scores on measures of language comprehension and speech perception and production at ages 3-5, and on school standardized tests 2 years later. Contains 38 references. (SV)

  14. Mental Load in Listening, Speech Shadowing and Simultaneous Interpreting: A Pupillometric Study.

    ERIC Educational Resources Information Center

    Tommola, Jorma; Hyona, Jukka

    This study investigated the sensitivity of the pupillary response as an indicator of average mental load during three language processing tasks of varying complexity. The tasks included: (1) listening (without any subsequent comprehension testing); (2) speech shadowing (repeating a message in the same language while listening to it); and (3)…

  15. The Relationship between Pre-Treatment Clinical Profile and Treatment Outcome in an Integrated Stuttering Program

    ERIC Educational Resources Information Center

    Huinck, Wendy J.; Langevin, Marilyn; Kully, Deborah; Graamans, Kees; Peters, Herman F. M.; Hulstijn, Wouter

    2006-01-01

    A procedure for subtyping individuals who stutter and its relationship to treatment outcome is explored. Twenty-five adult participants of the Comprehensive Stuttering Program (CSP) were classified according to: (1) stuttering severity and (2) severity of negative emotions and cognitions associated with their speech problem. Speech characteristics…

  16. Reading Skills of Students with Speech Sound Disorders at Three Stages of Literacy Development

    ERIC Educational Resources Information Center

    Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.

    2013-01-01

    Purpose: The relationship of phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with…

  17. A Survey of Speech Programs in Community Colleges.

    ERIC Educational Resources Information Center

    Meyer, Arthur C.

    The rapid growth of community colleges in the last decade resulted in large numbers of students enrolled in programs previously unavailable to them in a single comprehensive institution. The purpose of this study was to gather and analyze data to provide information about the speech programs that community colleges created or expanded as a result…

  18. Speech-Associated Gestures, Broca's Area, and the Human Mirror System

    ERIC Educational Resources Information Center

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca's area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a "mirror" or…

  19. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    PubMed

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
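
    The error-driven learning the abstract refers to can be illustrated with a delta-rule (Rescorla-Wagner/Widrow-Hoff style) update that maps acoustic cue features straight to meaning outcomes, with no phone layer. The Python sketch below is a toy with three invented cues and two meanings; the published model operates on hundreds of thousands of acoustic summary cues learned from 20 hours of speech.

      import numpy as np

      def train_delta_rule(cue_matrix, outcome_matrix, lr=0.1, epochs=500):
          # Learn cue-to-meaning weights by repeatedly nudging predictions
          # toward observed outcomes (error-driven / discriminative learning).
          W = np.zeros((cue_matrix.shape[1], outcome_matrix.shape[1]))
          for _ in range(epochs):
              for cues, outcomes in zip(cue_matrix, outcome_matrix):
                  prediction = cues @ W
                  W += lr * np.outer(cues, outcomes - prediction)
          return W

      # Toy learning events: 3 hypothetical acoustic cues, 2 word meanings.
      cues = np.array([[1, 1, 0],
                       [1, 0, 1],
                       [0, 1, 1]], dtype=float)
      meanings = np.array([[1, 0],
                           [1, 0],
                           [0, 1]], dtype=float)
      W = train_delta_rule(cues, meanings)
      print(np.argmax(cues @ W, axis=1))   # recognized meaning per event -> [0 0 1]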

  20. A hypothesis on the biological origins and social evolution of music and dance.

    PubMed

    Wang, Tianyan

    2015-01-01

    The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern, interpreted through the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) is evoked by music and dance, and has biological and social functions, which in turn promote the evolution of music, dance and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is a combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically concrete.
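
    The Doppler link proposed above can be made concrete with the standard relation between a source's radial velocity and the pitch shift a stationary listener hears. The hypothesis itself does not specify a formula, so the short Python conversion below is only an illustration of the underlying physics.

      import math

      def doppler_shift_semitones(radial_velocity, speed_of_sound=343.0):
          # Pitch shift (in semitones) heard from a source moving toward (+) or
          # away from (-) a stationary listener at radial_velocity (m/s).
          frequency_ratio = speed_of_sound / (speed_of_sound - radial_velocity)
          return 12.0 * math.log2(frequency_ratio)

      # A source approaching at about 20 m/s sounds roughly one semitone sharp.
      print(doppler_shift_semitones(20.0))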

  1. Comprehension of idioms in adolescents with language-based learning disabilities compared to their typically developing peers.

    PubMed

    Qualls, Constance Dean; Lantz, Jennifer M; Pietrzyk, Rose M; Blood, Gordon W; Hammer, Carol Scheffner

    2004-01-01

    Adolescents with language-based learning disabilities (LBLD) often interpret idioms literally. When idioms are provided in an enriched context, comprehension is compromised further because of the LBLD student's inability to assign multiple meanings to words, assemble and integrate information, and go beyond a local referent to derive a global, coherent meaning. This study tested the effects of context and familiarity on comprehension of 24 idioms in 22 adolescents with LBLD. The students completed the Idiom Comprehension Test (ICT) [Language, Speech, and Hearing Services in Schools 30 (1999) 141; LSHSS 34 (2003) 69] in one of two conditions: in a story or during a verification task. Within each condition were three familiarity levels: high, moderate, and low. The LBLD adolescents' data were then compared to previously collected data from 21 age-, gender-, and reading ability-matched typically developing (TD) peers. The relations between reading and language literacy and idiom comprehension were also examined in the LBLD adolescents. Results showed that: (a) the LBLD adolescents generally performed poorly relative to their TD counterparts; however, the groups performed comparably on the high and moderate familiarity idioms in the verification condition; (b) the LBLD adolescents performed significantly better in the verification condition than in the story condition; and (c) reading ability was associated with comprehension of the low familiarity idioms in the story condition only. Findings are discussed relative to implications for speech-language pathologists (SLPs) and educators working with adolescents with LBLD. As a result of this activity, the participant will be able to (1) describe the importance of metalinguistic maturity for comprehension of idioms and other figures of speech; (2) understand the roles of context and familiarity when assessing idiom comprehension in adolescents with LBLD; and (3) critically evaluate assessments of idiom comprehension and determine their appropriateness for use with adolescents with LBLD.

  2. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  3. Language for Winning Hearts and Minds: Verb Aspect in U.S. Presidential Campaign Speeches for Engaging Emotion.

    PubMed

    Havas, David A; Chapp, Christopher B

    2016-01-01

    How does language influence the emotions and actions of large audiences? Functionally, emotions help address environmental uncertainty by constraining the body to support adaptive responses and social coordination. We propose emotions provide a similar function in language processing by constraining the mental simulation of language content to facilitate comprehension, and to foster alignment of mental states in message recipients. Consequently, we predicted that emotion-inducing language should be found in speeches specifically designed to create audience alignment - stump speeches of United States presidential candidates. We focused on phrases in the past imperfective verb aspect ("a bad economy was burdening us") that leave a mental simulation of the language content open-ended, and thus unconstrained, relative to past perfective sentences ("we were burdened by a bad economy"). As predicted, imperfective phrases appeared more frequently in stump versus comparison speeches, relative to perfective phrases. In a subsequent experiment, participants rated phrases from presidential speeches as more emotionally intense when written in the imperfective aspect compared to the same phrases written in the perfective aspect, particularly for sentences perceived as negative in valence. These findings are consistent with the notion that emotions have a role in constraining the comprehension of language, a role that may be used in communication with large audiences.

  4. Speech feature discrimination in deaf children following cochlear implantation

    NASA Astrophysics Data System (ADS)

    Bergeson, Tonya R.; Pisoni, David B.; Kirk, Karen Iler

    2002-05-01

    Speech feature discrimination is a fundamental perceptual skill that is often assumed to underlie word recognition and sentence comprehension performance. To investigate the development of speech feature discrimination in deaf children with cochlear implants, we conducted a retrospective analysis of results from the Minimal Pairs Test (Robbins et al., 1988) selected from patients enrolled in a longitudinal study of speech perception and language development. The MP test uses a 2AFC procedure in which children hear a word and select one of two pictures (bat-pat). All 43 children were prelingually deafened, received a cochlear implant before 6 years of age or between ages 6 and 9, and used either oral or total communication. Children were tested once every 6 months to 1 year for 7 years; not all children were tested at each interval. By 2 years postimplant, the majority of these children achieved near-ceiling levels of discrimination performance for vowel height, vowel place, and consonant manner. Most of the children also achieved plateaus but did not reach ceiling performance for consonant place and voicing. The relationship between speech feature discrimination, spoken word recognition, and sentence comprehension will be discussed. [Work supported by NIH/NIDCD Research Grant No. R01DC00064 and NIH/NIDCD Training Grant No. T32DC00012.]

  5. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.

  6. Objective speech quality evaluation of real-time speech coders

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Russell, W. H.; Huggins, A. W. F.

    1984-02-01

    This report describes the work performed in two areas: subjective testing of a real-time 16 kbit/s adaptive predictive coder (APC) and objective speech quality evaluation of real-time coders. The speech intelligibility of the APC coder was tested using the Diagnostic Rhyme Test (DRT), and the speech quality was tested using the Diagnostic Acceptability Measure (DAM) test, under eight operating conditions involving channel error, acoustic background noise, and tandem link with two other coders. The test results showed that the DRT and DAM scores of the APC coder equalled or exceeded the corresponding test scores of the 32 kbit/s CVSD coder. In the area of objective speech quality evaluation, the report describes the development, testing, and validation of a procedure for automatically computing several objective speech quality measures, given only the tape-recordings of the input speech and the corresponding output speech of a real-time speech coder.
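
    To make the idea of an objective speech quality measure concrete (the abstract does not name the specific measures developed in the report), the sketch below computes segmental SNR, a classic objective metric, from a time-aligned coder input and output. The frame length and clipping limits are illustrative assumptions, and the signals are synthetic; this is not the procedure from the report.

    ```python
    # Illustrative sketch (assumed parameters): segmental SNR between a reference
    # (coder input) signal and the corresponding coded output.
    import numpy as np

    def segmental_snr(reference, degraded, frame_len=256, eps=1e-10):
        """Average per-frame SNR in dB between two time-aligned signals."""
        n = min(len(reference), len(degraded))
        snrs = []
        for start in range(0, n - frame_len + 1, frame_len):
            ref = reference[start:start + frame_len]
            err = ref - degraded[start:start + frame_len]
            snr = 10.0 * np.log10((np.sum(ref**2) + eps) / (np.sum(err**2) + eps))
            snrs.append(np.clip(snr, -10.0, 35.0))  # common limits to reduce outlier influence
        return float(np.mean(snrs))

    # Example with synthetic signals: the "coded" output is the reference plus noise.
    rng = np.random.default_rng(0)
    reference = rng.standard_normal(16000)              # 1 s of pseudo-speech at 16 kHz
    degraded = reference + 0.05 * rng.standard_normal(16000)
    print(f"Segmental SNR: {segmental_snr(reference, degraded):.1f} dB")
    ```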

  7. Working memory predicts semantic comprehension in dichotic listening in older adults.

    PubMed

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Cognitive spare capacity: evaluation data and its association with comprehension of dynamic conversations

    PubMed Central

    Keidser, Gitte; Best, Virginia; Freeston, Katrina; Boyce, Alexandra

    2015-01-01

    It is well-established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble- and cafeteria-noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing. PMID:25999904

  9. Language Awareness and Perception of Connected Speech in a Second Language

    ERIC Educational Resources Information Center

    Kennedy, Sara; Blanchet, Josée

    2014-01-01

    To be effective second or additional language (L2) listeners, learners should be aware of typical processes in connected L2 speech (e.g. linking). This longitudinal study explored how learners' developing ability to perceive connected L2 speech was related to the quality of their language awareness. Thirty-two learners of L2 French at a university…

  10. On the Function of Stress Rhythms in Speech: Evidence of a Link with Grouping Effects on Serial Memory

    ERIC Educational Resources Information Center

    Boucher, Victor J.

    2006-01-01

    Language learning requires a capacity to recall novel series of speech sounds. Research shows that prosodic marks create grouping effects enhancing serial recall. However, any restriction on memory affecting the reproduction of prosody would limit the set of patterns that could be learned and subsequently used in speech. By implication, grouping…

  11. Is Discussion an Exchange of Ideas? On Education, Money, and Speech

    ERIC Educational Resources Information Center

    Backer, David I.

    2017-01-01

    How do we learn the link between speech and money? What is the process of formation that legitimates the logic whereby speech is equivalent to money? What are the experiences, events, and subjectivities that render the connection between currency and speaking/listening intuitive? As educators and researchers, what do we do and say to shore up this…

  12. Speaking under pressure: Low linguistic complexity is linked to high physiological and emotional stress reactivity

    PubMed Central

    Saslow, Laura R.; McCoy, Shannon; van der Löwe, Ilmo; Cosley, Brandon; Vartan, Arbi; Oveis, Christopher; Keltner, Dacher; Moskowitz, Judith T.; Epel, Elissa S.

    2014-01-01

    What can a speech reveal about someone's state? We tested the idea that greater stress reactivity would relate to lower linguistic cognitive complexity while speaking. In Study 1, we tested whether heart rate and emotional stress reactivity to a stressful discussion would relate to lower linguistic complexity. In Studies 2 and 3 we tested whether a greater cortisol response to a standardized stressful task including a speech (Trier Social Stress Test) would be linked to speaking with less linguistic complexity during the task. We found evidence that measures of stress responsivity (emotional and physiological) and chronic stress are tied to variability in the cognitive complexity of speech. Taken together, these results provide evidence that our individual experiences of stress or ‘stress signatures’—how our body and mind react to stress both in the moment and over the longer term—are linked to how complexly we speak under stress. PMID:24354732

  13. The Effects of Alcohol on the Emotional Displays of Whites in Interracial Groups

    PubMed Central

    Fairbairn, Catharine E.; Sayette, Michael A.; Levine, John M.; Cohn, Jeffrey F.; Creswell, Kasey G.

    2017-01-01

    Discomfort during interracial interactions is common among Whites in the U.S. and is linked to avoidance of interracial encounters. While the negative consequences of interracial discomfort are well-documented, understanding of its causes is still incomplete. Alcohol consumption has been shown to decrease negative emotions caused by self-presentational concern but increase negative emotions associated with racial prejudice. Using novel behavioral-expressive measures of emotion, we examined the impact of alcohol on displays of discomfort among 92 White individuals interacting in all-White or interracial groups. We used the Facial Action Coding System and comprehensive content-free speech analyses to examine affective and behavioral dynamics during these 36-minute exchanges (7.9 million frames of video data). Among Whites consuming nonalcoholic beverages, those assigned to interracial groups evidenced more facial and speech displays of discomfort than those in all-White groups. In contrast, among intoxicated Whites there were no differences in displays of discomfort between interracial and all-White groups. Results highlight the central role of self-presentational concerns in interracial discomfort and offer new directions for applying theory and methods from emotion science to the examination of intergroup relations. PMID:23356562

  14. The effects of alcohol on the emotional displays of Whites in interracial groups.

    PubMed

    Fairbairn, Catharine E; Sayette, Michael A; Levine, John M; Cohn, Jeffrey F; Creswell, Kasey G

    2013-06-01

    Discomfort during interracial interactions is common among Whites in the U.S. and is linked to avoidance of interracial encounters. While the negative consequences of interracial discomfort are well-documented, understanding of its causes is still incomplete. Alcohol consumption has been shown to decrease negative emotions caused by self-presentational concern but increase negative emotions associated with racial prejudice. Using novel behavioral-expressive measures of emotion, we examined the impact of alcohol on displays of discomfort among 92 White individuals interacting in all-White or interracial groups. We used the Facial Action Coding System and comprehensive content-free speech analyses to examine affective and behavioral dynamics during these 36-min exchanges (7.9 million frames of video data). Among Whites consuming nonalcoholic beverages, those assigned to interracial groups evidenced more facial and speech displays of discomfort than those in all-White groups. In contrast, among intoxicated Whites there were no differences in displays of discomfort between interracial and all-White groups. Results highlight the central role of self-presentational concerns in interracial discomfort and offer new directions for applying theory and methods from emotion science to the examination of intergroup relations.

  15. Learning to Comprehend Foreign-Accented Speech by Means of Production and Listening Training

    ERIC Educational Resources Information Center

    Grohe, Ann-Kathrin; Weber, Andrea

    2016-01-01

    The effects of production and listening training on the subsequent comprehension of foreign-accented speech were investigated in a training-test paradigm. During training, German nonnative (L2) and English native (L1) participants listened to a story spoken by a German speaker who replaced all English /θ/s with /t/ (e.g., *"teft" for…

  16. Slowed Speech Input Has a Differential Impact on On-Line and Off-Line Processing in Children's Comprehension of Pronouns

    ERIC Educational Resources Information Center

    Love, Tracy; Walenski, Matthew; Swinney, David

    2009-01-01

    The central question underlying this study revolves around how children process co-reference relationships--such as those evidenced by pronouns ("him") and reflexives ("himself")--and how a slowed rate of speech input may critically affect this process. Previous studies of child language processing have demonstrated that typical language…

  17. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    ERIC Educational Resources Information Center

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  18. Sequential Organization and Room Reverberation for Speech Segregation

    DTIC Science & Technology

    2012-02-28

    We have proposed two algorithms for sequential organization, an unsupervised clustering algorithm applicable to monaural recordings and a binaural ... algorithm that integrates monaural and binaural analyses. In addition, we have conducted speech intelligibility tests that firmly establish the ... comprehensive version is currently under review for journal publication. A binaural approach in room reverberation: Most existing approaches to binaural or

  19. Suppressed Alpha Oscillations Predict Intelligibility of Speech and its Acoustic Details

    PubMed Central

    Weisz, Nathan

    2012-01-01

    Modulations of human alpha oscillations (8–13 Hz) accompany many cognitive processes, but their functional role in auditory perception has proven elusive: Do oscillatory dynamics of alpha reflect acoustic details of the speech signal and are they indicative of comprehension success? Acoustically presented words were degraded in acoustic envelope and spectrum in an orthogonal design, and electroencephalogram responses in the frequency domain were analyzed in 24 participants, who rated word comprehensibility after each trial. First, the alpha power suppression during and after a degraded word depended monotonically on spectral and, to a lesser extent, envelope detail. The magnitude of this alpha suppression exhibited an additional and independent influence on later comprehension ratings. Second, source localization of alpha suppression yielded superior parietal, prefrontal, as well as anterior temporal brain areas. Third, multivariate classification of the time–frequency pattern across participants showed that patterns of late posterior alpha power allowed best for above-chance classification of word intelligibility. Results suggest that both magnitude and topography of late alpha suppression in response to single words can indicate a listener's sensitivity to acoustic features and the ability to comprehend speech under adverse listening conditions. PMID:22100354
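
    As context for the quantity being related to comprehension here, the sketch below shows one generic way to estimate alpha-band (8-13 Hz) power for a single epoch using Welch's method. The sampling rate, epoch length, and function names are assumptions for illustration, not the authors' analysis pipeline.

    ```python
    # Minimal sketch (assumed parameters): alpha-band (8-13 Hz) power for one
    # EEG epoch, estimated with Welch's method.
    import numpy as np
    from scipy.signal import welch

    def alpha_power(epoch, fs=500.0, band=(8.0, 13.0)):
        """Mean power spectral density within the alpha band for a 1-D epoch."""
        freqs, psd = welch(epoch, fs=fs, nperseg=int(fs))  # 1-second analysis windows
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(psd[mask].mean())

    # Example: a simulated 2-second epoch containing a 10 Hz component plus noise.
    fs = 500.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    print(f"Alpha-band power: {alpha_power(epoch, fs):.3f}")
    ```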

  20. Identification of a pathway for intelligible speech in the left temporal lobe

    PubMed Central

    Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.

    2017-01-01

    It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443

  1. Do age-related word retrieval difficulties appear (or disappear) in connected speech?

    PubMed

    Kavé, Gitit; Goral, Mira

    2017-09-01

    We conducted a comprehensive literature review of studies of word retrieval in connected speech in healthy aging and reviewed relevant aphasia research that could shed light on the aging literature. Four main hypotheses guided the review: (1) Significant retrieval difficulties would lead to reduced output in connected speech. (2) Significant retrieval difficulties would lead to a more limited lexical variety in connected speech. (3) Significant retrieval difficulties would lead to an increase in word substitution errors and in pronoun use as well as to greater dysfluency and hesitation in connected speech. (4) Retrieval difficulties on tests of single-word production would be associated with measures of word retrieval in connected speech. Studies on aging did not confirm these four hypotheses, unlike studies on aphasia that generally did. The review suggests that future research should investigate how context facilitates word production in old age.

  2. Speech Cues Contribute to Audiovisual Spatial Integration

    PubMed Central

    Bishop, Christopher W.; Miller, Lee M.

    2011-01-01

    Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral ‘what’ and dorsal ‘where’ pathways. PMID:21909378

  3. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    PubMed

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic Speech Sound Disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic and neuroradiological investigation aimed to gather information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male to female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Multi-sensory learning and learning to read.

    PubMed

    Blomert, Leo; Froyen, Dries

    2010-09-01

    The basis of literacy acquisition in alphabetic orthographies is the learning of the associations between the letters and the corresponding speech sounds. In spite of this primacy in learning to read, there is only scarce knowledge on how this audiovisual integration process works and which mechanisms are involved. Recent electrophysiological studies of letter-speech sound processing have revealed that normally developing readers take years to automate these associations and dyslexic readers hardly exhibit automation of these associations. It is argued that the reason for this effortful learning may reside in the nature of the audiovisual process that is recruited for the integration of in principle arbitrarily linked elements. It is shown that letter-speech sound integration does not resemble the processes involved in the integration of natural audiovisual objects such as audiovisual speech. The automatic symmetrical recruitment of the assumedly uni-sensory visual and auditory cortices in audiovisual speech integration does not occur for letter and speech sound integration. It is also argued that letter-speech sound integration only partly resembles the integration of arbitrarily linked unfamiliar audiovisual objects. Letter-sound integration and artificial audiovisual objects share the necessity of a narrow time window for integration to occur. However, they differ from these artificial objects, because they constitute an integration of partly familiar elements which acquire meaning through the learning of an orthography. Although letter-speech sound pairs share similarities with audiovisual speech processing as well as with unfamiliar, arbitrary objects, it seems that letter-speech sound pairs develop into unique audiovisual objects that furthermore have to be processed in a unique way in order to enable fluent reading and thus very likely recruit other neurobiological learning mechanisms than the ones involved in learning natural or arbitrary unfamiliar audiovisual associations. Copyright 2010 Elsevier B.V. All rights reserved.

  5. Reproduction of auditory and visual standards in monochannel cochlear implant users.

    PubMed

    Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna

    2004-01-01

    The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users, who displayed relatively good auditory comprehension, did not differ from that of normally hearing subjects for both modalities. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension on the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.

  6. Cross-Cultural Communication through Course Linkage: Utilizing Experiential Learning in Speech 110 (Introduction to Speech/Communication) & ESL 009 (Oral Skills).

    ERIC Educational Resources Information Center

    Mackler, Tobi; Savard, Theresa

    Taking advantage of the opportunity to heighten cultural awareness and create an intercultural exchange, this paper presents two articles that provide a summary of the rationale, methodology, and assignments used to teach the linked courses of an introductory speech communication course and an English-as-a-Second-Language Oral Skills course. The…

  7. The Teaching of Reading, Writing and Language in a Clinical Speech and Language Setting: A Blended Therapy Intervention Approach

    ERIC Educational Resources Information Center

    Ammons, Kerrie Allen

    2013-01-01

    With a growing body of research that supports a link between language and literacy, governing bodies in the field of speech and language pathology have recognized the need to reconsider the role of speech-language pathologists in addressing the emergent literacy needs of preschoolers who struggle with literacy and language concepts. This study…

  8. Look Who's Talking: Speech Style and Social Context in Language Input to Infants Are Linked to Concurrent and Future Speech Development

    ERIC Educational Resources Information Center

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2014-01-01

    Language input is necessary for language learning, yet little is known about whether, in natural environments, the speech style and social context of language input to children impacts language development. In the present study we investigated the relationship between language input and language development, examining both the style of parental…

  9. Adaptation to spectrally-rotated speech.

    PubMed

    Green, Tim; Rosen, Stuart; Faulkner, Andrew; Paterson, Ruth

    2013-08-01

    Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
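
    A minimal sketch of the transformation described above, under assumed parameters (16 kHz sampling rate, 6th-order Butterworth filters): one common recipe for spectral rotation low-pass filters the signal at 4 kHz, amplitude-modulates it with a 4 kHz cosine so that each frequency f maps to 4000 - f (inversion around 2 kHz), and low-pass filters again to remove the image band. This is a generic illustration, not necessarily the exact processing used in the study.

    ```python
    # Minimal sketch (assumed parameters): spectral rotation of low-pass filtered
    # speech around a 2 kHz center frequency via amplitude modulation.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def spectrally_rotate(signal, fs, center_hz=2000.0):
        """Invert the spectrum of `signal` around `center_hz` within the 0..2*center_hz band."""
        edge = 2.0 * center_hz                       # 4 kHz band edge
        b, a = butter(6, edge / (fs / 2.0))          # low-pass filter at the band edge
        lowpassed = filtfilt(b, a, signal)
        t = np.arange(len(signal)) / fs
        modulated = lowpassed * np.cos(2 * np.pi * edge * t)  # maps f to 4000 - f (and 4000 + f)
        return 2.0 * filtfilt(b, a, modulated)       # remove the image above 4 kHz

    # Example: a 500 Hz tone sampled at 16 kHz should come out close to 3500 Hz.
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    tone = np.sin(2 * np.pi * 500 * t)
    rotated = spectrally_rotate(tone, fs)
    peak_hz = np.abs(np.fft.rfft(rotated)).argmax() * fs / len(rotated)
    print(f"Dominant frequency after rotation: {peak_hz:.0f} Hz")
    ```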

  10. High-frequency neural activity predicts word parsing in ambiguous speech streams.

    PubMed

    Kösem, Anne; Basirat, Anahita; Azizi, Leila; van Wassenhove, Virginie

    2016-12-01

    During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept. Copyright © 2016 the American Physiological Society.

  11. High-frequency neural activity predicts word parsing in ambiguous speech streams

    PubMed Central

    Basirat, Anahita; Azizi, Leila; van Wassenhove, Virginie

    2016-01-01

    During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept. PMID:27605528

  12. A Method for Determining the Timing of Displaying the Speaker's Face and Captions for a Real-Time Speech-to-Caption System

    NASA Astrophysics Data System (ADS)

    Kuroki, Hayato; Ino, Shuichi; Nakano, Satoko; Hori, Kotaro; Ifukube, Tohru

    The authors of this paper have been studying a real-time speech-to-caption system using speech recognition technology with a “repeat-speaking” method. In this system, a “repeat-speaker” listens to a lecturer's voice and then speaks the lecturer's utterances back into a speech recognition computer. In trials at international conferences, this system achieved a caption accuracy of about 97% for Japanese-to-Japanese conversion and a voice-to-caption conversion time of about 4 seconds for English-to-English conversion, although achieving this performance was costly. In human communication, speech understanding depends not only on verbal information but also on non-verbal information such as the speaker's gestures and face and mouth movements. This led the authors to the idea of briefly buffering the information in a computer and then displaying the captions and the speaker's face movement images in a way that achieves higher comprehension. In this paper, we investigate the relationship between the display sequence and display timing of captions containing speech recognition errors and of the speaker's face movement images. The results show that the sequence “display the caption before the speaker's face image” improves comprehension of the captions. The sequence “display both simultaneously” shows an improvement only a few percent higher than for the question sentence, and the sequence “display the speaker's face image before the caption” shows almost no change. In addition, the sequence “display the caption 1 second before the speaker's face image” shows the largest improvement of all the conditions.

  13. Does Formal Assessment of Comprehension by SLT Agree with Teachers' Perceptions of Functional Comprehension Skills in the Classroom?

    ERIC Educational Resources Information Center

    Purse, Katie; Gardner, Hilary

    2013-01-01

    This study aimed to consider collaborative practice in contributing to joint assessment and producing appropriate referral of children to speech and language therapy (SLT). Results of formal testing of selected comprehension skills are compared with functional/classroom performance as rated by class teachers. Thirty children aged 6.5-8.4 years,…

  14. Accommodating Remedial Readers in the General Education Setting: Is Listening-while-Reading Sufficient to Improve Factual and Inferential Comprehension?

    ERIC Educational Resources Information Center

    Schmitt, Ara J.; Hale, Andrea D.; McCallum, Elizabeth; Mauck, Brittany

    2011-01-01

    Word reading accommodations are commonly applied in the general education setting in an attempt to improve student comprehension and learning of curriculum content. This study examined the effects of listening-while-reading (LWR) and silent reading (SR) using text-to-speech assistive technology on the comprehension of 25 middle-school remedial…

  15. Type of iconicity influences children's comprehension of gesture.

    PubMed

    Hodges, Leslie E; Özçalışkan, Şeyda; Williamson, Rebecca

    2018-02-01

    Children produce iconic gestures conveying action information earlier than the ones conveying attribute information (Özçalışkan, Gentner, & Goldin-Meadow, 2014). In this study, we ask whether children's comprehension of iconic gestures follows a similar pattern, also with earlier comprehension of iconic gestures conveying action. Children, ages 2-4 years, were presented with 12 minimally-informative speech+iconic gesture combinations, conveying either an action (e.g., open palm flapping as if bird flying) or an attribute (e.g., fingers spread as if bird's wings) associated with a referent. They were asked to choose the correct match for each gesture in a forced-choice task. Our results showed that children could identify the referent of an iconic gesture conveying characteristic action earlier (age 2) than the referent of an iconic gesture conveying characteristic attribute (age 3). Overall, our study identifies ages 2-3 as important in the development of comprehension of iconic co-speech gestures, and indicates that the comprehension of iconic gestures with action meanings is easier than, and may even precede, the comprehension of iconic gestures with attribute meanings. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies.

    PubMed

    Yang, Jie; Andric, Michael; Mathew, Mili M

    2015-10-01

    Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. How Age, Linguistic Status, and the Nature of the Auditory Scene Alter the Manner in Which Listening Comprehension Is Achieved in Multitalker Conversations.

    PubMed

    Avivi-Reich, Meital; Jakubczyk, Agnes; Daneman, Meredyth; Schneider, Bruce A

    2015-10-01

    We investigated how age and linguistic status affected listeners' ability to follow and comprehend 3-talker conversations, and the extent to which individual differences in language proficiency predict speech comprehension under difficult listening conditions. Younger and older L1s as well as young L2s listened to 3-talker conversations, with or without spatial separation between talkers, in either quiet or against moderate or high 12-talker babble background, and were asked to answer questions regarding their contents. After compensating for individual differences in speech recognition, no significant differences in conversation comprehension were found among the groups. As expected, conversation comprehension decreased as babble level increased. Individual differences in reading comprehension skill contributed positively to performance in younger EL1s and in young EL2s to a lesser degree but not in older EL1s. Vocabulary knowledge was significantly and positively related to performance only at the intermediate babble level. The results indicate that the manner in which spoken language comprehension is achieved is modulated by the listeners' age and linguistic status.

  18. Neural Basis of Action Understanding: Evidence from Sign Language Aphasia.

    PubMed

    Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory

    2013-01-01

    The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but on the other hand, evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system in which mirror neurons are best characterized and language systems which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror system related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.

  19. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    PubMed

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  20. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia.

    PubMed

    Fridriksson, Julius; Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris

    2013-11-01

    Non-fluent aphasia implies a relatively straightforward neurological condition characterized by limited speech output. However, it is an umbrella term for different underlying impairments affecting speech production. Several studies have sought the critical lesion location that gives rise to non-fluent aphasia. The results have been mixed but typically implicate anterior cortical regions such as Broca's area, the left anterior insula, and deep white matter regions. To provide a clearer picture of cortical damage in non-fluent aphasia, the current study examined brain damage that negatively influences speech fluency in patients with aphasia. It controlled for some basic speech and language comprehension factors in order to better isolate the contribution of different mechanisms to fluency, or its lack. Cortical damage was related to overall speech fluency, as estimated by clinical judgements using the Western Aphasia Battery speech fluency scale, diadochokinetic rate, rudimentary auditory language comprehension, and executive functioning (scores on a matrix reasoning test) in 64 patients with chronic left hemisphere stroke. A region of interest analysis that included brain regions typically implicated in speech and language processing revealed that non-fluency in aphasia is primarily predicted by damage to the anterior segment of the left arcuate fasciculus. An improved prediction model also included the left uncinate fasciculus, a white matter tract connecting the middle and anterior temporal lobe with frontal lobe regions, including the pars triangularis. Models that controlled for diadochokinetic rate, picture-word recognition, or executive functioning also revealed a strong relationship between anterior segment involvement and speech fluency. Whole brain analyses corroborated the findings from the region of interest analyses. An additional exploratory analysis revealed that involvement of the uncinate fasciculus adjudicated between Broca's and global aphasia, the two most common kinds of non-fluent aphasia. In summary, the current results suggest that the anterior segment of the left arcuate fasciculus, a white matter tract that lies deep to posterior portions of Broca's area and the sensory-motor cortex, is a robust predictor of impaired speech fluency in aphasic patients, even when motor speech, lexical processing, and executive functioning are included as co-factors. Simply put, damage to those regions results in non-fluent aphasic speech; when they are undamaged, fluent aphasias result.

  1. Damage to the anterior arcuate fasciculus predicts non-fluent speech production in aphasia

    PubMed Central

    Guo, Dazhou; Fillmore, Paul; Holland, Audrey; Rorden, Chris

    2013-01-01

    Non-fluent aphasia implies a relatively straightforward neurological condition characterized by limited speech output. However, it is an umbrella term for different underlying impairments affecting speech production. Several studies have sought the critical lesion location that gives rise to non-fluent aphasia. The results have been mixed but typically implicate anterior cortical regions such as Broca’s area, the left anterior insula, and deep white matter regions. To provide a clearer picture of cortical damage in non-fluent aphasia, the current study examined brain damage that negatively influences speech fluency in patients with aphasia. It controlled for some basic speech and language comprehension factors in order to better isolate the contribution of different mechanisms to fluency, or its lack. Cortical damage was related to overall speech fluency, as estimated by clinical judgements using the Western Aphasia Battery speech fluency scale, diadochokinetic rate, rudimentary auditory language comprehension, and executive functioning (scores on a matrix reasoning test) in 64 patients with chronic left hemisphere stroke. A region of interest analysis that included brain regions typically implicated in speech and language processing revealed that non-fluency in aphasia is primarily predicted by damage to the anterior segment of the left arcuate fasciculus. An improved prediction model also included the left uncinate fasciculus, a white matter tract connecting the middle and anterior temporal lobe with frontal lobe regions, including the pars triangularis. Models that controlled for diadochokinetic rate, picture-word recognition, or executive functioning also revealed a strong relationship between anterior segment involvement and speech fluency. Whole brain analyses corroborated the findings from the region of interest analyses. An additional exploratory analysis revealed that involvement of the uncinate fasciculus adjudicated between Broca’s and global aphasia, the two most common kinds of non-fluent aphasia. In summary, the current results suggest that the anterior segment of the left arcuate fasciculus, a white matter tract that lies deep to posterior portions of Broca’s area and the sensory-motor cortex, is a robust predictor of impaired speech fluency in aphasic patients, even when motor speech, lexical processing, and executive functioning are included as co-factors. Simply put, damage to those regions results in non-fluent aphasic speech; when they are undamaged, fluent aphasias result. PMID:24131592

  2. Neurophysiological Influence of Musical Training on Speech Perception

    PubMed Central

    Shahin, Antoine J.

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remain a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL. PMID:21716639

  3. Neurophysiological influence of musical training on speech perception.

    PubMed

    Shahin, Antoine J

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remain a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL.

  4. Processing Time and Question Type in the Comprehension of Compressed Speech with Adjunct Pictures.

    ERIC Educational Resources Information Center

    Tantiblarphol, Subhreawpun; Hughes, Lawson H.

    The effect of adding time for processing compressed speech and the effects of questions that gave adjunct pictures either a redundant or contextual function were investigated. Subjects were 144 fourth- and fifth-grade students randomly assigned to 24 groups. They listened individually to a 20-sentence story at either 225 or 300 words-per-minute…

  5. Error Monitoring in Speech Production: A Computational Test of the Perceptual Loop Theory.

    ERIC Educational Resources Information Center

    Hartsuiker, Robert J.; Kolk, Herman H. J.

    2001-01-01

    Tested whether an elaborated version of the perceptual loop theory (W. Levelt, 1983) and the main interruption rule was consistent with existing time course data (E. Blackmer and E. Mitton, 1991; C. Oomen and A. Postma, in press). The study suggests that including an inner loop through the speech comprehension system generates predictions that fit…

  6. Speech, Sign, or Multilingualism for Children with Hearing Loss: Quantitative Insights into Caregivers' Decision Making

    ERIC Educational Resources Information Center

    Crowe, Kathryn; McLeod, Sharynne; McKinnon, David H.; Ching, Teresa Y. C.

    2014-01-01

    Purpose: The authors sought to investigate the influence of a comprehensive range of factors on the decision making of caregivers of children with hearing loss regarding the use of speech, the use of sign, spoken language multilingualism, and spoken language choice. This is a companion article to the qualitative investigation described in Crowe,…

  7. Addressing Math Comprehension of Children with Attention, Speech, and Language Disabilities: A Case Study on Singapore Math

    ERIC Educational Resources Information Center

    Uzhansky, Jane

    2018-01-01

    The co-occurrence of learning disabilities (LD), such as speech and language impairment (SLI) and attention deficit disorder/attention deficit-hyperactivity disorder (ADD/ADHD), also classified as other health impairment (OHI), is significant. Many of these students are being placed in the general education setting and need to obtain the learning…

  8. L2 Learners' Assessments of Accentedness, Fluency, and Comprehensibility of Native and Nonnative German Speech

    ERIC Educational Resources Information Center

    O'Brien, Mary Grantham

    2014-01-01

    In early stages of classroom language learning, many adult second language (L2) learners communicate primarily with one another, yet we know little about which speech stream characteristics learners tune into or the extent to which they understand this lingua franca communication. In the current study, 25 native English speakers learning German as…

  9. Further Research on Speeded Speech as an Educational Medium. Final Report, Parts 1-5, July 1965-September 1967.

    ERIC Educational Resources Information Center

    Friedman, Herbert L.; And Others

    Under two grants from the New Educational Media Branch of the Office of Education, research conducted at the American Institutes for Research from 1963 through 1967 examined major variables in listening comprehension when college-age students are exposed to rate-controlled speech. The technique used to alter the rate of presentation of…

  10. Children's Comprehension and Use of Indirect Speech Acts: The Case of Soliciting Praise.

    ERIC Educational Resources Information Center

    Kovac, Ceil

    Children in school cooperate in the evaluation of their products and activities by teachers and other students by calling attention to these products and activities with various language strategies. The requests that someone notice something and/or praise it are the data base for this study. The unmarked speech act for this request type is in the…

  11. Development of Second Language French Oral Skills in an Instructed Setting: A Focus on Speech Ratings

    ERIC Educational Resources Information Center

    Trofimovich, Pavel; Kennedy, Sara; Blanchet, Josée

    2017-01-01

    This study examined the relationship between targeted pronunciation instruction in French as a second language (L2) and listener-based ratings of accent, comprehensibility, and fluency. The ratings by 20 French listeners evaluating the speech of 30 adult L2 French learners enrolled in a 15-week listening and speaking course targeting segments,…

  12. Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory.

    PubMed

    Boyle, R; Coltheart, V

    1996-05-01

    The effects of irrelevant sounds on reading comprehension and short-term memory were studied in two experiments. In Experiment 1, adults judged the acceptability of written sentences during irrelevant speech, accompanied and unaccompanied singing, instrumental music, and in silence. Sentences varied in syntactic complexity: Simple sentences contained a right-branching relative clause (The applause pleased the woman that gave the speech) and syntactically complex sentences included a centre-embedded relative clause (The hay that the farmer stored fed the hungry animals). Unacceptable sentences either sounded acceptable (The dog chased the cat that eight up all his food) or did not (The man praised the child that sight up his spinach). Decision accuracy was impaired by syntactic complexity but not by irrelevant sounds. Phonological coding was indicated by increased errors on unacceptable sentences that sounded correct. These error rates were unaffected by irrelevant sounds. Experiment 2 examined effects of irrelevant sounds on ordered recall of phonologically similar and dissimilar word lists. Phonological similarity impaired recall. Irrelevant speech reduced recall but did not interact with phonological similarity. The results of these experiments question assumptions about the relationship between speech input and phonological coding in reading and the short-term store.

  13. Disentangling syntax and intelligibility in auditory language comprehension.

    PubMed

    Friederici, Angela D; Kotz, Sonja A; Scott, Sophie K; Obleser, Jonas

    2010-03-01

    Studies of the neural basis of spoken language comprehension typically focus on aspects of auditory processing by varying signal intelligibility, or on higher-level aspects of language processing such as syntax. Most studies in either of these threads of language research report brain activation including peaks in the superior temporal gyrus (STG) and/or the superior temporal sulcus (STS), but it is not clear why these areas are recruited in functionally different studies. The current fMRI study aims to disentangle the functional neuroanatomy of intelligibility and syntax in an orthogonal design. The data substantiate functional dissociations between STS and STG in the left and right hemispheres: first, manipulations of speech intelligibility yield bilateral mid-anterior STS peak activation, whereas syntactic phrase structure violations elicit strongly left-lateralized mid STG and posterior STS activation. Second, ROI analyses indicate all interactions of speech intelligibility and syntactic correctness to be located in the left frontal and temporal cortex, while the observed right-hemispheric activations reflect less specific responses to intelligibility and syntax. Our data demonstrate that the mid-to-anterior STS activation is associated with increasing speech intelligibility, while the mid-to-posterior STG/STS is more sensitive to syntactic information within the speech. 2009 Wiley-Liss, Inc.

  14. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.

    PubMed

    Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella

    2009-03-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

  15. Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

    PubMed Central

    Hubbard, Amy L.; Wilson, Stephen M.; Callan, Daniel E.; Dapretto, Mirella

    2008-01-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions. PMID:18412134

  16. Characterising receptive language processing in schizophrenia using word and sentence tasks.

    PubMed

    Tan, Eric J; Yelland, Gregory W; Rossell, Susan L

    2016-01-01

    Language dysfunction is proposed to relate to the speech disturbances in schizophrenia, which are more commonly referred to as formal thought disorder (FTD). Presently, language production deficits in schizophrenia are better characterised than language comprehension difficulties. This study thus aimed to examine three aspects of language comprehension in schizophrenia: (1) the role of lexical processing, (2) meaning attribution for words and sentences, and (3) the relationship between comprehension and production. Fifty-seven schizophrenia/schizoaffective disorder patients and 48 healthy controls completed a clinical assessment and three language tasks assessing word recognition, synonym identification, and sentence comprehension. Poorer patient performance was expected on the latter two tasks. Recognition of word form was not impaired in schizophrenia, indicating intact lexical processing. Whereas single-word synonym identification was not significantly impaired, there was a tendency to attribute word meanings based on phonological similarity with increasing FTD severity. Importantly, there was a significant sentence comprehension deficit for processing deep structure, which correlated with FTD severity. These findings established a receptive language deficit in schizophrenia at the syntactic level. There was also evidence for a relationship between some aspects of language comprehension and speech production/FTD. Apart from indicating language as another mechanism in FTD aetiology, the data also suggest that remediating language comprehension problems may be an avenue to pursue in alleviating FTD symptomatology.

  17. Reading skills of students with speech sound disorders at three stages of literacy development.

    PubMed

    Skebo, Crysten M; Lewis, Barbara A; Freebairn, Lisa A; Tag, Jessica; Avrich Ciesla, Allison; Stein, Catherine M

    2013-10-01

    The relationship of phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). In a cross-sectional design, students ages 7;0 (years;months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children's literacy stages.

  18. Reading Skills of Students With Speech Sound Disorders at Three Stages of Literacy Development

    PubMed Central

    Skebo, Crysten M.; Lewis, Barbara A.; Freebairn, Lisa A.; Tag, Jessica; Ciesla, Allison Avrich; Stein, Catherine M.

    2015-01-01

    Purpose The relationship of phonological awareness, overall language, vocabulary, and nonlinguistic cognitive skills to decoding and reading comprehension was examined for students at 3 stages of literacy development (i.e., early elementary school, middle school, and high school). Students with histories of speech sound disorders (SSD) with and without language impairment (LI) were compared to students without histories of SSD or LI (typical language; TL). Method In a cross-sectional design, students ages 7;0 (years; months) to 17;9 completed tests that measured reading, language, and nonlinguistic cognitive skills. Results For the TL group, phonological awareness predicted decoding at early elementary school, and overall language predicted reading comprehension at early elementary school and both decoding and reading comprehension at middle school and high school. For the SSD-only group, vocabulary predicted both decoding and reading comprehension at early elementary school, and overall language predicted both decoding and reading comprehension at middle school and decoding at high school. For the SSD and LI group, overall language predicted decoding at all 3 literacy stages and reading comprehension at early elementary school and middle school, and vocabulary predicted reading comprehension at high school. Conclusion Although similar skills contribute to reading across the age span, the relative importance of these skills changes with children’s literacy stages. PMID:23833280

  19. Listeners' Comprehension of Uptalk in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tomlinson, John M., Jr.; Tree, Jean E. Fox

    2011-01-01

    Listeners' comprehension of phrase final rising pitch on declarative utterances, or "uptalk", was examined to test the hypothesis that prolongations might differentiate conflicting functions of rising pitch. In Experiment 1 we found that listeners rated prolongations as indicating more speaker uncertainty, but that rising pitch was unrelated to…

  20. Partnerships to Support Reading Comprehension for Students with Language Impairment

    ERIC Educational Resources Information Center

    Ehren, Barbara J.

    2006-01-01

    Students with language impairment often experience serious and far-reaching effects of reading comprehension problems on their academic performance. The complexity of the problems and the characteristics of effective intervention necessitate a collaborative approach among general education teachers, special education teachers, and speech-language…

  1. Listening Comprehension Training in Teaching English to Beginners.

    ERIC Educational Resources Information Center

    Thiele, Angelika; Schneibner-Herzig, Gudrun

    1983-01-01

    A test comparing two groups of beginning learners of English as a second language shows that teaching listening comprehension accompanied by prescribed gestures - "total physical response" - instead of speech production, provides better language acquisition than conventional methods, as well as less anxiety and higher motivation for…

  2. Investigating Joint Attention Mechanisms through Spoken Human-Robot Interaction

    ERIC Educational Resources Information Center

    Staudte, Maria; Crocker, Matthew W.

    2011-01-01

    Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's…

  3. Investigation of habitual pitch during free play activities for preschool-aged children.

    PubMed

    Chen, Yang; Kimelman, Mikael D Z; Micco, Katie

    2009-01-01

    This study compared the habitual pitch measured in two different speech activities (free play and a traditionally used structured speech activity) for normally developing preschool-aged children, to explore to what extent preschoolers vary their vocal pitch across speech environments. Habitual pitch measurements were conducted for 10 normally developing children (2 boys, 8 girls) between the ages of 31 months and 71 months during two activities: (1) free play; and (2) structured speech. In both activities, speech samples were recorded using a throat microphone connected to a wireless transmitter. The habitual pitch (in Hz) was measured for all collected speech samples using voice analysis software (Real-Time Pitch). Habitual pitch was significantly higher during free play than during the structured speech activities. In addition, no significant difference in habitual pitch was found across the various structured speech activities. The findings suggest that preschoolers' vocal usage is more effortful during free play than during structured activities, and that a comprehensive evaluation of a young child's voice should be based on speech/voice samples collected from both free play and structured activities.
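
    As an illustration of the kind of measurement described above, the sketch below estimates a habitual pitch as the median frame-wise fundamental frequency of a recorded sample using plain autocorrelation. It is a minimal stand-in for the Real-Time Pitch software named in the abstract; the frame length, F0 search range (75-500 Hz) and voicing threshold are illustrative assumptions.

```python
# Minimal sketch (not the Real-Time Pitch software used in the study):
# habitual pitch estimated as the median frame-wise F0 of a speech sample,
# using plain autocorrelation over voiced frames.
import numpy as np

def frame_f0(frame, sr, fmin=75.0, fmax=500.0):
    """Return an F0 estimate (Hz) for one frame, or None if it looks unvoiced."""
    frame = frame - frame.mean()
    if np.max(np.abs(frame)) < 1e-4:               # crude silence gate
        return None
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    if lag_max >= len(ac):
        return None
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    if ac[lag] < 0.3 * ac[0]:                      # weak periodicity -> treat as unvoiced
        return None
    return sr / lag

def habitual_pitch(signal, sr, frame_ms=40, hop_ms=10):
    """Median F0 (Hz) over all voiced frames of the recording."""
    flen, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    f0s = [frame_f0(signal[i:i + flen], sr) for i in range(0, len(signal) - flen, hop)]
    f0s = [f for f in f0s if f is not None]
    return float(np.median(f0s)) if f0s else float("nan")

# A 200 Hz synthetic "voice" should yield a habitual pitch near 200 Hz.
sr = 16000
t = np.arange(2 * sr) / sr
print(habitual_pitch(np.sin(2 * np.pi * 200 * t), sr))
```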

  4. Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age

    PubMed Central

    Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik

    2015-01-01

    Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259

  5. Characterizing speech and language pathology outcomes in stroke rehabilitation.

    PubMed

    Hatfield, Brooke; Millet, Deborah; Coles, Janice; Gassaway, Julie; Conroy, Brendan; Smout, Randall J

    2005-12-01

    The aim was to describe a subset of speech-language pathology (SLP) patients in the Post-Stroke Rehabilitation Outcomes Project and to examine outcomes for patients with low admission FIM levels of auditory comprehension and verbal expression. In this observational cohort study at five inpatient rehabilitation hospitals, patients (N=397) received post-stroke SLP with admission FIM cognitive components at levels 1 through 5; the outcome measure was the increase in comprehension and expression FIM scores from admission to discharge. Cognitively and linguistically complex SLP activities (problem-solving and executive functioning skills) were associated with greater likelihood of success in low- to mid-level functioning communicators in the acute post-stroke rehabilitation period. The results challenge common clinical practice by suggesting that use of high-level cognitively and linguistically complex SLP activities early in a patient's stay may result in more efficient practice and better outcomes, regardless of the patient's functional communication severity level on admission.

  6. Speech perception at the interface of neurobiology and linguistics.

    PubMed

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  7. Shot through with voices: Dissociation mediates the relationship between varieties of inner speech and auditory hallucination proneness

    PubMed Central

    Alderson-Day, Ben; McCarthy-Jones, Simon; Bedford, Sarah; Collins, Hannah; Dunne, Holly; Rooke, Chloe; Fernyhough, Charles

    2014-01-01

    Inner speech is a commonly experienced but poorly understood phenomenon. The Varieties of Inner Speech Questionnaire (VISQ; McCarthy-Jones & Fernyhough, 2011) assesses four characteristics of inner speech: dialogicality, evaluative/motivational content, condensation, and the presence of other people. Prior findings have linked anxiety and proneness to auditory hallucinations (AH) to these types of inner speech. This study extends that work by examining how inner speech relates to self-esteem and dissociation, and their combined impact upon AH-proneness. 156 students completed the VISQ and measures of self-esteem, dissociation and AH-proneness. Correlational analyses indicated that evaluative inner speech and other people in inner speech were associated with lower self-esteem and greater frequency of dissociative experiences. Dissociation and VISQ scores, but not self-esteem, predicted AH-proneness. Structural equation modelling supported a mediating role for dissociation between specific components of inner speech (evaluative and other people) and AH-proneness. Implications for the development of “hearing voices” are discussed. PMID:24980910

  8. EEG oscillations entrain their phase to high-level features of speech sound.

    PubMed

    Zoefel, Benedikt; VanRullen, Rufin

    2016-01-01

    Phase entrainment of neural oscillations, the brain's adjustment to rhythmic stimulation, is a central component in recent theories of speech comprehension: the alignment between brain oscillations and speech sound improves speech intelligibility. However, phase entrainment to everyday speech sound could also be explained by oscillations passively following the low-level periodicities (e.g., in sound amplitude and spectral content) of auditory stimulation-and not by an adjustment to the speech rhythm per se. Recently, using novel speech/noise mixture stimuli, we have shown that behavioral performance can entrain to speech sound even when high-level features (including phonetic information) are not accompanied by fluctuations in sound amplitude and spectral content. In the present study, we report that neural phase entrainment might underlie our behavioral findings. We observed phase-locking between electroencephalogram (EEG) and speech sound in response not only to original (unprocessed) speech but also to our constructed "high-level" speech/noise mixture stimuli. Phase entrainment to original speech and speech/noise sound did not differ in the degree of entrainment, but rather in the actual phase difference between EEG signal and sound. Phase entrainment was not abolished when speech/noise stimuli were presented in reverse (which disrupts semantic processing), indicating that acoustic (rather than linguistic) high-level features play a major role in the observed neural entrainment. Our results provide further evidence for phase entrainment as a potential mechanism underlying speech processing and segmentation, and for the involvement of high-level processes in the adjustment to the rhythm of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
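
    The phase-locking analysis described above can be illustrated with a minimal sketch: a phase-locking value (PLV) between a band-passed EEG channel and the amplitude envelope of the speech signal. This is not the authors' pipeline; the frequency band, filter settings, and use of a broadband Hilbert envelope are assumptions made for illustration, and both signals are assumed to share one sampling rate.

```python
# Minimal sketch (not the authors' pipeline): phase-locking value (PLV) between a
# band-passed EEG channel and the amplitude envelope of the speech signal.
# Assumes both signals are sampled at the same rate.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, sr, lo, hi, order=4):
    b, a = butter(order, [lo / (sr / 2), hi / (sr / 2)], btype="band")
    return filtfilt(b, a, x)

def plv(eeg, speech, sr, band=(2.0, 8.0)):
    """PLV between EEG and the speech envelope in a low-frequency (delta/theta) band."""
    envelope = np.abs(hilbert(speech))                    # broadband amplitude envelope
    phase_eeg = np.angle(hilbert(bandpass(eeg, sr, *band)))
    phase_env = np.angle(hilbert(bandpass(envelope, sr, *band)))
    return float(np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env)))))

# Synthetic check: an "EEG" trace carrying a 4 Hz oscillation aligned with the
# 4 Hz envelope of a modulated tone should give a much higher PLV than pure noise.
sr = 250
t = np.arange(20 * sr) / sr
speech = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 100 * t)
eeg_entrained = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.randn(len(t))
print(plv(eeg_entrained, speech, sr))              # high (entrained)
print(plv(np.random.randn(len(t)), speech, sr))    # much lower (not entrained)
```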

  9. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford Kowal Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. Calculated curves for percent correct speech perception scores (LNT and Bamford Kowal Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.

  10. Universal and language-specific sublexical cues in speech perception: a novel electroencephalography-lesion approach.

    PubMed

    Obrig, Hellmuth; Mentzel, Julia; Rossi, Sonja

    2016-06-01

    See Cappa (doi:10.1093/brain/aww090) for a scientific commentary on this article. The phonological structure of speech supports the highly automatic mapping of sound to meaning. While it is uncontroversial that phonotactic knowledge acts upon lexical access, it is unclear at what stage these combinatorial rules, governing phonological well-formedness in a given language, shape speech comprehension. Moreover, few studies have investigated the neuronal network affording this important step in speech comprehension. Therefore we asked 70 participants, half of whom suffered from a chronic left hemispheric lesion, to listen to 252 different monosyllabic pseudowords. The material models universal preferences of phonotactic well-formedness by including naturally spoken pseudowords and digitally reversed exemplars. The latter partially violate phonological structure of all human speech and are rich in universally dispreferred phoneme sequences while preserving basic auditory parameters. Language-specific constraints were modelled in that half of the naturally spoken pseudowords complied with the phonotactics of the native language of the monolingual participants (German) while the other half did not. To ensure universal well-formedness and naturalness, the latter stimuli comply with Slovak phonotactics and all stimuli were produced by an early bilingual speaker. To maximally attenuate lexico-semantic influences, transparent pseudowords were avoided and participants had to detect immediate repetitions, a task orthogonal to the contrasts of interest. The results show that phonological 'well-formedness' modulates implicit processing of speech at different levels: universally dispreferred phonological structure elicits early, medium and late latency differences in the evoked potential. On the contrary, the language-specific phonotactic contrast selectively modulates a medium latency component of the event-related potentials around 400 ms. Using a novel event-related potential-lesion approach furthermore allowed us to supply first evidence that implicit processing of these different phonotactic levels relies on partially separable brain areas in the left hemisphere: contrasting forward with reversed speech, the approach delineated an area comprising supramarginal and angular gyri. Conversely, the contrast between legal versus illegal phonotactics consistently projected to anterior and middle portions of the middle temporal and superior temporal gyri. Our data support the notion that phonological structure acts on different stages of phonologically and lexically driven steps of speech comprehension. In the context of previous work we propose context-dependent sensitivity to different levels of phonotactic well-formedness. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Classification of speech and language profiles in 4-year old children with cerebral palsy: A prospective preliminary study

    PubMed Central

    Hustad, Katherine C.; Gorton, Kristin; Lee, Jimin

    2010-01-01

    Purpose Little is known about the speech and language abilities of children with cerebral palsy (CP) and there is currently no system for classifying speech and language profiles. Such a system would have epidemiological value and would have the potential to advance the development of interventions that improve outcomes. In this study, we propose and test a preliminary speech and language classification system by quantifying how well speech and language data differentiate among children classified into different hypothesized profile groups. Method Speech and language assessment data were collected in a laboratory setting from 34 children with CP (18 males; 16 females) who were a mean age of 54 months (SD 1.8 months). Measures of interest were vowel area, speech rate, language comprehension scores, and speech intelligibility ratings. Results Canonical discriminant function analysis showed that three functions accounted for 100% of the variance among profile groups, with speech variables accounting for 93% of the variance. Classification agreement varied from 74% to 97% using four different classification paradigms. Conclusions Results provide preliminary support for the classification of speech and language abilities of children with CP into four initial profile groups. Further research is necessary to validate the full classification system. PMID:20643795
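
    As a rough illustration of the discriminant-analysis step reported above, the sketch below fits a linear discriminant classifier to placeholder speech/language measures for four hypothetical profile groups and reports cross-validated classification agreement; the features and group structure are invented, not the study's data.

```python
# Minimal sketch (placeholder data, not the study's): linear discriminant analysis of
# four speech/language measures against four hypothetical profile groups, with
# cross-validated classification agreement as the summary statistic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
groups, n_per_group = [0, 1, 2, 3], 10                 # hypothetical profile groups
X = np.vstack([rng.normal(loc=g, scale=0.8, size=(n_per_group, 4)) for g in groups])
y = np.repeat(groups, n_per_group)

lda = LinearDiscriminantAnalysis()
pred = cross_val_predict(lda, X, y, cv=5)              # held-out group predictions
print("classification agreement:", accuracy_score(y, pred))   # well above chance (0.25)
```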

  12. Redistribution of neural phase coherence reflects establishment of feedforward map in speech motor adaptation

    PubMed Central

    Sengupta, Ranit

    2015-01-01

    Despite recent progress in our understanding of sensorimotor integration in speech learning, a comprehensive framework to investigate its neural basis is lacking at behaviorally relevant timescales. Structural and functional imaging studies in humans have helped us identify brain networks that support speech but fail to capture the precise spatiotemporal coordination within the networks that takes place during speech learning. Here we use neuronal oscillations to investigate interactions within speech motor networks in a paradigm of speech motor adaptation under altered feedback with continuous recording of EEG, in which subjects adapted to the real-time auditory perturbation of a target vowel sound. As subjects adapted to the task, concurrent changes were observed in the theta-gamma phase coherence during speech planning at several distinct scalp regions, consistent with the establishment of a feedforward map. In particular, there was an increase in coherence over the central region and a decrease over the fronto-temporal regions, revealing a redistribution of coherence over an interacting network of brain regions that could be a general feature of error-based motor learning. Our findings have implications for understanding the neural basis of speech motor learning and could elucidate how transient breakdown of neuronal communication within speech networks relates to speech disorders. PMID:25632078

  13. Early Recovery of Aphasia through Thrombolysis: The Significance of Spontaneous Speech.

    PubMed

    Furlanis, Giovanni; Ridolfi, Mariana; Polverino, Paola; Menichelli, Alina; Caruso, Paola; Naccarato, Marcello; Sartori, Arianna; Torelli, Lucio; Pesavento, Valentina; Manganotti, Paolo

    2018-07-01

    Aphasia is one of the most devastating stroke-related consequences for social interaction and daily activities. Aphasia recovery in acute stroke depends on the degree of reperfusion after thrombolysis or thrombectomy. As aphasia assessment tests are often time-consuming for patients with acute stroke, physicians have been developing rapid and simple tests. The aim of our study is to evaluate the improvement of language functions in the earliest stage in patients treated with thrombolysis and in nontreated patients using our rapid screening test. Our study is a single-center prospective observational study conducted at the Stroke Unit of the University Medical Hospital of Trieste (January-December 2016). Patients treated with thrombolysis and nontreated patients underwent 3 aphasia assessments through our rapid screening test (at baseline, 24 hours, and 72 hours). The screening test assesses spontaneous speech, oral comprehension of words, reading aloud and comprehension of written words, oral comprehension of sentences, naming, repetition of words and a sentence, and writing words. The study included 40 patients: 18 patients treated with thrombolysis and 22 nontreated patients. Both groups improved over time. Among all language parameters, spontaneous speech was statistically significant between 24 and 72 hours (P value = .012), and between baseline and 72 hours (P value = .017). Our study demonstrates that patients treated with thrombolysis experience greater improvement in language than the nontreated patients. The difference between the 2 groups is increasingly evident over time. Moreover, spontaneous speech is the parameter marked by the greatest improvement. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  14. Behaviour Characteristics of the Mentally Retarded in a State Mental Hospital: A Comparative Study

    PubMed Central

    Somasundaram, O.; Kumar, M. Suresh

    1984-01-01

    Thirty institutionalised severely subnormal (SSN) subjects and 30 matched severely subnormal individuals attending the outpatient services of the Institute of Mental Health, Madras, were evaluated for their behaviour characteristics using a schedule containing two scales: the social and physical incapacity (SPI) scale and the speech, self help and literacy (SSL) scale. Destructive behaviour, self injury, overall poor speech, self help and literacy ability, overall social and physical incapacity, poor speech ability, poor speech comprehensibility, poor self help and poor literacy were the discriminating factors, being much more common among the institutionalised subjects than among the outpatient individuals. The usefulness of this information in the planning and implementation of services for the institutionalised mentally retarded is discussed. PMID:21965969

  15. Application of artificial intelligence principles to the analysis of "crazy" speech.

    PubMed

    Garfield, D A; Rapp, C

    1994-04-01

    Artificial intelligence computer simulation methods can be used to investigate psychotic or "crazy" speech. Here, symbolic reasoning algorithms establish semantic networks that schematize speech. These semantic networks consist of two main structures: case frames and object taxonomies. Node-based reasoning rules apply to object taxonomies and pathway-based reasoning rules apply to case frames. Normal listeners may recognize speech as "crazy talk" based on violations of node- and pathway-based reasoning rules. In this article, three separate segments of schizophrenic speech illustrate violations of these rules. This artificial intelligence approach is compared and contrasted with other neurolinguistic approaches and is discussed as a conceptual link between neurobiological and psychodynamic understandings of psychopathology.
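
    To make the idea of node-based reasoning over an object taxonomy concrete, the toy sketch below encodes a small "is-a" taxonomy and flags a predicate whose argument falls outside its expected category, which is one way a violation of this kind could be detected; the categories and the selectional rule are invented for illustration and are not the article's system.

```python
# Toy sketch (invented taxonomy, not the article's system): an object taxonomy stored
# as child -> parent "is-a" links, plus a node-based rule that flags a predicate whose
# object lies outside the category the predicate expects.
TAXONOMY = {
    "dog": "animal", "animal": "physical_object",
    "idea": "abstract_entity",
    "physical_object": "entity", "abstract_entity": "entity",
}
SELECTIONAL = {"eat": "physical_object"}    # predicate -> required category of its object

def is_a(node, category):
    """Walk the is-a links upward to test category membership."""
    while node is not None:
        if node == category:
            return True
        node = TAXONOMY.get(node)
    return False

def node_rule_violation(predicate, obj):
    """True if the predicate's object violates the node-based selectional rule."""
    required = SELECTIONAL.get(predicate)
    return required is not None and not is_a(obj, required)

print(node_rule_violation("eat", "dog"))    # False: odd, but category-consistent
print(node_rule_violation("eat", "idea"))   # True: "eat an idea" breaks the rule
```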

  16. Gesture production and comprehension in children with specific language impairment.

    PubMed

    Botting, Nicola; Riches, Nicholas; Gaynor, Marguerite; Morgan, Gary

    2010-03-01

    Children with specific language impairment (SLI) have difficulties with spoken language. However, some recent research suggests that these impairments reflect underlying cognitive limitations. Studying gesture may inform us clinically and theoretically about the nature of the association between language and cognition. A total of 20 children with SLI and 19 typically developing (TD) peers were assessed on a novel measure of gesture production. Children were also assessed for sentence comprehension errors in a speech-gesture integration task. Children with SLI performed equally to peers on gesture production but performed less well when comprehending integrated speech and gesture. Error patterns revealed a significant group interaction: children with SLI made more gesture-based errors, whilst TD children made semantically based ones. Children with SLI accessed and produced lexically encoded gestures despite having impaired spoken vocabulary and this group also showed stronger associations between gesture and language than TD children. When SLI comprehension breaks down, gesture may be relied on over speech, whilst TD children have a preference for spoken cues. The findings suggest that for children with SLI, gesture scaffolds are still more related to language development than for TD peers who have out-grown earlier reliance on gestures. Future clinical implications may include standardized assessment of symbolic gesture and classroom based gesture support for clinical groups.

  17. Hearing loss in older adults affects neural systems supporting speech comprehension.

    PubMed

    Peelle, Jonathan E; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-08-31

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment, we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry, demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally, these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task.

  18. Hearing loss in older adults affects neural systems supporting speech comprehension

    PubMed Central

    Peelle, Jonathan E.; Troiani, Vanessa; Grossman, Murray; Wingfield, Arthur

    2011-01-01

    Hearing loss is one of the most common complaints in adults over the age of 60 and a major contributor to difficulties in speech comprehension. To examine the effects of hearing ability on the neural processes supporting spoken language processing in humans, we used functional magnetic resonance imaging (fMRI) to monitor brain activity while older adults with age-normal hearing listened to sentences that varied in their linguistic demands. Individual differences in hearing ability predicted the degree of language-driven neural recruitment during auditory sentence comprehension in bilateral superior temporal gyri (including primary auditory cortex), thalamus, and brainstem. In a second experiment we examined the relationship of hearing ability to cortical structural integrity using voxel-based morphometry (VBM), demonstrating a significant linear relationship between hearing ability and gray matter volume in primary auditory cortex. Together, these results suggest that even moderate declines in peripheral auditory acuity lead to a systematic downregulation of neural activity during the processing of higher-level aspects of speech, and may also contribute to loss of gray matter volume in primary auditory cortex. More generally these findings support a resource-allocation framework in which individual differences in sensory ability help define the degree to which brain regions are recruited in service of a particular task. PMID:21880924

  19. [Post-stroke speech disorder treated with acupuncture and psychological intervention combined with rehabilitation training: a randomized controlled trial].

    PubMed

    Wang, Ling; Liu, Shao-ming; Liu, Min; Li, Bao-jun; Hui, Zhen-liang; Gao, Xiang

    2011-06-01

    To assess the clinical efficacy of acupuncture and psychological intervention combined with rehabilitation training for post-stroke speech disorder, a multicenter randomized controlled study was conducted. One hundred and twenty stroke cases were divided into a speech rehabilitation group (control group), a speech rehabilitation plus acupuncture group (observation group 1) and a speech rehabilitation plus acupuncture combined with psychotherapy group (observation group 2), with 40 cases in each group. The rehabilitation training was conducted by a professional speech trainer. In the acupuncture treatment, the speech function area in scalp acupuncture, Jinjin (EX-HN 12) and Yuye (EX-HN 13) in tongue acupuncture, and Lianquan (CV 23) were the basic points; supplementary points were selected according to syndrome differentiation, and a bloodletting method was used in combination with acupuncture. Psychotherapy was delivered by a physician in the hospital's psychiatric department. Each group received its corresponding program. The Examination of Aphasia of Chinese of Beijing Hospital was used to assess oral speech expression, listening comprehension, and reading and writing ability. After 21 days of treatment, the total effective rate was 92.5% (37/40) in observation group 1, 97.5% (39/40) in observation group 2 and 87.5% (35/40) in the control group; these rates were similar across the 3 groups. The markedly effective rate was 15.0% (6/40) in observation group 1, 50.0% (20/40) in observation group 2 and 2.5% (1/40) in the control group, with observation group 2 superior to the other two groups (P<0.01, P<0.001). All 3 groups improved in oral expression, listening comprehension, and reading and writing ability after treatment (P<0.01, P<0.001), and the improvements in observation group 2 were greater than those in observation group 1 and the control group. Acupuncture and psychological intervention combined with rehabilitation training is clearly advantageous in the treatment of post-stroke speech disorder.

  20. Cognitive Load in Voice Therapy Carry-Over Exercises.

    PubMed

    Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther

    2017-01-01

    The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
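
    A minimal sketch of the ratings analysis mentioned above: principal components are extracted from questionnaire ratings and the first component is correlated with secondary-task response times. The data here are synthetic placeholders, so the printed correlation is expected to be near zero; only the analysis pattern is illustrated.

```python
# Minimal sketch (synthetic ratings, not the study's data): principal components of
# questionnaire ratings, with the first component correlated against mean response
# times from the secondary visual task.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
ratings = rng.normal(size=(12, 6))               # 12 subjects x 6 questionnaire items
rts = rng.normal(loc=600.0, scale=50.0, size=12) # mean secondary-task RTs (ms)

scores = PCA(n_components=2).fit_transform(ratings)
r = np.corrcoef(scores[:, 0], rts)[0, 1]         # near zero here, since data are random
print(f"correlation of PC1 with response times: r = {r:.2f}")
```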

  1. Iconic Gestures Facilitate Discourse Comprehension in Individuals With Superior Immediate Memory for Body Configurations.

    PubMed

    Wu, Ying Choon; Coulson, Seana

    2015-11-01

    To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM)-a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation-that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.

  2. Speech disorders in Israeli Arab children.

    PubMed

    Jaber, L; Nahmani, A; Shohat, M

    1997-10-01

    The aim of this work was to study the frequency of speech disorders in Israeli Arab children and its association with parental consanguinity. A questionnaire was sent to the parents of 1,495 Arab children attending kindergarten and the first two grades of the seven primary schools in the town of Taibe. Eighty-six percent (1,282 parents) responded. The answers to the questionnaire revealed that 25% of the children reportedly had a speech and language disorder. Of the children identified by their parents as having a speech disorder, 44 were selected randomly for examination by a speech specialist. The disorders noted in this subgroup included errors in articulation (48.0%), poor language (18%), poor voice quality (15.9%), stuttering (13.6%), and other problems (4.5%). Rates of affected children of consanguineous and non-consanguineous marriages were 31% and 22.4%, respectively (p < 0.01). We conclude that speech disorders are an important problem among Israeli Arab schoolchildren. More comprehensive programs are needed to facilitate diagnosis and treatment.

  3. Lip movements entrain the observers’ low-frequency brain oscillations to facilitate speech intelligibility

    PubMed Central

    Park, Hyojin; Kayser, Christoph; Thut, Gregor; Gross, Joachim

    2016-01-01

    During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing. DOI: http://dx.doi.org/10.7554/eLife.14521.001 PMID:27146891
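
    The lip-brain coupling analysis above can be approximated in a minimal way with ordinary magnitude-squared coherence between a neural trace and a lip-aperture trace, as sketched below with synthetic signals; the paper's MEG source analysis and partial coherence (removing the contribution of the auditory speech envelope) are not reproduced here.

```python
# Minimal sketch (synthetic signals, not the MEG pipeline): magnitude-squared coherence
# between a neural trace and a lip-aperture trace; the partial coherence used in the
# paper (removing the acoustic envelope's contribution) is not computed here.
import numpy as np
from scipy.signal import coherence

sr = 200
t = np.arange(60 * sr) / sr
lips = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(len(t))        # ~3 Hz mouth openings
brain = 0.5 * np.sin(2 * np.pi * 3 * t + 0.4) + np.random.randn(len(t)) # weakly coupled trace

f, Cxy = coherence(brain, lips, fs=sr, nperseg=4 * sr)
print("coherence near 3 Hz:", Cxy[np.argmin(np.abs(f - 3.0))])          # high at the lip rate
```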

  4. Concurrent processing of vehicle lane keeping and speech comprehension tasks.

    PubMed

    Cao, Shi; Liu, Yili

    2013-10-01

    With the growing prevalence of using in-vehicle devices and mobile devices while driving, a major concern is their impact on driving performance and safety. However, the effects of cognitive load such as conversation on driving performance are still controversial and not well understood. In this study, an experiment was conducted to investigate the concurrent performance of vehicle lane keeping and speech comprehension tasks with improved experimental control of the confounding factors identified in previous studies. The results showed that the standard deviation of lane position (SDLP) was increased when the driving speed was faster (0.30 m at 36 km/h; 0.36 m at 72 km/h). The concurrent comprehension task had no significant effect on SDLP (0.34 m on average) or the standard deviation of steering wheel angle (SDSWA; 5.20° on average). The correct rate of the comprehension task was reduced in the dual-task condition (from 93.4% to 91.3%) compared with the comprehension single-task condition. Mental workload was significantly higher in the dual-task condition compared with the single-task conditions. Implications for driving safety were discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
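
    The two lane-keeping measures reported above (SDLP and SDSWA) are simply standard deviations of sampled signals; the sketch below computes them from synthetic lane-position and steering-angle samples chosen to match the magnitudes quoted in the abstract.

```python
# Minimal sketch: the two lane-keeping measures reported above, computed from sampled
# signals (lane position in metres, steering wheel angle in degrees).
import numpy as np

def sdlp(lane_position_m):
    """Standard deviation of lane position (SDLP), in metres."""
    return float(np.std(lane_position_m, ddof=1))

def sdswa(steering_angle_deg):
    """Standard deviation of steering wheel angle (SDSWA), in degrees."""
    return float(np.std(steering_angle_deg, ddof=1))

rng = np.random.default_rng(2)
print(sdlp(rng.normal(0.0, 0.30, size=600)))   # ~0.30 m, as in the 36 km/h condition
print(sdswa(rng.normal(0.0, 5.2, size=600)))   # ~5.2 degrees, as reported on average
```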

  5. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    PubMed Central

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2015-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5–7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children’s ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation. PMID:24842067

  6. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    PubMed Central

    Preston, Jonathan L.; Seki, Ayumi

    2012-01-01

    Purpose: The purposes are to (1) describe the assessment of residual speech sound disorders (SSD) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations, and (2) describe how assessment of domains such as speech motor control and phonological awareness can provide a more complete understanding of SSDs in bilinguals. Method: A review of Japanese phonology is provided to offer a context for understanding the transfer of Japanese to English productions. A case study of an 11-year-old is presented, demonstrating parallel speech assessments in English and Japanese. Speech motor and phonological awareness tasks were conducted in both languages. Results: Several patterns were observed in the participant’s English that could be plausibly explained by the influence of Japanese phonology. However, errors indicating a residual SSD were observed in both Japanese and English. A speech motor assessment suggested possible speech motor control problems, and phonological awareness was judged to be within the typical range of performance in both languages. Conclusion: Understanding the phonological characteristics of L1 can help clinicians recognize speech patterns in L2 associated with transfer. Once these differences are understood, patterns associated with a residual SSD can be identified. Supplementing a relational speech analysis with measures of speech motor control and phonological awareness can provide a more comprehensive understanding of a client’s strengths and needs. PMID:21386046

  7. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    PubMed

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Neuroimaging results showed increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The brain dynamics of rapid perceptual adaptation to adverse listening conditions.

    PubMed

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2013-06-26

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
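
    Noise vocoding, as used here, divides the signal into frequency bands, extracts each band's temporal envelope, and uses the envelopes to modulate band-limited noise, so timing is preserved while spectral detail is lost. The sketch below is a rough illustration of that pipeline; the band edges, envelope cutoff, and filter choices are assumptions and do not reproduce the study's vocoder.

```python
# Hedged sketch of a 4-band noise vocoder (envelope preserved, fine structure replaced by noise).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def _filt(x, cutoff, fs, btype):
    sos = butter(4, cutoff, btype=btype, fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def noise_vocode(speech, fs, band_edges=(100, 600, 1500, 3000, 6000), env_cutoff=30):
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = _filt(speech, [lo, hi], fs, "bandpass")
        env = np.clip(_filt(np.abs(band), env_cutoff, fs, "lowpass"), 0, None)   # temporal envelope
        carrier = _filt(rng.standard_normal(speech.size), [lo, hi], fs, "bandpass")
        out += env * carrier                               # envelope-modulated band-limited noise
    return out / np.max(np.abs(out)) * np.max(np.abs(speech))

# usage with a synthetic signal standing in for a recorded sentence
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech, fs)
```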

  9. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia.

    PubMed

    Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P

    2013-06-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.

  10. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia

    PubMed Central

    Barnes, Gareth R.; Penny, William D.; Iverson, Paul; Woodhead, Zoe V. J.; Griffiths, Timothy D.; Leff, Alexander P.

    2013-01-01

    In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics’ speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired. PMID:23715097

  11. User Evaluation of a Communication System That Automatically Generates Captions to Improve Telephone Communication

    PubMed Central

    Zekveld, Adriana A.; Kramer, Sophia E.; Kessens, Judith M.; Vlaming, Marcel S. M. G.; Houtgast, Tammo

    2009-01-01

    This study examined the subjective benefit obtained from automatically generated captions during telephone-speech comprehension in the presence of babble noise. Short stories were presented by telephone either with or without captions that were generated offline by an automatic speech recognition (ASR) system. To simulate online ASR, the word accuracy (WA) level of the captions was 60% or 70%, and the text was presented with a delay relative to the speech. After each test, the hearing-impaired participants (n = 20) completed the NASA-Task Load Index and several rating scales evaluating the support from the captions. Participants indicated that using the erroneous text in speech comprehension was difficult, and the reported task load did not differ between the audio + text and audio-only conditions. In a follow-up experiment (n = 10), the perceived benefit of presenting captions increased with an increase of the WA level to 80% and 90% and the elimination of the text delay. However, in general, the task load did not decrease when captions were presented. These results suggest that the extra effort required to process the text could have been compensated for by less effort required to comprehend the speech. Future research should aim at reducing the complexity of the task to increase the willingness of hearing-impaired persons to use an assistive communication system automatically providing captions. The current results underline the need for obtaining both objective and subjective measures of benefit when evaluating assistive communication systems. PMID:19126551
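
    Word accuracy (WA), the caption-quality measure used above, is conventionally computed from a word-level alignment of the ASR output against a reference transcript: WA = (N - S - D - I) / N, where N is the number of reference words and S, D, I count substitutions, deletions, and insertions. A small self-contained sketch (the example sentences are invented):

```python
# Hedged sketch: word accuracy from a word-level Levenshtein alignment.
def word_accuracy(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming edit distance over words; d[i][j] = errors aligning ref[:i] with hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    errors = d[len(ref)][len(hyp)]          # S + D + I
    return 100.0 * (len(ref) - errors) / len(ref)

print(word_accuracy("the short story was read aloud", "the short story was red allowed"))  # ~66.7
```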

  12. The Processing and Interpretation of Verb Phrase Ellipsis Constructions by Children at Normal and Slowed Speech Rates

    PubMed Central

    Callahan, Sarah M.; Walenski, Matthew; Love, Tracy

    2013-01-01

    Purpose: To examine children’s comprehension of verb phrase (VP) ellipsis constructions in light of their automatic, online structural processing abilities and conscious, metalinguistic reflective skill. Method: Forty-two children ages 5 through 12 years listened to VP ellipsis constructions involving the strict/sloppy ambiguity (e.g., “The janitor untangled himself from the rope and the fireman in the elementary school did too after the accident.”) in which the ellipsis phrase (“did too”) had 2 interpretations: (a) strict (“untangled the janitor”) and (b) sloppy (“untangled the fireman”). We examined these sentences at a normal speech rate with an online cross-modal picture priming task (n = 14) and an offline sentence–picture matching task (n = 11). Both tasks were also given with slowed speech input (n = 17). Results: Children showed priming for both the strict and sloppy interpretations at a normal speech rate but only for the strict interpretation with slowed input. Offline, children displayed an adultlike preference for the sloppy interpretation with normal-rate input but a divergent pattern with slowed speech. Conclusions: Our results suggest that children and adults rely on a hybrid syntax-discourse model for the online comprehension and offline interpretation of VP ellipsis constructions. This model incorporates a temporally sensitive syntactic process of VP reconstruction (disrupted with slow input) and a temporally protracted discourse effect attributed to parallelism (preserved with slow input). PMID:22223886

  13. Music and speech prosody: a common rhythm.

    PubMed

    Hausen, Maija; Torppa, Ritva; Salmela, Viljami R; Vainio, Martti; Särkämö, Teppo

    2013-01-01

    Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosody. In the present study the association between the perception of music and speech prosody was investigated with healthy Finnish adults (n = 61) using an on-line music perception test including the Scale subtest of Montreal Battery of Evaluation of Amusia (MBEA) and Off-Beat and Out-of-key tasks as well as a prosodic verbal task that measures the perception of word stress. Regression analyses showed that there was a clear association between prosody perception and music perception, especially in the domain of rhythm perception. This association was evident after controlling for music education, age, pitch perception, visuospatial perception, and working memory. Pitch perception was significantly associated with music perception but not with prosody perception. The association between music perception and visuospatial perception (measured using analogous tasks) was less clear. Overall, the pattern of results indicates that there is a robust link between music and speech perception and that this link can be mediated by rhythmic cues (time and stress).

  14. Music and speech prosody: a common rhythm

    PubMed Central

    Hausen, Maija; Torppa, Ritva; Salmela, Viljami R.; Vainio, Martti; Särkämö, Teppo

    2013-01-01

    Disorders of music and speech perception, known as amusia and aphasia, have traditionally been regarded as dissociated deficits based on studies of brain damaged patients. This has been taken as evidence that music and speech are perceived by largely separate and independent networks in the brain. However, recent studies of congenital amusia have broadened this view by showing that the deficit is associated with problems in perceiving speech prosody, especially intonation and emotional prosody. In the present study the association between the perception of music and speech prosody was investigated with healthy Finnish adults (n = 61) using an on-line music perception test including the Scale subtest of Montreal Battery of Evaluation of Amusia (MBEA) and Off-Beat and Out-of-key tasks as well as a prosodic verbal task that measures the perception of word stress. Regression analyses showed that there was a clear association between prosody perception and music perception, especially in the domain of rhythm perception. This association was evident after controlling for music education, age, pitch perception, visuospatial perception, and working memory. Pitch perception was significantly associated with music perception but not with prosody perception. The association between music perception and visuospatial perception (measured using analogous tasks) was less clear. Overall, the pattern of results indicates that there is a robust link between music and speech perception and that this link can be mediated by rhythmic cues (time and stress). PMID:24032022

  15. Developing a User-Oriented Second Language Comprehensibility Scale for English-Medium Universities

    ERIC Educational Resources Information Center

    Isaacs, Talia; Trofimovich, Pavel; Foote, Jennifer Ann

    2018-01-01

    There is growing research on the linguistic features that most contribute to making second language (L2) speech easy or difficult to understand. Comprehensibility, which is usually captured through listener judgments, is increasingly viewed as integral to the L2 speaking construct. However, there are shortcomings in how this construct is…

  16. Using Listener Judgments to Investigate Linguistic Influences on L2 Comprehensibility and Accentedness: A Validation and Generalization Study

    ERIC Educational Resources Information Center

    Saito, Kazuya; Trofimovich, Pavel; Isaacs, Talia

    2017-01-01

    The current study investigated linguistic influences on comprehensibility (ease of understanding) and accentedness (linguistic nativelikeness) in second language (L2) learners' extemporaneous speech. Target materials included picture narratives from 40 native French speakers of English from different proficiency levels. The narratives were…

  17. The Effect of Three Message Organization Variables Upon Listener Comprehension.

    ERIC Educational Resources Information Center

    Johnson, Arlee W.

    Public speaking texts urge speakers to organize their message in order to increase their audience's comprehension of it. Tests were run to determine if listeners understand better when three message organization variables are employed in a speech: explicit statement of the central idea, explicit statement of the main points, and transitions before…

  18. Brain Regions Recruited for the Effortful Comprehension of Noise-Vocoded Words

    ERIC Educational Resources Information Center

    Hervais-Adelman, Alexis G.; Carlyon, Robert P.; Johnsrude, Ingrid S.; Davis, Matthew H.

    2012-01-01

    We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naive listeners to comprehend, but can be readily learned with…

  19. Intervention to Improve Expository Reading Comprehension Skills in Older Children and Adolescents with Language Disorders

    ERIC Educational Resources Information Center

    Ward-Lonergan, Jeannene M.; Duthie, Jill K.

    2016-01-01

    With the recent renewed emphasis on the importance of providing instruction to improve expository discourse comprehension and production skills, speech-language pathologists need to be prepared to implement effective intervention to meet this critical need in older children and adolescents with language disorders. The purpose of this review…

  20. Second Language Comprehensibility Revisited: Investigating the Effects of Learner Background

    ERIC Educational Resources Information Center

    Crowther, Dustin; Trofimovich, Pavel; Saito, Kazuya; Isaacs, Talia

    2015-01-01

    The current study investigated first language (L1) effects on listener judgment of comprehensibility and accentedness in second language (L2) speech. The participants were 45 university-level adult speakers of English from three L1 backgrounds (Chinese, Hindi, Farsi), performing a picture narrative task. Ten native English listeners used…

  1. Gesture and Metaphor Comprehension: Electrophysiological Evidence of Cross-Modal Coordination by Audiovisual Stimulation

    ERIC Educational Resources Information Center

    Cornejo, Carlos; Simonetti, Franco; Ibanez, Agustin; Aldunate, Nerea; Ceric, Francisco; Lopez, Vladimir; Nunez, Rafael E.

    2009-01-01

    In recent years, studies have suggested that gestures influence comprehension of linguistic expressions, for example, eliciting an N400 component in response to a speech/gesture mismatch. In this paper, we investigate the role of gestural information in the understanding of metaphors. Event related potentials (ERPs) were recorded while…

  2. Captions and Reduced Forms Instruction: The Impact on EFL Students' Listening Comprehension

    ERIC Educational Resources Information Center

    Yang, Jie Chi; Chang, Peichin

    2014-01-01

    For many EFL learners, listening poses a grave challenge. The difficulty in segmenting a stream of speech and limited capacity in short-term memory are common weaknesses for language learners. Specifically, reduced forms, which frequently appear in authentic informal conversations, compound the challenges in listening comprehension. Numerous…

  3. The Relationship between Psychopathology and Speech and Language Disorders in Neurologic Patients.

    ERIC Educational Resources Information Center

    Sapir, Shimon; Aronson, Arnold E.

    1990-01-01

    This paper reviews findings that suggest a causal relationship between depression, anxiety, or conversion reaction and voice, speech, and language disorders in neurologic patients. The paper emphasizes the need to consider the psychosocial and psychopathological aspects of neurologic communicative disorders, the link between emotional and…

  4. On pure word deafness, temporal processing, and the left hemisphere.

    PubMed

    Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean

    2005-07-01

    Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.

  5. An eye-tracking paradigm for analyzing the processing time of sentences with different linguistic complexities.

    PubMed

    Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger

    2014-01-01

    An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics.
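
    The analysis described above reduces to averaging, at each sample, the proportion of trials on which gaze is on the target picture, and then reading off the time at which that proportion reliably favours the target. A minimal sketch with synthetic fixation data; the sampling rate, trial counts, and decision criterion are illustrative assumptions, not the study's.

```python
# Hedged sketch: tendency-to-fixate-the-target curve and a "decision moment" estimate.
import numpy as np

fs = 250                                      # assumed eye-tracker sampling rate (Hz)
rng = np.random.default_rng(2)
n_trials, n_samples = 40, 4 * fs              # 40 trials, 4 s of sentence processing
drift = np.linspace(0.5, 0.9, n_samples)      # synthetic: target preference builds over the sentence
on_target = rng.random((n_trials, n_samples)) < drift   # True when gaze is on the target picture

p_target = on_target.mean(axis=0)             # average tendency to fixate the target over time
t = np.arange(n_samples) / fs

threshold = 0.75                              # assumed criterion for "sentence understood"
above = np.where(p_target >= threshold)[0]
decision_moment = t[above[0]] if above.size else float("nan")
print(f"decision moment ~ {decision_moment:.2f} s after sentence onset")
```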

  6. An Eye-Tracking Paradigm for Analyzing the Processing Time of Sentences with Different Linguistic Complexities

    PubMed Central

    Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger

    2014-01-01

    An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics. PMID:24950184

  7. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    PubMed Central

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose: Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods: Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results: The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion: AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  8. Looking at a contrast object before speaking boosts referential informativeness, but is not essential.

    PubMed

    Davies, Catherine; Kreysa, Helene

    2017-07-01

    Variation in referential form has traditionally been accounted for by theoretical frameworks focusing on linguistic and discourse features. Despite the explosion of interest in eye tracking methods in psycholinguistics, the role of visual scanning behaviour in informative reference production is yet to be comprehensively investigated. Here we examine the relationship between speakers' fixations to relevant referents and the form of the referring expressions they produce. Overall, speakers were fully informative across simple and (to a lesser extent) more complex displays, providing appropriately modified referring expressions to enable their addressee to locate the target object. Analysis of contrast fixations revealed that looking at a contrast object boosts but is not essential for full informativeness. Contrast fixations which take place immediately before speaking provide the greatest boost. Informative referring expressions were also associated with later speech onsets than underinformative ones. Based on the finding that fixations during speech planning facilitate but do not fully predict informative referring, direct visual scanning is ruled out as a prerequisite for informativeness. Instead, pragmatic expectations of informativeness may play a more important role. Results are consistent with a goal-based link between eye movements and language processing, here applied for the first time to production processes. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. A hypothesis on the biological origins and social evolution of music and dance

    PubMed Central

    Wang, Tianyan

    2015-01-01

    The origins of music and musical emotions are still an enigma; here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form: rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance, and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) is evoked by music and dance and has biological and social functions, which in turn promote the evolution of music, dance, and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; that is, music is a combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech. The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically concrete. PMID:25741232

  10. Examining the Echolalia Literature: Where Do Speech-Language Pathologists Stand?

    PubMed

    Stiegler, Lillian N

    2015-11-01

    Echolalia is a common element in the communication of individuals with autism spectrum disorders. Recent contributions to the literature reflect significant disagreement regarding how echolalia should be defined, understood, and managed. The purpose of this review article is to give speech-language pathologists and others a comprehensive view of the available perspectives on echolalia. Published literature from the disciplines of behavioral intervention, linguistics, and speech-language intervention is discussed. Special areas of focus include operational definitions, rationales associated with various approaches, specific procedures used to treat or study echolalic behavior, and reported conclusions. Dissimilarities in the definition and understanding of echolalia have led to vastly different approaches to management. Evidence-based practice protocols are available to guide speech-language interventionists in their work with individuals with autism spectrum disorders.

  11. Getting the cocktail party started: masking effects in speech perception

    PubMed Central

    Evans, S; McGettigan, C; Agnew, ZK; Rosen, S; Scott, SK

    2016-01-01

    Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous functional magnetic resonance imaging (fMRI) whilst they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioural task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; this activity was found within right-lateralised frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise. PMID:26696297

  12. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-06-19

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
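
    Alpha- and beta-band power of the kind compared across conditions here can be approximated, for a single trial, by bandpass filtering and taking the squared Hilbert envelope, then expressing post-stimulus power relative to a baseline. The sketch below is only a rough stand-in for the wavelet or multitaper estimates typically used in MEG work; the band limits, baseline window, and sampling rate are assumptions.

```python
# Hedged sketch: band-limited power via bandpass filtering and the Hilbert envelope.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x, fs, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x))) ** 2     # instantaneous power envelope

fs = 600
rng = np.random.default_rng(3)
trial = rng.standard_normal(3 * fs)                      # one synthetic single-trial time course
alpha = band_power(trial, fs, 8, 12)
beta = band_power(trial, fs, 15, 25)

baseline = slice(0, fs)                                  # first second taken as baseline
post = slice(fs, 3 * fs)
alpha_change = 100 * (alpha[post].mean() - alpha[baseline].mean()) / alpha[baseline].mean()
beta_change = 100 * (beta[post].mean() - beta[baseline].mean()) / beta[baseline].mean()
print(f"alpha: {alpha_change:+.1f}%  beta: {beta_change:+.1f}% vs. baseline")
```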

  13. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    PubMed

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.

  14. Discrimination of speech stimuli based on neuronal response phase patterns depends on acoustics but not comprehension.

    PubMed

    Howard, Mary F; Poeppel, David

    2010-11-01

    Speech stimuli give rise to neural activity in the listener that can be observed as waveforms using magnetoencephalography. Although waveforms vary greatly from trial to trial due to activity unrelated to the stimulus, it has been demonstrated that spoken sentences can be discriminated based on theta-band (3-7 Hz) phase patterns in single-trial response waveforms. Furthermore, manipulations of the speech signal envelope and fine structure that reduced intelligibility were found to produce correlated reductions in discrimination performance, suggesting a relationship between theta-band phase patterns and speech comprehension. This study investigates the nature of this relationship, hypothesizing that theta-band phase patterns primarily reflect cortical processing of low-frequency (<40 Hz) modulations present in the acoustic signal and required for intelligibility, rather than processing exclusively related to comprehension (e.g., lexical, syntactic, semantic). Using stimuli that are quite similar to normal spoken sentences in terms of low-frequency modulation characteristics but are unintelligible (i.e., their time-inverted counterparts), we find that discrimination performance based on theta-band phase patterns is equal for both types of stimuli. Consistent with earlier findings, we also observe that whereas theta-band phase patterns differ across stimuli, power patterns do not. We use a simulation model of the single-trial response to spoken sentence stimuli to demonstrate that phase-locked responses to low-frequency modulations of the acoustic signal can account not only for the phase but also for the power results. The simulation offers insight into the interpretation of the empirical results with respect to phase-resetting and power-enhancement models of the evoked response.
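
    Theta-band phase patterns of the kind used here can be obtained by bandpass filtering single-trial responses to 3-7 Hz and taking the instantaneous phase; trials can then be classified by which sentence's average phase pattern they most resemble. The sketch below illustrates that idea on synthetic data with a simple circular similarity measure; the data shapes, similarity metric, and lack of leave-one-out templates are simplifying assumptions.

```python
# Hedged sketch: theta-band (3-7 Hz) phase extraction and phase-pattern classification.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def theta_phase(x, fs):
    sos = butter(4, [3, 7], btype="bandpass", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def phase_similarity(phi_a, phi_b):
    # circular similarity: mean resultant length of the phase difference
    return np.abs(np.mean(np.exp(1j * (phi_a - phi_b))))

fs = 200
rng = np.random.default_rng(4)
n_trials, n_samples = 20, 3 * fs
templates = rng.standard_normal((2, n_samples))               # two synthetic "sentences"
trials = templates[:, None, :] + 2.0 * rng.standard_normal((2, n_trials, n_samples))

phases = np.array([[theta_phase(tr, fs) for tr in sent] for sent in trials])
mean_pattern = [np.angle(np.mean(np.exp(1j * phases[s]), axis=0)) for s in range(2)]

correct = 0
for s in range(2):
    for tr in range(n_trials):
        sims = [phase_similarity(phases[s, tr], mean_pattern[m]) for m in range(2)]
        correct += int(np.argmax(sims) == s)
print("classification accuracy:", correct / (2 * n_trials))
```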

  15. The right hemisphere is highlighted in connected natural speech production and perception.

    PubMed

    Alexandrou, Anna Maria; Saarinen, Timo; Mäkelä, Sasu; Kujala, Jan; Salmelin, Riitta

    2017-05-15

    Current understanding of the cortical mechanisms of speech perception and production stems mostly from studies that focus on single words or sentences. However, it has been suggested that processing of real-life connected speech may rely on additional cortical mechanisms. In the present study, we examined the neural substrates of natural speech production and perception with magnetoencephalography by modulating three central features related to speech: amount of linguistic content, speaking rate and social relevance. The amount of linguistic content was modulated by contrasting natural speech production and perception to speech-like non-linguistic tasks. Meaningful speech was produced and perceived at three speaking rates: normal, slow and fast. Social relevance was probed by having participants attend to speech produced by themselves and an unknown person. These speech-related features were each associated with distinct spatiospectral modulation patterns that involved cortical regions in both hemispheres. Natural speech processing markedly engaged the right hemisphere in addition to the left. In particular, the right temporo-parietal junction, previously linked to attentional processes and social cognition, was highlighted in the task modulations. The present findings suggest that its functional role extends to active generation and perception of meaningful, socially relevant speech. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Musical rhythm and reading development: does beat processing matter?

    PubMed

    Ozernov-Palchik, Ola; Patel, Aniruddh D

    2018-05-20

    There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing non-beat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development. © 2018 New York Academy of Sciences.

  17. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults

    PubMed Central

    Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.

    2017-01-01

    Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities such as working memory, verbal skills, or attention. Although clinically highly relevant, no consensus has yet been achieved about which cognitive factors predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping into verbal memory, working memory, lexicon and retrieval skills, as well as cognitive flexibility and attention. Partial-least-squares analysis revealed six variables that were important for significantly predicting vocoded-speech performance: the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall on the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329
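
    Partial least squares regresses the outcome on a small number of latent components built from the predictors, which suits situations with correlated cognitive scores. A minimal sketch with scikit-learn on synthetic data; the six-predictor layout mirrors the tests listed above, but the data, weights, and component count are assumptions.

```python
# Hedged sketch: predicting a vocoded-speech score from cognitive measures with PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n = 60
X = rng.standard_normal((n, 6))   # six cognitive scores per listener (synthetic)
y = X @ np.array([0.6, 0.4, 0.5, 0.3, 0.3, 0.2]) + rng.standard_normal(n)  # vocoded-speech score

pls = PLSRegression(n_components=2)
pls.fit(X, y)
pred = pls.predict(X).ravel()
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("in-sample R^2:", round(r2, 2))
print("predictor weights (first component):", pls.x_weights_[:, 0].round(2))
```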

  18. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    PubMed

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
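
    The Bayesian prediction referred to above is the standard maximum-likelihood cue-combination result: if the auditory and visual estimates are independent with variances sigma_a^2 and sigma_v^2, the optimally combined estimate has variance sigma_a^2 * sigma_v^2 / (sigma_a^2 + sigma_v^2), which is always smaller than either alone. A small numerical illustration (the variances are invented):

```python
# Hedged sketch of maximum-likelihood cue combination (illustrative numbers only).
sigma_a2 = 4.0   # variance of the auditory estimate (large when the auditory signal is noisy)
sigma_v2 = 1.5   # variance of the visual estimate

w_a = sigma_v2 / (sigma_a2 + sigma_v2)          # reliability-based weights
w_v = sigma_a2 / (sigma_a2 + sigma_v2)
sigma_av2 = sigma_a2 * sigma_v2 / (sigma_a2 + sigma_v2)

print(f"weights: auditory {w_a:.2f}, visual {w_v:.2f}")
print(f"combined variance {sigma_av2:.2f} < min single-cue variance {min(sigma_a2, sigma_v2):.2f}")
```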

  19. Reference in Action: Links between Pointing and Language

    ERIC Educational Resources Information Center

    Cooperrider, Kensy Andrew

    2011-01-01

    When referring to things in the world, speakers produce utterances that are composites of speech and action. Pointing gestures are a pervasive part of such composite utterances, but many questions remain about exactly how pointing is integrated with speech. In this dissertation I present three strands of research that investigate relations of…

  20. Use of "um" in the Deceptive Speech of a Convicted Murderer

    ERIC Educational Resources Information Center

    Villar, Gina; Arciuli, Joanne; Mallard, David

    2012-01-01

    Previous studies have demonstrated a link between language behaviors and deception; however, questions remain about the role of specific linguistic cues, especially in real-life high-stakes lies. This study investigated use of the so-called filler, "um," in externally verifiable truthful versus deceptive speech of a convicted murderer. The data…

  1. Why Not Non-Native Varieties of English as Listening Comprehension Test Input?

    ERIC Educational Resources Information Center

    Abeywickrama, Priyanvada

    2013-01-01

    The existence of different varieties of English in target language use (TLU) domains calls into question the usefulness of listening comprehension tests whose input is limited only to a native speaker variety. This study investigated the impact of non-native varieties or accented English speech on test takers from three different English use…

  2. Computer-Assisted Training in the Comprehension of Authentic French Speech: A Closer View

    ERIC Educational Resources Information Center

    Hoeflaak, Arie

    2004-01-01

    In this article, the development of a computer-assisted listening comprehension project is described. First, we comment briefly on the points of departure, the need for autonomous learning against the background of recent changes in Dutch education, and the role of learning strategies. Then, an error analysis, the programs used for this project,…

  3. The Effect of Training in Listening to Speeded Discourse on Listening Comprehension.

    ERIC Educational Resources Information Center

    Krall, W. Richard

    A study to investigate the effect of training in listening to speeded discourse on listening comprehension was conducted. Specifically, the study was designed to test the following hypothesis: There is no significant difference in the amount of gain in listening achievement of the sixth-grade pupils who received speeded discourse speech training…

  4. Children's Reading Comprehension and Narrative Recall in Sung and Spoken Story Contexts

    ERIC Educational Resources Information Center

    Kouri, Theresa; Telander, Karen

    2008-01-01

    A growing number of reading professionals have advocated teaching literacy through music and song; however, little research exists supporting such practices. The purpose of this study was to determine if sung story book readings would enhance story comprehension and narrative re-tellings in children with histories of speech and language delay.…

  5. Fingerspelled and Printed Words Are Recoded into a Speech-based Code in Short-term Memory.

    PubMed

    Sehyr, Zed Sevcikova; Petrich, Jennifer; Emmorey, Karen

    2017-01-01

    We conducted three immediate serial recall experiments that manipulated type of stimulus presentation (printed or fingerspelled words) and word similarity (speech-based or manual). Matched deaf American Sign Language signers and hearing non-signers participated (mean reading age = 14-15 years). Speech-based similarity effects were found for both stimulus types indicating that deaf signers recoded both printed and fingerspelled words into a speech-based phonological code. A manual similarity effect was not observed for printed words indicating that print was not recoded into fingerspelling (FS). A manual similarity effect was observed for fingerspelled words when similarity was based on joint angles rather than on handshape compactness. However, a follow-up experiment suggested that the manual similarity effect was due to perceptual confusion at encoding. Overall, these findings suggest that FS is strongly linked to English phonology for deaf adult signers who are relatively skilled readers. This link between fingerspelled words and English phonology allows for the use of a more efficient speech-based code for retaining fingerspelled words in short-term memory and may strengthen the representation of English vocabulary. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners; specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  7. [A case of crossed aphasia with echolalia after the resection of tumor in the right medial frontal lobe].

    PubMed

    Endo, K; Suzuki, K; Yamadori, A; Kumabe, T; Seki, K; Fujii, T

    2001-03-01

    We report a right-handed woman who developed a non-fluent aphasia after resection of an astrocytoma (grade III) in the right medial frontal lobe. On admission to the rehabilitation department, neurological examination revealed mild left hemiparesis, hyperreflexia on the left side, and a grasp reflex of the left hand. Neuropsychologically she showed general inattention, non-fluent aphasia, acalculia, constructional disability, and mild buccofacial apraxia. No other apraxia, unilateral spatial neglect, or extinction phenomena were observed. An MRI demonstrated resected areas in the right superior frontal gyrus, the subcortical region of the right middle frontal gyrus, the anterior part of the cingulate gyrus, and a part of the supplementary motor area. The surrounding area in the right frontal lobe showed diffuse signal change. She demonstrated non-fluent, aprosodic speech with word-finding difficulty. No phonemic paraphasia or anarthria was observed. Auditory comprehension was fair, with some difficulty in comprehending complex commands. Naming was good, but verbal fluency tests with category or phonemic cuing were severely impaired. She could repeat words but not sentences. Reading comprehension was disturbed by semantic paralexia, and writing of words was poor for both Kana (syllabogram) and Kanji (logogram) characters. A significant feature of her speech was mitigated echolalia. In both free conversation and examination settings, she often repeated phrases spoken to her, which she used to start her own speech. In addition, she repeated words spoken to others that were totally irrelevant to her conversation. She was aware of her echoing, which always embarrassed her, and described her echolalic tendency as a great nuisance. However, once echoing was forbidden, she could not initiate her speech and made incorrect responses after a long delay. Thus, her compulsive echolalia helped her start her speech. Only four patients with crossed aphasia and echolalia have been reported in the literature; they showed severe aphasia with markedly decreased speech and severe comprehension deficits. A patient with a similar lesion in the right medial frontal lobe showed general aspontaneity, and language function per se could not be examined properly. Echolalia related to a medial frontal lesion in the language-dominant hemisphere has been described as a compulsive speech response, because other 'echoing' phenomena or compulsive behaviors were also observed in these patients. On the other hand, some patients with a large lesion in the right hemisphere tended to respond to stimuli directed to other patients, the so-called 'response-to-next-patient-stimulation'. This behavior has been explained by a disinhibited shift of attention or perseveration of set. Both compulsive speech responses and 'response-to-next-patient-stimulation'-like phenomena may have contributed to the echolalia of the present case.

  8. 38 CFR 52.160 - Specialized rehabilitative services.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., but not limited to, physical therapy, speech therapy, occupational therapy, and mental health services for mental illness are required in the participant's comprehensive plan of care, program management...

  9. Cortical Plasticity after Cochlear Implantation

    PubMed Central

    Petersen, B.; Gjedde, A.; Wallentin, M.; Vuust, P.

    2013-01-01

    The most dramatic progress in the restoration of hearing takes place in the first months after cochlear implantation. To map the brain activity underlying this process, we used positron emission tomography at three time points: within 14 days, three months, and six months after switch-on. Fifteen recently implanted adult implant recipients listened to running speech or speech-like noise in four sequential PET sessions at each milestone. CI listeners with postlingual hearing loss showed differential activation of left superior temporal gyrus during speech and speech-like stimuli, unlike CI listeners with prelingual hearing loss. Furthermore, Broca's area was activated as an effect of time, but only in CI listeners with postlingual hearing loss. The study demonstrates that adaptation to the cochlear implant is highly related to the history of hearing loss. Speech processing in patients whose hearing loss occurred after the acquisition of language involves brain areas associated with speech comprehension, which is not the case for patients whose hearing loss occurred before the acquisition of language. Finally, the findings confirm the key role of Broca's area in restoration of speech perception, but only in individuals in whom Broca's area has been active prior to the loss of hearing. PMID:24377050

  10. Robust relationship between reading span and speech recognition in noise

    PubMed Central

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Objective Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. Study sample The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Conclusions Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360

  11. Robust relationship between reading span and speech recognition in noise.

    PubMed

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition.

  12. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

    PubMed Central

    Rauschecker, Josef P; Scott, Sophie K

    2010-01-01

    Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271

  13. Nonverbal oral apraxia in primary progressive aphasia and apraxia of speech.

    PubMed

    Botha, Hugo; Duffy, Joseph R; Strand, Edythe A; Machulda, Mary M; Whitwell, Jennifer L; Josephs, Keith A

    2014-05-13

    The goal of this study was to explore the prevalence of nonverbal oral apraxia (NVOA), its association with other forms of apraxia, and associated imaging findings in patients with primary progressive aphasia (PPA) and progressive apraxia of speech (PAOS). Patients with a degenerative speech or language disorder were prospectively recruited and diagnosed with a subtype of PPA or with PAOS. All patients had comprehensive speech and language examinations. Voxel-based morphometry was performed to determine whether atrophy of a specific region correlated with the presence of NVOA. Eighty-nine patients were identified, of which 34 had PAOS, 9 had agrammatic PPA, 41 had logopenic aphasia, and 5 had semantic dementia. NVOA was very common among patients with PAOS but was found in patients with PPA as well. Several patients exhibited only one of NVOA or apraxia of speech. Among patients with apraxia of speech, the severity of the apraxia of speech was predictive of NVOA, whereas ideomotor apraxia severity was predictive of the presence of NVOA in those without apraxia of speech. Bilateral atrophy of the prefrontal cortex anterior to the premotor area and supplementary motor area was associated with NVOA. Apraxia of speech, NVOA, and ideomotor apraxia are at least partially separable disorders. The association of NVOA and apraxia of speech likely results from the proximity of the area reported here and the premotor area, which has been implicated in apraxia of speech. The association of ideomotor apraxia and NVOA among patients without apraxia of speech could represent disruption of modules shared by nonverbal oral movements and limb movements.

  14. Nonverbal oral apraxia in primary progressive aphasia and apraxia of speech

    PubMed Central

    Botha, Hugo; Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Whitwell, Jennifer L.

    2014-01-01

    Objective: The goal of this study was to explore the prevalence of nonverbal oral apraxia (NVOA), its association with other forms of apraxia, and associated imaging findings in patients with primary progressive aphasia (PPA) and progressive apraxia of speech (PAOS). Methods: Patients with a degenerative speech or language disorder were prospectively recruited and diagnosed with a subtype of PPA or with PAOS. All patients had comprehensive speech and language examinations. Voxel-based morphometry was performed to determine whether atrophy of a specific region correlated with the presence of NVOA. Results: Eighty-nine patients were identified, of which 34 had PAOS, 9 had agrammatic PPA, 41 had logopenic aphasia, and 5 had semantic dementia. NVOA was very common among patients with PAOS but was found in patients with PPA as well. Several patients exhibited only one of NVOA or apraxia of speech. Among patients with apraxia of speech, the severity of the apraxia of speech was predictive of NVOA, whereas ideomotor apraxia severity was predictive of the presence of NVOA in those without apraxia of speech. Bilateral atrophy of the prefrontal cortex anterior to the premotor area and supplementary motor area was associated with NVOA. Conclusions: Apraxia of speech, NVOA, and ideomotor apraxia are at least partially separable disorders. The association of NVOA and apraxia of speech likely results from the proximity of the area reported here and the premotor area, which has been implicated in apraxia of speech. The association of ideomotor apraxia and NVOA among patients without apraxia of speech could represent disruption of modules shared by nonverbal oral movements and limb movements. PMID:24727315

  15. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    PubMed

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions, the triangular (IFGTr) and opercular (IFGOp) portions of the left inferior frontal gyrus and the left posterior middle temporal gyrus (MTGp), responded more strongly when gestures added information to nonspecific language than when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech rather than reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  16. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  17. Bilateral Versus Unilateral Cochlear Implantation in Adult Listeners: Speech-On-Speech Masking and Multitalker Localization.

    PubMed

    Rana, Baljeet; Buchholz, Jörg M; Morgan, Catherine; Sharma, Mridula; Weller, Tobias; Konganda, Shivali Appaiah; Shirai, Kyoko; Kawano, Atsushi

    2017-01-01

    Binaural hearing helps normal-hearing listeners localize sound sources and understand speech in noise. However, it is not fully understood how far this is the case for bilateral cochlear implant (CI) users. To determine the potential benefits of bilateral over unilateral CIs, speech comprehension thresholds (SCTs) were measured in seven Japanese bilateral CI recipients using Helen test sentences (translated into Japanese) in a two-talker speech interferer presented from the front (co-located with the target speech), ipsilateral to the first-implanted ear (at +90° or -90°), and spatially symmetric at ±90°. Spatial release from masking was calculated as the difference between co-located and spatially separated SCTs. Localization was assessed in the horizontal plane by presenting either male or female speech or both simultaneously. All measurements were performed bilaterally and unilaterally (with the first-implanted ear) inside a loudspeaker array. Both SCTs and spatial release from masking were improved with bilateral CIs, demonstrating mean bilateral benefits of 7.5 dB in the spatially asymmetric and 3 dB in the spatially symmetric speech mixtures. Localization performance varied strongly between subjects but was clearly improved with bilateral over unilateral CIs, with the mean localization error reduced by 27°. Surprisingly, adding a second talker had only a negligible effect on localization.
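
    Spatial release from masking (SRM) as used above is a simple difference score: the speech comprehension threshold with the co-located masker minus the threshold with the spatially separated masker, so a lower (better) separated threshold yields a positive SRM in dB. A minimal sketch with hypothetical threshold values (not the study data):

        # Spatial release from masking as a difference score; thresholds are hypothetical.
        # SRM = SCT(co-located) - SCT(spatially separated); lower SCT means better performance.
        scts_db = {
            "co-located": 2.0,            # target and two-talker masker both at 0 degrees
            "asymmetric +/-90": -5.5,     # masker ipsilateral to the first-implanted ear
            "symmetric +/-90": -1.0,      # maskers placed symmetrically at +/-90 degrees
        }

        for condition in ("asymmetric +/-90", "symmetric +/-90"):
            srm = scts_db["co-located"] - scts_db[condition]
            print(f"SRM, {condition}: {srm:.1f} dB")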

  18. [Validity criteria of a short test to assess speech and language competence in 4-year-olds].

    PubMed

    Euler, H A; Holler-Zittlau, I; Minnen, S; Sick, U; Dux, W; Zaretsky, Y; Neumann, K

    2010-11-01

    A psychometrically constructed short test as a prerequisite for screening was developed on the basis of a revision of the Marburger Speech Screening to assess speech/language competence among children in Hessen (Germany). A total of 257 children (age 4.0 to 4.5 years) performed the test battery for speech/language competence; 214 children repeated the test 1 year later. Test scores correlated highly with scores of two competing language screenings (SSV, HASE) and with a combined score from four diagnostic tests of individual speech/language competences (Reynell III, patholinguistic diagnostics in impaired language development, PLAKSS, AWST-R). Validity was demonstrated by three comparisons: (1) Children with German family language had higher scores than children with another language. (2) The 3-month-older children achieved higher scores than younger children. (3) The difference between the children with German family language and those with another language was higher for the 3-month-older than for the younger children. The short test assesses the speech/language competence of 4-year-olds quickly, validly, and comprehensively.

  19. Bilinguals at the "cocktail party": dissociable neural activity in auditory-linguistic brain regions reveals neurobiological basis for nonnative listeners' speech-in-noise recognition deficits.

    PubMed

    Bidelman, Gavin M; Dexter, Lauren

    2015-04-01

    We examined a consistent deficit observed in bilinguals: poorer speech-in-noise (SIN) comprehension for their nonnative language. We recorded neuroelectric mismatch potentials in mono- and bi-lingual listeners in response to contrastive speech sounds in noise. Behaviorally, late bilinguals required ∼10dB more favorable signal-to-noise ratios to match monolinguals' SIN abilities. Source analysis of cortical activity demonstrated monotonic increase in response latency with noise in superior temporal gyrus (STG) for both groups, suggesting parallel degradation of speech representations in auditory cortex. Contrastively, we found differential speech encoding between groups within inferior frontal gyrus (IFG)-adjacent to Broca's area-where noise delays observed in nonnative listeners were offset in monolinguals. Notably, brain-behavior correspondences double dissociated between language groups: STG activation predicted bilinguals' SIN, whereas IFG activation predicted monolinguals' performance. We infer higher-order brain areas act compensatorily to enhance impoverished sensory representations but only when degraded speech recruits linguistic brain mechanisms downstream from initial auditory-sensory inputs. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Early Listening and Speaking Skills Predict Later Reading Proficiency in Pediatric Cochlear Implant Users

    PubMed Central

    Spencer, Linda J.; Oleson, Jacob J.

    2011-01-01

    Objectives Previous studies have reported that children who use cochlear implants (CIs) tend to achieve higher reading levels than their peers with profound hearing loss who use hearing aids. The purpose of this study was to investigate the influences of auditory information provided by the CI on the later reading skills of children born with profound deafness. The hypothesis was that there would be a positive and predictive relationship between earlier speech perception, production, and subsequent reading comprehension. Design The speech perception and production skills at the vowel, consonant, phoneme, and word level of 72 children with prelingual, profound hearing loss were assessed after 48 mos of CI use. The children's reading skills were subsequently assessed using word and passage comprehension measures after an average of 89.5 mos of CI use. A regression analysis determined the amount of variance in reading that could be explained by the variables of perception, production, and socioeconomic status. Results Regression analysis revealed that it was possible to explain 59% of the variance of later reading skills by assessing the early speech perception and production performance. The results indicated that early speech perception and production skills of children with profound hearing loss who receive CIs predict future reading achievement skills. Furthermore, the study implies that better early speech perception and production skills result in higher reading achievement. It is speculated that the early access to sound helps to build better phonological processing skills, which is one of the likely contributors to eventual reading success. PMID:18595191

  1. The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  2. [Qualifying language disorders of schizophrenia through the speech therapists' assessment].

    PubMed

    Boucard, C; Laffy-Beaufils, B

    2008-06-01

    This study investigates a comprehensive assessment of language disorders in order to identify impaired and unaffected language abilities of individuals with schizophrenia. Furthermore, the purpose of this study was to demonstrate the importance of the role of speech therapists in the treatment of schizophrenia. Speech therapy is specifically intended to treat language disorders. However, to date, speech therapists have not been solicited in the treatment of schizophrenia, despite growing evidence that schizophrenia is characterized by cognitive disorders such as impairments in memory, attention, executive functioning, and language. In this article, we discuss the fact that elements of language and cognition are interactively affected and that cognition influences language. We then demonstrate that language impairments can be treated in the same way as neurological language impairments (cerebrovascular disease, brain injury), in order to reduce their functional consequences. Schizophrenia affects the pragmatic component of language with a major negative outcome in daily living skills [Champagne M, Stip E, Joanette Y. Social cognition deficit in schizophrenia: accounting for pragmatic deficits in communication abilities? Curr Psychiatry Rev 2006;(2):309-315]. The results of our comprehensive assessment also provide a basis for the design of a care plan. For this, subjects with schizophrenia were examined for language comprehension and language production with a focus on pragmatic abilities. In neurology, standardized tests are available that have been designed specifically to assess language functions. However, no such tests are available in psychiatry, so we gathered assessments widely used in neurology and examined the most relevant skills. In this article, each test we chose is described, and particular attention is paid to the information it provides on impaired language abilities in schizophrenia. In this manner, we provide an accurate characterization of schizophrenia-associated language impairments and offer a solid foundation for rehabilitation. Current research makes connections between schizophrenia and other neurological disorders concerning language. Nevertheless, further studies are needed to explore these connections to complete our investigations. The strategies we designed are aimed at enabling a subject with schizophrenia to improve his/her language skills. We support the idea that such improvement could be reached by speech therapy. We conclude that speech therapists can play an important role in the non-pharmacological treatment of schizophrenia, by selecting appropriate interventions that capitalize on spared abilities to compensate for impaired abilities.

  3. Don’t speak too fast! Processing of fast rate speech in children with specific language impairment

    PubMed Central

    Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique

    2018-01-01

    Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old) with mainly expressive phonological disorders and with preserved comprehension and 16 age-matched TD children performed a judgment task on sentences produced 1) at normal rate, 2) at fast rate or 3) time-compressed. Sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally or artificially accelerated speech. The two groups do not significantly differ in normal rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception. PMID:29373610
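
    The sensitivity index d' used above is the standard signal-detection measure, d' = z(hit rate) - z(false-alarm rate). A minimal Python sketch with illustrative counts (not the study's data), including a common correction for extreme rates:

        # Sensitivity index d' for detecting semantically incongruent sentence-final words.
        # d' = z(hit rate) - z(false-alarm rate); a log-linear correction avoids infinite z-scores.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts for one child: 24 incongruent and 24 congruent sentence endings.
        print(f"d' = {d_prime(hits=20, misses=4, false_alarms=3, correct_rejections=21):.2f}")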

  4. Effect of concurrent walking and interlocutor distance on conversational speech intensity and rate in Parkinson's disease.

    PubMed

    McCaig, Cassandra M; Adams, Scott G; Dykstra, Allyson D; Jog, Mandar

    2016-01-01

    Previous studies have demonstrated a negative effect of concurrent walking and talking on gait in Parkinson's disease (PD), but there is limited information about the effect of concurrent walking on speech production. The present study examined the effect of sitting, standing, and three concurrent walking tasks (slow, normal, fast) on conversational speech intensity and speech rate in fifteen individuals with hypophonia related to idiopathic Parkinson's disease (PD) and fourteen age-equivalent controls. Interlocutor (talker-to-talker) distance effects and walking speed were also examined. Concurrent walking was found to produce a significant increase in speech intensity, relative to standing and sitting, in both the control and PD groups. Faster walking produced significantly greater speech intensity than slower walking. Concurrent walking had no effect on speech rate. Concurrent walking and talking produced significant reductions in walking speed in both the control and PD groups. In general, the results of the present study indicate that concurrent walking tasks and the speed of concurrent walking can have a significant positive effect on conversational speech intensity. These positive, "energizing" effects need to be given consideration in future attempts to develop a comprehensive model of speech intensity regulation, and they may have important implications for the development of new evaluation and treatment procedures for individuals with hypophonia related to PD. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  5. Hearing Aids

    MedlinePlus

    ... primarily useful in improving the hearing and speech comprehension of people who have hearing loss that results ... and you can change the program for different listening environments—from a small, quiet room to a ...

  6. Imitation and speech: commonalities within Broca's area.

    PubMed

    Kühn, Simone; Brass, Marcel; Gallinat, Jürgen

    2013-11-01

    The so-called embodiment of communication has attracted considerable interest. Recently, a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within the inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated the concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.

  7. Anatomy of aphasia revisited.

    PubMed

    Fridriksson, Julius; den Ouden, Dirk-Bart; Hillis, Argye E; Hickok, Gregory; Rorden, Chris; Basilakos, Alexandra; Yourganov, Grigori; Bonilha, Leonardo

    2018-01-17

    In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. This latter finding explains why patients with seemingly disparate lesion locations often experience similar impairments on given subtests. Namely, these individuals' cortical damage, although dissimilar, affects a broad cortical network that plays a role in carrying out a given speech or language task. The current data suggest this is a more accurate characterization than ascribing specific lesion locations as responsible for specific language deficits. © The Author(s) (2018). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Did you or I say pretty, rude or brief? An ERP study of the effects of speaker's identity on emotional word processing.

    PubMed

    Pinheiro, Ana P; Rezaii, Neguine; Nestor, Paul G; Rauber, Andréia; Spencer, Kevin M; Niznikiewicz, Margaret

    2016-02-01

    During speech comprehension, multiple cues need to be integrated at a millisecond speed, including semantic information, as well as voice identity and affect cues. A processing advantage has been demonstrated for self-related stimuli when compared with non-self stimuli, and for emotional relative to neutral stimuli. However, very few studies investigated self-other speech discrimination and, in particular, how emotional valence and voice identity interactively modulate speech processing. In the present study we probed how the processing of words' semantic valence is modulated by speaker's identity (self vs. non-self voice). Sixteen healthy subjects listened to 420 prerecorded adjectives differing in voice identity (self vs. non-self) and semantic valence (neutral, positive and negative), while electroencephalographic data were recorded. Participants were instructed to decide whether the speech they heard was their own (self-speech condition), someone else's (non-self speech), or if they were unsure. The ERP results demonstrated interactive effects of speaker's identity and emotional valence on both early (N1, P2) and late (Late Positive Potential - LPP) processing stages: compared with non-self speech, self-speech with neutral valence elicited more negative N1 amplitude, self-speech with positive valence elicited more positive P2 amplitude, and self-speech with both positive and negative valence elicited more positive LPP. ERP differences between self and non-self speech occurred in spite of similar accuracy in the recognition of both types of stimuli. Together, these findings suggest that emotion and speaker's identity interact during speech processing, in line with observations of partially dependent processing of speech and speaker information. Copyright © 2016. Published by Elsevier Inc.

  9. The eye as a window to the listening brain: neural correlates of pupil size as a measure of cognitive listening load.

    PubMed

    Zekveld, Adriana A; Heslenfeld, Dirk J; Johnsrude, Ingrid S; Versfeld, Niek J; Kramer, Sophia E

    2014-11-01

    An important aspect of hearing is the degree to which listeners have to deploy effort to understand speech. One promising measure of listening effort is task-evoked pupil dilation. Here, we use functional magnetic resonance imaging (fMRI) to identify the neural correlates of pupil dilation during comprehension of degraded spoken sentences in 17 normal-hearing listeners. Subjects listened to sentences degraded in three different ways: the target female speech was masked by fluctuating noise, by speech from a single male speaker, or the target speech was noise-vocoded. The degree of degradation was individually adapted such that 50% or 84% of the sentences were intelligible. Control conditions included clear speech in quiet, and silent trials. The peak pupil dilation was larger for the 50% compared to the 84% intelligibility condition, and largest for speech masked by the single-talker masker, followed by speech masked by fluctuating noise, and smallest for noise-vocoded speech. Activation in the bilateral superior temporal gyrus (STG) showed the same pattern, with most extensive activation for speech masked by the single-talker masker. Larger peak pupil dilation was associated with more activation in the bilateral STG, bilateral ventral and dorsal anterior cingulate cortex and several frontal brain areas. A subset of the temporal region sensitive to pupil dilation was also sensitive to speech intelligibility and degradation type. These results show that pupil dilation during speech perception in challenging conditions reflects both auditory and cognitive processes that are recruited to cope with degraded speech and the need to segregate target speech from interfering sounds. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Neural Oscillations Carry Speech Rhythm through to Comprehension

    PubMed Central

    Peelle, Jonathan E.; Davis, Matthew H.

    2012-01-01

    A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners’ processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging – particularly electroencephalography (EEG) and magnetoencephalography (MEG) – point to phase locking by ongoing cortical oscillations to low-frequency information (~4–8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain. PMID:22973251
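
    The slow amplitude modulations discussed above can be extracted directly from a waveform. As a hedged illustration (assuming a mono signal array and SciPy; this is not code from the cited work), the sketch below computes the amplitude envelope and band-limits it to the ~4-8 Hz range that cortical oscillations are reported to track:

        # Extract the low-frequency (~4-8 Hz) amplitude-envelope fluctuations of a speech signal.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def theta_envelope(signal, fs, low=4.0, high=8.0):
            """Band-limit the broadband amplitude envelope to the theta (4-8 Hz) range."""
            envelope = np.abs(hilbert(signal))           # instantaneous amplitude
            b, a = butter(2, [low, high], btype="band", fs=fs)
            return filtfilt(b, a, envelope)              # zero-phase band-pass of the envelope

        # Hypothetical usage: one second of synthetic, 5 Hz-modulated noise at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        speech_like = np.random.randn(fs) * (1.0 + np.sin(2 * np.pi * 5 * t))
        env_4_8 = theta_envelope(speech_like, fs)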

  11. Speech reading and learning to read: a comparison of 8-year-old profoundly deaf children with good and poor reading ability.

    PubMed

    Harris, Margaret; Moreno, Constanza

    2006-01-01

    Nine children with severe-profound prelingual hearing loss and single-word reading scores not more than 10 months behind chronological age (Good Readers) were matched with 9 children whose reading lag was at least 15 months (Poor Readers). Good Readers had significantly higher spelling and reading comprehension scores. They produced significantly more phonetic errors (indicating the use of phonological coding) and more often correctly represented the number of syllables in spelling than Poor Readers. They also scored more highly on orthographic awareness and were better at speech reading. Speech intelligibility was the same in the two groups. Cluster analysis revealed that only three Good Readers showed strong evidence of phonetic coding in spelling although seven had good representation of syllables; only four had high orthographic awareness scores. However, all 9 children were good speech readers, suggesting that a phonological code derived through speech reading may underpin reading success for deaf children.

  12. Treating dysarthria following traumatic brain injury: investigating the benefits of commencing treatment during post-traumatic amnesia in two participants.

    PubMed

    McGhee, Hannah; Cornwell, Petrea; Addis, Paula; Jarman, Carly

    2006-11-01

    The aims of this preliminary study were to explore the suitability for and benefits of commencing dysarthria treatment for people with traumatic brain injury (TBI) while in post-traumatic amnesia (PTA). It was hypothesized that behaviours in PTA would not preclude participation and that dysarthria characteristics would improve post-treatment. The design was a series of comprehensive case analyses. Two participants with severe TBI received dysarthria treatment focused on motor speech deficits until emergence from PTA. A checklist of neurobehavioural sequelae of TBI was rated during therapy, and perceptual and motor speech assessments were administered before and after therapy. Results revealed that certain behaviours affected the quality of therapy but did not preclude the provision of therapy. Treatment resulted in physiological improvements in some speech sub-systems for both participants, with varying functional speech outcomes. These findings suggest that dysarthria treatment can begin and provide short-term benefits to speech production during the late stages of PTA post-TBI.

  13. Speech, Language, and Reading in 10-Year-Olds With Cleft: Associations With Teasing, Satisfaction With Speech, and Psychological Adjustment.

    PubMed

    Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie

    2017-03-01

    Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment. Cross-sectional data were collected during routine, multidisciplinary assessments in a centralized treatment setting involving speech and language therapists and clinical psychologists. Participants were children with cleft with palatal involvement aged 10 years from three birth cohorts (N = 170) and their parents. Speech was assessed with SVANTE-N, language with Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness), and reading with the Word Chain Test and the Reading Comprehension Test; psychological measures comprised the Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and the Child Experience Questionnaire. Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was found to be associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft was found to affect speech, language, and reading significantly. Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.

  14. From In-Session Behaviors to Drinking Outcomes: A Causal Chain for Motivational Interviewing

    ERIC Educational Resources Information Center

    Moyers, Theresa B.; Martin, Tim; Houck, Jon M.; Christopher, Paulette J.; Tonigan, J. Scott

    2009-01-01

    Client speech in favor of change within motivational interviewing sessions has been linked to treatment outcomes, but a causal chain has not yet been demonstrated. Using a sequential behavioral coding system for client speech, the authors found that, at both the session and utterance levels, specific therapist behaviors predict client change talk.…

  15. Language Policy, Tacit Knowledge, and Institutional Learning: The Case of the Swiss Public Service Broadcaster SRG SSR

    ERIC Educational Resources Information Center

    Perrin, Daniel

    2011-01-01

    "Promoting public understanding" is what the programming mandate asks the Swiss public broadcasting company SRG SSR to do. From a sociolinguistic perspective, this means linking speech communities with other speech communities, both between and within the German-, French-, Italian-, and Romansh-speaking parts of Switzerland. In the…

  16. Intact Inner Speech Use in Autism Spectrum Disorder: Evidence from a Short-Term Memory Task

    ERIC Educational Resources Information Center

    Williams, David; Happe, Francesca; Jarrold, Christopher

    2008-01-01

    Background: Inner speech has been linked to higher-order cognitive processes including "theory of mind", self-awareness and executive functioning, all of which are impaired in autism spectrum disorder (ASD). Individuals with ASD, themselves, report a propensity for visual rather than verbal modes of thinking. This study explored the extent to…

  17. Variability in Word Duration as a Function of Probability, Speech Style, and Prosody

    PubMed Central

    Baker, Rachel E.; Bradlow, Ann R.

    2010-01-01

    This article examines how probability (lexical frequency and previous mention), speech style, and prosody affect word duration, and how these factors interact. Participants read controlled materials in clear and plain speech styles. As expected, more probable words (higher frequencies and second mentions) were significantly shorter than less probable words, and words in plain speech were significantly shorter than those in clear speech. Interestingly, we found second mention reduction effects in both clear and plain speech, indicating that while clear speech is hyper-articulated, this hyper-articulation does not override probabilistic effects on duration. We also found an interaction between mention and frequency, but only in plain speech. High frequency words allowed more second mention reduction than low frequency words in plain speech, revealing a tendency to hypo-articulate as much as possible when all factors support it. Finally, we found that first mentions were more likely to be accented than second mentions. However, when these differences in accent likelihood were controlled, a significant second mention reduction effect remained. This supports the concept of a direct link between probability and duration, rather than a relationship solely mediated by prosodic prominence. PMID:20121039

  18. Are mirror neurons the basis of speech perception? Evidence from five cases with damage to the purported human mirror system

    PubMed Central

    Rogalsky, Corianne; Love, Tracy; Driscoll, David; Anderson, Steven W.; Hickok, Gregory

    2013-01-01

    The discovery of mirror neurons in macaque has led to a resurrection of motor theories of speech perception. Although the majority of lesion and functional imaging studies have associated perception with the temporal lobes, it has also been proposed that the ‘human mirror system’, which prominently includes Broca’s area, is the neurophysiological substrate of speech perception. Although numerous studies have demonstrated a tight link between sensory and motor speech processes, few have directly assessed the critical prediction of mirror neuron theories of speech perception, namely that damage to the human mirror system should cause severe deficits in speech perception. The present study measured speech perception abilities of patients with lesions involving motor regions in the left posterior frontal lobe and/or inferior parietal lobule (i.e., the proposed human ‘mirror system’). Performance was at or near ceiling in patients with fronto-parietal lesions. It is only when the lesion encroaches on auditory regions in the temporal lobe that perceptual deficits are evident. This suggests that ‘mirror system’ damage does not disrupt speech perception, but rather that auditory systems are the primary substrate for speech perception. PMID:21207313

  19. Student Assistant for Learning from Text (SALT): a hypermedia reading aid.

    PubMed

    MacArthur, C A; Haynes, J B

    1995-03-01

    Student Assistant for Learning from Text (SALT) is a software system for developing hypermedia versions of textbooks designed to help students with learning disabilities and other low-achieving students to compensate for their reading difficulties. In the present study, 10 students with learning disabilities (3 young women and 7 young men ages 15 to 17) in Grades 9 and 10 read passages from a science textbook using a basic computer version and an enhanced computer version. The basic version included the components found in the printed textbook (text, graphics, outline, and questions) and a notebook. The enhanced version added speech synthesis, an on-line glossary, links between questions and text, highlighting of main ideas, and supplementary explanations that summarized important ideas. Students received significantly higher comprehension scores using the enhanced version. Furthermore, students preferred the enhanced version and thought it helped them learn the material better.

  20. Facilitating Comprehension of Non-Native English Speakers during Lectures in English with STR-Texts

    ERIC Educational Resources Information Center

    Shadiev, Rustam; Wu, Ting-Ting; Huang, Yueh-Min

    2018-01-01

    We provided texts generated by speech-to text-recognition (STR) technology for non-native English speaking students during lectures in English in order to test whether STR-texts were useful for enhancing students' comprehension of lectures. To this end, we carried out an experiment in which 60 participants were randomly assigned to a control group…

  1. Comprehensive Early Stimulation Program for Infants. Instruction Manual [and] Early Interventionist's Workbook [and] Parent/Caregiver Workbook. William Beaumont Hospital Speech and Language Pathology Series.

    ERIC Educational Resources Information Center

    Santana, Altagracia A.; Bottino, Patti M.

    This early intervention kit includes a Comprehensive Early Stimulation Program for Infants (CESPI) instruction manual, an early interventionist workbook, and ten parent/caregiver workbooks. The CESPI early intervention program is designed to provide therapists, teachers, other health professionals, and parents with a common-sense, practical guide…

  2. Development of Comprehensibility and Its Linguistic Correlates: A Longitudinal Study of Video-Mediated Telecollaboration

    ERIC Educational Resources Information Center

    Akiyama, Yuka; Saito, Kazuyo

    2016-01-01

    This study examined whether 30 learners of Japanese in the United States who engaged in a semester-long video-based eTandem course made gains in global language comprehensibility, that is, ease of understanding (Derwing & Munro, 2009), and what linguistic correlates contributed to these gains. Speech excerpts from Week 2 and 8 of tandem…

  3. Towards the Development of a Comprehensive Pedagogical Framework for Pronunciation Training Based on Adapted Automatic Speech Recognition Systems

    ERIC Educational Resources Information Center

    Ali, Saandia

    2016-01-01

    This paper reports on the early stages of a locally funded research and development project taking place at Rennes 2 university. It aims at developing a comprehensive pedagogical framework for pronunciation training for adult learners of English. This framework will combine a direct approach to pronunciation training (face-to-face teaching) with…

  4. Linguistic Processing of Accented Speech Across the Lifespan

    PubMed Central

    Cristia, Alejandrina; Seidl, Amanda; Vaughn, Charlotte; Schmale, Rachel; Bradlow, Ann; Floccia, Caroline

    2012-01-01

    In most of the world, people have regular exposure to multiple accents. Therefore, learning to quickly process accented speech is a prerequisite to successful communication. In this paper, we examine work on the perception of accented speech across the lifespan, from early infancy to late adulthood. Unfamiliar accents initially impair linguistic processing by infants, children, younger adults, and older adults, but listeners of all ages come to adapt to accented speech. Emergent research also goes beyond these perceptual abilities by assessing links with production and the relative contributions of linguistic knowledge and general cognitive skills. We conclude by underlining points of convergence across ages, and the gaps that remain to be addressed in future work. PMID:23162513

  5. Beyond stuttering: Speech disfluencies in normally fluent French-speaking children at age 4.

    PubMed

    Leclercq, Anne-Lise; Suaire, Pauline; Moyse, Astrid

    2018-01-01

    The aim of this study was to establish normative data on the speech disfluencies of normally fluent French-speaking children at age 4, an age at which stuttering has begun in 95% of children who stutter (Yairi & Ambrose, 2013). Fifty monolingual French-speaking children who do not stutter participated in the study. Analyses of a conversational speech sample comprising 250-550 words revealed an average of 10% total disfluencies, 2% stuttering-like disfluencies and around 8% non-stuttered disfluencies. Possible explanations for these high speech disfluency frequencies are discussed, including explanations linked to French in particular. The results shed light on the importance of normative data specific to each language.

  6. Hypermedia (Multimedia).

    ERIC Educational Resources Information Center

    Byrom, Elizabeth

    1990-01-01

    Hypermedia allows students to follow associative links among elements of nonsequential information, by combining information from multiple sources into one microcomputer-controlled system. Hypermedia products help teachers create lessons integrating text, motion film, color graphics, speech, and music, by linking such electronic devices as…

  7. What an otolaryngologist should know about evaluation of a child referred for delay in speech development.

    PubMed

    Tonn, Christopher R; Grundfast, Kenneth M

    2014-03-01

    Otolaryngologists are asked to evaluate children whom a parent, physician, or someone else believes to be slow in developing speech. Therefore, an otolaryngologist should be familiar with milestones for normal speech development, the causes of delay in speech development, and the best ways to help assure that children develop the ability to speak in a normal way. The aim of this review is to provide information for otolaryngologists that is helpful in the evaluation and management of children perceived to be delayed in developing speech. Data were obtained via literature searches, online databases, textbooks, and the most recent national guidelines on topics including speech delay, language delay, and the underlying disorders that can cause delay in developing speech. Emphasis was placed on epidemiology, pathophysiology, the most common presentation, and treatment strategies. Most of the sources referenced were published within the past 5 years. Our article is a summary of major causes of speech delay based on reliable sources as listed herein. Speech delay can be the manifestation of a spectrum of disorders affecting the language comprehension and/or speech production pathways, ranging from disorders involving global developmental limitations to motor dysfunction to hearing loss. Determining the cause of a child's delay in speech production is a time-sensitive issue because a child loses valuable opportunities in intellectual development if his or her communication defect is not addressed and ameliorated with treatment. Knowing several key items about each disorder can help otolaryngologists direct families to the correct health care provider to maximize the child's learning potential and intellectual growth curve.

  8. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography

    PubMed Central

    Ozker, Muge; Schepers, Inga M.; Magnotti, John F.; Yoshor, Daniel; Beauchamp, Michael S.

    2017-01-01

    Human speech can be comprehended using only auditory information from the talker’s voice. However, comprehension is improved if the talker’s face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl’s gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech. PMID:28253074
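
    The Bayesian prediction cited above, that combining independent auditory and visual cues should reduce variability, follows from reliability-weighted cue combination: the combined variance is the inverse of the summed inverse variances and is therefore never larger than that of the better single cue. A minimal sketch with hypothetical variances (a textbook formula, not the authors' model code):

        # Maximum-likelihood (reliability-weighted) combination of two independent cues:
        #   1 / var_combined = 1 / var_auditory + 1 / var_visual
        # so var_combined <= min(var_auditory, var_visual).
        def combined_variance(var_auditory, var_visual):
            return 1.0 / (1.0 / var_auditory + 1.0 / var_visual)

        # Hypothetical variances: noisy auditory speech (unreliable) plus clear visual speech.
        var_a, var_v = 4.0, 1.0
        print(f"combined variance = {combined_variance(var_a, var_v):.2f} "
              f"(best single cue = {min(var_a, var_v):.2f})")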

  9. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    PubMed

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.

  10. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.
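
    The kernel-estimation idea described above, modeling the continuous response as a convolution of the stimulus with a response function, can be illustrated compactly. The study estimated kernels with a boosting algorithm and cross-validation at each source element; the sketch below instead uses ordinary ridge regression on synthetic single-channel data, so the estimator, data, and parameter values are stand-ins rather than the authors' pipeline.

```python
# Minimal sketch of linear-kernel (temporal response function) estimation.
# The response is modeled as a convolution of the stimulus with a kernel;
# the kernel is estimated by regularized least squares on synthetic data.

import numpy as np

def estimate_trf(stimulus, response, n_lags, ridge=1.0):
    """Estimate a kernel h such that response[t] ~ sum_k h[k] * stimulus[t - k]."""
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[: n - k]
    # Ridge solution: h = (X'X + ridge*I)^-1 X'y
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

# Toy example: recover a known kernel from a noisy convolved response.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)                     # e.g., an acoustic envelope
true_h = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3])   # hypothetical kernel
resp = np.convolve(stim, true_h)[:5000] + 0.1 * rng.standard_normal(5000)
print(np.round(estimate_trf(stim, resp, n_lags=6, ridge=1.0), 2))
```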

  11. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    PubMed

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In speech therapy for speech disorders, two antagonistic methodological approaches are applied: non-verbal approaches, based on oral motor exercises (OME), and verbal approaches, based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. This article reviews the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could, or could not, justify their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach for treating developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use, taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and in other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  12. Potential interactions among linguistic, autonomic, and motor factors in speech.

    PubMed

    Kleinow, Jennifer; Smith, Anne

    2006-05-01

    Though anecdotal reports link certain speech disorders to increases in autonomic arousal, few studies have described the relationship between arousal and speech processes. Additionally, it is unclear how increases in arousal may interact with other cognitive-linguistic processes to affect speech motor control. In this experiment we examine potential interactions between autonomic arousal, linguistic processing, and speech motor coordination in adults and children. Autonomic responses (heart rate, finger pulse volume, tonic skin conductance, and phasic skin conductance) were recorded simultaneously with upper and lower lip movements during speech. The lip aperture variability (LA variability index) across multiple repetitions of sentences that varied in length and syntactic complexity was calculated under low- and high-arousal conditions. High arousal conditions were elicited by performance of the Stroop color word task. Children had significantly higher lip aperture variability index values across all speaking tasks, indicating more variable speech motor coordination. Increases in syntactic complexity and utterance length were associated with increases in speech motor coordination variability in both speaker groups. There was a significant effect of Stroop task, which produced increases in autonomic arousal and increased speech motor variability in both adults and children. These results provide novel evidence that high arousal levels can influence speech motor control in both adults and children. (c) 2006 Wiley Periodicals, Inc.
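
    A lip aperture variability index of the kind described above is commonly computed as a spatiotemporal index: each repetition's lip-aperture trajectory is amplitude-normalized and linearly time-normalized, and the standard deviations across repetitions are then summed over normalized time. The sketch below follows that common formulation on synthetic data; whether it matches the authors' exact computation is an assumption.

```python
# Hedged sketch of a lip-aperture variability index (spatiotemporal-index style).
# Input trajectories below are synthetic; this may differ from the authors' exact
# procedure.

import numpy as np

def variability_index(trials, n_points=50):
    """trials: list of 1-D lip-aperture signals, one per sentence repetition."""
    normalized = []
    for trial in trials:
        trial = np.asarray(trial, dtype=float)
        z = (trial - trial.mean()) / trial.std()        # amplitude-normalize
        t_old = np.linspace(0, 1, len(z))
        t_new = np.linspace(0, 1, n_points)
        normalized.append(np.interp(t_new, t_old, z))    # linear time-normalize
    normalized = np.vstack(normalized)
    return np.sum(np.std(normalized, axis=0))            # sum of SDs over time

# Synthetic example: ten noisy repetitions of the same movement pattern.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 2 * np.pi, 200))
reps = [template + 0.05 * rng.standard_normal(200) for _ in range(10)]
print(round(variability_index(reps), 2))   # larger values = more variable coordination
```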

  13. The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language

    PubMed Central

    Poeppel, David

    2012-01-01

    Research on the brain basis of speech and language faces theoretical and empirical challenges. The majority of current research, dominated by imaging, deficit-lesion, and electrophysiological techniques, seeks to identify regions that underpin aspects of language processing such as phonology, syntax, or semantics. The emphasis lies on localization and spatial characterization of function. The first part of the paper deals with a practical challenge that arises in the context of such a research program. This maps problem concerns the extent to which spatial information and localization can satisfy the explanatory needs for perception and cognition. Several areas of investigation exemplify how the neural basis of speech and language is discussed in those terms (regions, streams, hemispheres, networks). The second part of the paper turns to a more troublesome challenge, namely how to formulate the formal links between neurobiology and cognition. This principled problem thus addresses the relation between the primitives of cognition (here speech, language) and neurobiology. Dealing with this mapping problem invites the development of linking hypotheses between the domains. The cognitive sciences provide granular, theoretically motivated claims about the structure of various domains (the ‘cognome’); neurobiology, similarly, provides a list of the available neural structures. However, explanatory connections will require crafting computationally explicit linking hypotheses at the right level of abstraction. For both the practical maps problem and the principled mapping problem, developmental approaches and evidence can play a central role in the resolution. PMID:23017085

  14. A Shared Neural Substrate for Mentalizing and the Affective Component of Sentence Comprehension

    PubMed Central

    Hervé, Pierre-Yves; Razafimandimby, Annick; Jobard, Gaël; Tzourio-Mazoyer, Nathalie

    2013-01-01

    Using event-related fMRI in a sample of 42 healthy participants, we compared the cerebral activity maps obtained when classifying spoken sentences based on the mental content of the main character (belief, deception or empathy) or on the emotional tonality of the sentence (happiness, anger or sadness). To control for the effects of different syntactic constructions (such as embedded clauses in belief sentences), we subtracted from each map the BOLD activations obtained during plausibility judgments on structurally matching sentences, devoid of emotions or ToM. The obtained theory of mind (ToM) and emotional speech comprehension networks overlapped in the bilateral temporo-parietal junction, posterior cingulate cortex, right anterior temporal lobe, dorsomedial prefrontal cortex and in the left inferior frontal sulcus. These regions form a ToM network, which contributes to the emotional component of spoken sentence comprehension. Compared with the ToM task, in which the sentences were enounced on a neutral tone, the emotional sentence classification task, in which the sentences were play-acted, was associated with a greater activity in the bilateral superior temporal sulcus, in line with the presence of emotional prosody. Besides, the ventromedial prefrontal cortex was more active during emotional than ToM sentence processing. This region may link mental state representations with verbal and prosodic emotional cues. Compared with emotional sentence classification, ToM was associated with greater activity in the caudate nucleus, paracingulate cortex, and superior frontal and parietal regions, in line with behavioral data showing that ToM sentence comprehension was a more demanding task. PMID:23342148

  15. Children with Williams Syndrome: Language, Cognitive, and Behavioral Characteristics and their Implications for Intervention

    PubMed Central

    Mervis, Carolyn B.; Velleman, Shelley L.

    2012-01-01

    Williams syndrome (WS) is a rare genetic disorder characterized by heart disease, failure to thrive, hearing loss, intellectual or learning disability, speech and language delay, gregariousness, and non-social anxiety. The WS psycholinguistic profile is complex, including relative strengths in concrete vocabulary, phonological processing, and verbal short-term memory and relative weaknesses in relational/conceptual language, reading comprehension, and pragmatics. Many children evidence difficulties with finiteness marking and complex grammatical constructions. Speech-language intervention, support, and advocacy are crucial. PMID:22754603

  16. Speech as a breakthrough signaling resource in the cognitive evolution of biological complex adaptive systems.

    PubMed

    Mattei, Tobias A

    2014-12-01

    In self-adapting dynamical systems, a significant improvement in the signaling flow among agents constitutes one of the most powerful triggering events for the emergence of new complex behaviors. Ackermann and colleagues' comprehensive phylogenetic analysis of the brain structures involved in acoustic communication provides further evidence of the essential role which speech, as a breakthrough signaling resource, has played in the evolutionary development of human cognition viewed from the standpoint of complex adaptive system analysis.

  17. Human factors research problems in electronic voice warning system design

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.; Williams, D. H.

    1975-01-01

    The speech messages issued by voice warning systems must be carefully designed in accordance with general principles of human decision making processes, human speech comprehension, and the conditions in which the warnings can occur. The operator's effectiveness must not be degraded by messages that are either inappropriate or difficult to comprehend. Important experimental variables include message content, linguistic redundancy, signal/noise ratio, interference with concurrent tasks, and listener expectations generated by the pragmatic or real world context in which the messages are presented.

  18. Linking sounds to meanings: infant statistical learning in a natural language.

    PubMed

    Hay, Jessica F; Pelucchi, Bruna; Graf Estes, Katharine; Saffran, Jenny R

    2011-09-01

    The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants' subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants' prior experience with the distribution of sounds that make up words in natural languages. Copyright © 2011 Elsevier Inc. All rights reserved.
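
    The transitional-probability statistics at the heart of this work are simple conditional frequencies: the forward TP of a syllable pair XY is freq(XY)/freq(X), and the backward TP is freq(XY)/freq(Y). The sketch below computes both over a hypothetical syllable stream; the stream and syllables are invented for illustration and are not the Italian materials used in the study.

```python
# Sketch (hypothetical mini-corpus): forward and backward transitional
# probabilities over a syllable stream, the statistics infants are thought
# to track when segmenting words.

from collections import Counter

def transitional_probabilities(syllables):
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    forward = {(x, y): c / unigrams[x] for (x, y), c in bigrams.items()}
    backward = {(x, y): c / unigrams[y] for (x, y), c in bigrams.items()}
    return forward, backward

# Toy stream in which "fu ga" always co-occurs but "ga ti" is variable.
stream = ["fu", "ga", "ti", "mo", "fu", "ga", "bu", "ti", "fu", "ga", "ti", "mo"]
fwd, bwd = transitional_probabilities(stream)
print(fwd[("fu", "ga")], bwd[("fu", "ga")])   # 1.0, 1.0 -> highly coherent pair
print(round(fwd[("ga", "ti")], 2))             # lower forward TP
```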

  19. Perception and analysis of Spanish accents in English speech

    NASA Astrophysics Data System (ADS)

    Chism, Cori; Lass, Norman

    2002-05-01

    The purpose of the present study was to determine what relates most closely to the degree of perceived foreign accent in the English speech of native Spanish speakers: intonation, vowel length, stress, voice onset time (VOT), or segmental accuracy. Nineteen native English-speaking listeners rated speech samples from 7 native English speakers and 15 native Spanish speakers for comprehensibility and degree of foreign accent. The speech samples were analyzed spectrographically and perceptually to obtain numerical values for each variable. Correlation coefficients were computed to determine the relationship between these values and the average foreign accent scores. Results showed that the average foreign accent scores were statistically significantly correlated with three variables: the length of stressed vowels (r=-0.48, p=0.05), voice onset time (r=-0.62, p=0.01), and segmental accuracy (r=0.92, p=0.001). Implications of these findings and suggestions for future research are discussed.
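
    The analysis reported above reduces to computing Pearson correlations between per-speaker measures and mean accent ratings. The sketch below shows that computation on synthetic values; the numbers are invented and imply nothing about the direction or size of the study's reported correlations.

```python
# Minimal sketch (synthetic data): Pearson correlation between a per-speaker
# measure and mean foreign-accent ratings, the statistic reported above.

import numpy as np

rng = np.random.default_rng(3)
measure = rng.uniform(0.0, 1.0, size=15)    # hypothetical per-speaker acoustic values
ratings = rng.uniform(1.0, 9.0, size=15)    # hypothetical mean accent ratings

r = np.corrcoef(measure, ratings)[0, 1]     # Pearson r between the two variables
print(round(r, 2))
```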

  20. Developmental language and speech disability.

    PubMed

    Spiel, G; Brunner, E; Allmayer, B; Pletz, A

    2001-09-01

    Speech disabilities (articulation deficits) and language disorders, both expressive (vocabulary) and receptive (language comprehension), are not uncommon in children. This article presents an overview of these, along with a global description of the impairment of communication and the clinical characteristics of developmental language disorders. The diagnostic classifications applied in the European and Anglo-American fields, ICD-10 and DSM-IV, are explained and compared. Because of their strengths and weaknesses, an alternative classification of language and speech developmental disorders is proposed, which allows a differentiation between expressive and receptive language capabilities with regard to the semantic and morphological/syntactic domains. Prevalence and comorbidity rates, psychosocial influences, biological factors and biological-social interaction are discussed. The necessity of using standardized examinations is emphasised. General logopaedic treatment paradigms, specific therapy concepts and an overview of prognosis are described.

  1. Conceptual clusters in figurative language production.

    PubMed

    Corts, Daniel P; Meyers, Kristina

    2002-07-01

    Although most prior research on figurative language examines comprehension, several recent studies on the production of such language have proved to be informative. One of the most noticeable traits of figurative language production is that it is produced at a somewhat random rate with occasional bursts of highly figurative speech (e.g., Corts & Pollio, 1999). The present article seeks to extend these findings by observing production during speech that involves a very high base rate of figurative language, making statistically defined bursts difficult to detect. In an analysis of three Baptist sermons, burst-like clusters of figurative language were identified. Further study indicated that these clusters largely involve a central root metaphor that represents the topic under consideration. An interaction of the coherence, along with a conceptual understanding of a topic and the relative importance of the topic to the purpose of the speech, is offered as the most likely explanation for the clustering of figurative language in natural speech.

  2. Electroencephalographic Abnormalities during Sleep in Children with Developmental Speech-Language Disorders: A Case-Control Study

    ERIC Educational Resources Information Center

    Parry-Fielder, Bronwyn; Collins, Kevin; Fisher, John; Keir, Eddie; Anderson, Vicki; Jacobs, Rani; Scheffer, Ingrid E.; Nolan, Terry

    2009-01-01

    Earlier research has suggested a link between epileptiform activity in the electroencephalogram (EEG) and developmental speech-language disorder (DSLD). This study investigated the strength of this association by comparing the frequency of EEG abnormalities in 45 language-normal children (29 males, 16 females; mean age 6y 11mo, SD 1y 10mo, range…

  3. Brief Report: Further Evidence for a Link between Inner Speech Limitations and Executive Function in High-Functioning Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Russell-Smith, Suzanna N.; Comerford, Bronwynn J. E.; Maybery, Murray T.; Whitehouse, Andrew J. O.

    2014-01-01

    This study investigated the involvement of inner speech limitations in the executive dysfunction associated with autism spectrum disorders (ASDs). Seventeen children with ASD and 18 controls, statistically-matched in age and IQ, performed a computer-based card sorting test (CST) to assess cognitive flexibility under four conditions: baseline, with…

  4. Pitch and Time Processing in Speech and Tones: The Effects of Musical Training and Attention

    ERIC Educational Resources Information Center

    Sares, Anastasia G.; Foster, Nicholas E. V.; Allen, Kachina; Hyde, Krista L.

    2018-01-01

    Purpose: Musical training is often linked to enhanced auditory discrimination, but the relative roles of pitch and time in music and speech are unclear. Moreover, it is unclear whether pitch and time processing are correlated across individuals and how they may be affected by attention. This study aimed to examine pitch and time processing in…

  5. Finding Words and Word Structure in Artificial Speech: The Development of Infants' Sensitivity to Morphosyntactic Regularities

    ERIC Educational Resources Information Center

    Marchetto, Erika; Bonatti, Luca L.

    2015-01-01

    To achieve language proficiency, infants must find the building blocks of speech and master the rules governing their legal combinations. However, these problems are linked: words are also built according to rules. Here, we explored early morphosyntactic sensitivity by testing when and how infants could find either words or within-word structure…

  6. Are Young Children's Utterances Affected by Characteristics of Their Learning Environments? A Multiple Case Study

    ERIC Educational Resources Information Center

    Richardson, Tanya; Murray, Jane

    2017-01-01

    Within English early childhood education, there is emphasis on improving speech and language development as well as a drive for outdoor learning. This paper synthesises both aspects to consider whether or not links exist between the environment and the quality of young children's utterances as part of their speech and language development and if…

  7. Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.

    PubMed

    Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine

    2017-04-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. On how the brain decodes vocal cues about speaker confidence.

    PubMed

    Jiang, Xiaoming; Pell, Marc D

    2015-05-01

    In speech communication, listeners must accurately decode vocal cues that refer to the speaker's mental state, such as their confidence or 'feeling of knowing'. However, the time course and neural mechanisms associated with online inferences about speaker confidence are unclear. Here, we used event-related potentials (ERPs) to examine the temporal neural dynamics underlying a listener's ability to infer speaker confidence from vocal cues during speech processing. We recorded listeners' real-time brain responses while they evaluated statements wherein the speaker's tone of voice conveyed one of three levels of confidence (confident, close-to-confident, unconfident) or were spoken in a neutral manner. Neural responses time-locked to event onset show that the perceived level of speaker confidence could be differentiated at distinct time points during speech processing: unconfident expressions elicited a weaker P2 than all other expressions of confidence (or neutral-intending utterances), whereas close-to-confident expressions elicited a reduced negative response in the 330-500 msec and 550-740 msec time window. Neutral-intending expressions, which were also perceived as relatively confident, elicited a more delayed, larger sustained positivity than all other expressions in the 980-1270 msec window for this task. These findings provide the first piece of evidence of how quickly the brain responds to vocal cues signifying the extent of a speaker's confidence during online speech comprehension; first, a rough dissociation between unconfident and confident voices occurs as early as 200 msec after speech onset. At a later stage, further differentiation of the exact level of speaker confidence (i.e., close-to-confident, very confident) is evaluated via an inferential system to determine the speaker's meaning under current task settings. These findings extend three-stage models of how vocal emotion cues are processed in speech comprehension (e.g., Schirmer & Kotz, 2006) by revealing how a speaker's mental state (i.e., feeling of knowing) is simultaneously inferred from vocal expressions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior

    PubMed Central

    2018-01-01

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners’ abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication. PMID:28938250

  10. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior.

    PubMed

    Peelle, Jonathan E

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.

  11. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech

    PubMed Central

    Dick, Anthony Steven; Mok, Eva H.; Beharelle, Anjali Raja; Goldin-Meadow, Susan; Small, Steven L.

    2013-01-01

    In everyday conversation, listeners often rely on a speaker’s gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers’ iconic gestures. We focused on iconic gestures that contribute information not found in the speaker’s talk, compared to those that convey information redundant with the speaker’s talk. We found that three regions—left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)—responded more strongly when gestures added information to non-specific language, compared to when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the non-specific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. PMID:23238964

  12. Bilateral Versus Unilateral Cochlear Implantation in Adult Listeners: Speech-On-Speech Masking and Multitalker Localization

    PubMed Central

    Buchholz, Jörg M.; Morgan, Catherine; Sharma, Mridula; Weller, Tobias; Konganda, Shivali Appaiah; Shirai, Kyoko; Kawano, Atsushi

    2017-01-01

    Binaural hearing helps normal-hearing listeners localize sound sources and understand speech in noise. However, it is not fully understood how far this is the case for bilateral cochlear implant (CI) users. To determine the potential benefits of bilateral over unilateral CIs, speech comprehension thresholds (SCTs) were measured in seven Japanese bilateral CI recipients using Helen test sentences (translated into Japanese) in a two-talker speech interferer presented from the front (co-located with the target speech), ipsilateral to the first-implanted ear (at +90° or −90°), and spatially symmetric at ±90°. Spatial release from masking was calculated as the difference between co-located and spatially separated SCTs. Localization was assessed in the horizontal plane by presenting either male or female speech or both simultaneously. All measurements were performed bilaterally and unilaterally (with the first implanted ear) inside a loudspeaker array. Both SCTs and spatial release from masking were improved with bilateral CIs, demonstrating mean bilateral benefits of 7.5 dB in spatially asymmetric and 3 dB in spatially symmetric speech mixture. Localization performance varied strongly between subjects but was clearly improved with bilateral over unilateral CIs with the mean localization error reduced by 27°. Surprisingly, adding a second talker had only a negligible effect on localization. PMID:28752811

  13. Automatic speech recognition in air-ground data link

    NASA Technical Reports Server (NTRS)

    Armstrong, Herbert B.

    1989-01-01

    In the present air traffic system, information presented to the transport aircraft cockpit crew may originate from a variety of sources and may be presented to the crew in visual or aural form, either through cockpit instrument displays or, most often, through voice communication. Voice radio communications are the most error-prone method for air-ground data link. Voice messages can be misstated or misunderstood, and radio frequency congestion can delay or obscure important messages. To prevent a proliferation of separate displays, a multiplexed data link display can be designed to present information from multiple data link sources on a shared cockpit display unit (CDU) or multi-function display (MFD), or on some future combination of flight management and data link information. An aural data link which incorporates an automatic speech recognition (ASR) system for crew response offers several advantages over visual displays. The possibility of applying ASR to the air-ground data link was investigated. The first step was to review current efforts in ASR applications in the cockpit and in air traffic control and to evaluate their possible data link application. Next, a series of preliminary research questions is to be developed for possible future collaboration.

  14. The influence of target-masker similarity on across-ear interference in dichotic listening

    NASA Astrophysics Data System (ADS)

    Brungart, Douglas; Simpson, Brian

    2004-05-01

    In most dichotic listening tasks, the comprehension of a target speech signal presented in one ear is unaffected by the presence of irrelevant speech in the opposite ear. However, recent results have shown that contralaterally presented interfering speech signals do influence performance when a second interfering speech signal is present in the same ear as the target speech. In this experiment, we examined the influence of target-masker similarity on this effect by presenting ipsilateral and contralateral masking phrases spoken by the same talker, a different same-sex talker, or a different-sex talker than the one used to generate the target speech. The results show that contralateral target-masker similarity has the greatest influence on performance when an easily segregated different-sex masker is presented in the target ear, and the least influence when a difficult-to-segregate same-talker masker is presented in the target ear. These results indicate that across-ear interference in dichotic listening is not directly related to the difficulty of the segregation task in the target ear, and suggest that contralateral maskers are least likely to interfere with dichotic speech perception when the same general strategy could be used to segregate the target from the masking voices in the ipsilateral and contralateral ears.

  15. Current Research in Southeast Asia.

    ERIC Educational Resources Information Center

    Beh, Yolanda

    1990-01-01

    Summaries of eight language-related research projects are presented from Brunei Darussalam, Indonesia, Malaysia, and Singapore. Topics include children's reading, nonstandard spoken Indonesian, English speech act performance, classroom verbal interaction, journal writing, and listening comprehension. (LB)

  16. Neural reuse of action perception circuits for language, concepts and communication.

    PubMed

    Pulvermüller, Friedemann

    2018-01-01

    Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models and, in particular, the concept of distributionally-specific circuits, can account for some previously not well understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.

  17. Prediction in the service of comprehension: modulated early brain responses to omitted speech segments.

    PubMed

    Bendixen, Alexandra; Scharinger, Mathias; Strauß, Antje; Obleser, Jonas

    2014-04-01

    Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Working Memory Training and Speech in Noise Comprehension in Older Adults.

    PubMed

    Wayne, Rachel V; Hamilton, Cheryl; Jones Huyck, Julia; Johnsrude, Ingrid S

    2016-01-01

    Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near-transfer) and to other cognitive domains (far-transfer) using a cognitive test battery, including the Reading Span test, sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high-context) and half were semantically anomalous (low-context). Subjects completed 25 sessions (0.5-1 h each; 5 sessions/week) of both adaptive working memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test significantly correlated only with a test of visual episodic memory, suggesting that the Reading Span test is not a pure test of working memory, as is commonly assumed.

  19. Working Memory Training and Speech in Noise Comprehension in Older Adults

    PubMed Central

    Wayne, Rachel V.; Hamilton, Cheryl; Jones Huyck, Julia; Johnsrude, Ingrid S.

    2016-01-01

    Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near-transfer) and to other cognitive domains (far-transfer) using a cognitive test battery, including the Reading Span test, sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high-context) and half were semantically anomalous (low-context). Subjects completed 25 sessions (0.5–1 h each; 5 sessions/week) of both adaptive working memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test significantly correlated only with a test of visual episodic memory, suggesting that the Reading Span test is not a pure test of working memory, as is commonly assumed. PMID:27047370

  20. Speech-associated gestures, Broca’s area, and the human mirror system

    PubMed Central

    Skipper, Jeremy I.; Goldin-Meadow, Susan; Nusbaum, Howard C.; Small, Steven L

    2009-01-01

    Speech-associated gestures are hand and arm movements that not only convey semantic information to listeners but are themselves actions. Broca’s area has been assumed to play an important role both in semantic retrieval or selection (as part of a language comprehension system) and in action recognition (as part of a “mirror” or “observation–execution matching” system). We asked whether the role that Broca’s area plays in processing speech-associated gestures is consistent with the semantic retrieval/selection account (predicting relatively weak interactions between Broca’s area and other cortical areas because the meaningful information that speech-associated gestures convey reduces semantic ambiguity and thus reduces the need for semantic retrieval/selection) or the action recognition account (predicting strong interactions between Broca’s area and other cortical areas because speech-associated gestures are goal-directed actions that are “mirrored”). We compared the functional connectivity of Broca’s area with other cortical areas when participants listened to stories while watching meaningful speech-associated gestures, speech-irrelevant self-grooming hand movements, or no hand movements. A network analysis of neuroimaging data showed that interactions involving Broca’s area and other cortical areas were weakest when spoken language was accompanied by meaningful speech-associated gestures, and strongest when spoken language was accompanied by self-grooming hand movements or by no hand movements at all. Results are discussed with respect to the role that the human mirror system plays in processing speech-associated movements. PMID:17533001

  1. Neurobiological Bases of Reading Comprehension: Insights from Neuroimaging Studies of Word-Level and Text-Level Processing in Skilled and Impaired Readers

    ERIC Educational Resources Information Center

    Landi, Nicole; Frost, Stephen J.; Mencl, W. Einar; Sandak, Rebecca; Pugh, Kenneth R.

    2013-01-01

    For accurate reading comprehension, readers must first learn to map letters to their corresponding speech sounds and meaning, and then they must string the meanings of many words together to form a representation of the text. Furthermore, readers must master the complexities involved in parsing the relevant syntactic and pragmatic information…

  2. Working memory training to improve speech perception in noise across languages

    PubMed Central

    Ingvalson, Erin M.; Dhar, Sumitrajit; Wong, Patrick C. M.; Liu, Hanjun

    2015-01-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners. PMID:26093435

  3. Working memory training to improve speech perception in noise across languages.

    PubMed

    Ingvalson, Erin M; Dhar, Sumitrajit; Wong, Patrick C M; Liu, Hanjun

    2015-06-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners.

  4. Developing a bilingual "persian cued speech" website for parents and professionals of children with hearing impairment.

    PubMed

    Movallali, Guita; Sajedi, Firoozeh

    2014-03-01

    The use of the internet as a source of information gathering, self-help and support is becoming increasingly recognized. Parents and professionals of children with hearing impairment have been shown to seek information about different communication approaches online. Cued Speech is a very new approach for Persian-speaking pupils. Our aim was to develop a useful website providing information about Persian Cued Speech to parents and professionals of children with hearing impairment. All Cued Speech websites from different countries that fell within the first ten pages of the Google and Yahoo search engines were assessed. Main subjects and links were studied. All related information was gathered from the websites, textbooks, articles, etc. Using a framework that combined several criteria for health-information websites, we developed the Persian Cued Speech website for three distinct audiences (parents, professionals and children). An accurate, complete, accessible and readable resource about Persian Cued Speech for parents and professionals is now available.

  5. The brain’s conversation with itself: neural substrates of dialogic inner speech

    PubMed Central

    Weis, Susanne; McCarthy-Jones, Simon; Moseley, Peter; Smailes, David; Fernyhough, Charles

    2016-01-01

    Inner speech has been implicated in important aspects of normal and atypical cognition, including the development of auditory hallucinations. Studies to date have focused on covert speech elicited by simple word or sentence repetition, while ignoring richer and arguably more psychologically significant varieties of inner speech. This study compared neural activation for inner speech involving conversations (‘dialogic inner speech’) with single-speaker scenarios (‘monologic inner speech’). Inner speech-related activation differences were then compared with activations relating to Theory-of-Mind (ToM) reasoning and visual perspective-taking in a conjunction design. Generation of dialogic (compared with monologic) scenarios was associated with a widespread bilateral network including left and right superior temporal gyri, precuneus, posterior cingulate and left inferior and medial frontal gyri. Activation associated with dialogic scenarios and ToM reasoning overlapped in areas of right posterior temporal cortex previously linked to mental state representation. Implications for understanding verbal cognition in typical and atypical populations are discussed. PMID:26197805

  6. Cortical Measures of Phoneme-Level Speech Encoding Correlate with the Perceived Clarity of Natural Speech

    PubMed Central

    2018-01-01

    In real-world environments, humans comprehend speech by actively integrating prior knowledge (P) and expectations with sensory input. Recent studies have revealed effects of prior information in temporal and frontal cortical areas and have suggested that these effects are underpinned by enhanced encoding of speech-specific features, rather than a broad enhancement or suppression of cortical activity. However, in terms of the specific hierarchical stages of processing involved in speech comprehension, the effects of integrating bottom-up sensory responses and top-down predictions are still unclear. In addition, it is unclear whether the predictability that comes with prior information may differentially affect speech encoding relative to the perceptual enhancement that comes with that prediction. One way to investigate these issues is through examining the impact of P on indices of cortical tracking of continuous speech features. Here, we did this by presenting participants with degraded speech sentences that either were or were not preceded by a clear recording of the same sentences while recording non-invasive electroencephalography (EEG). We assessed the impact of prior information on an isolated index of cortical tracking that reflected phoneme-level processing. Our findings suggest the possibility that prior information affects the early encoding of natural speech in a dual manner. Firstly, the availability of prior information, as hypothesized, enhanced the perceived clarity of degraded speech, which was positively correlated with changes in phoneme-level encoding across subjects. In addition, P induced an overall reduction of this cortical measure, which we interpret as resulting from the increase in predictability. PMID:29662947

  7. "My Mind Is Doing It All": No "Brake" to Stop Speech Generation in Jargon Aphasia.

    PubMed

    Robinson, Gail A; Butterworth, Brian; Cipolotti, Lisa

    2015-12-01

    To study whether pressure of speech in jargon aphasia arises from disturbances to core language processes, to executive processes, or at their intersection in conceptual preparation. Conceptual preparation mechanisms for speech have not been well studied. Several mechanisms have been proposed for jargon aphasia, a fluent, well-articulated, logorrheic form of propositional speech that is almost incomprehensible. We studied the vast quantity of jargon speech produced by patient J.A., who had suffered an infarct after the clipping of a middle cerebral artery aneurysm. We gave J.A. baseline cognitive tests and experimental word- and sentence-generation tasks that we had designed for patients with dynamic aphasia, a severely reduced but otherwise fairly normal propositional speech thought to result from deficits in conceptual preparation. J.A. had cognitive dysfunction, including executive difficulties, and a language profile characterized by poor repetition and naming in the context of relatively intact single-word comprehension. His spontaneous speech was fluent but consisted largely of jargon. He had no difficulty generating sentences; in contrast to dynamic aphasia, his sentences were largely meaningless and not significantly affected by the level of stimulus constraint. This patient with jargon aphasia highlights that voluminous speech output can arise from disturbances of both language and executive functions. Our previous studies have identified three conceptual preparation mechanisms for speech: generation of novel thoughts, their sequencing, and their selection. This study raises the possibility that a "brake" to stop message generation may be a fourth conceptual preparation mechanism, one that lies behind the pressure of speech characteristic of jargon aphasia.

  8. Analytic study of the Tadoma method: background and preliminary results.

    PubMed

    Norton, S J; Schultz, M C; Reed, C M; Braida, L D; Durlach, N I; Rabinowitz, W M; Chomsky, C

    1977-09-01

    Certain deaf-blind persons have been taught, through the Tadoma method of speechreading, to use vibrotactile cues from the face and neck to understand speech. This paper reports the results of preliminary tests of the speechreading ability of one adult Tadoma user. The tests were of four major types: (1) discrimination of speech stimuli; (2) recognition of words in isolation and in sentences; (3) interpretation of prosodic and syntactic features in sentences; and (4) comprehension of written (Braille) and oral speech. Words in highly contextual environments were much better perceived than were words in low-context environments. Many of the word errors involved phonemic substitutions which shared articulatory features with the target phonemes, with a higher error rate for vowels than consonants. Relative to performance on word-recognition tests, performance on some of the discrimination tests was worse than expected. Perception of sentences appeared to be mildly sensitive to rate of talking and to speaker differences. Results of the tests on perception of prosodic and syntactic features, while inconclusive, indicate that many of the features tested were not used in interpreting sentences. On an English comprehension test, a higher score was obtained for items administered in Braille than through oral presentation.

  9. The role of beat gesture and pitch accent in semantic processing: an ERP study.

    PubMed

    Wang, Lin; Chu, Mingyuan

    2013-11-01

    The present study investigated whether and how beat gesture (small baton-like hand movements used to emphasize information in speech) influences semantic processing as well as its interaction with pitch accent during speech comprehension. Event-related potentials were recorded as participants watched videos of a person gesturing and speaking simultaneously. The critical words in the spoken sentences were accompanied by a beat gesture, a control hand movement, or no hand movement, and were expressed either with or without pitch accent. We found that both beat gesture and control hand movement induced smaller negativities in the N400 time window than when no hand movement was presented. The reduced N400s indicate that both beat gesture and control movement facilitated the semantic integration of the critical word into the sentence context. In addition, the words accompanied by beat gesture elicited smaller negativities in the N400 time window than those accompanied by control hand movement over right posterior electrodes, suggesting that beat gesture has a unique role for enhancing semantic processing during speech comprehension. Finally, no interaction was observed between beat gesture and pitch accent, indicating that they affect semantic processing independently. © 2013 Elsevier Ltd. All rights reserved.

  10. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    PubMed

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate anatomically distinct cortical representations of modulated noise in normal-hearing and hearing-impaired listeners. This work provides the first link among hearing thresholds, the amplitude of cortical representations of modulated sounds, and the ability to understand speech in modulated background noise. In light of previous work, we propose that magnified cortical representations of modulated sounds disrupt the separation of speech from modulated background noise in auditory cortex. Copyright © 2017 Millman et al.
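
    Phase locking to the temporal envelope of a modulated sound, as measured above, is often quantified as a phase-locking value between the stimulus envelope and the recorded neural signal at the modulation rate. The sketch below is a generic illustration under assumed trial structure, sampling rate and modulation rate, not the MEG pipeline used in the study: it extracts a Hilbert-transform envelope and computes the mean resultant length of the phase differences across trials.

    ```python
    # Generic sketch: phase locking between a stimulus amplitude envelope and a neural
    # channel at one modulation frequency. Trial structure, sampling rate and the
    # modulation rate are assumptions for illustration.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def envelope(x, fs, cutoff=30.0):
        """Amplitude envelope via the Hilbert transform, low-pass filtered."""
        env = np.abs(hilbert(x))
        b, a = butter(4, cutoff / (fs / 2), btype="low")
        return filtfilt(b, a, env)

    def phase_locking_value(stim_trials, neural_trials, fs, mod_freq=4.0):
        """Mean resultant length of envelope-vs-neural phase differences near mod_freq."""
        band = np.array([mod_freq - 1.0, mod_freq + 1.0]) / (fs / 2)
        b, a = butter(4, band, btype="band")
        diffs = []
        for s, m in zip(stim_trials, neural_trials):
            ph_s = np.angle(hilbert(filtfilt(b, a, envelope(s, fs))))
            ph_m = np.angle(hilbert(filtfilt(b, a, m)))
            diffs.append(np.exp(1j * (ph_s - ph_m)))
        return np.abs(np.mean(diffs))  # 1 = perfect locking, ~0 = none

    # toy example: 10 trials of 4 Hz amplitude-modulated noise and a noisy phase-shifted response
    fs, t = 250, np.arange(0, 2, 1 / 250)
    rng = np.random.default_rng(1)
    stims = [(1 + np.cos(2 * np.pi * 4 * t)) * rng.normal(size=t.size) for _ in range(10)]
    resps = [np.cos(2 * np.pi * 4 * t - 0.8) + 0.5 * rng.normal(size=t.size) for _ in stims]
    print(phase_locking_value(stims, resps, fs))
    ```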

  11. Neurogenic Orofacial Weakness and Speech in Adults With Dysarthria

    PubMed Central

    Makashay, Matthew J.; Helou, Leah B.; Clark, Heather M.

    2017-01-01

    Purpose This study compared orofacial strength between adults with dysarthria and neurologically normal (NN) matched controls. In addition, orofacial muscle weakness was examined for potential relationships to speech impairments in adults with dysarthria. Method Matched groups of 55 adults with dysarthria and 55 NN adults generated maximum pressure (Pmax) against an air-filled bulb during lingual elevation, protrusion and lateralization, and buccodental and labial compressions. These orofacial strength measures were compared with speech intelligibility, perceptual ratings of speech, articulation rate, and fast syllable-repetition rate. Results The dysarthria group demonstrated significantly lower orofacial strength than the NN group on all tasks. Lingual strength correlated moderately and buccal strength correlated weakly with most ratings of speech deficits. Speech intelligibility was not sensitive to dysarthria severity. Individuals with severely reduced anterior lingual elevation Pmax (< 18 kPa) had normal to profoundly impaired sentence intelligibility (99%–6%) and moderately to severely impaired speech (26%–94% articulatory imprecision; 33%–94% overall severity). Conclusions Results support the presence of orofacial muscle weakness in adults with dysarthrias of varying etiologies but reinforce tenuous links between orofacial strength and speech production disorders. By examining individual data, preliminary evidence emerges to suggest that speech, but not necessarily intelligibility, is likely to be impaired when lingual weakness is severe. PMID:28763804

  12. Alerting prefixes for speech warning messages. [in helicopters

    NASA Technical Reports Server (NTRS)

    Bucher, N. M.; Voorhees, J. W.; Karl, R. L.; Werner, E.

    1984-01-01

    A major question posed by the design of an integrated voice information display/warning system for next-generation helicopter cockpits is whether an alerting prefix should precede voice warning messages; if so, the characteristics desirable in such a cue must also be addressed. Attention is presently given to the results of a study which ascertained pilot response time and response accuracy to messages preceded by either neutral cues or the cognitively appropriate semantic cues. Both verbal cues and messages were spoken in direct, phoneme-synthesized speech, and a training manipulation was included to determine the extent to which previous exposure to speech thus produced facilitates these messages' comprehension. Results are discussed in terms of the importance of human factors research in cockpit display design.

  13. What is Dyslexia? | NIH MedlinePlus the Magazine

    MedlinePlus

    ... words Difficulty understanding text that is read (poor comprehension) Problems with spelling Delayed speech (learning to talk ... of technology. Children with dyslexia may benefit from listening to books on tape or using word-processing ...

  14. [The Freiburg monosyllable word test in postoperative cochlear implant diagnostics].

    PubMed

    Hey, M; Brademann, G; Ambrosch, P

    2016-08-01

    The Freiburg monosyllable word test represents a central tool of postoperative cochlear implant (CI) diagnostics. The objective of this study was to test the equivalence of the different word lists by analysing word comprehension. For patients whose CI had been implanted for more than 5 years, the distribution of suprathreshold speech intelligibility outcomes was also analysed. In a retrospective data analysis, speech understanding was evaluated for 626 CI users using word-correct scores from a total of 5211 lists of 20 words each. The analysis of word comprehension within each list shows differences in the means and in the form of the distribution functions. Some lists show mean word recognition scores that differ significantly from the overall mean. The Freiburg monosyllable word test is easy to administer at suprathreshold speech levels for CI recipients and typically saturates above 80%. It can be performed successfully by the majority of CI patients. The limited equivalence of the test lists suggests that an adaptive test procedure based on the Freiburg monosyllable test is not sensible. The test could be restructured by re-sorting all words across lists, or by omitting individual words from a test list, to increase its reliability. The results also show that speech intelligibility in quiet should be investigated in CI recipients at levels below 70 dB.

  15. Beat gestures help preschoolers recall and comprehend discourse information.

    PubMed

    Llanes-Coromina, Judith; Vilà-Giménez, Ingrid; Kushch, Olga; Borràs-Comes, Joan; Prieto, Pilar

    2018-08-01

    Although the positive effects of iconic gestures on word recall and comprehension by children have been clearly established, less is known about the benefits of beat gestures (rhythmic hand/arm movements produced together with prominent prosody). This study investigated (a) whether beat gestures combined with prosodic information help children recall contrastively focused words as well as information related to those words in a child-directed discourse (Experiment 1) and (b) whether the presence of beat gestures helps children comprehend a narrative discourse (Experiment 2). In Experiment 1, 51 4-year-olds were exposed to a total of three short stories with contrastive words presented in three conditions, namely with prominence in both speech and gesture, prominence in speech only, and nonprominent speech. Results of a recall task showed that (a) children remembered more words when exposed to prominence in both speech and gesture than in either of the other two conditions and that (b) children were more likely to remember information related to those words when the words were associated with beat gestures. In Experiment 2, 55 5- and 6-year-olds were presented with six narratives with target items either produced with prosodic prominence but no beat gestures or produced with both prosodic prominence and beat gestures. Results of a comprehension task demonstrated that stories told with beat gestures were comprehended better by children. Together, these results constitute evidence that beat gestures help preschoolers not only to recall discourse information but also to comprehend it. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar – a functional magnetic resonance imaging (fMRI) study

    PubMed Central

    2013-01-01

    Background Individuals suffering from vision loss of a peripheral origin may learn to understand spoken language at a rate of up to about 22 syllables (syl) per second - exceeding by far the maximum performance level of normal-sighted listeners (ca. 8 syl/s). To further elucidate the brain mechanisms underlying this extraordinary skill, functional magnetic resonance imaging (fMRI) was performed in blind subjects of varying ultra-fast speech comprehension capabilities and sighted individuals while listening to sentence utterances of a moderately fast (8 syl/s) or ultra-fast (16 syl/s) syllabic rate. Results Besides left inferior frontal gyrus (IFG), bilateral posterior superior temporal sulcus (pSTS) and left supplementary motor area (SMA), blind people highly proficient in ultra-fast speech perception showed significant hemodynamic activation of right-hemispheric primary visual cortex (V1), contralateral fusiform gyrus (FG), and bilateral pulvinar (Pv). Conclusions Presumably, FG supports the left-hemispheric perisylvian “language network”, i.e., IFG and superior temporal lobe, during the (segmental) sequencing of verbal utterances whereas the collaboration of bilateral pulvinar, right auditory cortex, and ipsilateral V1 implements a signal-driven timing mechanism related to syllabic (suprasegmental) modulation of the speech signal. These data structures, conveyed via left SMA to the perisylvian “language zones”, might facilitate – under time-critical conditions – the consolidation of linguistic information at the level of verbal working memory. PMID:23879896

  17. Effects of Phonological Contrast on Auditory Word Discrimination in Children with and without Reading Disability: A Magnetoencephalography (MEG) Study

    ERIC Educational Resources Information Center

    Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria

    2007-01-01

    Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading…

  18. Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom.

    PubMed

    Lewis, Dawna E; Valente, Daniel L; Spalding, Jody L

    2015-01-01

    While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic headtracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences with three key words each, presented audio-only by a single talker either from the loudspeaker at 0 degrees azimuth or randomly from the five loudspeakers. Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to younger children with or without MMHL. The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance between children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to keep the talker in view may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.

  19. [Is Władysław Ołtuszewski a creator of modern phoniatrics in Poland?].

    PubMed

    Kierzek, A

    1995-01-01

    In 1880 Władysław Ołtuszewski founded a clinic for articulation disorders, which functioned until 1892. He also worked (1884-1892) in the department of Dr Heryng, one of the pioneers of Polish laryngology, and was involved in welfare work in the Warsaw Charity Society. He supported his interest in the physiopathology of speech with studies in foreign centres in Germany and France. In 1892 he founded the "Warsaw Therapeutic Institution for persons stricken with speech deviations", the first phoniatric clinic. He delivered lectures and talks and published dozens of papers in the field of speech physiopathology, pointing out the connection between dysphasia and psychiatric disorders. The author presents the main assumptions of Ołtuszewski's most valuable book, Study of the science on speech and its deviations, and speech hygiene, published in 1905, noting that its contents were much ampler and more modern than comparable books by foreign authors. The article also presents a comprehensive picture of Ołtuszewski's scientific output and wide non-scientific interests.

  20. Comparing Binaural Pre-processing Strategies I: Instrumental Evaluation.

    PubMed

    Baumgärtel, Regina M; Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M A; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-12-30

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. © The Author(s) 2015.
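
    As a rough illustration of the kind of instrumental evaluation described above, the sketch below scores a processed signal against its clean reference with two freely available objective measures, short-time objective intelligibility and PESQ. The pystoi and pesq Python packages and the file names are assumptions on my part and are not necessarily the implementations used in the study; the intelligibility-weighted SNR is omitted.

    ```python
    # Hedged sketch: objective scoring of a noise-reduction output against its clean
    # reference, using the third-party pystoi and pesq packages (assumed available via pip).
    import soundfile as sf          # assumed I/O helper; any WAV reader would do
    from pystoi import stoi         # short-time objective intelligibility
    from pesq import pesq           # ITU-T P.862 perceptual evaluation of speech quality

    def score(clean_path, processed_path):
        clean, fs = sf.read(clean_path)
        processed, fs2 = sf.read(processed_path)
        assert fs == fs2, "reference and processed signals must share a sample rate"
        n = min(len(clean), len(processed))     # align lengths before scoring
        clean, processed = clean[:n], processed[:n]
        return {
            "STOI": stoi(clean, processed, fs, extended=False),
            # wideband PESQ expects 16 kHz material and returns a MOS-LQO score
            "PESQ (wb, MOS-LQO)": pesq(fs, clean, processed, "wb"),
        }

    print(score("clean.wav", "denoised.wav"))   # hypothetical file names
    ```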

  1. Hemispheric asymmetry in the hierarchical perception of music and speech.

    PubMed

    Rosenthal, Matthew A

    2016-11-01

    The perception of music and speech involves a higher level, cognitive mechanism that allows listeners to form expectations for future music and speech events. This article comprehensively reviews studies on hemispheric differences in the formation of melodic and harmonic expectations in music and selectively reviews studies on hemispheric differences in the formation of syntactic and semantic expectations in speech. On the basis of this review, it is concluded that the higher level mechanism flexibly lateralizes music processing to either hemisphere depending on the expectation generated by a given musical context. When a context generates in the listener an expectation whose elements are sequentially ordered over time, higher level processing is dominant in the left hemisphere. When a context generates in the listener an expectation whose elements are not sequentially ordered over time, higher level processing is dominant in the right hemisphere. This article concludes with a spreading activation model that describes expectations for music and speech in terms of shared temporal and nontemporal representations. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Comparing Binaural Pre-processing Strategies I

    PubMed Central

    Krawczyk-Becker, Martin; Marquardt, Daniel; Völker, Christoph; Hu, Hongmei; Herzke, Tobias; Coleman, Graham; Adiloğlu, Kamil; Ernst, Stephan M. A.; Gerkmann, Timo; Doclo, Simon; Kollmeier, Birger; Hohmann, Volker; Dietz, Mathias

    2015-01-01

    In a collaborative research project, several monaural and binaural noise reduction algorithms have been comprehensively evaluated. In this article, eight selected noise reduction algorithms were assessed using instrumental measures, with a focus on the instrumental evaluation of speech intelligibility. Four distinct, reverberant scenarios were created to reflect everyday listening situations: a stationary speech-shaped noise, a multitalker babble noise, a single interfering talker, and a realistic cafeteria noise. Three instrumental measures were employed to assess predicted speech intelligibility and predicted sound quality: the intelligibility-weighted signal-to-noise ratio, the short-time objective intelligibility measure, and the perceptual evaluation of speech quality. The results show substantial improvements in predicted speech intelligibility as well as sound quality for the proposed algorithms. The evaluated coherence-based noise reduction algorithm was able to provide improvements in predicted audio signal quality. For the tested single-channel noise reduction algorithm, improvements in intelligibility-weighted signal-to-noise ratio were observed in all but the nonstationary cafeteria ambient noise scenario. Binaural minimum variance distortionless response beamforming algorithms performed particularly well in all noise scenarios. PMID:26721920

  3. Evaluation du programme d'etudes de francais langue seconde en immersion a la 6e annee. La comprehension orale et la production orale, 1990. Rapport Final. (Evaluation of French as a Second Language Study Program for Grade Six Immersion. Oral Comprehension and Speaking Skills, 1990. Final Report).

    ERIC Educational Resources Information Center

    Theberge, Raymond

    An evaluation of Manitoba's French immersion programs at the levels of grades 3, 6, and 9 focused on program effectiveness in teaching listening comprehension and speech skills. The results for grade 6 are presented here. The first section describes the framework of the immersion curriculum and the listening and oral skills targeted in it. The…

  4. Educational consequences of developmental speech disorder: Key Stage 1 National Curriculum assessment results in English and mathematics.

    PubMed

    Nathan, Liz; Stackhouse, Joy; Goulandris, Nata; Snowling, Margaret J

    2004-06-01

    Children with speech difficulties may have associated educational problems. This paper reports a study examining the educational attainment of children at Key Stage 1 of the National Curriculum who had previously been identified with a speech difficulty. The aims were: (1) to examine the educational attainment at Key Stage 1 of children diagnosed with speech difficulties two to three years prior to the present study; and (2) to compare the Key Stage 1 assessment results of children whose speech problems had resolved at the time of assessment with those whose problems persisted. Data were available at age 7 for 39 children (from an original cohort of 47) who had an earlier diagnosis of speech difficulties at age 4/5. A control group of 35 children identified at preschool and matched on age, nonverbal ability and gender provided comparative data. Results of Statutory Assessment Tests (SATs) in reading, reading comprehension, spelling, writing and maths, administered at the end of Year 2 of school, were analysed; performance across the two groups was compared and also set against published statistics on national levels of attainment. Children with a history of speech difficulties performed less well than controls on reading, spelling and maths. However, children whose speech problems had resolved by the time of assessment performed no differently from controls, whereas children with persisting speech problems performed less well than controls on tests of literacy and maths, with spelling a particular area of difficulty. Children with speech difficulties are therefore likely to perform less well than expected on literacy and maths SATs at age 7. Performance is related to whether the speech problem resolves early on and whether associated language problems exist. Whilst it is unclear whether the poorer performance on maths is due to the language components of this task, the results indicate that speech problems, especially persisting ones, can affect children's ability to access the National Curriculum to expected levels.

  5. Experimental investigation of the effects of the acoustical conditions in a simulated classroom on speech recognition and learning in children

    PubMed Central

    Valente, Daniel L.; Plevinsky, Hallie M.; Franco, John M.; Heinrichs-Graham, Elizabeth C.; Lewis, Dawna E.

    2012-01-01

    The potential effects of acoustical environment on speech understanding are especially important as children enter school where students’ ability to hear and understand complex verbal information is critical to learning. However, this ability is compromised because of widely varied and unfavorable classroom acoustics. The extent to which unfavorable classroom acoustics affect children’s performance on longer learning tasks is largely unknown as most research has focused on testing children using words, syllables, or sentences as stimuli. In the current study, a simulated classroom environment was used to measure comprehension performance of two classroom learning activities: a discussion and lecture. Comprehension performance was measured for groups of elementary-aged students in one of four environments with varied reverberation times and background noise levels. The reverberation time was either 0.6 or 1.5 s, and the signal-to-noise level was either +10 or +7 dB. Performance is compared to adult subjects as well as to sentence-recognition in the same condition. Significant differences were seen in comprehension scores as a function of age and condition; both increasing background noise and reverberation degraded performance in comprehension tasks compared to minimal differences in measures of sentence-recognition. PMID:22280587

  6. Vocabulary comprehension and strategies in name construction among children using aided communication.

    PubMed

    Deliberato, Débora; Jennische, Margareta; Oxley, Judith; Nunes, Leila Regina d'Oliveira de Paula; Walter, Cátia Crivelenti de Figueiredo; Massaro, Munique; Almeida, Maria Amélia; Stadskleiv, Kristine; Basil, Carmen; Coronas, Marc; Smith, Martine; von Tetzchner, Stephen

    2018-03-01

    Vocabulary learning reflects the language experiences of the child, in both typical and atypical development, although the vocabulary development of children who use aided communication may differ from that of children who use natural speech. This study compared the performance of children using aided communication with that of peers using natural speech on two measures of vocabulary knowledge: comprehension of graphic symbols and labeling of common objects. The aided group comprised 92 participants who were not considered intellectually disabled; the reference group consisted of 60 participants without known disorders. The comprehension task consisted of 63 items presented individually in each participant's graphic system, together with four colored line drawings, and participants were required to indicate which drawing corresponded to the symbol. In the expressive labeling task, 20 common objects presented in drawings had to be named. Both groups indicated the correct drawing for most of the items in the comprehension task, with a small advantage for the reference group. The reference group named most objects quickly and accurately, demonstrating that the objects were common and easily named. The aided language group named the majority correctly and, in addition, used a variety of naming strategies; they required more time than the reference group. The results give insights into lexical processing in aided communication and may have implications for aided language intervention.

  7. Mothers Consistently Alter Their Unique Vocal Fingerprints When Communicating with Infants.

    PubMed

    Piazza, Elise A; Iordan, Marius Cătălin; Lew-Williams, Casey

    2017-10-23

    The voice is the most direct link we have to others' minds, allowing us to communicate using a rich variety of speech cues [1, 2]. This link is particularly critical early in life as parents draw infants into the structure of their environment using infant-directed speech (IDS), a communicative code with unique pitch and rhythmic characteristics relative to adult-directed speech (ADS) [3, 4]. To begin breaking into language, infants must discern subtle statistical differences about people and voices in order to direct their attention toward the most relevant signals. Here, we uncover a new defining feature of IDS: mothers significantly alter statistical properties of vocal timbre when speaking to their infants. Timbre, the tone color or unique quality of a sound, is a spectral fingerprint that helps us instantly identify and classify sound sources, such as individual people and musical instruments [5-7]. We recorded 24 mothers' naturalistic speech while they interacted with their infants and with adult experimenters in their native language. Half of the participants were English speakers, and half were not. Using a support vector machine classifier, we found that mothers consistently shifted their timbre between ADS and IDS. Importantly, this shift was similar across languages, suggesting that such alterations of timbre may be universal. These findings have theoretical implications for understanding how infants tune in to their local communicative environments. Moreover, our classification algorithm for identifying infant-directed timbre has direct translational implications for speech recognition technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
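
    The classification step mentioned above (separating infant-directed from adult-directed speech with a support vector machine) can be illustrated generically. The sketch below trains an SVM on frame-level spectral feature vectors and reports cross-validated accuracy; the random stand-in features, array sizes, and class shift are purely hypothetical and are not the authors' timbre features or pipeline.

    ```python
    # Illustrative sketch only: classify infant-directed vs adult-directed speech frames
    # from spectral feature vectors with a support vector machine. The features here are
    # random stand-ins; in practice they would be per-frame spectral (timbre-like) measures.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_frames, n_features = 400, 24
    ads_frames = rng.normal(0.0, 1.0, (n_frames, n_features))   # stand-in adult-directed frames
    ids_frames = rng.normal(0.4, 1.0, (n_frames, n_features))   # stand-in infant-directed frames

    X = np.vstack([ads_frames, ids_frames])
    y = np.concatenate([np.zeros(n_frames), np.ones(n_frames)])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print("mean cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```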

  8. Required attention for synthesized speech perception for three levels of linguistic redundancy

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.; Hart, S. G.

    1977-01-01

    The study evaluates the attention required for synthesized speech perception at three levels of linguistic redundancy. Twelve commercial airline pilots were individually tested on 16 cockpit warning messages, eight of which consisted of two monosyllabic key words and eight of two polysyllabic key words. Three levels of linguistic redundancy were identified: monosyllabic words, polysyllabic words, and sentences. The experiment contained a message familiarization phase and a message recognition phase. It was found that: (1) when the messages are part of a previously learned and recently heard set, and the subject is familiar with the phrasing, the attention needed to recognize the message is not a function of the level of linguistic redundancy; and (2) there is a quantitative and qualitative difference between recognition and comprehension processes; only in the case of active comprehension does additional redundancy reduce attention requirements.

  9. Dependency distance minimization in understanding of ambiguous structure. Comment on "Dependency distance: A new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    NASA Astrophysics Data System (ADS)

    Zhao, Yiyi

    2017-07-01

    Dependency Distance, proposed by Hudson [1] and calculated by Liu [2,3], is an important concept in Dependency Theory. It can be used as a measure of syntactic difficulty, and a substantial body of research [2,4] has demonstrated the universality of dependency-distance effects across languages. Human languages appear to prefer short dependency distances, which may be explained by the general cognitive constraint of limited working memory [5]. Psychological experiments in English, German, Russian and Chinese support the hypothesis that Dependency Distance minimization (DDM) drives languages to evolve syntactic patterns that reduce memory load [6-9]. Psychological studies focus on the process and mechanism of syntactic structure selection during speech comprehension, and ambiguous structures are an important experimental material in many speech comprehension experiments [10].
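
    The measure itself is simple to compute. The sketch below shows mean dependency distance for a single sentence, given a dependency parse as (dependent position, head position) pairs; the example sentence and its parse are illustrative only.

    ```python
    # Minimal sketch of the dependency-distance measure: given a dependency parse as a
    # list of (dependent_index, head_index) pairs (1-based word positions, head 0 for the
    # root), mean dependency distance is the average absolute head-dependent distance.
    def mean_dependency_distance(arcs):
        dists = [abs(dep - head) for dep, head in arcs if head != 0]  # skip the root arc
        return sum(dists) / len(dists) if dists else 0.0

    # "She quickly read the book": read(3) is the root; she(1), quickly(2) and book(5)
    # depend on read; the(4) depends on book.
    arcs = [(1, 3), (2, 3), (3, 0), (4, 5), (5, 3)]
    print(mean_dependency_distance(arcs))  # (2 + 1 + 1 + 2) / 4 = 1.5
    ```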

  10. Small intragenic deletion in FOXP2 associated with childhood apraxia of speech and dysarthria.

    PubMed

    Turner, Samantha J; Hildebrand, Michael S; Block, Susan; Damiano, John; Fahey, Michael; Reilly, Sheena; Bahlo, Melanie; Scheffer, Ingrid E; Morgan, Angela T

    2013-09-01

    Relatively little is known about the neurobiological basis of speech disorders although genetic determinants are increasingly recognized. The first gene for primary speech disorder was FOXP2, identified in a large, informative family with verbal and oral dyspraxia. Subsequently, many de novo and familial cases with a severe speech disorder associated with FOXP2 mutations have been reported. These mutations include sequencing alterations, translocations, uniparental disomy, and genomic copy number variants. We studied eight probands with speech disorder and their families. Family members were phenotyped using a comprehensive assessment of speech, oral motor function, language, literacy skills, and cognition. Coding regions of FOXP2 were screened to identify novel variants. Segregation of the variant was determined in the probands' families. Variants were identified in two probands. One child with severe motor speech disorder had a small de novo intragenic FOXP2 deletion. His phenotype included features of childhood apraxia of speech and dysarthria, oral motor dyspraxia, receptive and expressive language disorder, and literacy difficulties. The other variant was found in a family in two of three family members with stuttering, and also in the mother with oral motor impairment. This variant was considered a benign polymorphism as it was predicted to be non-pathogenic with in silico tools and found in database controls. This is the first report of a small intragenic deletion of FOXP2 that is likely to be the cause of severe motor speech disorder associated with language and literacy problems. Copyright © 2013 Wiley Periodicals, Inc.

  11. Neural Mechanism Underlying Comprehension of Narrative Speech and Its Heritability: Study in a Large Population.

    PubMed

    Babajani-Feremi, Abbas

    2017-09-01

    Comprehension of narratives constitutes a fundamental part of our everyday life experience. Although the neural mechanism of auditory narrative comprehension has been investigated in some studies, the neural correlates underlying this mechanism and its heritability remain poorly understood. We investigated comprehension of naturalistic speech in a large, healthy adult population (n = 429; 176 male/253 female; 22-36 years of age) consisting of 192 twin pairs (49 monozygotic and 47 dizygotic pairs) and 237 of their siblings. We used high-quality functional MRI datasets from the Human Connectome Project (HCP), in which a story-based paradigm was utilized for auditory narrative comprehension. Our results revealed that narrative comprehension was associated with activations of the classical language regions, including superior temporal gyrus (STG), middle temporal gyrus (MTG), and inferior frontal gyrus (IFG) in both hemispheres, though STG and MTG were activated symmetrically and activation in IFG was left-lateralized. Our results further showed that narrative comprehension was associated with activations in areas beyond the classical language regions, e.g. medial superior frontal gyrus (SFGmed), middle frontal gyrus (MFG), and supplementary motor area (SMA). Of subcortical structures, only the hippocampus was involved. The heritability analysis revealed that oral reading recognition and picture vocabulary comprehension were significantly heritable (h² > 0.56, p < 10⁻¹³). In addition, the extent of activation of five areas in the left hemisphere, i.e. STG, IFG pars opercularis, SFGmed, SMA, and precuneus, and of one area in the right hemisphere, i.e. MFG, was significantly heritable (h² > 0.33, p < 0.0004). The current study, to the best of our knowledge, is the first to investigate auditory narrative comprehension and its heritability in a large healthy population. Given the excellent quality of the HCP data, our results help clarify the functional contributions of linguistic and extra-linguistic cortices during narrative comprehension.

  12. The functional neuroanatomy of language

    NASA Astrophysics Data System (ADS)

    Hickok, Gregory

    2009-09-01

    There has been substantial progress over the last several years in understanding aspects of the functional neuroanatomy of language. Some of these advances are summarized in this review. It will be argued that recognizing speech sounds is carried out in the superior temporal lobe bilaterally, that the superior temporal sulcus bilaterally is involved in phonological-level aspects of this process, that the frontal/motor system is not central to speech recognition although it may modulate auditory perception of speech, that conceptual access mechanisms are likely located in the lateral posterior temporal lobe (middle and inferior temporal gyri), that speech production involves sensory-related systems in the posterior superior temporal lobe in the left hemisphere, that the interface between perceptual and motor systems is supported by a sensory-motor circuit for vocal tract actions (not dedicated to speech) that is very similar to sensory-motor circuits found in primate parietal lobe, and that verbal short-term memory can be understood as an emergent property of this sensory-motor circuit. These observations are considered within the context of a dual stream model of speech processing in which one pathway supports speech comprehension and the other supports sensory-motor integration. Additional topics of discussion include the functional organization of the planum temporale for spatial hearing and speech-related sensory-motor processes, the anatomical and functional basis of a form of acquired language disorder, conduction aphasia, the neural basis of vocabulary development, and sentence-level/grammatical processing.

  13. Multiple benefits of personal FM system use by children with auditory processing disorder (APD).

    PubMed

    Johnston, Kristin N; John, Andrew B; Kreisman, Nicole V; Hall, James W; Crandell, Carl C

    2009-01-01

    Children with auditory processing disorders (APD) were fitted with Phonak EduLink FM devices for home and classroom use. Baseline measures of the children with APD, prior to FM use, documented significantly lower speech-perception scores, evidence of decreased academic performance, and psychosocial problems in comparison to an age- and gender-matched control group. Repeated measures during the school year demonstrated speech-perception improvement in noisy classroom environments as well as significant academic and psychosocial benefits. Compared with the control group, the children with APD showed greater speech-perception advantage with FM technology. Notably, after prolonged FM use, even unaided (no FM device) speech-perception performance was improved in the children with APD, suggesting the possibility of fundamentally enhanced auditory system function.

  14. PARRHESIA, PHAEDRA, AND THE POLIS: ANTICIPATING PSYCHOANALYTIC FREE ASSOCIATION AS DEMOCRATIC PRACTICE.

    PubMed

    Gentile, Jill

    2015-07-01

    This essay explores the mostly unexamined analogy of psychoanalytic free association to democratic free speech. The author turns back to a time when free speech was a matter of considerable discussion: the classical period of the Athenian constitution and its experiment with parrhesia. Ordinarily translated into English as "free speech," parrhesia is startlingly relevant to psychoanalysis. The Athenian stage-in particular, Hippolytus (Euripides, 5th century BCE)-illustrates this point. Euripides's tragic tale anticipates Freud's inquiries, exploring the fundamental link between free speech and female embodiment. The author suggests that psychoanalysis should claim its own conception of a polis as a mediated and ethical space between private and public spheres, between body and mind, and between speaking and listening communities. © 2015 The Psychoanalytic Quarterly, Inc.

  15. Making sense of progressive non-fluent aphasia: an analysis of conversational speech

    PubMed Central

    Woollams, Anna M.; Hodges, John R.; Patterson, Karalyn

    2009-01-01

    The speech of patients with progressive non-fluent aphasia (PNFA) has often been described clinically, but these descriptions lack support from quantitative data, and the clinical classification of the progressive aphasic syndromes is also debated. This study selected 15 patients with progressive aphasia on broad criteria, excluding only those with clear semantic dementia. It aimed to provide a detailed quantitative description of their conversational speech, along with cognitive testing and visual rating of structural brain imaging, to examine which features, if any, were consistently present throughout the group, and to look for sub-syndromic associations between these features. A consistent increase in grammatical and speech sound errors and a simplification of spoken syntax relative to age-matched controls were observed, though telegraphic speech was rare; slow speech was common but not universal. Almost all patients showed impairments in picture naming, syntactic comprehension and executive function. The degree to which speech was affected was independent of the severity of the other cognitive deficits. A partial dissociation was also observed between slow speech with simplified grammar on the one hand, and grammatical and speech sound errors on the other. Overlap between these sets of impairments was, however, the rule rather than the exception, producing continuous variation within a single consistent syndrome. The distribution of atrophy was remarkably variable, with frontal, temporal and medial temporal areas affected, either symmetrically or asymmetrically. The study suggests that PNFA is a coherent, well-defined syndrome and that varieties such as logopaenic progressive aphasia and progressive apraxia of speech may be seen as points in a space of continuous variation within progressive non-fluent aphasia. PMID:19696033

  16. Speech motor development: Integrating muscles, movements, and linguistic units.

    PubMed

    Smith, Anne

    2006-01-01

    A fundamental problem for those interested in human communication is to determine how ideas and the various units of language structure are communicated through speaking. The physiological concepts involved in the control of muscle contraction and movement are theoretically distant from the processing levels and units postulated to exist in language production models. A review of the literature on adult speakers suggests that they engage complex, parallel processes involving many units, including sentence, phrase, syllable, and phoneme levels. Infants must develop multilayered interactions among language and motor systems. This discussion describes recent studies of speech motor performance relative to varying linguistic goals during the childhood, teenage, and young adult years. Studies of the developing interactions between speech motor and language systems reveal both qualitative and quantitative differences between the developing and the mature systems. These studies provide an experimental basis for a more comprehensive theoretical account of how mappings between units of language and units of action are formed and how they function. Readers will be able to: (1) understand the theoretical differences between models of speech motor control and models of language processing, as well as the nature of the concepts used in the two different kinds of models, (2) explain the concept of coarticulation and state why this phenomenon has confounded attempts to determine the role of linguistic units, such as syllables and phonemes, in speech production, (3) describe the development of speech motor performance skills and specify quantitative and qualitative differences between speech motor performance in children and adults, and (4) describe experimental methods that allow scientists to study speech and limb motor control, as well as compare units of action used to study non-speech and speech movements.

  17. The Timing and Effort of Lexical Access in Natural and Degraded Speech

    PubMed Central

    Wagner, Anita E.; Toffanin, Paolo; Başkent, Deniz

    2016-01-01

    Understanding speech is effortless in ideal situations, and although adverse conditions, such as caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners’ ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected due to their substantial role in listeners’ quick limitation of the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli in particular in presence of mismatching cues. Signal degradation reduced listeners’ ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing, and different sources of increased mental effort. We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why in ideal situations speech is perceived as an undemanding task. Degradation of the signal or the receiver channel can quickly bring this well-adjusted timing out of balance and lead to increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has its consequences on lexical processing as it adds uncertainty to the forming and revising of lexical hypotheses. PMID:27065901

  18. Hemispheric speech lateralisation in the developing brain is related to motor praxis ability.

    PubMed

    Hodgson, Jessica C; Hirst, Rebecca J; Hudson, John M

    2016-12-01

    Commonly displayed functional asymmetries such as hand dominance and hemispheric speech lateralisation are well researched in adults. However there is debate about when such functions become lateralised in the typically developing brain. This study examined whether patterns of speech laterality and hand dominance were related and whether they varied with age in typically developing children. 148 children aged 3-10 years performed an electronic pegboard task to determine hand dominance; a subset of 38 of these children also underwent functional Transcranial Doppler (fTCD) imaging to derive a lateralisation index (LI) for hemispheric activation during speech production using an animation description paradigm. There was no main effect of age in the speech laterality scores, however, younger children showed a greater difference in performance between their hands on the motor task. Furthermore, this between-hand performance difference significantly interacted with direction of speech laterality, with a smaller between-hand difference relating to increased left hemisphere activation. This data shows that both handedness and speech lateralisation appear relatively determined by age 3, but that atypical cerebral lateralisation is linked to greater performance differences in hand skill, irrespective of age. Results are discussed in terms of the common neural systems underpinning handedness and speech lateralisation. Copyright © 2016. Published by Elsevier Ltd.

  19. Speech and Communication Changes Reported by People with Parkinson's Disease.

    PubMed

    Schalling, Ellika; Johansson, Kerstin; Hartelius, Lena

    2017-01-01

    Changes in communicative functions are common in Parkinson's disease (PD), but there are only limited data provided by individuals with PD on how these changes are perceived, what their consequences are, and what type of intervention is provided. To present self-reported information about speech and communication, the impact on communicative participation, and the amount and type of speech-language pathology services received by people with PD. Respondents with PD recruited via the Swedish Parkinson's Disease Society filled out a questionnaire accessed via a Web link or provided in a paper version. Of 188 respondents, 92.5% reported at least one symptom related to communication; the most common symptoms were weak voice, word-finding difficulties, imprecise articulation, and getting off topic in conversation. The speech and communication problems resulted in restricted communicative participation for between a quarter and a third of the respondents, and their speech caused embarrassment sometimes or more often to more than half. Forty-five percent of the respondents had received speech-language pathology services. Most respondents reported both speech and language symptoms, and many experienced restricted communicative participation. Access to speech-language pathology services is still inadequate. Services should also address cognitive/linguistic aspects to meet the needs of people with PD. © 2018 S. Karger AG, Basel.

  20. Predicting fundamental frequency from mel-frequency cepstral coefficients to enable speech reconstruction.

    PubMed

    Shao, Xu; Milner, Ben

    2005-08-01

    This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction is attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
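
    The first scheme described above lends itself to a compact illustration. The sketch below is a hypothetical, simplified version of it: a single full-covariance GMM is fitted on joint [MFCC, F0] vectors, and F0 is then estimated for new MFCC frames from the conditional distribution (here via the conditional mean rather than a strict MAP point estimate, and without the voicing classifier). Feature extraction, variable names and model sizes are assumptions; this is not the authors' implementation.

    ```python
    # Hypothetical sketch: predict F0 from MFCCs by modelling their joint density with a
    # single GMM and taking the conditional expectation of F0 given each MFCC vector.
    # Frame-level features (mfccs, f0) are assumed to be precomputed on voiced frames.
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def fit_joint_gmm(mfccs, f0, n_components=16, seed=0):
        """Fit a full-covariance GMM on joint [MFCC, F0] vectors.
        mfccs: (n_frames, n_ceps); f0: (n_frames,) in Hz."""
        z = np.column_stack([mfccs, f0])
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full", random_state=seed)
        gmm.fit(z)
        return gmm

    def predict_f0(gmm, mfccs):
        """Conditional-mean (MMSE) estimate of F0 given each MFCC frame."""
        d = mfccs.shape[1]                      # dimensionality of the MFCC part
        mu_x = gmm.means_[:, :d]                # per-component MFCC means
        mu_y = gmm.means_[:, d]                 # per-component F0 means
        S = gmm.covariances_
        S_xx, S_yx = S[:, :d, :d], S[:, d, :d]

        # responsibilities of each mixture component given the MFCC vector alone
        log_resp = np.stack([
            np.log(w) + multivariate_normal(m, c).logpdf(mfccs)
            for w, m, c in zip(gmm.weights_, mu_x, S_xx)], axis=1)
        resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)

        # per-component conditional mean of F0, combined with the responsibilities
        cond = np.stack([
            mu_y[k] + (mfccs - mu_x[k]) @ np.linalg.solve(S_xx[k], S_yx[k])
            for k in range(len(mu_y))], axis=1)
        return (resp * cond).sum(axis=1)
    ```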
