Sample records for difficulty understanding speech

  1. Relative Difficulty of Understanding Foreign Accents as a Marker of Proficiency

    ERIC Educational Resources Information Center

    Lev-Ari, Shiri; van Heugten, Marieke; Peperkamp, Sharon

    2017-01-01

    Foreign-accented speech is generally harder to understand than native-accented speech. This difficulty is reduced for non-native listeners who share their first language with the non-native speaker. It is currently unclear, however, how non-native listeners deal with foreign-accented speech produced by speakers of a different language. We show…

  2. How Autism Affects Speech Understanding in Multitalker Environments

    DTIC Science & Technology

    2015-12-01

    Award Number: W81XWH-12-1-0363. Title: How Autism Affects Speech Understanding in Multitalker Environments. Period covered: 30 Sep 2012 - 29 Sep 2015. …that adults with Autism Spectrum Disorders have particular difficulty recognizing speech in acoustically hostile environments (e.g., Alcantara et al…

  3. Between-Word Processes in Children with Speech Difficulties: Insights from a Usage-Based Approach to Phonology

    ERIC Educational Resources Information Center

    Newton, Caroline

    2012-01-01

    There are some children with speech and/or language difficulties who are significantly more difficult to understand in connected speech than in single words. The study reported here explores the between-word behaviours of three such children, aged 11;8, 12;2 and 12;10. It focuses on whether these patterns could be accounted for by lenition, as…

  4. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis

    PubMed Central

    Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance, little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments (“How would you rate your ability to understand speech?”; “How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?”). Results: Subjectively reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically assessed speech comprehension ability. In contrast, co-morbid hyperacusis was correlated only with speech comprehension difficulties in noisy environments, not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209

  5. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis.

    PubMed

    Vielsmeier, Veronika; Kreuzer, Peter M; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R O; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance, little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically assessed speech comprehension ability. In contrast, co-morbid hyperacusis was correlated only with speech comprehension difficulties in noisy environments, not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role.

  6. Difficulty understanding speech in noise by the hearing impaired: underlying causes and technological solutions.

    PubMed

    Healy, Eric W; Yoho, Sarah E

    2016-08-01

    A primary complaint of hearing-impaired individuals involves poor speech understanding when background noise is present. Hearing aids and cochlear implants often allow good speech understanding in quiet backgrounds. But hearing-impaired individuals are highly noise intolerant, and existing devices are not very effective at combating background noise. As a result, speech understanding in noise is often quite poor. In accord with the significance of the problem, considerable effort has been expended toward understanding and remedying this issue. Fortunately, our understanding of the underlying issues is reasonably good. In sharp contrast, effective solutions have remained elusive. One solution that seems promising involves a single-microphone machine-learning algorithm to extract speech from background noise. Data from our group indicate that the algorithm is capable of producing vast increases in speech understanding by hearing-impaired individuals. This paper will first provide an overview of the speech-in-noise problem and outline why hearing-impaired individuals are so noise intolerant. An overview of our approach to solving this problem will follow.
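
    The record above describes a single-microphone, machine-learning approach without giving details, so the following minimal Python sketch illustrates only the underlying time-frequency masking idea: estimate, per time-frequency unit, how dominant speech is, and attenuate the rest. Here the ideal ratio mask (IRM) is computed with oracle access to the separate speech and noise signals purely for illustration; in a deployed system such as the one described, a trained model must estimate the mask from the noisy mixture alone. Function names and parameters are assumptions, not the authors' code.

        import numpy as np
        from scipy.signal import stft, istft

        def oracle_irm_enhance(speech, noise, fs=16000, nperseg=512):
            """Illustrative oracle ideal-ratio-mask (IRM) enhancement."""
            mixture = speech + noise
            # Short-time Fourier transforms of the clean signals and the mixture.
            _, _, S = stft(speech, fs=fs, nperseg=nperseg)
            _, _, N = stft(noise, fs=fs, nperseg=nperseg)
            _, _, X = stft(mixture, fs=fs, nperseg=nperseg)
            # IRM: per time-frequency unit, the fraction of energy due to speech.
            irm = np.sqrt(np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12))
            # Attenuate noise-dominated units and resynthesize a waveform.
            _, enhanced = istft(irm * X, fs=fs, nperseg=nperseg)
            return mixture, enhanced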

  7. Brainstem Correlates of Speech-in-Noise Perception in Children

    PubMed Central

    Anderson, Samira; Skoe, Erika; Chandrasekaran, Bharath; Zecker, Steven; Kraus, Nina

    2010-01-01

    Children often have difficulty understanding speech in challenging listening environments. In the absence of peripheral hearing loss, these speech perception difficulties may arise from dysfunction at more central levels in the auditory system, including subcortical structures. We examined brainstem encoding of pitch in a speech syllable in 38 school-age children. In children with poor speech-in-noise perception, we find impaired encoding of the fundamental frequency and the second harmonic, two important cues for pitch perception. Pitch, an important factor in speaker identification, aids the listener in tracking a specific voice from a background of voices. These results suggest that the robustness of subcortical neural encoding of pitch features in time-varying signals is an important factor in determining success with speech perception in noise. PMID:20708671

  8. Effect of Age on Silent Gap Discrimination in Synthetic Speech Stimuli.

    ERIC Educational Resources Information Center

    Lister, Jennifer; Tarver, Kenton

    2004-01-01

    The difficulty that older listeners experience understanding conversational speech may be related to their limited ability to use information present in the silent intervals (i.e., temporal gaps) between dynamic speech sounds. When temporal gaps are present between nonspeech stimuli that are spectrally invariant (e.g., noise bands or sinusoids),…

  9. Associations between speech understanding and auditory and visual tests of verbal working memory: effects of linguistic complexity, task, age, and hearing loss

    PubMed Central

    Smith, Sherri L.; Pichora-Fuller, M. Kathleen

    2015-01-01

    Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between the two working memory measures only for the oldest listeners with hearing loss. Notably, there were only a few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769

  10. Musical Experience and the Aging Auditory System: Implications for Cognitive Abilities and Hearing Speech in Noise

    PubMed Central

    Parbery-Clark, Alexandra; Strait, Dana L.; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-01-01

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18–30), we asked whether musical experience benefits an older cohort of musicians (ages 45–65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline. PMID:21589653

  11. Musical experience and the aging auditory system: implications for cognitive abilities and hearing speech in noise.

    PubMed

    Parbery-Clark, Alexandra; Strait, Dana L; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2011-05-11

    Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties experienced by older adults for hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.

  12. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.

  13. 45 CFR 1308.9 - Eligibility criteria: Speech or language impairments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES THE ADMINISTRATION FOR CHILDREN... language impairments. (a) A speech or language impairment means a communication disorder such as stuttering... language disorder may be characterized by difficulty in understanding and producing language, including...

  14. New developments in the management of speech and language disorders.

    PubMed

    Harding, Celia; Gourlay, Sara

    2008-05-01

    Speech and language disorders, which include swallowing difficulties, are usually managed by speech and language therapists. Such a diverse, complex and challenging clinical group of symptoms requires practitioners with detailed knowledge and understanding of research within those areas, as well as the ability to implement appropriate therapy strategies within many environments. These environments range from neonatal units, acute paediatric wards and health centres through to nurseries, schools and children's homes. This paper summarises the key issues that are fundamental to our understanding of this client group.

  15. Musicians change their tune: how hearing loss alters the neural code.

    PubMed

    Parbery-Clark, Alexandra; Anderson, Samira; Kraus, Nina

    2013-08-01

    Individuals with sensorineural hearing loss have difficulty understanding speech, especially in background noise. This deficit remains even when audibility is restored through amplification, suggesting that mechanisms beyond a reduction in peripheral sensitivity contribute to the perceptual difficulties associated with hearing loss. Given that normal-hearing musicians have enhanced auditory perceptual skills, including speech-in-noise perception, coupled with heightened subcortical responses to speech, we aimed to determine whether similar advantages could be observed in middle-aged adults with hearing loss. Results indicate that musicians with hearing loss, despite self-perceptions of average performance for understanding speech in noise, have a greater ability to hear in noise relative to nonmusicians. This is accompanied by more robust subcortical encoding of sound (e.g., stimulus-to-response correlations and response consistency) as well as more resilient neural responses to speech in the presence of background noise (e.g., neural timing). Musicians with hearing loss also demonstrate unique neural signatures of spectral encoding relative to nonmusicians: enhanced neural encoding of the speech-sound's fundamental frequency but not of its upper harmonics. This stands in contrast to previous outcomes in normal-hearing musicians, who have enhanced encoding of the harmonics but not the fundamental frequency. Taken together, our data suggest that although hearing loss modifies a musician's spectral encoding of speech, the musician advantage for perceiving speech in noise persists in a hearing-impaired population by adaptively strengthening underlying neural mechanisms for speech-in-noise perception. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Cochlear implants: a remarkable past and a brilliant future

    PubMed Central

    Wilson, Blake S.; Dorman, Michael F.

    2013-01-01

    The aims of this paper are to (i) provide a brief history of cochlear implants; (ii) present a status report on the current state of implant engineering and the levels of speech understanding enabled by that engineering; (iii) describe limitations of current signal processing strategies and (iv) suggest new directions for research. With current technology, the “average” implant patient, when listening to predictable conversations in quiet, is able to communicate with relative ease. However, in an environment typical of a workplace, the average patient has a great deal of difficulty. Patients who are “above average” in terms of speech understanding can achieve 100% correct scores on the most difficult tests of speech understanding in quiet but also have significant difficulty when signals are presented in noise. The major factors in these outcomes appear to be (i) a loss of low-frequency, fine structure information possibly due to the envelope extraction algorithms common to cochlear implant signal processing; (ii) a limitation in the number of effective channels of stimulation due to overlap in electric fields from electrodes, and (iii) central processing deficits, especially for patients with poor speech understanding. Two recent developments, bilateral implants and combined electric and acoustic stimulation, have promise to remediate some of the difficulties experienced by patients in noise and to reinstate low-frequency fine structure information. If other possibilities are realized, e.g., electrodes that emit drugs to inhibit cell death following trauma and to induce the growth of neurites toward electrodes, then the future is very bright indeed. PMID:18616994

  17. How Autism Affects Speech Understanding in Multitalker Environments

    DTIC Science & Technology

    2013-10-01

    difficult than will typically-developing children. Knowing whether toddlers with ASD have difficulties processing speech in the presence of acoustic...to separate the speech of different talkers than do their typically-developing peers. We also predict that they will fail to exploit visual cues on...learn language from many settings in which children are typically placed. In addition, one of the cues that typically-developing listeners use to

  18. An analysis of the masking of speech by competing speech using self-report data.

    PubMed

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflect informational masking, previously collected data from 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  19. The role of auditory and cognitive factors in understanding speech in noise by normal-hearing older listeners

    PubMed Central

    Schoof, Tim; Rosen, Stuart

    2014-01-01

    Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266

  20. Perception of Native English Reduced Forms in Adverse Environments by Chinese Undergraduate Students

    ERIC Educational Resources Information Center

    Wong, Simpson W. L.; Tsui, Jenny K. Y.; Chow, Bonnie Wing-Yin; Leung, Vina W. H.; Mok, Peggy; Chung, Kevin Kien-Hoa

    2017-01-01

    Previous research has shown that learners of English-as-a-second-language (ESL) have difficulties in understanding connected speech spoken by native English speakers. Extending from past research limited to quiet listening condition, this study examined the perception of English connected speech presented under five adverse conditions, namely…

  1. Strategies for Coping with Educational and Social Consequences of Chronic Ear Infections in Rural Communities.

    ERIC Educational Resources Information Center

    Pillai, Patrick

    2000-01-01

    Children with chronic ear infections experience a lag time in understanding speech, which inhibits classroom participation and the ability to make friends, and ultimately reduces self-esteem. Difficulty in hearing affects speech and vocabulary development, reading and writing proficiency, and academic performance, and could lead to placement in…

  2. Perception of Spectral Contrast by Hearing-Impaired Listeners

    ERIC Educational Resources Information Center

    Dreisbach, Laura E.; Leek, Marjorie R.; Lentz, Jennifer J.

    2005-01-01

    The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and…

  3. Normal Adult Aging and the Contextual Influences Affecting Speech and Meaningful Sound Perception

    PubMed Central

    Aydelott, Jennifer; Leech, Robert; Crinion, Jennifer

    2010-01-01

    It is widely accepted that hearing loss increases markedly with age, beginning in the fourth decade (ISO 7029, 2000). Age-related hearing loss is typified by high-frequency threshold elevation and associated reductions in speech perception because speech sounds, especially consonants, become inaudible. Nevertheless, older adults often report additional and progressive difficulties in the perception and comprehension of speech, particularly in adverse listening conditions, that exceed those reported by younger adults with a similar degree of high-frequency hearing loss (Dubno, Dirks, & Morgan), leading to communication difficulties and social isolation (Weinstein & Ventry). Some of the age-related decline in speech perception can be accounted for by peripheral sensory problems, but cognitive aging can also be a contributing factor. In this article, we review findings from the psycholinguistic literature, predominantly over the last four years, and present a pilot study illustrating how normal age-related changes in cognition and the linguistic context can influence speech-processing difficulties in older adults. We discuss how, for significant progress to be made in understanding and improving the auditory performance of aging listeners, future research will have to be much more specific not only about which interactions between auditory and cognitive abilities are critical but also about how they are modulated in the brain. PMID:21307006

  4. [Spontaneous speech prosody and discourse analysis in schizophrenia and Frontotemporal Dementia (FTD) patients].

    PubMed

    Martínez, Angela; Felizzola Donado, Carlos Alberto; Matallana Eslava, Diana Lucía

    2015-01-01

    Patients with schizophrenia and Frontotemporal Dementia (FTD) in its linguistic variants share some language characteristics, such as lexical access difficulties and disordered speech with disruptions, many pauses, interruptions and reformulations. In schizophrenia patients this reflects a difficulty of affect expression, while in FTD patients it reflects a linguistic deficit. Through an analysis of a series of cases assessed at both the Memory Clinic and the Mental Health Unit of HUSI-PUJ (Hospital Universitario San Ignacio), with additional language assessment (discourse analysis and acoustic analysis), this study presents distinctive features of FTD in its linguistic variants and of schizophrenia that can guide the specialist in finding early markers for a differential diagnosis. In patients with linguistic variants of FTD, 100% of cases showed difficulty understanding complex linguistic structures, together with marked problems of speech fluency. In patients with schizophrenia, there are significant alterations in the expression of the suprasegmental elements of speech, as well as disruptions in discourse. We show how in-depth language assessment allows some of the rules for the speech and prosody analysis of patients with dementia and schizophrenia to be reassessed, and we suggest how elements of speech are useful in guiding the diagnosis and correlate with functional compromise in everyday psychiatric practice. Copyright © 2014 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  5. Masking Period Patterns and Forward Masking for Speech-Shaped Noise: Age-Related Effects.

    PubMed

    Grose, John H; Menezes, Denise C; Porter, Heather L; Griz, Silvana

    2016-01-01

    The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to nonsimultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Participants included younger (n = 11), middle-aged (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions and assessed how well the temporal window fits accounted for these data. The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. This study demonstrated an age-related increase in susceptibility to nonsimultaneous masking, supporting the hypothesis that exacerbated nonsimultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data, suggesting an association between susceptibility to forward masking and speech understanding in modulated noise.

  6. The Impact of the Picture Exchange Communication System on Requesting and Speech Development in Preschoolers with Autism Spectrum Disorders and Similar Characteristics

    ERIC Educational Resources Information Center

    Ganz, Jennifer B.; Simpson, Richard L.; Corbin-Newsome, Jawanda

    2008-01-01

    By definition children with autism spectrum disorders (ASD) experience difficulty understanding and using language. Accordingly, visual and picture-based strategies such as the Picture Exchange Communication System (PECS) show promise in ameliorating speech and language deficits. This study reports the results of a multiple baseline across…

  7. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    PubMed

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and add to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation: Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound disorders. Non-speech oral motor exercise use was most frequently reported in the treatment of dysarthria. Non-speech oral motor exercise use when targeting speech sound disorders is not widely endorsed in the literature.

  8. Speech and language adverse effects after thalamotomy and deep brain stimulation in patients with movement disorders: A meta-analysis.

    PubMed

    Alomar, Soha; King, Nicolas K K; Tam, Joseph; Bari, Ausaf A; Hamani, Clement; Lozano, Andres M

    2017-01-01

    The thalamus has been a surgical target for the treatment of various movement disorders. Commonly used therapeutic modalities include ablative and nonablative procedures. A major clinical side effect of thalamic surgery is the appearance of speech problems. This review summarizes the data on the development of speech problems after thalamic surgery. A systematic review and meta-analysis was performed using nine databases, including Medline, Web of Science, and Cochrane Library. We also checked for articles by searching citing and cited articles. We retrieved studies between 1960 and September 2014. Of a total of 2,320 patients, 19.8% (confidence interval: 14.8-25.9) had speech difficulty after thalamotomy. Speech difficulty occurred in 15% (confidence interval: 9.8-22.2) of those treated unilaterally and 40.6% (confidence interval: 29.5-52.8) of those treated bilaterally. Speech impairment was noticed 2- to 3-fold more commonly after left-sided procedures (40.7% vs. 15.2%). Of the 572 patients who underwent DBS, 19.4% (confidence interval: 13.1-27.8) experienced speech difficulty. Subgroup analysis revealed that this complication occurs in 10.2% (confidence interval: 7.4-13.9) of patients treated unilaterally and 34.6% (confidence interval: 21.6-50.4) of those treated bilaterally. After thalamotomy, the risk was higher in Parkinson's patients compared to patients with essential tremor: 19.8% versus 4.5% in the unilateral group and 42.5% versus 13.9% in the bilateral group. After DBS, this rate was higher in essential tremor patients. Both lesioning and stimulation thalamic surgery produce adverse effects on speech. Left-sided and bilateral procedures are approximately 3-fold more likely to cause speech difficulty. This effect was higher after thalamotomy compared to DBS. In the thalamotomy group, the risk was higher in Parkinson's patients, whereas in the DBS group it was higher in patients with essential tremor. Understanding the pathophysiology of speech disturbance after thalamic procedures is a priority. © 2017 International Parkinson and Movement Disorder Society.

  9. Masking Period Patterns & Forward Masking for Speech-Shaped Noise: Age-related effects

    PubMed Central

    Grose, John H.; Menezes, Denise C.; Porter, Heather L.; Griz, Silvana

    2015-01-01

    Objective The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to non-simultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Design Participants included younger (n = 11), middle-aged (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions, and assessed how well the temporal window fits accounted for these data. Results The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. Conclusions This study demonstrated an age-related increase in susceptibility to non-simultaneous masking, supporting the hypothesis that exacerbated non-simultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data suggesting an association between susceptibility to forward masking and speech understanding in modulated noise. PMID:26230495

  10. Factors affecting speech understanding in gated interference: Cochlear implant users and normal-hearing listeners

    NASA Astrophysics Data System (ADS)

    Nelson, Peggy B.; Jin, Su-Hyun

    2004-05-01

    Previous work [Nelson, Jin, Carney, and Nelson (2003), J. Acoust. Soc. Am. 113, 961-968] suggested that cochlear implant users do not benefit from masking release when listening in modulated noise. The previous findings indicated that implant users experience little to no release from masking when identifying sentences in speech-shaped noise, regardless of the modulation frequency applied to the noise. The lack of masking release occurred for all implant subjects who were using three different devices and speech processing strategies. In the present study, possible causes of this reduced masking release in implant listeners were investigated. Normal-hearing listeners, implant users, and normal-hearing listeners presented with a four-band simulation of a cochlear implant were tested for their understanding of sentences in gated noise (1-32 Hz gate frequencies) when the duty cycle of the noise was varied from 25% to 75%. No systematic effect of noise duty cycle on implant and simulation listeners' performance was noted, indicating that the masking caused by gated noise is not only energetic masking. Masking release significantly increased when the number of spectral channels was increased from 4 to 12 for simulation listeners, suggesting that spectral resolution is important for masking release. Listeners were also tested for their understanding of gated sentences (sentences in quiet interrupted by periods of silence at gate frequencies ranging from 1 to 32 Hz) as a measure of auditory fusion, or the ability to integrate speech across temporal gaps. Implant and simulation listeners had significant difficulty understanding gated sentences at every gate frequency. When the number of spectral channels was increased for simulation listeners, their ability to understand gated sentences improved significantly. Findings suggest that implant listeners' difficulty understanding speech in modulated conditions is related to at least two (possibly related) factors: degraded spectral information and limitations in auditory fusion across temporal gaps.
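
    For readers unfamiliar with these stimuli, the sketch below generates a gated masker of the kind the abstract describes: noise switched on and off by a square-wave gate with a chosen gate frequency and duty cycle. It is a minimal illustration under stated assumptions, not the study's stimulus code; the study used speech-shaped noise, whereas this example uses white noise for brevity, and the function name and defaults are hypothetical.

        import numpy as np

        def gated_noise(duration_s, gate_hz, duty_cycle, fs=16000, seed=None):
            """White noise interrupted by a square-wave gate.

            gate_hz    -- gate (interruption) frequency, e.g. 1 to 32 Hz
            duty_cycle -- fraction of each gate period the noise is on (0 to 1)
            """
            rng = np.random.default_rng(seed)
            n = int(round(duration_s * fs))
            noise = rng.standard_normal(n)
            t = np.arange(n) / fs
            # Phase within each gate period, in [0, 1); the noise is on for
            # the first duty_cycle fraction of every period.
            gate = ((t * gate_hz) % 1.0 < duty_cycle).astype(float)
            return noise * gate

        # Example: a 2-s masker gated at 8 Hz with a 50% duty cycle.
        masker = gated_noise(2.0, gate_hz=8, duty_cycle=0.5)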

  11. Auditory Processing Disorder (For Parents)

    MedlinePlus

    ... or other speech-language difficulties? Are verbal (word) math problems difficult for your child? Is your child ... inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  12. Motor functions and adaptive behaviour in children with childhood apraxia of speech.

    PubMed

    Tükel, Şermin; Björelius, Helena; Henningsson, Gunilla; McAllister, Anita; Eliasson, Ann Christin

    2015-01-01

    Undiagnosed motor and behavioural problems have been reported for children with childhood apraxia of speech (CAS). This study aims to understand the extent of these problems by determining the profile of and relationships between speech/non-speech oral, manual and overall body motor functions and adaptive behaviours in CAS. Eighteen children (five girls and 13 boys) with CAS, 4 years 4 months to 10 years 6 months old, participated in this study. The assessments used were the Verbal Motor Production Assessment for Children (VMPAC), Bruininks-Oseretsky Test of Motor Proficiency (BOT-2) and Adaptive Behaviour Assessment System (ABAS-II). Median result of speech/non-speech oral motor function was between -1 and -2 SD of the mean VMPAC norms. For BOT-2 and ABAS-II, the median result was between the mean and -1 SD of test norms. However, on an individual level, many children had co-occurring difficulties (below -1 SD of the mean) in overall and manual motor functions and in adaptive behaviour, despite few correlations between sub-tests. In addition to the impaired speech motor output, children displayed heterogeneous motor problems suggesting the presence of a global motor deficit. The complex relationship between motor functions and behaviour may partly explain the undiagnosed developmental difficulties in CAS.

  13. Stroke patients communicating their healthcare needs in hospital: a study within the ICF framework.

    PubMed

    O'Halloran, Robyn; Worrall, Linda; Hickson, Louise

    2012-01-01

    Previous research has identified that many patients admitted into acute hospital stroke units have communication-related impairments such as hearing, vision, speech, language and/or cognitive communicative impairment. However, no research has identified how many patients in acute hospital stroke units have difficulty actually communicating their healthcare needs. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) conceptualizes difficulty communicating about healthcare needs as a type of activity limitation, within the Activity and Participation component. The ICF proposes that activity limitation can be measured in four different ways. The first aim of this research was to measure a patient's difficulty communicating his or her healthcare needs, that is, activity limitation, in two of the four ways suggested by the ICF when interacting with healthcare providers. The second aim was to investigate whether communication-related impairments in hearing, vision, speech, language and/or cognitive communicative impairment predict difficulty communicating healthcare needs, measured in these ways. A total of 65 patients consecutively admitted into two acute hospital stroke units in Melbourne, Australia, who consented to this research participated in this study. Early in their admission participants were screened for hearing, vision, speech, language and cognitive communicative impairment. Participants were also assessed for difficulty communicating about healthcare needs in two ways proposed by the ICF: 'capacity with assistance' and 'performance'. Relationships between communication-related impairment and both capacity with assistance and performance were explored through Spearman's correlations and binary logistic regression. A total of 87% of patients had one or more communication-related impairments. Half of the patients (51%) had difficulty communicating their healthcare needs when assessed in terms of capacity with assistance. Slightly more patients (55%) were observed to have difficulty communicating their healthcare needs when assessed in terms of performance. More severe vision, speech, language and cognitive communicative impairment were significantly associated with more severe difficulty communicating healthcare needs. About half of the stroke patients admitted into acute hospital stroke units had difficulty communicating their healthcare needs. Patients with more severe communication-related impairments had more severe difficulty communicating their healthcare needs. Future research is needed to understand the other factors that influence communication between people with communication disabilities and their healthcare providers in acute hospital settings. © 2012 Royal College of Speech and Language Therapists.

  14. Relationship between Speech Production and Perception in People Who Stutter.

    PubMed

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed poorer than controls in the perception task and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger Causality Analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS.

  15. What is Dyslexia? | NIH MedlinePlus the Magazine

    MedlinePlus

    ... words Difficulty understanding text that is read (poor comprehension) Problems with spelling Delayed speech (learning to talk ... of technology. Children with dyslexia may benefit from listening to books on tape or using word-processing ...

  16. Do age-related word retrieval difficulties appear (or disappear) in connected speech?

    PubMed

    Kavé, Gitit; Goral, Mira

    2017-09-01

    We conducted a comprehensive literature review of studies of word retrieval in connected speech in healthy aging and reviewed relevant aphasia research that could shed light on the aging literature. Four main hypotheses guided the review: (1) Significant retrieval difficulties would lead to reduced output in connected speech. (2) Significant retrieval difficulties would lead to a more limited lexical variety in connected speech. (3) Significant retrieval difficulties would lead to an increase in word substitution errors and in pronoun use as well as to greater dysfluency and hesitation in connected speech. (4) Retrieval difficulties on tests of single-word production would be associated with measures of word retrieval in connected speech. Studies on aging did not confirm these four hypotheses, unlike studies on aphasia that generally did. The review suggests that future research should investigate how context facilitates word production in old age.

  17. Educational consequences of developmental speech disorder: Key Stage 1 National Curriculum assessment results in English and mathematics.

    PubMed

    Nathan, Liz; Stackhouse, Joy; Goulandris, Nata; Snowling, Margaret J

    2004-06-01

    Children with speech difficulties may have associated educational problems. This paper reports a study examining the educational attainment of children at Key Stage 1 of the National Curriculum who had previously been identified with a speech difficulty. (1) To examine the educational attainment at Key Stage 1 of children diagnosed with speech difficulties two/three years prior to the present study. (2) To compare the Key Stage 1 assessment results of children whose speech problems had resolved at the time of assessment with those whose problems persisted. Data were available at age 7 from 39 children (from an original cohort of 47) who had an earlier diagnosis of speech difficulties at age 4/5. A control group of 35 children identified and matched at preschool on age, nonverbal ability and gender provided comparative data. Results of Statutory Assessment Tests (SATs) in reading, reading comprehension, spelling, writing and maths, administered to children at the end of Year 2 of school, were analysed. Performance across the two groups was compared. Performance was also compared to published statistics on national levels of attainment. Children with a history of speech difficulties performed less well than controls on reading, spelling and maths. However, children whose speech problems had resolved by the time of assessment performed no differently to controls. Children with persisting speech problems performed less well than controls on tests of literacy and maths. Spelling performance was a particular area of difficulty for children with persisting speech problems. Children with speech difficulties are likely to perform less well than expected on literacy and maths SATs at age 7. Performance is related to whether the speech problem resolves early on and whether associated language problems exist. Whilst it is unclear whether poorer performance on maths is because of the language components of this task, the results indicate that speech problems, especially persisting ones, can affect the ability to access the National Curriculum to expected levels.

  18. Auditory-neurophysiological responses to speech during early childhood: Effects of background noise

    PubMed Central

    White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina

    2015-01-01

    Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
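
    One way to make the "less stable across trials" finding concrete: trial-to-trial response stability is commonly quantified as the correlation between the averages of two random halves of the single-trial responses. The sketch below shows that generic split-half computation; it is an assumption that this matches the metric used in the paper, and all names are illustrative.

        import numpy as np

        def split_half_consistency(trials, seed=None):
            """Split-half consistency of an evoked response.

            trials -- array of shape (n_trials, n_samples), one row per trial.
            Returns the Pearson correlation between the averages of two
            randomly assigned halves of the trials; higher = more stable.
            """
            rng = np.random.default_rng(seed)
            idx = rng.permutation(trials.shape[0])
            half = trials.shape[0] // 2
            avg_a = trials[idx[:half]].mean(axis=0)
            avg_b = trials[idx[half:2 * half]].mean(axis=0)
            return np.corrcoef(avg_a, avg_b)[0, 1]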

  19. Tracking Change in Children with Severe and Persisting Speech Difficulties

    ERIC Educational Resources Information Center

    Newbold, Elisabeth Joy; Stackhouse, Joy; Wells, Bill

    2013-01-01

    Standardised tests of whole-word accuracy are popular in the speech pathology and developmental psychology literature as measures of children's speech performance. However, they may not be sensitive enough to measure changes in speech output in children with severe and persisting speech difficulties (SPSD). To identify the best ways of doing this,…

  20. Relationship between Speech Production and Perception in People Who Stutter

    PubMed Central

    Lu, Chunming; Long, Yuhang; Zheng, Lifen; Shi, Guang; Liu, Li; Ding, Guosheng; Howell, Peter

    2016-01-01

    Speech production difficulties are apparent in people who stutter (PWS). PWS also have difficulties in speech perception compared to controls. It is unclear whether the speech perception difficulties in PWS are independent of, or related to, their speech production difficulties. To investigate this issue, functional MRI data were collected on 13 PWS and 13 controls whilst the participants performed a speech production task and a speech perception task. PWS performed more poorly than controls in the perception task, and the poorer performance was associated with a functional activity difference in the left anterior insula (part of the speech motor area) compared to controls. PWS also showed a functional activity difference in this and the surrounding area [left inferior frontal cortex (IFC)/anterior insula] in the production task compared to controls. Conjunction analysis showed that the functional activity differences between PWS and controls in the left IFC/anterior insula coincided across the perception and production tasks. Furthermore, Granger causality analysis on the resting-state fMRI data of the participants showed that the causal connection from the left IFC/anterior insula to an area in the left primary auditory cortex (Heschl's gyrus) differed significantly between PWS and controls. The strength of this connection correlated significantly with performance in the perception task. These results suggest that speech perception difficulties in PWS are associated with anomalous functional activity in the speech motor area, and that the altered functional connectivity from this area to the auditory area plays a role in the speech perception difficulties of PWS. PMID:27242487

  1. Correlation of Oxygenated Hemoglobin Concentration and Psychophysical Amount on Speech Recognition

    NASA Astrophysics Data System (ADS)

    Nozawa, Akio; Ide, Hideto

    Subjective understanding during an oral language comprehension task was quantitatively evaluated via the fluctuation of oxygenated hemoglobin concentration measured by near-infrared spectroscopy. An English listening comprehension test consisting of two difficulty levels was administered to four subjects during the measurement. A significant correlation was found between subjective understanding and the fluctuation of oxygenated hemoglobin concentration.
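
    As an illustrative sketch only (all names and data below are hypothetical, not the authors' data or code), the core analysis reduces to a correlation between per-trial comprehension ratings and the amplitude of the oxy-Hb fluctuation over the corresponding listening interval:

        import numpy as np
        from scipy import stats

        # Hypothetical per-trial data for one subject: self-rated understanding
        # (1 = none, 5 = full) and the oxy-Hb fluctuation amplitude measured by
        # NIRS over the same listening interval (arbitrary units).
        understanding = np.array([4, 5, 2, 1, 3, 4, 2, 5])
        oxy_hb = np.array([0.12, 0.15, 0.05, 0.02, 0.08, 0.11, 0.04, 0.16])

        # Pearson correlation between subjective understanding and oxy-Hb.
        r, p = stats.pearsonr(understanding, oxy_hb)
        print(f"r = {r:.2f}, p = {p:.3f}")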

  2. Investigation of potential cognitive tests for use with older adults in audiology clinics.

    PubMed

    Vaughan, Nancy; Storzbach, Daniel; Furukawa, Izumi

    2008-01-01

    Cognitive declines in working memory and processing speed are hallmarks of aging. Deficits in speech understanding also are seen in aging individuals. A clinical test to determine whether cognitive aging changes contribute to age-related speech understanding difficulties would be helpful for determining rehabilitation strategies in audiology clinics. The aim was to identify a clinical neurocognitive test or battery of tests that could be used in audiology clinics to help explain deficits in speech recognition in some older listeners. A correlational study examined the association between certain cognitive test scores and speech recognition performance; speeded (time-compressed) speech was used to increase the cognitive processing load. Two hundred twenty-five adults aged 50 through 75 years participated in this study. A selected battery of neurocognitive tests and a time-compressed speech recognition test battery using various rates of speech were administered to all participants in two separate sessions. Principal component analysis was used to extract the important component factors from each set of tests, and regression models were constructed to examine the associations between tests and to identify the neurocognitive test most strongly associated with speech recognition performance. A sequencing working memory test (Letter-Number Sequencing [LNS]) was most strongly associated with rapid speech understanding. The association between the LNS test results and the compressed sentence recognition scores (CSRS) remained strong even when age and hearing loss were controlled. The LNS is a sequencing test that provides information about temporal processing at the cognitive level and may prove useful in the diagnosis of speech understanding problems and in the development of aural rehabilitation and training strategies.
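
    The analysis pipeline described (component extraction followed by regression with covariates) can be sketched as below; this is a minimal illustration under assumed variable names, not the authors' code, and all data are simulated:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 225  # participants, matching the study's sample size

        # Simulated standardized scores on six neurocognitive tests
        # (one column standing in for Letter-Number Sequencing).
        cognitive = rng.standard_normal((n, 6))
        # Simulated compressed sentence recognition scores (CSRS).
        csrs = 0.5 * cognitive[:, 0] + rng.standard_normal(n)
        age = rng.uniform(50, 75, n)
        hearing = rng.normal(25, 10, n)  # pure-tone average, dB HL

        # Step 1: principal component analysis on the cognitive battery.
        components = PCA(n_components=2).fit_transform(cognitive)

        # Step 2: regress speech scores on the components while
        # controlling for age and hearing loss as covariates.
        X = np.column_stack([components, age, hearing])
        model = LinearRegression().fit(X, csrs)
        print("R^2 =", round(model.score(X, csrs), 3))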

  3. Profile of Australian Preschool Children with Speech Sound Disorders at Risk for Literacy Difficulties

    ERIC Educational Resources Information Center

    McLeod, Sharynne; Crowe, Kathryn; Masso, Sarah; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Susan; Howland, Charlotte

    2017-01-01

    Speech sound disorders are a common communication difficulty in preschool children. Teachers indicate difficulty identifying and supporting these children. The aim of this research was to describe speech and language characteristics of children identified by their parents and/or teachers as having possible communication concerns. 275 Australian 4-…

  4. The impact of adolescent stuttering and other speech problems on psychological well-being in adulthood: evidence from a birth cohort study.

    PubMed

    McAllister, Jan; Collier, Jacqueline; Shepstone, Lee

    2013-01-01

    Developmental stuttering is associated with increased risk of psychological distress and mental health difficulties. Less is known about the impact of other developmental speech problems on psychological outcomes, or the impact of stuttering and speech problems once other predictors have been adjusted for. To determine the impact of parent-reported adolescent stuttering and other speech difficulties on psychological distress and associated symptoms as measured by the Rutter Malaise Inventory. A British birth cohort dataset provided information about 217 cohort members who stuttered and 301 cohort members who had other kinds of speech problem at age 16 according to parental report, and 15,694 cohort members who had experienced neither stuttering nor other speech difficulties. The main analyses concerned associations between adolescent stuttering or speech difficulty and score on the Rutter Malaise Inventory at age 42. Other factors that had previously been shown to be associated with score on the Malaise Inventory were also included in the analyses. In the adjusted analyses that controlled for other predictors, cohort members who were reported to stutter had higher malaise scores than controls overall, indicating a higher level of psychological distress, but they were not significantly more likely to have malaise scores in the range indicating a risk of serious mental health difficulties. Cohort members who were reported to have other speech difficulties during adolescence had malaise scores that overall did not differ significantly from those of controls in the adjusted analyses, but they were at significantly greater risk of serious mental health difficulties. These findings support those of other studies that indicate an association between stuttering and psychological distress. This study is the first to have shown that adolescents who experience speech difficulties other than stuttering are more likely than controls to be at risk of poorer mental health in adulthood. The results suggest a need for therapeutic provision to address psychosocial issues for both stuttering and other developmental speech disorders in adulthood, as well as further research into the consequences in adulthood of stuttering and other developmental speech disorders. © 2013 Royal College of Speech and Language Therapists.

  5. Recognition of Speech from the Television with Use of a Wireless Technology Designed for Cochlear Implants.

    PubMed

    Duke, Mila Morais; Wolfe, Jace; Schafer, Erin

    2016-05-01

    Cochlear implant (CI) recipients often experience difficulty understanding speech in noise and speech that originates from a distance. Many CI recipients also experience difficulty understanding speech originating from a television. Use of hearing assistance technology (HAT) may improve speech recognition in noise and for signals that originate from more than a few feet from the listener; however, there are no published studies evaluating the potential benefits of a wireless HAT designed to deliver audio signals from a television directly to a CI sound processor. The objective of this study was to compare speech recognition in quiet and in noise of CI recipients with the use of their CI alone and with the use of their CI and a wireless HAT (Cochlear Wireless TV Streamer). A two-way repeated measures design was used to evaluate performance differences obtained in quiet and in competing noise (65 dBA) with the CI sound processor alone and with the sound processor coupled to the Cochlear Wireless TV Streamer. Sixteen users of Cochlear Nucleus 24 Freedom, CI512, and CI422 implants were included in the study. Participants were evaluated in four conditions including use of the sound processor alone and use of the sound processor with the wireless streamer in quiet and in the presence of competing noise at 65 dBA. Speech recognition was evaluated in each condition with two full lists of Computer-Assisted Speech Perception Testing and Training Sentence-Level Test sentences presented from a light-emitting diode television. Speech recognition in noise was significantly better with use of the wireless streamer compared to participants' performance with their CI sound processor alone. There was also a nonsignificant trend toward better performance in quiet with use of the TV Streamer. Performance was significantly poorer when evaluated in noise compared to performance in quiet when the TV Streamer was not used. Use of the Cochlear Wireless TV Streamer designed to stream audio from a television directly to a CI sound processor provides better speech recognition in quiet and in noise when compared to performance obtained with use of the CI sound processor alone. American Academy of Audiology.

  6. A screening approach for classroom acoustics using web-based listening tests and subjective ratings.

    PubMed

    Persson Waye, Kerstin; Magnusson, Lennart; Fredriksson, Sofie; Croy, Ilona

    2015-01-01

    Perception of speech is crucial in school, where speech is the main mode of communication. The aim of the study was to evaluate whether a web-based approach including listening tests and questionnaires could be used as a screening tool for poor classroom acoustics. The prime focus was the relation between pupils' comprehension of speech, the classroom acoustics and the pupils' descriptions of the acoustic qualities of the classroom. In total, 1106 pupils aged 13-19, from 59 classes and 38 schools in Sweden, participated in a listening study using Hagerman's sentences administered via the Internet. Four listening conditions were applied: high and low background noise level, and positions close to and far away from the loudspeaker. The pupils described the acoustic quality of the classroom, and teachers provided information on the physical features of the classroom using questionnaires. In 69% of the classes, at least three pupils described the sound environment as adverse, and in 88% of the classes one or more pupils reported often having difficulties concentrating due to noise. The pupils' comprehension of speech was strongly influenced by the background noise level (p<0.001) and distance to the loudspeakers (p<0.001). Of the physical classroom features, the presence of suspended acoustic panels (p<0.05) and the length of the classroom (p<0.01) predicted speech comprehension. Of the pupils' descriptions of acoustic qualities, 'clattery' significantly (p<0.05) predicted speech comprehension; 'clattery' was furthermore associated with difficulties understanding each other, while 'noisy' was associated with concentration difficulties. The majority of classrooms do not seem to have an optimal sound environment. Pupils' descriptions of acoustic qualities together with listening tests can be one way of predicting sound conditions in the classroom.
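
    A rough sketch of the kind of prediction model implied here is given below; the data are simulated and the predictor weights are invented purely for illustration, not taken from the paper:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 1106  # pupils, matching the study's sample size

        noise_high = rng.integers(0, 2, n)    # 1 = high background noise
        far_seat = rng.integers(0, 2, n)      # 1 = far from the loudspeaker
        panels = rng.integers(0, 2, n)        # 1 = suspended acoustic panels
        room_len = rng.normal(9.0, 1.5, n)    # classroom length, metres

        # Simulated comprehension scores (% words correct) with assumed effects.
        score = (75 - 12 * noise_high - 6 * far_seat + 4 * panels
                 - 1.5 * room_len + rng.normal(0, 5, n))

        # Regress comprehension on listening condition and classroom features.
        X = np.column_stack([noise_high, far_seat, panels, room_len])
        model = LinearRegression().fit(X, score)
        for name, b in zip(["noise", "distance", "panels", "length"], model.coef_):
            print(f"{name}: {b:+.2f}")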

  7. Pragmatic Difficulties in the Production of the Speech Act of Apology by Iraqi EFL Learners

    ERIC Educational Resources Information Center

    Al-Ghazalli, Mehdi Falih; Al-Shammary, Mohanad A. Amert

    2014-01-01

    The purpose of this paper is to investigate the pragmatic difficulties encountered by Iraqi EFL university students in producing the speech act of apology. Although the act of apology is easy to recognize or use by native speakers of English, non-native speakers generally encounter difficulties in discriminating one speech act from another. The…

  8. Population Estimates, Health Care Characteristics, and Material Hardship Experiences of U.S. Children With Parent-Reported Speech-Language Difficulties: Evidence From Three Nationally Representative Surveys.

    PubMed

    Sonik, Rajan A; Parish, Susan L; Akobirshoev, Ilhom; Son, Esther; Rosenthal, Eliana

    2017-10-05

    To provide estimates for the prevalence of parent-reported speech-language difficulties in U.S. children, and to describe the levels of health care access and material hardship in this population. We tabulated descriptive and bivariate statistics using cross-sectional data from the 2007 and 2011/2012 iterations of the National Survey of Children's Health, the 2005/2006 and 2009/2010 iterations of the National Survey of Children with Special Health Care Needs, and the 2004 and 2008 panels of the Survey of Income and Program Participation. Prevalence estimates ranged from 1.8% to 5.0%, with data from two of the three surveys preliminarily indicating increased prevalence in recent years. The largest health care challenge was in accessing care coordination, with 49%-56% of children with parent-reported speech-language difficulties lacking full access. Children with parent-reported speech-language difficulties were more likely than peers without any indications of speech-language difficulties to live in households experiencing each measured material hardship and participating in each measured public benefit program (e.g., 20%-22% experiencing food insecurity, compared to 11%-14% of their peers without any indications of speech-language difficulties). We found mixed preliminary evidence to suggest that the prevalence of parent-reported speech-language difficulties among children may be rising. These children face heightened levels of material hardship and barriers in accessing health care.

  9. Ingressive Speech Errors: A Service Evaluation of Speech-Sound Therapy in a Child Aged 4;6

    ERIC Educational Resources Information Center

    Hrastelj, Laura; Knight, Rachael-Anne

    2017-01-01

    Background: A pattern of ingressive substitutions for word-final sibilants can be identified in a small number of cases in child speech disorder, with growing evidence suggesting it is a phonological difficulty, despite the unusual surface form. Phonological difficulty implies a problem with the cognitive process of organizing speech into sound…

  10. Using Flanagan's phase vocoder to improve cochlear implant performance

    NASA Astrophysics Data System (ADS)

    Zeng, Fan-Gang

    2004-10-01

    The cochlear implant has restored partial hearing to more than 100,000 deaf people worldwide, allowing the average user to talk on the telephone in a quiet environment. However, significant difficulty still remains for speech recognition in noise, music perception, and tonal language understanding. This difficulty may be related to speech processing strategies in current cochlear implants that emphasize the extraction and encoding of the temporal envelope while ignoring the temporal fine structure in speech sounds. A novel strategy was developed based on Flanagan's phase vocoder [Flanagan and Golden, Bell Syst. Tech. J. 45, 1493-1509 (1966)], in which frequency modulation is extracted from the temporal fine structure and then added to the amplitude modulation used in current cochlear implants. Acoustic simulation results showed that amplitude and frequency modulation contributed complementarily to speech perception, with amplitude modulation contributing mainly to intelligibility and frequency modulation contributing to speaker identification and auditory grouping. The results also showed that the novel strategy significantly improved cochlear implant performance under realistic listening situations. Overall, the present results demonstrate that Flanagan's classic work on the phase vocoder still sheds light on current problems of both theoretical and practical importance. [Work supported by NIH.]
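
    The decomposition this strategy rests on (a temporal envelope, AM, versus fine structure from which an FM track is derived) can be sketched with a Hilbert transform. This is a minimal single-band illustration under assumed parameters (the 400 Hz smoothing cutoff is an assumption), not the published processing chain:

        import numpy as np
        from scipy.signal import hilbert, butter, sosfilt

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        # Toy band-limited signal standing in for one analysis band of speech.
        x = np.sin(2 * np.pi * 500 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))

        analytic = hilbert(x)
        envelope = np.abs(analytic)                    # AM: temporal envelope
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)  # FM: instantaneous frequency

        # A slowly varying FM track can be obtained by low-pass filtering the
        # instantaneous frequency before pairing it with the envelope.
        sos = butter(4, 400, btype="low", fs=fs, output="sos")
        fm_track = sosfilt(sos, inst_freq)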

  11. Optimal speech level for speech transmission in a noisy environment for young adults and aged persons

    NASA Astrophysics Data System (ADS)

    Sato, Hayato; Ota, Ryo; Morimoto, Masayuki; Sato, Hiroshi

    2005-04-01

    Assessing the sound environment of classrooms for the aged is an important issue, because classrooms can be used by the aged for lifelong learning, especially in an aging society. Hence, hearing loss due to aging is an important consideration for classrooms. In this study, the optimal speech level in noisy fields for both young adults and aged persons was investigated. Listening difficulty ratings and word intelligibility scores for familiar words were used to evaluate speech transmission performance. The results of the tests demonstrated that the optimal speech level for moderate background noise (i.e., less than around 60 dBA) was fairly constant. Meanwhile, the optimal speech level depended on the speech-to-noise ratio when the background noise level exceeded around 60 dBA. The minimum speech level required to minimize difficulty ratings for the aged was higher than that for the young. However, the minimum difficulty ratings for both the young and the aged were obtained in the speech-level range of 70 to 80 dBA.
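
    The finding suggests a simple piecewise rule for setting speech level. The sketch below is one interpretation for illustration only; the constants (a 60 dBA knee, a 72 dBA floor, a 15 dB target ratio) are assumptions, not values from the paper:

        def optimal_speech_level_dba(noise_dba, knee_dba=60.0,
                                     floor_dba=72.0, target_snr_db=15.0):
            """Below the knee, the optimum is roughly constant; above it,
            the optimum tracks the noise to hold a target speech-to-noise
            ratio. All constants are illustrative assumptions."""
            if noise_dba < knee_dba:
                return floor_dba
            return noise_dba + target_snr_db

        for noise in (50, 60, 65):
            print(noise, "dBA noise ->", optimal_speech_level_dba(noise), "dBA speech")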

  12. Speech-language therapy for adolescents with written-language difficulties: the South African context.

    PubMed

    Erasmus, D; Schutte, L; van der Merwe, M; Geertsema, S

    2013-12-01

    To investigate whether privately practising speech-language therapists in South Africa are fulfilling their role of identification, assessment and intervention for adolescents with written-language and reading difficulties. Further needs concerning training with regard to this population group were also determined. A survey study was conducted, using a self-administered questionnaire. Twenty-two currently practising speech-language therapists who are registered members of the South African Speech-Language-Hearing Association (SASLHA) participated in the study. The respondents indicated that they are aware of their role regarding adolescents with written-language difficulties. However, they feel that South African speech-language therapists are not fulfilling this role. Existing assessment tools and interventions for written-language difficulties are described as inadequate, and as culturally and age-inappropriate. Yet the majority of the respondents feel that they are adequately equipped to work with adolescents with written-language difficulties, based on their own experience, self-study and secondary training. The respondents feel that training regarding effective collaboration with teachers is necessary to establish specific roles, and to promote speech-language therapy for adolescents among teachers. Further research is needed on developing appropriate assessment and intervention tools, as well as on improving training at an undergraduate level.

  13. Emotional speech comprehension in children and adolescents with autism spectrum disorders.

    PubMed

    Le Sourn-Bissaoui, Sandrine; Aguert, Marc; Girard, Pauline; Chevreuil, Claire; Laval, Virginie

    2013-01-01

    We examined the understanding of emotional speech by children and adolescents with autism spectrum disorders (ASD). We predicted that they would have difficulty understanding emotional speech, not because of an emotional prosody processing impairment but because of problems drawing appropriate inferences, especially in multiple-cue environments. Twenty-six children and adolescents with ASD and 26 typically developing (TD) controls performed a computerized task featuring emotional prosody, either embedded in a discrepant context or without any context at all, and were asked to identify the speaker's feeling. When the prosody was the sole cue, participants with ASD performed just as well as controls, relying on this cue to infer the speaker's intention. When the prosody was embedded in a discrepant context, both ASD and TD participants exhibited a contextual bias and a negativity bias. However, ASD participants relied less on the emotional prosody than the controls when it was positive. We discuss these findings with respect to executive function and intermodal processing. After reading this article, the reader should be able to (1) describe the ASD participants' pragmatic impairments, (2) explain why ASD participants did not have an emotional prosody processing impairment, and (3) explain why ASD participants had difficulty inferring the speaker's intention from emotional prosody in a discrepant situation. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Using listening difficulty ratings of conditions for speech communication in rooms

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Bradley, John S.; Morimoto, Masayuki

    2005-03-01

    The use of listening difficulty ratings for speech communication in rooms is explored because, in common situations, word recognition scores do not discriminate well among conditions that are near to acceptable. In particular, the benefits of early reflections of speech sounds on listening difficulty were investigated and compared to their known benefits to word intelligibility scores. Listening tests were used to assess word intelligibility and perceived listening difficulty of speech in simulated sound fields. The experiments were conducted in three types of sound fields with constant levels of ambient noise: direct sound only, direct sound with early reflections, and direct sound with early reflections and reverberation. The results demonstrate that (1) listening difficulty can better discriminate among these conditions than can word recognition scores; (2) added early reflections increase the effective signal-to-noise ratio equivalent to the added energy in the conditions without reverberation; (3) the benefit of early reflections on difficulty scores is greater than expected from the simple increase in early arriving speech energy when reverberation is present; and (4) word intelligibility tests are most appropriate for conditions with signal-to-noise (S/N) ratios less than 0 dBA, whereas for S/N ratios between 0 and 15 dBA, listening difficulty is a more appropriate evaluation tool.
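
    Finding (2) amounts to a small energy calculation: counting early-arriving reflections as useful speech energy raises the effective signal-to-noise ratio by exactly the added energy. A worked sketch (the ~50 ms early/late split and all energy values are assumptions for illustration):

        import math

        def effective_snr_db(e_direct, e_early, e_noise):
            """Effective S/N when reflections arriving within ~50 ms of the
            direct sound are counted as useful speech energy."""
            return 10 * math.log10((e_direct + e_early) / e_noise)

        # Early reflections equal in energy to the direct sound
        # raise the effective S/N by 3 dB.
        print(effective_snr_db(1.0, 0.0, 1.0))  # 0.0 dB, direct sound only
        print(effective_snr_db(1.0, 1.0, 1.0))  # ~3.0 dB with early reflections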

  15. Individual differences in selective attention predict speech identification at a cocktail party.

    PubMed

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-08-31

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise.

  16. Hearing loss and speech perception in noise difficulties in Fanconi anemia.

    PubMed

    Verheij, Emmy; Oomen, Karin P Q; Smetsers, Stephanie E; van Zanten, Gijsbert A; Speleman, Lucienne

    2017-10-01

    Fanconi anemia is a hereditary chromosomal instability disorder. Hearing loss and ear abnormalities are among the many manifestations reported in this disorder. In addition, Fanconi anemia patients often complain about hearing difficulties in situations with background noise (speech perception in noise difficulties). Our study aimed to describe the prevalence of hearing loss and speech perception in noise difficulties in Dutch Fanconi anemia patients. A retrospective chart review was conducted at a Dutch tertiary care center. All patients with Fanconi anemia at clinical follow-up in our hospital were included. Medical files were reviewed to collect data on hearing loss and speech perception in noise difficulties. In total, 49 Fanconi anemia patients were included. Audiograms were available in 29 patients and showed hearing loss in 16 patients (55%). Conductive hearing loss was present in 24.1%, sensorineural in 20.7%, and mixed in 10.3%. A speech in noise test was performed in 17 patients; speech perception in noise was subnormal in nine patients (52.9%) and abnormal in two patients (11.7%). Hearing loss and speech perception in noise abnormalities are common in Fanconi anemia. Therefore, pure tone audiograms and speech in noise tests should be performed, preferably already at a young age, because hearing aids or assistive listening devices could be very valuable in developing language and communication skills. Level of evidence: 4. Laryngoscope, 127:2358-2361, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Children's comprehension of an unfamiliar speaker accent: a review.

    PubMed

    Harte, Jennifer; Oliveira, Ana; Frizelle, Pauline; Gibbon, Fiona

    2016-05-01

    The effect of speaker accent on listeners' comprehension has become a key focus of research given the increasing cultural diversity of society and the increased likelihood of an individual encountering a clinician with an unfamiliar accent. To review the studies exploring the effect of an unfamiliar accent on language comprehension in typically developing (TD) children and in children with speech and language difficulties. This review provides a methodological analysis of the relevant studies by exploring the challenges facing this field of research and highlighting the current gaps in the literature. A total of nine studies were identified using a systematic search and organized under studies investigating the effect of speaker accent on language comprehension in (1) TD children and (2) children with speech and/or language difficulties. This review synthesizes the evidence that an unfamiliar speaker accent may lead to a breakdown in language comprehension in TD children and in children with speech difficulties. Moreover, it exposes the inconsistencies found in this field of research and highlights the lack of studies investigating the effect of speaker accent in children with language deficits. Overall, research points towards a developmental trend in children's ability to comprehend accent-related variations in speech. Vocabulary size, language exposure, exposure to different accents and adequate processing resources (e.g. attention) seem to play a key role in children's ability to understand unfamiliar accents. This review uncovered some inconsistencies in the literature that highlight the methodological issues that must be considered when conducting research in this field. It explores how such issues may be controlled in order to increase the validity and reliability of future research. Key clinical implications are also discussed. © 2016 Royal College of Speech and Language Therapists.

  18. Musical experience strengthens the neural representation of sounds important for communication in middle-aged adults

    PubMed Central

    Parbery-Clark, Alexandra; Anderson, Samira; Hittner, Emily; Kraus, Nina

    2012-01-01

    Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound. PMID:23189051

  19. Combined Electric and Acoustic Stimulation With Hearing Preservation: Effect of Cochlear Implant Low-Frequency Cutoff on Speech Understanding and Perceived Listening Difficulty.

    PubMed

    Gifford, René H; Davis, Timothy J; Sunderhaus, Linsey W; Menapace, Christine; Buck, Barbara; Crosson, Jillian; O'Neill, Lori; Beiter, Anne; Segel, Phil

    The primary objective of this study was to assess the effect of electric and acoustic overlap for speech understanding in typical listening conditions using semidiffuse noise. This study used a within-subjects, repeated measures design including 11 experienced adult implant recipients (13 ears) with functional residual hearing in the implanted and nonimplanted ear. The aided acoustic bandwidth was fixed and the low-frequency cutoff for the cochlear implant (CI) was varied systematically. Assessments were completed in the R-SPACE sound-simulation system which includes a semidiffuse restaurant noise originating from eight loudspeakers placed circumferentially about the subject's head. AzBio sentences were presented at 67 dBA with signal to noise ratio varying between +10 and 0 dB determined individually to yield approximately 50 to 60% correct for the CI-alone condition with full CI bandwidth. Listening conditions for all subjects included CI alone, bimodal (CI + contralateral hearing aid), and bilateral-aided electric and acoustic stimulation (EAS; CI + bilateral hearing aid). Low-frequency cutoffs both below and above the original "clinical software recommendation" frequency were tested for all patients, in all conditions. Subjects estimated listening difficulty for all conditions using listener ratings based on a visual analog scale. Three primary findings were that (1) there was statistically significant benefit of preserved acoustic hearing in the implanted ear for most overlap conditions, (2) the default clinical software recommendation rarely yielded the highest level of speech recognition (1 of 13 ears), and (3) greater EAS overlap than that provided by the clinical recommendation yielded significant improvements in speech understanding. For standard-electrode CI recipients with preserved hearing, spectral overlap of acoustic and electric stimuli yielded significantly better speech understanding and less listening effort in a laboratory-based, restaurant-noise simulation. In conclusion, EAS patients may derive more benefit from greater acoustic and electric overlap than given in current software fitting recommendations, which are based solely on audiometric threshold. These data have larger scientific implications, as previous studies may not have assessed outcomes with optimized EAS parameters, thereby underestimating the benefit afforded by hearing preservation.

  20. A laboratory study for assessing speech privacy in a simulated open-plan office.

    PubMed

    Lee, P J; Jeon, J Y

    2014-06-01

    The aim of this study is to assess speech privacy in open-plan offices using two recently introduced single-number quantities: the spatial decay rate of speech, DL2,S [dB], and the A-weighted sound pressure level of speech at a distance of 4 m, Lp,A,S,4m [dB]. Open-plan offices were modelled using a DL2,S of 4, 8, and 12 dB, and Lp,A,S,4m was varied in three steps, from 43 to 57 dB. Auditory experiments were conducted at three locations with source–receiver distances of 8, 16, and 24 m, while the background noise level was fixed at 30 dBA. A total of 20 subjects were asked to rate the speech intelligibility and listening difficulty of 240 Korean sentences in these surroundings. The speech intelligibility scores were not affected by DL2,S or Lp,A,S,4m at a source–receiver distance of 8 m; however, listening difficulty ratings changed significantly with increasing DL2,S and Lp,A,S,4m values. At the other locations, the influences of DL2,S and Lp,A,S,4m on both speech intelligibility and listening difficulty ratings were significant. It was also found that the speech intelligibility scores and listening difficulty ratings changed considerably with increasing distraction distance (rD). Furthermore, listening difficulty is more sensitive than intelligibility scores to variations in DL2,S and Lp,A,S,4m for sound fields with high speech transmission performance. The recently introduced single-number quantities in the ISO standard, based on the spatial distribution of sound pressure level, were associated with speech privacy in an open-plan office. The results support these single-number quantities as suitable for assessing speech privacy, mainly at large distances. This new information can be considered when designing open-plan offices and drawing up acoustic guidelines for them.
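
    Read together, the two quantities define a simple spatial decay model: the predicted speech level at distance r falls by DL2,S for each doubling of distance from the 4 m reference point. A sketch under that simplified reading (the function name and example values are illustrative assumptions):

        import math

        def speech_level_dba(r_m, lpas4m_db=50.0, dl2s_db=8.0):
            """Predicted A-weighted speech level at r metres, assuming a drop
            of DL2,S dB per doubling of distance from the 4 m reference."""
            return lpas4m_db - dl2s_db * math.log2(r_m / 4.0)

        for r in (8, 16, 24):  # the study's source-receiver distances
            print(f"{r} m: {speech_level_dba(r):.1f} dB")
        # 8 m: 42.0 dB, 16 m: 34.0 dB, 24 m: 29.3 dB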

  1. Patient-reported speech in noise difficulties and hyperacusis symptoms and correlation with test results.

    PubMed

    Spyridakou, Chrysa; Luxon, Linda M; Bamiou, Doris E

    2012-07-01

    To compare self-reported symptoms of difficulty hearing speech in noise and hyperacusis in adults with auditory processing disorders (APDs) and normal controls; and to compare self-reported symptoms to objective test results (speech in babble test, transient evoked otoacoustic emission [TEOAE] suppression test using contralateral noise). A prospective case-control pilot study. Twenty-two participants were recruited in the study: 10 patients with reported hearing difficulty, normal audiometry, and a clinical diagnosis of APD; and 12 normal age-matched controls with no reported hearing difficulty. All participants completed the validated Amsterdam Inventory for Auditory Disability questionnaire, a hyperacusis questionnaire, a speech in babble test, and a TEOAE suppression test using contralateral noise. Patients had significantly worse scores than controls in all domains of the Amsterdam Inventory questionnaire (with the exception of sound detection) and the hyperacusis questionnaire (P < .005). Patients also had worse TEOAE suppression test results in both ears than controls; however, this result was not significant after Bonferroni correction. Strong correlations were observed between self-reported symptoms of difficulty hearing speech in noise and speech in babble test results in the right ear (ρ = 0.624, P = .002), and between self-reported symptoms of hyperacusis and TEOAE suppression test results in the right ear (ρ = -0.597, P = .003). There was no significant correlation between the two tests. In summary, a strong correlation was observed between right-ear speech in babble results and patient-reported intelligibility of speech in noise, and between right-ear TEOAE suppression by contralateral noise and the hyperacusis questionnaire. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  2. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    PubMed

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  3. Multifaceted Communication Problems in Everyday Conversations Involving People with Parkinson’s Disease

    PubMed Central

    Saldert, Charlotta; Bauer, Malin

    2017-01-01

    It is known that Parkinson’s disease is often accompanied by a motor speech disorder, which results in impaired communication. However, people with Parkinson’s disease may also have impaired word retrieval (anomia) and other communicative problems, which have a negative impact on their ability to participate in conversations with family as well as healthcare staff. The aim of the present study was to explore effects of impaired speech and language on communication and how this is managed by people with Parkinson’s disease and their spouses. Using a qualitative method based on Conversation Analysis, in-depth analyses were performed on natural conversational interaction in five dyads including elderly men who were at different stages of Parkinson’s disease. The findings showed that the motor speech disorder in combination with word retrieval difficulties and adaptations, such as using communication strategies, may result in atypical utterances that are difficult for communication partners to understand. The coexistence of several communication problems compounds the difficulties faced in conversations and individuals with Parkinson’s disease are often dependent on cooperation with their communication partner to make themselves understood. PMID:28946714

  4. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    PONS, FERRAN; ANDREU, LLORENC.; SANZ-TORRENT, MONICA; BUIL-LEGAZ, LUCIA; LEWKOWICZ, DAVID J.

    2014-01-01

    Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  5. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  6. Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions.

    PubMed

    Eckert, Mark A; Teubner-Rhodes, Susan; Vaden, Kenneth I

    2016-01-01

    This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.

  7. Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions

    PubMed Central

    Eckert, Mark A.; Teubner-Rhodes, Susan; Vaden, Kenneth I.

    2016-01-01

    This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. We propose that the behavioral economics and/or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance. PMID:27355759

  8. "Do We Make Ourselves Clear?" Developing a Social, Emotional and Behavioural Difficulties (SEBD) Support Service's Effectiveness in Detecting and Supporting Children Experiencing Speech, Language and Communication Difficulties (SLCD)

    ERIC Educational Resources Information Center

    Stiles, Matthew

    2013-01-01

    Research has identified a significant relationship between social, emotional and behavioural difficulties (SEBD) and speech, language and communication difficulties (SLCD). However, little has been published regarding the levels of knowledge and skill that practitioners working with pupils experiencing SEBD have in this important area, nor how…

  9. Identifying the Challenges and Opportunities to Meet the Needs of Children with Speech, Language and Communication Difficulties

    ERIC Educational Resources Information Center

    Dockrell, Julie E.; Howell, Peter

    2015-01-01

    The views of experienced educational practitioners were examined with respect to the terminology used to describe children with speech, language and communication needs (SLCN), associated problems and the impact of speech and language difficulties in the classroom. Results showed that education staff continue to experience challenges with the…

  10. Shhh… I Need Quiet! Children's Understanding of American, British, and Japanese-accented English Speakers.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2018-02-01

    Children's ability to understand speakers with a wide range of dialects and accents is essential for efficient language development and communication in a global society. Here, the impact of regional dialect and foreign-accent variability on children's speech understanding was evaluated in both quiet and noisy conditions. Five- to seven-year-old children (n = 90) and adults (n = 96) repeated sentences produced by three speakers with different accents (American English, British English, and Japanese-accented English) in quiet or noisy conditions. Adults had no difficulty understanding any speaker in quiet conditions. Their performance declined for the nonnative speaker with a moderate amount of noise; their performance only substantially declined for the British English speaker (i.e., below 93% correct) when their understanding of the American English speaker was also impeded. In contrast, although children showed accurate word recognition for the American and British English speakers in quiet conditions, they had difficulty understanding the nonnative speaker even under ideal listening conditions. With a moderate amount of noise, their perception of British English speech declined substantially and their ability to understand the nonnative speaker was particularly poor. These results suggest that although school-aged children can understand unfamiliar native dialects under ideal listening conditions, their ability to recognize words in these dialects may be highly susceptible to the influence of environmental degradation. Fully adult-like word identification for speakers with unfamiliar accents and dialects may exhibit a protracted developmental trajectory.

  11. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    PubMed Central

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699

  12. Read-Aloud Accommodations, Expository Text, and Adolescents with Learning Disabilities

    ERIC Educational Resources Information Center

    Meyer, Nancy K.; Bouck, Emily C.

    2017-01-01

    Adolescents with learning disabilities in reading have difficulties with reading and understanding difficult grade-level curricular material. One frequently used method of support is using read-aloud accommodations, which can be live read-alouds or text-to-speech (TTS) read-alouds. A single case alternating treatment design was used to examine the…

  13. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks that evaluate monaural and binaural temporal fine-structure tasks and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI. American Academy of Audiology

  14. Risk of Reading Difficulty among Students with a History of Speech or Language Impairment: Implications for Student Support Teams

    ERIC Educational Resources Information Center

    Zipoli, Richard P., Jr.; Merritt, Donna D.

    2017-01-01

    Many students with a history of speech or language impairment have an elevated risk of reading difficulty. Specific subgroups of these students remain at risk of reading problems even after clinical manifestations of a speech or language disorder have diminished. These students may require reading intervention within a general education system of…

  15. Meeting the needs of children and young people with speech, language and communication difficulties.

    PubMed

    Lindsay, Geoff; Dockrell, Julie; Desforges, Martin; Law, James; Peacey, Nick

    2010-01-01

    The UK government set up a review of provision for children and young people with the full range of speech, language and communication needs led by a Member of Parliament, John Bercow. A research study was commissioned to provide empirical evidence to inform the Bercow Review. To examine the efficiency and effectiveness of different arrangements for organizing and providing services for children and young people with needs associated with primary speech, language and communication difficulties. Six Local Authorities in England and associated Primary Care Trusts were selected to represent a range of locations reflecting geographic spread, urban/rural and prevalence of children with speech, language and communication difficulties. In each case study, interviews were held with the senior Local Authority manager for special educational needs and a Primary Care Trust senior manager for speech and language therapy. A further 23 head teachers or heads of specialist provision for speech, language and communication difficulties were also interviewed and policy documents were examined. A thematic analysis of the interviews produced four main themes: identification of children and young people with speech, language and communication difficulties; meeting their needs; monitoring and evaluation; and research and evaluation. There were important differences between Local Authorities and Primary Care Trusts in the collection, analysis and use of data, in particular. There were also differences between Local Authority/Primary Care Trust pairs, especially in the degree to which they collaborated in developing policy and implementing practice. This study has demonstrated a lack of consistency across Local Authorities and Primary Care Trusts. Optimizing provision to meet the needs of children and young people with speech, language and communication difficulties will require concerted action, with leadership from central government. The study was used by the Bercow Review whose recommendations have been addressed by central government and a funded action plan has been implemented as a result.

  16. Communication attitude and speech in 10-year-old children with cleft (lip and) palate: an ICF perspective.

    PubMed

    Havstam, Christina; Sandberg, Annika Dahlgren; Lohmander, Anette

    2011-04-01

    Many children born with cleft palate have impaired speech during their pre-school years, but usually the speech difficulties are transient and resolved by later childhood. This study investigated communication attitude with the Swedish version of the Communication Attitude Test (CAT-S) in 54 10-year-olds with cleft (lip and) palate. In addition, environmental factors were assessed via parent questionnaire. These data were compared to speech assessments by experienced listeners, who rated the children's velopharyngeal function, articulation, intelligibility, and general impression of speech at ages 5, 7, and 10 years. The children with clefts scored significantly higher on the CAT-S compared to reference data, indicating a more negative communication attitude at the group level, though with large individual variation. All speech variables, except velopharyngeal function at earlier ages, as well as the parent questionnaire scores, correlated significantly with the CAT-S scores. Although there was a relationship between speech and communication attitude, not all children with impaired speech developed negative communication attitudes. The assessment of communication attitude can make an important contribution to our understanding of the communicative situation for children with cleft (lip and) palate and give important indications for intervention.

  17. Brain responses and looking behavior during audiovisual speech integration in infants predict auditory speech comprehension in the second year of life

    PubMed Central

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Potton, Anita; Birtles, Deidre; Frostick, Caroline; Moore, Derek G.

    2013-01-01

    The use of visual cues during the processing of audiovisual (AV) speech is known to be less efficient in children and adults with language difficulties, and such difficulties are known to be more prevalent in children from low-income populations. In the present study, we followed an economically diverse group of thirty-seven infants longitudinally from 6–9 months to 14–16 months of age. We used eye-tracking to examine whether individual differences in visual attention during AV processing of speech in 6–9-month-old infants, particularly when processing congruent and incongruent auditory and visual speech cues, might be indicative of their later language development. Twenty-two of these 6–9-month-old infants also participated in an event-related potential (ERP) AV task within the same experimental session. Language development was then followed up at the age of 14–16 months, using two measures of language development: the Preschool Language Scale and the Oxford Communicative Development Inventory. The results show that those infants who were less efficient in auditory speech processing at the age of 6–9 months had lower receptive language scores at 14–16 months. A correlational analysis revealed that the pattern of face scanning and ERP responses to audiovisually incongruent stimuli at 6–9 months were both significantly associated with language development at 14–16 months. These findings add to the understanding of individual differences in neural signatures of AV processing and associated looking behavior in infants. PMID:23882240

  18. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22–40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple-choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348

  19. Speech and Language Difficulties in Children with and without a Family History of Dyslexia

    ERIC Educational Resources Information Center

    Carroll, Julia M.; Myers, Joanne M.

    2010-01-01

    Comorbidity between SLI and dyslexia is well documented. Researchers have variously argued that dyslexia is a separate disorder from SLI, or that children with dyslexia show a subset of the difficulties shown in SLI. This study examines these hypotheses by assessing whether family history of dyslexia and speech and language difficulties are…

  20. Rise Time and Formant Transition Duration in the Discrimination of Speech Sounds: The Ba-Wa Distinction in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Denes

    2011-01-01

    Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory…

  1. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior

    PubMed Central

    2018-01-01

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners’ abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication. PMID:28938250

  2. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior.

    PubMed

    Peelle, Jonathan E

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.

  3. School Leavers with Learning Disabilities Moving from Child to Adult Speech and Language Therapy (SLT) Teams: SLTs' Views of Successful and Less Successful Transition Co-Working Practices

    ERIC Educational Resources Information Center

    McCartney, Elspeth; Muir, Margaret

    2017-01-01

    School-leaving for pupils with long-term speech, language, swallowing or communication difficulties requires careful management. Speech and language therapists (SLTs) support communication, secure assistive technology and manage swallowing difficulties post-school. UK SLTs are employed by health services, with child SLT teams based in schools.…

  4. Alliances and Arguments: A Case Study of a Child with Persisting Speech Difficulties in Peer Play

    ERIC Educational Resources Information Center

    Tempest, Alison; Wells, Bill

    2012-01-01

    The ability to argue and to create alliances with peers are important social competencies for all children, including those who have speech, language and communication needs. In this study, we investigated the management of arguments and alliances by a group of 5-year-old male friends, one of whom has a persisting speech difficulty (PSD). Twelve…

  5. Awareness and Reactions of Young Stuttering Children Aged 2-7 Years Old towards Their Speech Disfluency

    ERIC Educational Resources Information Center

    Boey, Ronny A.; Van de Heyning, Paul H.; Wuyts, Floris L.; Heylen, Louis; Stoop, Reinhard; De Bodt, Marc S.

    2009-01-01

    Awareness has been an important factor in theories of onset and development of stuttering. So far it has been suggested that even young children might be aware of their speech difficulty. The purpose of the present study was to investigate (a) the number of stuttering children aware of their speech difficulty, (b) the description of reported…

  6. How Long Does It Take to Describe What One Sees? A Logarithmic Speed-Difficulty Trade-off in Speech Production

    PubMed Central

    Latash, Mark L.; Mikaelian, Irina L.

    2010-01-01

    We explored the relations between task difficulty and speech time in picture description tasks. Six native speakers of Mandarin Chinese (CH group) and six native speakers of Indo-European languages (IE group) produced quick and accurate verbal descriptions of pictures in a self-paced manner. The pictures always involved two objects, a plate and one of three objects (a stick, a fork, or a knife) located and oriented differently with respect to the plate in different trials. An index of difficulty was assigned to each picture. The CH group showed lower reaction times and much lower speech times. Speech time scaled linearly with the log-transformed index of difficulty in all subjects. The results suggest generality of Fitts’ law for movement and speech tasks, and possibly for other cognitive tasks as well. The differences between the CH and IE groups may be due to specific task features, differences in the grammatical rules of CH and IE languages, and possible use of tone for information transmission. PMID:21339514
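
    A compact way to state the reported trade-off, assuming the study's linear fit of speech time against the log-transformed index of difficulty (the abstract does not define the index itself):

        $$ T_{\text{speech}} = a + b\,\log_2(\mathrm{ID}) $$

    where ID is the index of difficulty assigned to a picture and a, b are constants fitted per speaker; the faster CH group would correspond to smaller fitted values of a and/or b. This has the same logarithmic form as Fitts' law for aimed movements, T = a + b log2(2D/W), which is what motivates the generality claim above.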

  7. Aging affects neural precision of speech encoding

    PubMed Central

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2012-01-01

    Older adults frequently report they can hear what is said but cannot understand the meaning, especially in noise. This difficulty may arise from the inability to process rapidly changing elements of speech. Aging is accompanied by a general slowing of neural processing and decreased neural inhibition, both of which likely interfere with temporal processing in auditory and other sensory domains. Age-related reductions in inhibitory neurotransmitter levels and delayed neural recovery can contribute to decreases in the auditory system’s temporal precision. Decreased precision may lead to neural timing delays, reductions in neural response magnitude, and a disadvantage in processing the rapid acoustic changes in speech. The auditory brainstem response (ABR), a scalp-recorded electrical potential, is known for its ability to capture precise neural synchrony within subcortical auditory nuclei; therefore, we hypothesized that a loss of temporal precision results in subcortical timing delays and decreases in response consistency and magnitude. To assess this hypothesis, we recorded ABRs to the speech syllable /da/ in normal-hearing younger (ages 18 to 30) and older (ages 60 to 67) adults. Older adults had delayed ABRs, especially in response to the rapidly changing formant transition, and greater response variability. We also found that older adults had decreased phase locking and smaller response magnitudes than younger adults. Taken together, our results support the theory that older adults have a loss of temporal precision in subcortical encoding of sound, which may account, at least in part, for their difficulties with speech perception. PMID:23055485

  8. Audiovisual integration in children listening to spectrally degraded speech.

    PubMed

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.
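
    The adaptive band-varying procedure described above can be illustrated with a transformed staircase: a 3-down/1-up rule converges on approximately 79.4% correct (Levitt, 1971), matching the ~79% identification threshold the study targets. The sketch below is a minimal simulation under an assumed logistic psychometric function; the function, step size, and starting level are illustrative and are not the study's parameters.

        # Illustrative 3-down/1-up staircase over the number of vocoder bands.
        # The psychometric function is hypothetical, not the study's data.
        import random, math

        def p_correct(n_bands, midpoint=8.0, slope=0.35):
            # Assumed logistic relation between bands and identification accuracy.
            return 1.0 / (1.0 + math.exp(-slope * (n_bands - midpoint)))

        def run_staircase(trials=100, bands=16, step=1, lo=2, hi=32):
            run, direction, reversals = 0, 0, []
            for _ in range(trials):
                if random.random() < p_correct(bands):
                    run += 1
                    if run < 3:
                        continue
                    run, move = 0, -1        # 3 correct in a row -> fewer bands (harder)
                else:
                    run, move = 0, +1        # 1 error -> more bands (easier)
                if direction and move != direction:
                    reversals.append(bands)  # record the level at each reversal
                direction = move
                bands = min(hi, max(lo, bands + move * step))
            tail = reversals[-6:]            # threshold = mean of last reversals
            return sum(tail) / max(1, len(tail))

        print(round(run_staircase(), 1))     # band threshold near ~79% correct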

  9. Clear speech and lexical competition in younger and older adult listeners.

    PubMed

    Van Engen, Kristin J

    2017-08-01

    This study investigated whether clear speech reduces the cognitive demands of lexical competition by crossing speaking style with lexical difficulty. Younger and older adults identified more words in clear versus conversational speech and more easy words than hard words. An initial analysis suggested that the effect of lexical difficulty was reduced in clear speech, but more detailed analyses within each age group showed this interaction was significant only for older adults. The results also showed that both groups improved over the course of the task and that clear speech was particularly helpful for individuals with poorer hearing: for younger adults, clear speech eliminated hearing-related differences that affected performance on conversational speech. For older adults, clear speech was generally more helpful to listeners with poorer hearing. These results suggest that clear speech affords perceptual benefits to all listeners and, for older adults, mitigates the cognitive challenge associated with identifying words with many phonological neighbors.

  10. Ageing without hearing loss or cognitive impairment causes a decrease in speech intelligibility only in informational maskers.

    PubMed

    Rajan, R; Cainer, K E

    2008-06-23

    In most everyday settings, speech is heard in the presence of competing sounds and understanding speech requires skills in auditory streaming and segregation, followed by identification and recognition, of the attended signals. Ageing leads to difficulties in understanding speech in noisy backgrounds. In addition to age-related changes in hearing-related factors, cognitive factors also play a role but it is unclear to what extent these are generalized or modality-specific cognitive factors. We examined how ageing in normal-hearing decade age cohorts from 20 to 69 years affected discrimination of open-set speech in background noise. We used two types of sentences of similar structural and linguistic characteristics but different masking levels (i.e. differences in signal-to-noise ratios required for detection of sentences in a standard masker) so as to vary sentence demand, and two background maskers (one causing purely energetic masking effects and the other causing energetic and informational masking) to vary load conditions. There was a decline in performance (measured as speech reception thresholds for perception of sentences in noise) in the oldest cohort for both types of sentences, but only in the presence of the more demanding informational masker. We interpret these results to indicate a modality-specific decline in cognitive processing, likely a decrease in the ability to use acoustic and phonetic cues efficiently to segregate speech from background noise, in subjects aged >60.
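
    For reference, the signal-to-noise ratios used to define masking levels here (and in the other records above) follow the standard decibel definition,

        $$ \mathrm{SNR_{dB}} = 10\,\log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right), $$

    and a speech reception threshold is commonly taken as the SNR at which a criterion proportion (often 50%) of sentences is reported correctly. A more demanding, informational masker shifts this threshold upward, which is the pattern the oldest cohort shows here.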

  11. Individual differences in selective attention predict speech identification at a cocktail party

    PubMed Central

    Oberfeld, Daniel; Klöckner-Nowotny, Felicitas

    2016-01-01

    Listeners with normal hearing show considerable individual differences in speech understanding when competing speakers are present, as in a crowded restaurant. Here, we show that one source of this variance is individual differences in the ability to focus selective attention on a target stimulus in the presence of distractors. In 50 young normal-hearing listeners, the performance in tasks measuring auditory and visual selective attention was associated with sentence identification in the presence of spatially separated competing speakers. Together, the measures of selective attention explained a similar proportion of variance as the binaural sensitivity for the acoustic temporal fine structure. Working memory span, age, and audiometric thresholds showed no significant association with speech understanding. These results suggest that a reduced ability to focus attention on a target is one reason why some listeners with normal hearing sensitivity have difficulty communicating in situations with background noise. DOI: http://dx.doi.org/10.7554/eLife.16747.001 PMID:27580272

  12. Child speech, language and communication need re-examined in a public health context: a new direction for the speech and language therapy profession.

    PubMed

    Law, James; Reilly, Sheena; Snow, Pamela C

    2013-01-01

    Historically speech and language therapy services for children have been framed within a rehabilitative framework with explicit assumptions made about providing therapy to individuals. While this is clearly important in many cases, we argue that this model needs revisiting for a number of reasons. First, our understanding of the nature of disability, and therefore communication disabilities, has changed over the past century. Second, there is an increasing understanding of the impact that the social gradient has on early communication difficulties. Finally, how these factors interact with one another and have an impact across the life course remains poorly understood. The aim of this paper is to describe the public health paradigm and explore its implications for speech and language therapy with children. We test the application of public health methodologies to speech and language therapy services by looking at four dimensions of service delivery: (1) the uptake of services and whether those children who need services receive them; (2) the development of universal prevention services in relation to social disadvantage; (3) the risk of over-interpreting co-morbidity from clinical samples; and (4) the overlap between communicative competence and mental health. It is concluded that there is a strong case for speech and language therapy services to be reconceptualized to respond to the needs of the whole population and according to socially determined needs, focusing on primary prevention. This is not to disregard individual need, but to highlight the needs of the population as a whole. Although the socio-political context is different between countries, we maintain that this is relevant wherever speech and language therapists have a responsibility for covering whole populations. Finally, we recommend that speech and language therapy services be conceptualized within the framework laid down in The Ottawa Charter for Health Promotion.

  13. The benefits of remote microphone technology for adults with cochlear implants.

    PubMed

    Fitzpatrick, Elizabeth M; Séguin, Christiane; Schramm, David R; Armstrong, Shelly; Chénier, Josée

    2009-10-01

    Cochlear implantation has become a standard practice for adults with severe to profound hearing loss who demonstrate limited benefit from hearing aids. Despite the substantial auditory benefits provided by cochlear implants, many adults experience difficulty understanding speech in noisy environments and in other challenging listening conditions such as television. Remote microphone technology may provide some benefit in these situations; however, little is known about whether these systems are effective in improving speech understanding in difficult acoustic environments for this population. This study was undertaken with adult cochlear implant recipients to assess the potential benefits of remote microphone technology. The objectives were to examine the measurable and perceived benefit of remote microphone devices during television viewing and to assess the benefits of a frequency-modulated system for speech understanding in noise. Fifteen adult unilateral cochlear implant users were fit with remote microphone devices in a clinical environment. The study used a combination of direct measurements and patient perceptions to assess speech understanding with and without remote microphone technology. The direct measures involved a within-subject repeated-measures design. Direct measures of patients' speech understanding during television viewing were collected using their cochlear implant alone and with their implant device coupled to an assistive listening device. Questionnaires were administered to document patients' perceptions of benefits during the television-listening tasks. Speech recognition tests of open-set sentences in noise with and without remote microphone technology were also administered. Participants showed improved speech understanding for television listening when using remote microphone devices coupled to their cochlear implant compared with a cochlear implant alone. This benefit was documented when listening to both news and talk show recordings. Questionnaire results also showed statistically significant differences between listening with a cochlear implant alone and listening with a remote microphone device. Participants judged that remote microphone technology provided them with better comprehension, more confidence, and greater ease of listening. Use of a frequency-modulated system coupled to a cochlear implant also showed significant improvement over a cochlear implant alone for open-set sentence recognition at +10 and +5 dB signal-to-noise ratios. Benefits were measured during remote microphone use in focused-listening situations in a clinical setting, for both television viewing and speech understanding in noise in the audiometric sound suite. The results suggest that adult cochlear implant users should be counseled regarding the potential for enhanced speech understanding in difficult listening environments through the use of remote microphone technology.

  14. Impairments of speech fluency in Lewy body spectrum disorder.

    PubMed

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions.

  15. Helping the Child with a Cleft Palate in Your Classroom.

    ERIC Educational Resources Information Center

    Moran, Michael J.; Pentz, Arthur L.

    1995-01-01

    Guidelines for teachers of a student with a cleft palate include: understanding the physical problem; knowing what kind of speech problem to expect; being alert to the possibility of language-based learning difficulties; watching for signs of hearing loss; being alert to socialization problems; helping the student make up work; and avoiding self-fulfilling prophecies.…

  16. Effects of Age and Hearing Loss on Gap Detection and the Precedence Effect: Broadband Stimuli

    ERIC Educational Resources Information Center

    Roberts, Richard A.; Lister, Jennifer J.

    2004-01-01

    Older listeners with normal-hearing sensitivity and impaired-hearing sensitivity often demonstrate poorer-than-normal performance on tasks of speech understanding in noise and reverberation. Deficits in temporal resolution and in the precedence effect may underlie this difficulty. Temporal resolution is often studied by means of a gap-detection…

  17. Early and Late Spanish-English Bilingual Adults' Perception of American English Vowels

    ERIC Educational Resources Information Center

    Baigorri, Miriam

    2016-01-01

    Increasing numbers of Hispanic immigrants are entering the US (US Census Bureau, 2011) and are learning American English (AE) as a second language (L2). Many may experience difficulty in understanding AE. Accurate perception of AE vowels is important because vowels carry a large part of the speech signal (Kewley-Port, Burkle, & Lee, 2007). The…

  18. Word-finding difficulty: a clinical analysis of the progressive aphasias

    PubMed Central

    Rohrer, Jonathan D.; Knight, William D.; Warren, Jane E.; Fox, Nick C.; Rossor, Martin N.; Warren, Jason D.

    2008-01-01

    The patient with word-finding difficulty presents a common and challenging clinical problem. The complaint of ‘word-finding difficulty’ covers a wide range of clinical phenomena and may signify any of a number of distinct pathophysiological processes. Although it occurs in a variety of clinical contexts, word-finding difficulty generally presents a diagnostic conundrum when it occurs as a leading or apparently isolated symptom, most often as the harbinger of degenerative disease: the progressive aphasias. Recent advances in the neurobiology of the focal, language-based dementias have transformed our understanding of these processes and the ways in which they break down in different diseases, but translation of this knowledge to the bedside is far from straightforward. Speech and language disturbances in the dementias present unique diagnostic and conceptual problems that are not fully captured by classical models derived from the study of vascular and other acute focal brain lesions. This has led to a reformulation of our understanding of how language is organized in the brain. In this review we seek to provide the clinical neurologist with a practical and theoretical bridge between the patient presenting with word-finding difficulty in the clinic and the evidence of the brain sciences. We delineate key illustrative speech and language syndromes in the degenerative dementias, compare these syndromes with the syndromes of acute brain damage, and indicate how the clinical syndromes relate to emerging neurolinguistic, neuroanatomical and neurobiological insights. We propose a conceptual framework for the analysis of word-finding difficulty, in order both to better define the patient's complaint and its differential diagnosis for the clinician and to identify unresolved issues as a stimulus to future work. PMID:17947337

  19. Robust relationship between reading span and speech recognition in noise

    PubMed Central

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Objective: Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design: The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. Study sample: The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results: Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Conclusions: Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360
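
    "Accounting for" age and degree of hearing loss in analyses like this one is commonly done by partialling the covariates out of both variables and then correlating the residuals. The sketch below illustrates that residualization approach; the variable names, effect sizes, and data are synthetic stand-ins, not values from this study.

        # Partial correlation via residualization, on synthetic data.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 124                                  # e.g., 94 older + 30 younger listeners
        age = rng.uniform(20, 80, n)
        pta = rng.uniform(0, 50, n)              # pure-tone average, dB HL (hypothetical)
        wm_span = 60 - 0.2 * age - 0.1 * pta + rng.normal(0, 5, n)
        sin_loss = 10 - 0.15 * wm_span + 0.05 * pta + rng.normal(0, 2, n)

        def residuals(y, covariates):
            # Regress y on an intercept plus covariates; return the residuals.
            X = np.column_stack([np.ones(len(y))] + covariates)
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return y - X @ beta

        r_wm = residuals(wm_span, [age, pta])
        r_sp = residuals(sin_loss, [age, pta])
        partial_r = np.corrcoef(r_wm, r_sp)[0, 1]
        print(f"partial r (WM vs. speech-in-noise | age, PTA) = {partial_r:.2f}")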

  20. Robust relationship between reading span and speech recognition in noise.

    PubMed

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two different working memory tests; one speech-in-noise test; and a reading comprehension test. The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two different implementations of reading span. Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition.

  1. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing impaired or have difficulty hearing cannot benefit from audio-based speech recognition. Visual information is therefore also important for recognizing speech automatically: people understand speech not only from audio information but also from visual information, such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without using any audio information. First, an Active Shape Model (ASM) is used to track and detect the face and lips in a video sequence. Second, shape, optical-flow, and spatial-frequency features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is trained on them to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
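
    The classification stage described above (per-frame lip features ordered chronologically, then classified by a Support Vector Machine) can be sketched as follows. Real ASM landmark tracking and optical-flow estimation are replaced with crude stand-ins over synthetic lip crops, so this is the shape of the pipeline under stated assumptions, not a reimplementation of the paper's method.

        # Word classification from chronologically ordered lip features (sketch).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        def lip_features(frames):
            """frames: (T, H, W) grayscale lip crops -> one fixed-length vector."""
            feats = []
            for prev, cur in zip(frames, frames[1:]):
                motion = np.mean(np.abs(cur - prev))          # crude optical-flow proxy
                spectrum = np.abs(np.fft.fft2(cur))
                low = spectrum[:4, :4].mean()                 # low spatial frequencies
                high = spectrum[4:, 4:].mean()                # higher spatial frequencies
                width = (cur > cur.mean()).sum(axis=1).max()  # crude lip-shape measure
                feats.extend([motion, low, high, width])
            return np.array(feats)

        # Synthetic "videos": 200 clips of 10 frames, two word classes.
        rng = np.random.default_rng(1)
        X = np.array([lip_features(rng.random((10, 16, 16)) + label * 0.1)
                      for label in (0, 1) for _ in range(100)])
        y = np.repeat([0, 1], 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))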

  2. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

    PubMed Central

    2017-01-01

    Purpose: This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method: The results from neuroscience and psychoacoustics are reviewed. Results: In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.” Conclusions: How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601617 PMID:29049598

  3. Longitudinal Patterns of Behaviour Problems in Children with Specific Speech and Language Difficulties: Child and Contextual Factors

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie E.; Strand, Steve

    2007-01-01

    Background: The purpose of this study was to examine the stability of behavioural, emotional and social difficulties (BESD) in children with specific speech and language difficulties (SSLD), and the relationship between BESD and the language ability. Methods: A sample of children with SSLD were assessed for BESD at ages 8, 10 and 12 years by both…

  4. Correlations of decision weights and cognitive function for the masked discrimination of vowels by young and old adults

    PubMed Central

    Lutfi, Robert A.

    2014-01-01

    Older adults are often reported in the literature to have greater difficulty than younger adults understanding speech in noise [Helfer and Wilber (1988). J. Acoust. Soc. Am, 859–893]. The poorer performance of older adults has been attributed to a general deterioration of cognitive processing, deterioration of cochlear anatomy, and/or greater difficulty segregating speech from noise. The current work used perturbation analysis [Berg (1990). J. Acoust. Soc. Am., 149–158] to provide a more specific assessment of the effect of cognitive factors on speech perception in noise. Sixteen older (age 56–79 years) and seventeen younger (age 19–30 years) adults discriminated a target vowel masked by randomly selected masker vowels immediately preceding and following the target. Relative decision weights on target and maskers resulting from the analysis revealed large individual differences across participants despite similar performance scores in many cases. On the most difficult vowel discriminations, the older adult decision weights were significantly correlated with inhibitory control (Color Word Interference test) and pure-tone threshold averages (PTA). Young adult decision weights were not correlated with any measures of peripheral (PTA) or central function (inhibition or working memory). PMID:25256580

  5. Communicative and psychological dimensions of the KiddyCAT.

    PubMed

    Clark, Chagit E; Conture, Edward G; Frankel, Carl B; Walden, Tedra A

    2012-01-01

    The purpose of the present study was to investigate the underlying constructs of the Communication Attitude Test for Preschool and Kindergarten Children Who Stutter (KiddyCAT; Vanryckeghem & Brutten, 2007), especially those related to awareness of stuttering and negative speech-associated attitudes. Participants were 114 preschool-age children who stutter (CWS; n=52; 15 females) and children who do not stutter (CWNS; n=62; 31 females). Their scores on the KiddyCAT were assessed to determine whether they differed with respect to talker group (CWS vs. CWNS), chronological age, younger versus older age groups, and gender. A categorical data principal components factor analysis (CATPCA) assessed the quantity and quality of the KiddyCAT dimensions. Findings indicated that preschool-age CWS scored significantly higher than CWNS on the KiddyCAT, regardless of age or gender. Additionally, the extraction of a single factor from the CATPCA indicated that one dimension, speech difficulty, appears to underlie the KiddyCAT items. As reported by its test developers, the KiddyCAT differentiates between CWS and CWNS. Furthermore, one factor, which appears related to participants' attitudes towards speech difficulty, underlies the questionnaire. Findings were taken to suggest that children's responses to the KiddyCAT are related to their perception that speech is difficult, which, for CWS, may be associated with relatively frequent experiences with their speaking difficulties (i.e., stuttering). After reading this article, the reader will be able to: (1) better understand the concepts of attitude and awareness; (2) compare historical views with more recent empirical findings regarding preschool-age CWS' attitudes/awareness towards their stuttering; (3) describe the underlying dimension of the KiddyCAT questionnaire; (4) interpret KiddyCAT results and describe implications of those results.

  6. Effects of Noise Level and Cognitive Function on Speech Perception in Normal Elderly and Elderly with Amnestic Mild Cognitive Impairment.

    PubMed

    Lee, Soo Jung; Park, Kyung Won; Kim, Lee-Suk; Kim, HyangHee

    2016-06-01

    Along with auditory function, cognitive function contributes to speech perception in the presence of background noise. Older adults with cognitive impairment might, therefore, have more difficulty perceiving speech-in-noise than their peers who have normal cognitive function. We compared the effects of noise level and cognitive function on speech perception in patients with amnestic mild cognitive impairment (aMCI), cognitively normal older adults, and cognitively normal younger adults. We studied 14 patients with aMCI and 14 age-, education-, and hearing threshold-matched cognitively intact older adults as experimental groups, and 14 younger adults as a control group. We assessed speech perception with monosyllabic word and sentence recognition tests at four noise levels: a quiet condition and signal-to-noise ratios of +5, 0, and -5 dB. We also evaluated the aMCI group with a neuropsychological assessment. Controlling for hearing thresholds, we found that the aMCI group scored significantly lower than both the older adults and the younger adults only when the noise level was high (signal-to-noise ratio -5 dB). At signal-to-noise ratio -5 dB, both older groups had significantly lower scores than the younger adults on the sentence recognition test. The aMCI group's sentence recognition performance was related to their executive function scores. Our findings suggest that patients with aMCI have more problems communicating in noisy situations in daily life than do their cognitively healthy peers and that older listeners with more difficulties understanding speech in noise should be considered for testing of neuropsychological function as well as hearing.

  7. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    PubMed

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
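
    A common pupillometric index of listening effort is the task-evoked pupil response: each trial's trace is corrected against a pre-stimulus baseline and the dilation is averaged over an analysis window. The sketch below shows that computation; the sampling rate, window bounds, and synthetic trace are illustrative assumptions, not values from the studies reviewed here.

        # Baseline-corrected task-evoked pupil dilation (minimal sketch).
        import numpy as np

        def task_evoked_dilation(trial, fs=60, baseline_s=0.5, window=(0.5, 3.0)):
            """trial: 1-D pupil-diameter trace starting baseline_s before onset;
            returns mean baseline-corrected dilation within the analysis window."""
            b = int(baseline_s * fs)
            baseline = np.nanmean(trial[:b])          # pre-stimulus baseline level
            corrected = trial - baseline
            i0, i1 = b + int(window[0] * fs), b + int(window[1] * fs)
            return np.nanmean(corrected[i0:i1])

        # Synthetic trial: 0.5 s of baseline, then a slow dilation under load.
        fs = 60
        t = np.arange(0, 3.5, 1 / fs)
        trial = 4.0 + 0.3 * np.clip(t - 0.5, 0, None) / 3.0
        print(f"mean dilation: {task_evoked_dilation(trial, fs):.3f} mm")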

  8. The impact of impaired semantic knowledge on spontaneous iconic gesture production

    PubMed Central

    Cocks, Naomi; Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary

    2013-01-01

    Background: Previous research has found that people with aphasia produce more spontaneous iconic gesture than control participants, especially during word-finding difficulties. There is some evidence that impaired semantic knowledge impacts on the diversity of gestural handshapes, as well as the frequency of gesture production. However, no previous research has explored how impaired semantic knowledge impacts on the frequency and type of iconic gestures produced during fluent speech compared with those produced during word-finding difficulties. Aims: To explore the impact of impaired semantic knowledge on the frequency and type of iconic gestures produced during fluent speech and those produced during word-finding difficulties. Methods & Procedures: A group of 29 participants with aphasia and 29 control participants were video recorded describing a cartoon they had just watched. All iconic gestures were tagged and coded as either “manner,” “path only,” “shape outline” or “other”. These gestures were then separated into either those occurring during fluent speech or those occurring during a word-finding difficulty. The relationships between semantic knowledge and gesture frequency and form were then investigated in the two different conditions. Outcomes & Results: As expected, the participants with aphasia produced a higher frequency of iconic gestures than the control participants, but when the iconic gestures produced during word-finding difficulties were removed from the analysis, the frequency of iconic gesture was not significantly different between the groups. While there was not a significant relationship between the frequency of iconic gestures produced during fluent speech and semantic knowledge, there was a significant positive correlation between semantic knowledge and the proportion of word-finding difficulties that contained gesture. There was also a significant positive correlation between the speakers' semantic knowledge and the proportion of gestures produced during fluent speech that were classified as “manner”. Finally, while not significant, there was a positive trend between semantic knowledge of objects and the production of “shape outline” gestures during word-finding difficulties for objects. Conclusions: The results indicate that impaired semantic knowledge in aphasia impacts on both the iconic gestures produced during fluent speech and those produced during word-finding difficulties, but in different ways. These results shed new light on the relationship between impaired language and iconic co-speech gesture production and also suggest that analysis of iconic gesture may be a useful addition to clinical assessment. PMID:24058228

  9. Speech Perception in Tones and Noise via Cochlear Implants Reveals Influence of Spectral Resolution on Temporal Processing

    PubMed Central

    Kreft, Heather A.

    2014-01-01

    Under normal conditions, human speech is remarkably robust to degradation by noise and other distortions. However, people with hearing loss, including those with cochlear implants, often experience great difficulty in understanding speech in noisy environments. Recent work with normal-hearing listeners has shown that the amplitude fluctuations inherent in noise contribute strongly to the masking of speech. In contrast, this study shows that speech perception via a cochlear implant is unaffected by the inherent temporal fluctuations of noise. This qualitative difference between acoustic and electric auditory perception does not seem to be due to differences in underlying temporal acuity but can instead be explained by the poorer spectral resolution of cochlear implants, relative to the normally functioning ear, which leads to an effective smoothing of the inherent temporal-envelope fluctuations of noise. The outcome suggests an unexpected trade-off between the detrimental effects of poorer spectral resolution and the beneficial effects of a smoother noise temporal envelope. This trade-off provides an explanation for the long-standing puzzle of why strong correlations between speech understanding and spectral resolution have remained elusive. The results also provide a potential explanation for why cochlear-implant users and hearing-impaired listeners exhibit reduced or absent masking release when large and relatively slow temporal fluctuations are introduced in noise maskers. The multitone maskers used here may provide an effective new diagnostic tool for assessing functional hearing loss and reduced spectral resolution. PMID:25315376
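
    The trade-off described above rests on a simple statistical fact: the power envelope of a single narrow noise band fluctuates strongly, while the summed envelope across many independent bands (a rough proxy for the broad effective channels of a cochlear implant) is much smoother. A minimal numerical sketch with fully synthetic noise, not vocoded speech:

        # Envelope smoothing from summing independent noise bands (sketch).
        import numpy as np

        rng = np.random.default_rng(2)

        def envelope_cv(n_bands, n_samples=20000):
            """Coefficient of variation of the summed-power envelope."""
            # Independent Gaussian noise bands; envelope ~ instantaneous power.
            bands = rng.normal(size=(n_bands, n_samples))
            power = np.sum(bands ** 2, axis=0)
            return power.std() / power.mean()

        for n in (1, 4, 16):
            print(f"{n:2d} band(s): envelope CV = {envelope_cv(n):.2f}")
        # The CV falls roughly as 1/sqrt(n): broader effective analysis bands
        # smooth away the inherent fluctuations of a noise masker.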

  10. Speech Sound Disorders in a Community Study of Preschool Children

    ERIC Educational Resources Information Center

    McLeod, Sharynne; Harrison, Linda J.; McAllister, Lindy; McCormack, Jane

    2013-01-01

    Purpose: To undertake a community (nonclinical) study to describe the speech of preschool children who had been identified by parents/teachers as having difficulties "talking and making speech sounds" and compare the speech characteristics of those who had and had not accessed the services of a speech-language pathologist (SLP). Method:…

  11. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners.

    PubMed

    Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh

    2018-03-01

    Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than that of their normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that at least part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.

  12. Tracking development from early speech-language acquisition to reading skills at age 13.

    PubMed

    Bartl-Pokorny, Katrin D; Marschik, Peter B; Sachse, Steffi; Green, Vanessa A; Zhang, Dajie; Van Der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2013-06-01

    Previous studies have indicated a link between speech-language and literacy development. To add to this body of knowledge, we investigated whether lexical and grammatical skills from toddler to early school age are related to reading competence in adolescence. Twenty-three typically developing children were followed from age 1;6 to 13;6 (years;months). Parental checklists and standardized tests were used to assess the development of mental lexicon, grammatical and reading capacities of the children. Direct assessment of early speech-language functions positively correlated with later reading competence, whereas lexical skills reported by parents were not associated with this capacity. At (pre-) school age, larger vocabulary and better grammatical abilities predicted advanced reading abilities in adolescence. Our study contributes to the understanding of typical speech-language development and its relation to later reading outcome, extending the body of knowledge on these developmental domains for future early identification of children at risk for reading difficulties.

  13. Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort.

    PubMed

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    2016-01-01

    Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to result in further improvements to daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. For this participant-blinded, repeated, crossover trial 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants' speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings. The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily life for this population. For HA providers to make evidence-based recommendations to their clientele with hearing impairment it is essential that further independent research investigates the relative benefit/deficit of different levels of hearing technology across brands and manufacturers in these and other real-world listening domains.

  14. Integrated Speech and Phonological Awareness Intervention for Pre-School Children with Down Syndrome

    ERIC Educational Resources Information Center

    van Bysterveldt, Anne Katherine; Gillon, Gail; Foster-Cohen, Susan

    2010-01-01

    Background: Children with Down syndrome experience difficulty with both spoken and written language acquisition, however controlled intervention studies to improve these difficulties are rare and have typically focused on improving one language domain. Aims: To investigate the effectiveness of an integrated intervention approach on the speech,…

  15. Supporting Children with Speech and Language Difficulties. Supporting Children Series

    ERIC Educational Resources Information Center

    David Fulton Publishers, 2004

    2004-01-01

    Off-the-shelf support containing all the vital information practitioners need to know about Speech and Language Difficulties, this book includes: (1) Strategies for developing attention control; (2) Guidance on how to improve language and listening skills; and (3) Ideas for teaching phonological awareness. Following a foreword and an introduction,…

  16. Speech Perception Deficits in Poor Readers: A Reply to Denenberg's Critique.

    ERIC Educational Resources Information Center

    Studdert-Kennedy, Michael; Mody, Maria; Brady, Susan

    2000-01-01

    This rejoinder to a critique of the authors' research on speech perception deficits in poor readers answers the specific criticisms and reaffirms their conclusion that the difficulty some poor readers have with rapid /ba/-/da/ discrimination does not stem from difficulty in discriminating the rapid spectral transitions at stop-vowel syllable…

  17. Contribution of auditory working memory to speech understanding in mandarin-speaking cochlear implant users.

    PubMed

    Tao, Duoduo; Deng, Rui; Jiang, Ye; Galvin, John J; Fu, Qian-Jie; Chen, Bing

    2014-01-01

    To investigate how auditory working memory relates to speech perception performance by Mandarin-speaking cochlear implant (CI) users. Auditory working memory and speech perception were measured in Mandarin-speaking CI and normal-hearing (NH) participants. Working memory capacity was measured using forward digit span and backward digit span; working memory efficiency was measured using articulation rate. Speech perception was assessed with: (a) word-in-sentence recognition in quiet, (b) word-in-sentence recognition in speech-shaped steady noise at a +5 dB signal-to-noise ratio, (c) Chinese disyllable recognition in quiet, (d) Chinese lexical tone recognition in quiet. Self-reported school rank regarding performance in schoolwork was also collected. There was large inter-subject variability in auditory working memory and speech performance for CI participants. Working memory and speech performance were significantly poorer for CI than for NH participants. All three working memory measures were strongly correlated with each other for both CI and NH participants. Partial correlation analyses were performed on the CI data while controlling for demographic variables. Working memory efficiency was significantly correlated only with sentence recognition in quiet when working memory capacity was partialled out. Working memory capacity was correlated with disyllable recognition and school rank when efficiency was partialled out. There was no correlation between working memory and lexical tone recognition in the present CI participants. Mandarin-speaking CI users experience significant deficits in auditory working memory and speech performance compared with NH listeners. The present data suggest that auditory working memory may contribute to CI users' difficulties in speech understanding. The present pattern of results with Mandarin-speaking CI users is consistent with previous auditory working memory studies with English-speaking CI users, suggesting that the lexical importance of voice pitch cues (albeit poorly coded by the CI) did not influence the relationship between working memory and speech perception.
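
    The partial-correlation step described above can be illustrated with a short sketch: regress the covariate out of both variables and correlate the residuals. Variable names and values below are hypothetical placeholders (Python with NumPy assumed), not the study's measurements.

```python
# Sketch of a partial correlation: correlate working-memory efficiency with
# sentence recognition after removing what working-memory capacity explains.
import numpy as np

def residualize(y, covariate):
    """Residuals of y after least-squares regression on the covariate."""
    X = np.column_stack([np.ones(len(y)), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
capacity = rng.normal(0, 1, 30)                     # digit-span composite
efficiency = 0.5 * capacity + rng.normal(0, 1, 30)  # articulation rate
sentences = 0.6 * efficiency + rng.normal(0, 1, 30) # recognition score

r = np.corrcoef(residualize(sentences, capacity),
                residualize(efficiency, capacity))[0, 1]
print(f"partial r(efficiency, sentences | capacity) = {r:.2f}")
```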

  18. Educational provision for children with specific speech and language difficulties: perspectives of speech and language therapy service managers.

    PubMed

    Dockrell, Julie E; Lindsay, Geoff; Letchford, Becky; Mackie, Clare

    2006-01-01

    Children with specific speech and language difficulties (SSLD) pose a challenge to the education system, and to the speech and language therapists who support them, as a result of their language needs and associated educational and social-behavioural difficulties. The development of inclusion raises questions regarding appropriate provision, whether in the tradition of language units or through full inclusion in mainstream schools. The aims were to gather the views of speech and language therapy service managers in England and Wales regarding approaches to service delivery, terminology and decision-making for educational provision, and the use of direct and indirect (consultancy) models of intervention. The study reports on a national survey of speech and language therapy (SLT) services in England and Wales (129 respondents, 72.1% response rate) and interviews with 39 SLT service managers. Provision varied by age group, with support to children in the mainstream common from pre-school to the end of Key Stage 2 (up to 11 years), and support in designated specialist provision common at Key Stages 1/2 (ages 5-11 years) but less prevalent at Key Stages 3/4 (11-16 years). Decision-making regarding provision was influenced by the lack of common terminology, with SSLD and specific language impairment (SLI) the most common terms, and by the lack of common criteria, including the use of the discrepancy model for defining SSLD. Practice was influenced by the difficulties in distinguishing children with SSLD from those with autistic spectrum disorder, and by difficulties in translating policies into practice. The implications of the study are discussed with reference to SLT practice, including consultancy models, and the increasingly prevalent policy in local education authorities of inclusion of children with special educational needs.

  19. Difficulties in Automatic Speech Recognition of Dysarthric Speakers and Implications for Speech-Based Applications Used by the Elderly: A Literature Review

    ERIC Educational Resources Information Center

    Young, Victoria; Mihailidis, Alex

    2010-01-01

    Despite their growing presence in home computer applications and various telephony services, commercial automatic speech recognition technologies are still not easily employed by everyone; especially individuals with speech disorders. In addition, relatively little research has been conducted on automatic speech recognition performance with older…

  20. The role of the speech-language pathologist in home care.

    PubMed

    Giles, Melanie; Barker, Mary; Hayes, Amanda

    2014-06-01

    Speech-language pathologists play an important role in the care of patients with speech, language, or swallowing difficulties that can result from a variety of medical conditions. This article describes how speech-language pathologists assess and treat these conditions and the red flags that suggest a referral to a speech-language pathologist is indicated.

  1. On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study.

    PubMed

    Finke, Mareike; Büchner, Andreas; Ruigendijk, Esther; Meyer, Martin; Sandmann, Pascale

    2016-07-01

    There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. The longer N2/N4 latency in CI users suggests that these individuals have difficulties mapping acoustic-phonetic features onto lexical representations. These difficulties seem to be increased in speech-in-noise conditions when compared with speech in a quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regard to both the sensory and the higher-order processing of speech in quiet as well as in noisy background conditions. Our results also revealed that verbal abilities are related to speech processing and speech intelligibility in CI users, confirming the view that auditory cognition plays an important role in CI outcome. We conclude that differences in auditory-cognitive processing contribute to the variability in speech performance outcomes observed in CI users.
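
    To illustrate the kind of latency-to-outcome correlation reported above, the sketch below estimates a per-participant N1 latency as the most negative point in a fixed time window of a simulated average ERP and correlates it with a simulated intelligibility score. Everything here (sampling rate, window, waveforms, scores) is an invented placeholder, not the study's pipeline.

```python
# Sketch: extract an N1-like peak latency per subject and correlate it with
# speech intelligibility. Waveforms and scores are simulated placeholders.
import numpy as np
from scipy import stats

fs = 500                              # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.6, 1 / fs)
rng = np.random.default_rng(2)

def n1_latency(erp, t, window=(0.08, 0.15)):
    """Latency (s) of the most negative point inside the N1 window."""
    mask = (t >= window[0]) & (t <= window[1])
    return t[mask][np.argmin(erp[mask])]

latencies, intelligibility = [], []
for _ in range(13):                   # 13 simulated CI users
    true_lat = rng.uniform(0.09, 0.14)
    erp = -np.exp(-((t - true_lat) ** 2) / (2 * 0.01 ** 2))
    erp += rng.normal(0, 0.05, t.size)
    latencies.append(n1_latency(erp, t))
    intelligibility.append(100 - 300 * true_lat + rng.normal(0, 5))

r, p = stats.pearsonr(latencies, intelligibility)
print(f"r = {r:.2f}, p = {p:.3f}")
```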

  2. From Birdsong to Human Speech Recognition: Bayesian Inference on a Hierarchy of Nonlinear Dynamical Systems

    PubMed Central

    Yildiz, Izzet B.; von Kriegstein, Katharina; Kiebel, Stefan J.

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. PMID:24068902
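
    As a very loose toy illustration of a "hierarchy of nonlinear dynamical systems", the sketch below lets a slowly relaxing hidden state set the instantaneous frequency of a fast oscillator that emits a sound-like feature. It is a caricature for intuition only, not the authors' birdsong model; recognition in the paper amounts to Bayesian inversion of a generative hierarchy of this general shape.

```python
# Toy two-level dynamical hierarchy: a slow state modulates a fast oscillator.
# Purely illustrative; not the model from the paper.
import numpy as np

dt, T = 1e-3, 2.0
steps = int(T / dt)
slow, theta = 1.0, 0.0
feature = np.empty(steps)
for i in range(steps):
    slow += dt * 0.5 * (2.0 - slow)          # slow state relaxes toward 2.0
    theta += dt * 2 * np.pi * (5.0 * slow)   # fast frequency set by slow state
    feature[i] = np.sin(theta)               # emitted "sound feature"
# A recognizer would invert this generative process, e.g. by Bayesian
# filtering of `feature` to recover the slow trajectory.
```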

  3. From birdsong to human speech recognition: bayesian inference on a hierarchy of nonlinear dynamical systems.

    PubMed

    Yildiz, Izzet B; von Kriegstein, Katharina; Kiebel, Stefan J

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments.

  4. A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2

    PubMed Central

    Fedorenko, Evelina; Morgan, Angela; Murray, Elizabeth; Cardinaux, Annie; Mei, Cristina; Tager-Flusberg, Helen; Fisher, Simon E; Kanwisher, Nancy

    2016-01-01

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders. PMID:26173965

  5. The McGurk effect in children with autism and Asperger syndrome.

    PubMed

    Bebko, James M; Schroeder, Jessica H; Weiss, Jonathan A

    2014-02-01

    Children with autism may have difficulties in audiovisual speech perception, which has been linked to speech perception and language development. However, little has been done to examine children with Asperger syndrome as a group on tasks assessing audiovisual speech perception, despite this group's often greater language skills. Samples of children with autism, Asperger syndrome, and Down syndrome, as well as a typically developing sample, were presented with an auditory-only condition, a speech-reading condition, and an audiovisual condition designed to elicit the McGurk effect. Children with autism demonstrated unimodal performance at the same level as the other groups, yet showed a lower rate of the McGurk effect compared with the Asperger, Down and typical samples. These results suggest that children with autism may have unique intermodal speech perception difficulties linked to their representations of speech sounds.

  6. Investigations in mechanisms and strategies to enhance hearing with cochlear implants

    NASA Astrophysics Data System (ADS)

    Churchill, Tyler H.

    Cochlear implants (CIs) produce hearing sensations by stimulating the auditory nerve (AN) with current pulses whose amplitudes are modulated by filtered acoustic temporal envelopes. While this technology has provided hearing for multitudinous CI recipients, even bilaterally-implanted listeners have more difficulty understanding speech in noise and localizing sounds than normal hearing (NH) listeners. Three studies reported here have explored ways to improve electric hearing abilities. Vocoders are often used to simulate CIs for NH listeners. Study 1 was a psychoacoustic vocoder study examining the effects of harmonic carrier phase dispersion and simulated CI current spread on speech intelligibility in noise. Results showed that simulated current spread was detrimental to speech understanding and that speech vocoded with carriers whose components' starting phases were equal was the least intelligible. Cross-correlogram analyses of AN model simulations confirmed that carrier component phase dispersion resulted in better neural envelope representation. Localization abilities rely on binaural processing mechanisms in the brainstem and mid-brain that are not fully understood. In Study 2, several potential mechanisms were evaluated based on the ability of metrics extracted from stereo AN simulations to predict azimuthal locations. Results suggest that unique across-frequency patterns of binaural cross-correlation may provide a strong cue set for lateralization and that interaural level differences alone cannot explain NH sensitivity to lateral position. While it is known that many bilateral CI users are sensitive to interaural time differences (ITDs) in low-rate pulsatile stimulation, most contemporary CI processing strategies use high-rate, constant-rate pulse trains. In Study 3, we examined the effects of pulse rate and pulse timing on ITD discrimination, ITD lateralization, and speech recognition by bilateral CI listeners. Results showed that listeners were able to use low-rate pulse timing cues presented redundantly on multiple electrodes for ITD discrimination and lateralization of speech stimuli even when mixed with high rates on other electrodes. These results have contributed to a better understanding of those aspects of the auditory system that support speech understanding and binaural hearing, suggested vocoder parameters that may simulate aspects of electric hearing, and shown that redundant, low-rate pulse timing supports improved spatial hearing for bilateral CI listeners.
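
    The vocoder manipulation in Study 1 can be sketched in a few lines: band-pass the signal, extract each band's temporal envelope, and re-impose the envelopes on sine carriers whose starting phases are either all equal or dispersed. The band count, sampling rate, and filter choices below are illustrative assumptions, not the study's parameters.

```python
# Sketch of a tone vocoder with controllable carrier starting phases,
# in the spirit of CI simulations. Parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, n_bands=8, lo=100.0, hi=4000.0, phases=None):
    edges = np.geomspace(lo, hi, n_bands + 1)        # log-spaced band edges
    phases = np.zeros(n_bands) if phases is None else phases
    t = np.arange(len(x)) / fs
    y = np.zeros_like(x)
    for k in range(n_bands):
        sos = butter(4, [edges[k], edges[k + 1]], btype="band",
                     fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                   # analysis band
        env = np.abs(hilbert(band))                  # temporal envelope
        fc = np.sqrt(edges[k] * edges[k + 1])        # geometric centre frequency
        y += env * np.sin(2 * np.pi * fc * t + phases[k])
    return y

fs = 16000
rng = np.random.default_rng(3)
x = rng.normal(size=fs)                              # stand-in for a speech signal
y_equal = vocode(x, fs)                                           # equal phases
y_dispersed = vocode(x, fs, phases=rng.uniform(0, 2 * np.pi, 8))  # dispersed
```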

  7. Vulnerability to Bullying in Children with a History of Specific Speech and Language Difficulties

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie E.; Mackie, Clare

    2008-01-01

    This study examined the susceptibility to problems with peer relationships and being bullied in a UK sample of 12-year-old children with a history of specific speech and language difficulties. Data were derived from the children's self-reports and the reports of parents and teachers using measures of victimization, emotional and behavioral…

  8. Worster-Drought Syndrome: Poorly Recognized despite Severe and Persistent Difficulties with Feeding and Speech

    ERIC Educational Resources Information Center

    Clark, Maria; Harris, Rebecca; Jolleff, Nicola; Price, Katie; Neville, Brian G. R.

    2010-01-01

    Aim: Worster-Drought syndrome (WDS), or congenital suprabulbar paresis, is a permanent movement disorder of the bulbar muscles causing persistent difficulties with swallowing, feeding, speech, and saliva control owing to a non-progressive disturbance in early brain development. As such, it falls within the cerebral palsies. The aim of this study…

  9. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    ERIC Educational Resources Information Center

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  10. Practices of Other-Initiated Repair in the Classrooms of Children with Specific Speech and Language Difficulties

    ERIC Educational Resources Information Center

    Radford, Julie

    2010-01-01

    Repair practices used by teachers who work with children with specific speech and language difficulties (SSLDs) have hitherto remained largely unexplored. Such classrooms therefore offer a new context for researching repairs and considering how they compare with non-SSLD interactions. Repair trajectories are of interest because they are dialogic…

  11. Working memory predicts semantic comprehension in dichotic listening in older adults.

    PubMed

    James, Philip J; Krishnan, Saloni; Aydelott, Jennifer

    2014-10-01

    Older adults have difficulty understanding spoken language in the presence of competing voices. Everyday social situations involving multiple simultaneous talkers may become increasingly challenging in later life due to changes in the ability to focus attention. This study examined whether individual differences in cognitive function predict older adults' ability to access sentence-level meanings in competing speech using a dichotic priming paradigm. Older listeners showed faster responses to words that matched the meaning of spoken sentences presented to the left or right ear, relative to a neutral baseline. However, older adults were more vulnerable than younger adults to interference from competing speech when the competing signal was presented to the right ear. This pattern of performance was strongly correlated with a non-auditory working memory measure, suggesting that cognitive factors play a key role in semantic comprehension in competing speech in healthy aging.

  12. Particularities of Speech Readiness for Schooling in Pre-School Children Having General Speech Underdevelopment: A Social and Pedagogical Aspect

    ERIC Educational Resources Information Center

    Emelyanova, Irina A.; Borisova, Elena A.; Shapovalova, Olga E.; Karynbaeva, Olga V.; Vorotilkina, Irina M.

    2018-01-01

    The relevance of the research stems from the necessity of creating pedagogical conditions for the correction and development of speech in children with general speech underdevelopment. Such children characteristically have difficulty generating a coherent utterance, which prevents them from forming sufficient speech readiness for schooling, as well…

  13. A dynamic auditory-cognitive system supports speech-in-noise perception in older adults

    PubMed Central

    Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina

    2013-01-01

    Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of the auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we use structural equation modeling to evaluate interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55 to 79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. PMID:23541911
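
    For readers unfamiliar with structural equation modeling, the sketch below shows how a model of this general shape (latent cognitive and central-processing factors predicting speech-in-noise ability) might be specified. It assumes the third-party semopy package with lavaan-style syntax; all variable names and simulated data are hypothetical placeholders, not the study's model or measures.

```python
# Sketch of an SEM in the spirit described above, using semopy (assumed).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(4)
n = 120
cog = rng.normal(size=n)     # latent cognitive ability (simulated)
cen = rng.normal(size=n)     # latent central processing (simulated)
df = pd.DataFrame({
    "memory": cog + rng.normal(0, 0.5, n),
    "attention": cog + rng.normal(0, 0.5, n),
    "speed": cog + rng.normal(0, 0.5, n),
    "brainstem_timing": cen + rng.normal(0, 0.5, n),
    "brainstem_pitch": cen + rng.normal(0, 0.5, n),
    "peripheral_hearing": rng.normal(size=n),
    "speech_in_noise": 0.6 * cog + 0.5 * cen + rng.normal(0, 0.5, n),
})

desc = """
Cognition =~ memory + attention + speed
Central =~ brainstem_timing + brainstem_pitch
speech_in_noise ~ Cognition + Central + peripheral_hearing
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())       # path estimates
```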

  14. Effect of Dialect on Identification and Severity of Speech Impairment in Indigenous Australian Children

    ERIC Educational Resources Information Center

    Toohill, Bethany J.; Mcleod, Sharynne; Mccormack, Jane

    2012-01-01

    This study investigated the effect of dialectal difference on identification and rating of severity of speech impairment in children from Indigenous Australian backgrounds. The speech of 15 Indigenous Australian children identified by their parents/caregivers and teachers as having "difficulty talking and making speech sounds" was…

  15. Speech Characteristics and Intelligibility in Adults with Mild and Moderate Intellectual Disabilities

    PubMed Central

    Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.

    2017-01-01

    Purpose Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637

  16. Speech and oromotor outcome in adolescents born preterm: relationship to motor tract integrity.

    PubMed

    Northam, Gemma B; Liégeois, Frédérique; Chong, Wui K; Baker, Kate; Tournier, Jacques-Donald; Wyatt, John S; Baldeweg, Torsten; Morgan, Angela

    2012-03-01

    To assess speech abilities in adolescents born preterm and investigate whether there is an association between specific speech deficits and brain abnormalities. Fifty adolescents born prematurely (<33 weeks' gestation) with a spectrum of brain injuries were recruited (mean age, 16 years). Speech examination included tests of speech-sound processing and production and speech and oromotor control. Conventional magnetic resonance imaging and diffusion-weighted imaging were acquired in all adolescents born preterm and in 30 term-born control subjects. Radiological ratings of brain injury were recorded and the integrity of the primary motor projections was measured (corticospinal tract and speech-motor corticobulbar tract [CST/CBT]). There were no clinical diagnoses of developmental dysarthria, dyspraxia, or a speech-sound disorder, but difficulties in speech and oromotor control were common. A regression analysis revealed that the presence of a neurologic impairment and diffusion-weighted imaging abnormalities in the left CST/CBT were significant independent predictors of poor speech and oromotor outcome. These left-lateralized abnormalities were most evident at the level of the posterior limb of the internal capsule. Difficulties in speech and oromotor control are common in adolescents born preterm, and adolescents with injury to the CST/CBT pathways in the left hemisphere may be most at risk.

  17. Le langage ecrit: Actes du 6e colloque d'orthophonie/logopedie (Written Language: Proceedings of the Sixth Colloquium on Speech Therapy/Speech Pathology) (Neuchatel, Switzerland, September 21-22, 2000).

    ERIC Educational Resources Information Center

    de Weck, Genevieve, Ed.; Sovilla, Jocelyne Buttet, Ed.

    This collection of papers discusses various theoretical, clinical, and assessment issues in reading and writing delays and disorders. Topics include the following: integrating different theoretical approaches (cognitive psychology, neuropsychology, constructivism) into clinical approaches to reading and writing difficulties; difficulties of…

  18. Language learning impairments: integrating basic science, technology, and remediation.

    PubMed

    Tallal, P; Merzenich, M M; Miller, S; Jenkins, W

    1998-11-01

    One of the fundamental goals of the modern field of neuroscience is to understand how neuronal activity gives rise to higher cortical function. However, to bridge the gap between neurobiology and behavior, we must understand higher cortical functions at the behavioral level at least as well as we have come to understand neurobiological processes at the cellular and molecular levels. This is certainly the case in the study of speech processing, where critical studies of behavioral dysfunction have provided key insights into the basic neurobiological mechanisms relevant to speech perception and production. Much of this progress derives from a detailed analysis of the sensory, perceptual, cognitive, and motor abilities of children who fail to acquire speech, language, and reading skills normally within the context of otherwise normal development. Current research now shows that a dysfunction in normal phonological processing, which is critical to the development of oral and written language, may derive, at least in part, from difficulties in perceiving and producing basic sensory-motor information in rapid succession, within tens of ms (see Tallal et al. 1993a for a review). There is now substantial evidence supporting the hypothesis that basic temporal integration processes play a fundamental role in establishing neural representations for the units of speech (phonemes), which must be segmented from the (continuous) speech stream and combined to form words, in order for the normal development of oral and written language to proceed. Results from magnetic resonance imaging (MRI) and positron emission tomography (PET) studies, as well as studies of behavioral performance in normal and language impaired children and adults, will be reviewed to support the view that the integration of rapidly changing successive acoustic events plays a primary role in phonological development and disorders. Finally, remediation studies based on this research, coupled with neuroplasticity research, will be presented.

  19. Relation between speech-in-noise threshold, hearing loss and cognition from 40-69 years of age.

    PubMed

    Moore, David R; Edmondson-Jones, Mark; Dawes, Piers; Fortnum, Heather; McCormack, Abby; Pierzycki, Robert H; Munro, Kevin J

    2014-01-01

    Healthy hearing depends on sensitive ears and adequate brain processing. Essential aspects of both hearing and cognition decline with advancing age, but it is largely unknown how one influences the other. The current standard measure of hearing, the pure-tone audiogram, is not very cognitively demanding and does not predict well the most important yet challenging use of hearing: listening to speech in noisy environments. We analysed data from UK Biobank that asked 40-69 year olds about their hearing, and assessed their ability on tests of speech-in-noise hearing and cognition. About half a million volunteers were recruited through NHS registers. Respondents completed 'whole-body' testing in purpose-designed, community-based test centres across the UK. Objective hearing (spoken digit recognition in noise) and cognitive (reasoning, memory, processing speed) data were analysed using logistic and multiple regression methods. Speech hearing in noise declined exponentially with age for both sexes from about 50 years, differing from previous audiogram data that showed a more linear decline from <40 years for men, and consistently less hearing loss for women. The decline in speech-in-noise hearing was especially dramatic among those with lower cognitive scores. Decreasing cognitive ability and increasing age were both independently associated with decreasing ability to hear speech-in-noise (0.70 and 0.89 dB, respectively) among the population studied. Men subjectively reported up to 60% higher rates of difficulty hearing than women. Workplace noise history was associated with difficulty in both subjective hearing and objective speech hearing in noise. Leisure noise history was associated with subjective, but not objective, difficulty hearing. Older people have declining cognitive processing ability associated with reduced ability to hear speech in noise, measured by recognition of recorded spoken digits. Subjective reports of hearing difficulty generally show a higher prevalence than objective measures, suggesting that current objective methods could be extended further.
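
    The regression logic reported above can be sketched as an ordinary least-squares model of speech-in-noise threshold on age and cognitive score. The data below are simulated stand-ins and the printed coefficients are illustrative, not the Biobank estimates (Python with statsmodels assumed).

```python
# Sketch: speech-in-noise threshold (dB) modeled on age and cognition.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5000
age = rng.uniform(40, 69, n)
cognition = rng.normal(0, 1, n) - 0.02 * (age - 55)   # mild age-related decline
srt = -8 + 0.09 * (age - 40) - 0.7 * cognition + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([age, cognition]))
fit = sm.OLS(srt, X).fit()
print(fit.params)   # intercept, dB per year of age, dB per cognition unit
```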

  20. Relation between Speech-in-Noise Threshold, Hearing Loss and Cognition from 40–69 Years of Age

    PubMed Central

    Moore, David R.; Edmondson-Jones, Mark; Dawes, Piers; Fortnum, Heather; McCormack, Abby; Pierzycki, Robert H.; Munro, Kevin J.

    2014-01-01

    Background Healthy hearing depends on sensitive ears and adequate brain processing. Essential aspects of both hearing and cognition decline with advancing age, but it is largely unknown how one influences the other. The current standard measure of hearing, the pure-tone audiogram, is not very cognitively demanding and does not predict well the most important yet challenging use of hearing: listening to speech in noisy environments. We analysed data from UK Biobank that asked 40–69 year olds about their hearing, and assessed their ability on tests of speech-in-noise hearing and cognition. Methods and Findings About half a million volunteers were recruited through NHS registers. Respondents completed ‘whole-body’ testing in purpose-designed, community-based test centres across the UK. Objective hearing (spoken digit recognition in noise) and cognitive (reasoning, memory, processing speed) data were analysed using logistic and multiple regression methods. Speech hearing in noise declined exponentially with age for both sexes from about 50 years, differing from previous audiogram data that showed a more linear decline from <40 years for men, and consistently less hearing loss for women. The decline in speech-in-noise hearing was especially dramatic among those with lower cognitive scores. Decreasing cognitive ability and increasing age were both independently associated with decreasing ability to hear speech-in-noise (0.70 and 0.89 dB, respectively) among the population studied. Men subjectively reported up to 60% higher rates of difficulty hearing than women. Workplace noise history was associated with difficulty in both subjective hearing and objective speech hearing in noise. Leisure noise history was associated with subjective, but not objective, difficulty hearing. Conclusions Older people have declining cognitive processing ability associated with reduced ability to hear speech in noise, measured by recognition of recorded spoken digits. Subjective reports of hearing difficulty generally show a higher prevalence than objective measures, suggesting that current objective methods could be extended further. PMID:25229622

  1. Management of developmental speech and language disorders: Part 1.

    PubMed

    O'Hare, Anne; Bremner, Lynne

    2016-03-01

    The identification of developmental problems in a child's acquisition of speech, language and/or communication is a core activity in child surveillance. These are common difficulties, with up to 15% of toddlers being 'late talkers' and 7% of children entering school with persisting impairments of their language development. These delays can confer disadvantages in the long term, adversely affecting language, cognition, academic attainment, behaviour and mental health. All children presenting with significant speech and language delay should be investigated with a comprehensive hearing assessment and be considered for speech and language therapy assessment. Socioeconomic adversity correlates with delayed language development. Clinical assessment should confirm that the presentation is definitely not acquired (see part 2) and will also guide whether the difficulty is primary, in which there are often familial patterns, or secondary, arising from a very wide range of aetiologies. Symptoms may be salient, such as the regression of communication in children under 3 years of age, which 'flags up' autism spectrum disorder. Further investigation will be informed by this clinical assessment, for example, genetic investigation for sex aneuploidies in enduring primary difficulties. Management of the speech and language difficulty itself is the realm of the speech and language therapist, who has an ever-increasing evidence-based choice of interventions. This should take place within a multidisciplinary team, particularly for children with more severe conditions who may benefit from individualised parental and educational supports.

  2. Speech perception in noise in unilateral hearing loss.

    PubMed

    Mondelli, Maria Fernanda Capoani Garcia; Dos Santos, Marina de Marchi; José, Maria Renata

    2016-01-01

    Unilateral hearing loss is characterized by a decrease of hearing in one ear only. In the presence of ambient noise, individuals with unilateral hearing loss face greater difficulties understanding speech than normal listeners. To evaluate the speech perception of individuals with unilateral hearing loss, with and without competing noise, before and after the hearing aid fitting process. The study included 30 adults of both genders diagnosed with moderate or severe sensorineural unilateral hearing loss, using the Hearing In Noise Test-Brazil in the following scenarios: silence, frontal noise, noise to the right, and noise to the left, before and after the hearing aid fitting process. The study participants had a mean age of 41.9 years and most of them presented right unilateral hearing loss. In all scenarios evaluated with the Hearing In Noise Test, better performance in speech perception was observed with the use of hearing aids. In the Hearing In Noise Test-Brazil evaluation, individuals with unilateral hearing loss demonstrated better speech perception when using hearing aids, both in silence and in situations with competing noise.

  3. Cognitive Compensation of Speech Perception With Hearing Impairment, Cochlear Implants, and Aging

    PubMed Central

    Clarke, Jeanne; Pals, Carina; Benard, Michel R.; Bhargava, Pranesh; Saija, Jefta; Sarampalis, Anastasios; Wagner, Anita; Gaudrain, Etienne

    2016-01-01

    External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) visual world paradigm and eye gazing as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation, there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow the effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated—not always absent, but also not easily predicted by speech intelligibility tests only.

  4. Speech Rate Entrainment in Children and Adults With and Without Autism Spectrum Disorder.

    PubMed

    Wynn, Camille J; Borrie, Stephanie A; Sellers, Tyra P

    2018-05-03

    Conversational entrainment, a phenomenon whereby people modify their behaviors to match their communication partner, has been evidenced as critical to successful conversation. It is plausible that deficits in entrainment contribute to the conversational breakdowns and social difficulties exhibited by people with autism spectrum disorder (ASD). This study examined speech rate entrainment in child and adult populations with and without ASD. Sixty participants, including typically developing children, children with ASD, typically developed adults, and adults with ASD, participated in a quasi-conversational paradigm with a pseudoconfederate. The confederate's speech rate was digitally manipulated to create slow and fast speech rate conditions. Typically developed adults entrained their speech rate in the quasi-conversational paradigm, using a faster rate during the fast speech rate conditions and a slower rate during the slow speech rate conditions. This entrainment pattern was not evident in adults with ASD or in either child group. Findings suggest that speech rate entrainment is a developmentally acquired skill and offer preliminary evidence of speech rate entrainment deficits in adults with ASD. Impairments in this area may contribute to the conversational breakdowns and social difficulties experienced by this population. Future work is needed to advance this area of inquiry.

  5. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise

    PubMed Central

    Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath

    2017-01-01

    Purpose Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). Method Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. Results Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. Conclusion Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interferences. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing. PMID:28586824

  6. Current management for word finding difficulties by speech-language therapists in South African remedial schools.

    PubMed

    de Rauville, Ingrid; Chetty, Sandhya; Pahl, Jenny

    2006-01-01

    The management of word finding difficulties, which are frequently found in learners with language learning difficulties (Casby, 1992), is an integral part of speech-language therapists' role when working with learning-disabled children. This study investigated current management of word finding difficulties by 70 speech-language therapists in South African remedial schools. A descriptive survey design using a quantitative and qualitative approach was used. A questionnaire and follow-up focus group discussion were used to collect data. Results highlighted the use of the Renfrew Word Finding Scale (Renfrew, 1972, 1995) as the most frequently used formal assessment tool. Language sample analysis and discourse analysis were the most frequently used informal assessment procedures. Formal intervention programmes were generally not used. Phonetic, phonemic or phonological cueing were the most frequently used therapeutic strategies. The authors note strengths and raise concerns about current management of word finding difficulties in South African remedial schools, particularly in terms of bilingualism. Opportunities are highlighted regarding the development of assessment and intervention measures relevant to the diverse learning-disabled population in South Africa.

  7. Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance.

    PubMed

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-02-01

    To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette, Gudmundsen, Revit, & Banerjee, 2004) and pure-tone hearing thresholds. Participants included 111 middle- to older-age adults (range = 45-78) with audiometric configurations ranging from normal hearing levels to moderate sensorineural hearing loss. In addition to using audiometric testing, the authors also used such evaluation measures as the QuickSIN, the SSQ, and the cABR. Multiple linear regression analysis indicated that the inclusion of brainstem variables in a model with QuickSIN, hearing thresholds, and age accounted for 30% of the variance in the Speech subtest of the SSQ, compared with significantly less variance (19%) when brainstem variables were not included. The authors' results demonstrate the cABR's efficacy for predicting self-reported speech-in-noise perception difficulties. The fact that the cABR predicts more variance in self-reported speech-in-noise (SIN) perception than either the QuickSIN or hearing thresholds indicates that the cABR provides additional insight into an individual's ability to hear in background noise. In addition, the findings underscore the link between the cABR and hearing in noise.
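
    The variance-explained comparison described above amounts to fitting nested regressions with and without the brainstem predictors and comparing R². The sketch below does this on simulated placeholder data; variable names and effect sizes are invented, not the study's measurements.

```python
# Sketch: nested regressions of self-reported SSQ speech score, with and
# without a cABR predictor. Simulated placeholder data throughout.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 111
age = rng.uniform(45, 78, n)
thresholds = rng.normal(25, 10, n)      # pure-tone average (dB HL)
quicksin = rng.normal(3, 2, n)          # QuickSIN SNR loss
cabr = rng.normal(0, 1, n)              # brainstem timing measure
ssq = 8 - 0.2 * quicksin - 0.03 * thresholds + 0.5 * cabr + rng.normal(0, 1, n)

base = sm.OLS(ssq, sm.add_constant(
    np.column_stack([quicksin, thresholds, age]))).fit()
full = sm.OLS(ssq, sm.add_constant(
    np.column_stack([quicksin, thresholds, age, cabr]))).fit()
print(f"R² without cABR = {base.rsquared:.2f}, with cABR = {full.rsquared:.2f}")
```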

  8. Private Speech Use in Arithmetical Calculation: Contributory Role of Phonological Awareness in Children with and without Mathematical Difficulties

    ERIC Educational Resources Information Center

    Ostad, Snorre A.

    2013-01-01

    The majority of recent studies conclude that children's private speech development (private speech internalization) is related to and important for mathematical development and disabilities. It is far from clear, however, whether private speech internalization itself plays any causal role in the development of mathematical competence. The main…

  9. Is use of laser really essential for release of tongue-tie?

    PubMed

    Sane, Vikrant Dilip; Pawar, Sudhir; Modi, Sachin; Saddiwal, Rashmi; Khade, Mayur; Tendulkar, Hrishikesh

    2014-05-01

    Ankyloglossia, or tongue-tie, is a congenital condition characterized by a short, thickened, or abnormally tight lingual frenulum. This anomaly can cause a varying degree of reduced tongue mobility and has been associated with functional limitations including breastfeeding difficulties, atypical swallowing habits, speech articulation problems, mechanical problems such as inability to clean the oral cavity, and psychosocial stress. In this article, we report a 50-year-old female patient with tongue-tie who had difficulty with speech and with maintaining oral hygiene due to high attachment of the lingual frenulum. The patient was managed with a frenectomy performed by the conventional method (scalpel and blade) under local anesthesia as an outpatient procedure, without any complications. She later required speech therapy lessons for improvement of her speech.

  10. Computer-based auditory training (CBAT): benefits for children with language- and reading-related learning difficulties.

    PubMed

    Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M

    2010-08-01

    This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent to which it can benefit children with auditory processing disorder (APD). Searches were confined to studies published between 2000 and 2008, and studies are rated according to the level-of-evidence hierarchy proposed by the American Speech-Language-Hearing Association (ASHA) in 2004. We identified 16 studies of two commercially available CBAT programs (13 studies of Fast ForWord (FFW) and three studies of Earobics) and five further outcome studies of other non-speech and simple speech sounds training available for children with language, learning, and reading difficulties. The results suggest that, apart from phonological awareness skills, the FFW and Earobics programs seem to have little effect on the language, spelling, and reading skills of children. Non-speech and simple speech sounds training may be effective in improving children's reading skills, but only if it is delivered by an audio-visual method. There is some initial evidence to suggest that CBAT may be of benefit for children with APD. Further research is necessary, however, to substantiate these preliminary findings.

  11. Atypical preference for infant-directed speech as an early marker of autism spectrum disorders? A literature review and directions for further research.

    PubMed

    Filipe, Marisa G; Watson, Linda; Vicente, Selene G; Frota, Sónia

    2018-01-01

    Autism spectrum disorders (ASD) refer to a complex group of neurodevelopmental disorders causing difficulties with communication and interpersonal relationships, as well as restricted and repetitive behaviours and interests. As early identification, diagnosis, and intervention provide better long-term outcomes, early markers of ASD have gained increased research attention. This review examines evidence that auditory processing enhanced by social interest, in particular an auditory preference for speech directed towards infants and young children (infant-directed speech, IDS), may be an early marker of risk for ASD. Although this review provides evidence for IDS preference as a potential early marker of ASD, the explanation for differences in IDS processing among children with ASD versus other children remains unclear, as are the implications of these impairments for later social-communicative development. Therefore, it is crucial to explore atypicalities in IDS processing early in development and to understand whether preferential listening to specific types of speech sounds in the first years of life may help to predict impairments in social and language development.

  12. Policy-to-Practice Context to the Delays and Difficulties in the Acquisition of Speech, Language and Communication in the Early Years

    ERIC Educational Resources Information Center

    Blackburn, Carolyn; Aubrey, Carol

    2016-01-01

    The aim was to investigate the policy-to-practice context of delays and difficulties in the acquisition of speech, language and communication (SLC) in children from birth to five in one local authority within the context of Bronfenbrenner's bio-ecological model. Methods included a survey of early years practitioners (64 responses), interviews with…

  13. Auditory function in children with Charcot-Marie-Tooth disease.

    PubMed

    Rance, Gary; Ryan, Monique M; Bayliss, Kristen; Gill, Kathryn; O'Sullivan, Caitlin; Whitechurch, Marny

    2012-05-01

    The peripheral manifestations of the inherited neuropathies are increasingly well characterized, but their effects upon cranial nerve function are not well understood. Hearing loss is recognized in a minority of children with this condition, but has not previously been systematically studied. A clear understanding of the prevalence and degree of auditory difficulties in this population is important, as hearing impairment can impact upon speech/language development, social interaction ability and educational progress. The aim of this study was to investigate auditory pathway function, speech perception ability and everyday listening and communication in a group of school-aged children with inherited neuropathies. Twenty-six children with Charcot-Marie-Tooth disease confirmed by genetic testing and physical examination participated. Eighteen had demyelinating neuropathies (Charcot-Marie-Tooth type 1) and eight had the axonal form (Charcot-Marie-Tooth type 2). While each subject had normal or near-normal sound detection, individuals in both disease groups showed electrophysiological evidence of auditory neuropathy with delayed or low amplitude auditory brainstem responses. Auditory perception was also affected, with >60% of subjects with Charcot-Marie-Tooth type 1 and >85% of those with Charcot-Marie-Tooth type 2 suffering impaired processing of auditory temporal (timing) cues and/or abnormal speech understanding in everyday listening conditions.

  14. Prosody perception and musical pitch discrimination in adults using cochlear implants.

    PubMed

    Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine

    2015-07-01

    This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (profiling elements of prosody in speech-communication), the adult paralanguage subtest of the DANVA 2 (diagnostic analysis of nonverbal accuracy 2), and the contour and interval subtests of the MBEA (Montreal battery of evaluation of amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly more poorly than normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but were not different from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.

  15. Melodic Contour Identification and Music Perception by Cochlear Implant Users

    PubMed Central

    Galvin, John J.; Fu, Qian-Jie; Shannon, Robert V.

    2013-01-01

    Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically-experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occur with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation. PMID:19673835

  16. Speech sound disorder at 4 years: prevalence, comorbidities, and predictors in a community cohort of children.

    PubMed

    Eadie, Patricia; Morgan, Angela; Ukoumunne, Obioha C; Ttofari Eecen, Kyriaki; Wake, Melissa; Reilly, Sheena

    2015-06-01

    The epidemiology of preschool speech sound disorder is poorly understood. Our aims were to determine: the prevalence of idiopathic speech sound disorder; the comorbidity of speech sound disorder with language and pre-literacy difficulties; and the factors contributing to speech outcome at 4 years. One thousand four hundred and ninety-four participants from an Australian longitudinal cohort completed speech, language, and pre-literacy assessments at 4 years. Prevalence of speech sound disorder (SSD) was defined by standard score performance of ≤79 on a speech assessment. Logistic regression examined predictors of SSD within four domains: child and family; parent-reported speech; cognitive-linguistic; and parent-reported motor skills. At 4 years the prevalence of speech disorder in an Australian cohort was 3.4%. Comorbidity with SSD was 40.8% for language disorder and 20.8% for poor pre-literacy skills. Sex, maternal vocabulary, socio-economic status, and family history of speech and language difficulties predicted SSD, as did 2-year speech, language, and motor skills. Together these variables provided good discrimination of SSD (area under the curve=0.78). This is the first epidemiological study to demonstrate a prevalence of SSD at 4 years of age consistent with previous clinical studies. Early detection of SSD at 4 years should focus on family variables and speech, language, and motor skills measured at 2 years.
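
    To make the reported analysis concrete, the sketch below mimics the approach described: a logistic regression of SSD status on a handful of predictors, with discrimination summarized as the area under the ROC curve (the study reports 0.78). The predictor names, coefficients, and synthetic data are hypothetical placeholders, not the cohort's variables.

    ```python
    # Illustrative sketch only: predicting speech sound disorder (SSD) at 4 years
    # from earlier child/family measures with logistic regression, summarizing
    # discrimination with the area under the ROC curve (AUC). Synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1494  # cohort size reported in the abstract
    X = rng.normal(size=(n, 4))  # stand-ins for sex, maternal vocabulary, SES, family history
    logit = -3.3 + X @ np.array([0.5, -0.4, -0.3, 0.6])  # gives roughly 3-4% prevalence
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"AUC = {auc:.2f}")  # the study reported AUC = 0.78
    ```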

  17. Results of the Sensory Profile in Children with Suspected Childhood Apraxia of Speech

    ERIC Educational Resources Information Center

    Newmeyer, Amy J.; Grether, Sandra; Aylward, Christa; deGrauw, Ton; Akers, Rachel; Grasha, Carol; Ishikawa, Keiko; White, Jaye

    2009-01-01

    Speech-sound disorders are common in preschool-age children, and are characterized by difficulty in the planning and production of speech sounds and their combination into words and sentences. The objective of this study was to review and compare the results of the "Sensory Profile" (Dunn, 1999) in children with a specific type of speech-sound…

  18. Cognitive Flexibility in Children with and without Speech Disorder

    ERIC Educational Resources Information Center

    Crosbie, Sharon; Holm, Alison; Dodd, Barbara

    2009-01-01

    Most children's speech difficulties are "functional" (i.e. no known sensory, motor or intellectual deficits). Speech disorder may, however, be associated with cognitive deficits considered core abilities in executive function: rule abstraction and cognitive flexibility. The study compares the rule abstraction and cognitive flexibility of…

  19. Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: a new look at a perceptual hypothesis

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16); the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286

  20. Early speech development in Koolen de Vries syndrome limited by oral praxis and hypotonia.

    PubMed

    Morgan, Angela T; Haaften, Leenke van; van Hulst, Karen; Edley, Carol; Mei, Cristina; Tan, Tiong Yang; Amor, David; Fisher, Simon E; Koolen, David A

    2018-01-01

    Communication disorder is common in Koolen de Vries syndrome (KdVS), yet its specific symptomatology has not been examined, limiting prognostic counselling and application of targeted therapies. Here we examine the communication phenotype associated with KdVS. Twenty-nine participants (12 males, 4 with KANSL1 variants, 25 with 17q21.31 microdeletion), aged 1.0-27.0 years, were assessed for oral-motor, speech, language, literacy, and social functioning. Early history included hypotonia and feeding difficulties. Speech and language development was delayed and atypical from onset of first words (2;5-3;5 years of age on average). Speech was characterised by apraxia (100%) and dysarthria (93%), with stuttering in some (17%). Speech therapy and multi-modal communication (e.g., sign language) were critical in preschool. Receptive and expressive language abilities were typically commensurate (79%), both being severely affected relative to peers. Children were sociable with a desire to communicate, although some (36%) had pragmatic impairments in domains where higher-level language was required. A common phenotype was identified, including an overriding 'double hit' of oral hypotonia and apraxia in infancy and preschool, associated with severely delayed speech development. Remarkably, however, speech prognosis was positive: apraxia resolved, and although dysarthria persisted, children were intelligible by mid-to-late childhood. In contrast, language and literacy deficits persisted, and pragmatic deficits were apparent. Children with KdVS require early, intensive, speech motor and language therapy, with targeted literacy and social language interventions as developmentally appropriate. Greater understanding of the linguistic phenotype may help unravel the relevance of KANSL1 to child speech and language development.

  1. University-School-Center Collaboration in Support of Identifying and Treating Minority Children with Hearing, Language, or Speech Difficulties: Fulfilling the Spirit of "No Child Left Behind"

    ERIC Educational Resources Information Center

    Kidd-Proctor, Kathleen; Herrington, David

    2006-01-01

    St. Martin Hall, a demonstration school affiliated with Our Lady of the Lake University in San Antonio, Texas, collaborated with the Harry Jersig Center to test students for speech and language difficulties or hearing loss. A significant number of the children were from economically disadvantaged homes. Most of the children were Hispanic.…

  2. Population Estimates, Health Care Characteristics, and Material Hardship Experiences of U.S. Children with Parent-Reported Speech-Language Difficulties: Evidence from Three Nationally Representative Surveys

    ERIC Educational Resources Information Center

    Sonik, Rajan A.; Parish, Susan L.; Akobirshoev, Ilhom; Son, Esther; Rosenthal, Eliana

    2014-01-01

    Purpose: To provide estimates for the prevalence of parent-reported speech-language difficulties in U.S. children, and to describe the levels of health care access and material hardship in this population. Method: We tabulated descriptive and bivariate statistics using cross-sectional data from the 2007 and 2011/2012 iterations of the National…

  3. A dynamic auditory-cognitive system supports speech-in-noise perception in older adults.

    PubMed

    Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina

    2013-06-01

    Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of an auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we used structural equation modeling to evaluate the interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55-79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise.
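
    A full structural equation model of this kind is normally fit with dedicated SEM software (e.g., lavaan or semopy) and includes latent variables and indirect paths. As a rough illustration of the predictive part of the model only, the sketch below regresses a speech-in-noise score on composite predictors with ordinary least squares; all variable names and data are synthetic placeholders, not the study's measures.

    ```python
    # Simplified stand-in for the structural equation model described above:
    # regressing speech-in-noise (SIN) performance on composite scores for
    # central processing, cognition, life experience, and peripheral hearing.
    # Purely illustrative synthetic data; no latent variables or indirect paths.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 120  # sample size reported in the abstract
    central = rng.normal(size=n)
    cognition = rng.normal(size=n)
    life_exp = rng.normal(size=n)
    hearing = rng.normal(size=n)  # contributes nothing here, echoing the null result
    sin = 0.5 * central + 0.4 * cognition + 0.2 * life_exp + rng.normal(scale=0.7, size=n)

    X = sm.add_constant(np.column_stack([central, cognition, life_exp, hearing]))
    fit = sm.OLS(sin, X).fit()
    print(fit.summary(xname=["const", "central", "cognition", "life_exp", "hearing"]))
    ```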

  4. Speech perception in older adults: the importance of speech-specific cognitive abilities.

    PubMed

    Sommers, M S

    1997-05-01

    To provide a critical evaluation of studies examining the contribution of changes in language-specific cognitive abilities to the speech perception difficulties of older adults. A review of the literature on aging and speech perception. The research considered in the present review suggests that age-related changes in absolute sensitivity are the principal factor affecting older listeners' speech perception in quiet. However, under less favorable listening conditions, changes in a number of speech-specific cognitive abilities can also affect spoken language processing in older people. Clinically, these findings suggest that hearing aids, which have been the traditional treatment for improving speech perception in older adults, are likely to offer considerable benefit in quiet listening situations because the amplification they provide can serve to compensate for age-related hearing losses. However, such devices may be less beneficial in more natural environments (e.g., noisy backgrounds, multiple talkers, reverberant rooms) because they are less effective for improving speech perception difficulties that result from age-related cognitive declines. It is suggested that an integrative approach to designing test batteries that can assess both sensory and cognitive abilities needed for processing spoken language offers the most promising approach for developing therapeutic interventions to improve speech perception in older adults.

  5. How Spoken Language Comprehension is Achieved by Older Listeners in Difficult Listening Situations.

    PubMed

    Schneider, Bruce A; Avivi-Reich, Meital; Daneman, Meredyth

    2016-01-01

    Comprehending spoken discourse in noisy situations is likely to be more challenging to older adults than to younger adults due to potential declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. These challenges might force older listeners to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up versus top-down processes to speech comprehension. The authors review studies that investigated the effect of age on listeners' ability to follow and comprehend lectures (monologues), and two-talker conversations (dialogues), and the extent to which individual differences in lexical knowledge and reading comprehension skill relate to individual differences in speech comprehension. Comprehension was evaluated after each lecture or conversation by asking listeners to answer multiple-choice questions regarding its content. Once individual differences in speech recognition for words presented in babble were compensated for, age differences in speech comprehension were minimized if not eliminated. However, younger listeners benefited more from spatial separation than did older listeners. Vocabulary knowledge predicted the comprehension scores of both younger and older listeners when listening was difficult, but not when it was easy. However, the contribution of reading comprehension to listening comprehension appeared to be independent of listening difficulty in younger adults but not in older adults. The evidence suggests (1) that most of the difficulties experienced by older adults are due to age-related auditory declines, and (2) that these declines, along with listening difficulty, modulate the degree to which selective linguistic and cognitive abilities are engaged to support listening comprehension in difficult listening situations. When older listeners experience speech recognition difficulties, their attentional resources are more likely to be deployed to facilitate lexical access, making it difficult for them to fully engage higher-order cognitive abilities in support of listening comprehension.

  6. Children with Speech and Language Disability: Caseload Characteristics

    ERIC Educational Resources Information Center

    Broomfield, Jan; Dodd, Barbara

    2004-01-01

    Background: There has been no previous incidence survey of children referred to a speech and language therapy service in the UK. Previous studies of prevalence of specific communication difficulties provide contradictory data from which it is difficult to plan speech and language therapy service provision. Reliable data are needed concerning the…

  7. Thickened Liquids: Practice Patterns of Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Garcia, Jane Mertz; Chambers, Edgar, IV; Molander, Michelle

    2005-01-01

    This study surveyed the practice patterns of speech-language pathologists in their use of thickened liquids for patients with swallowing difficulties. A 25-item Internet survey about thickened liquids was posted via an e-mail list to members of the American Speech-Language-Hearing Association Division 13, Swallowing and Swallowing Disorders…

  8. Preschoolers Benefit from Visually Salient Speech Cues

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  9. Annual Research Review: The nature and classification of reading disorders – a commentary on proposals for DSM-5

    PubMed Central

    Snowling, Margaret J; Hulme, Charles

    2012-01-01

    This article reviews our understanding of reading disorders in children and relates it to current proposals for their classification in DSM-5. There are two different, commonly occurring, forms of reading disorder in children which arise from different underlying language difficulties. Dyslexia (as defined in DSM-5), or decoding difficulty, refers to children who have difficulty in mastering the relationships between the spelling patterns of words and their pronunciations. These children typically read aloud inaccurately and slowly, and experience additional problems with spelling. Dyslexia appears to arise principally from a weakness in phonological (speech sound) skills, and there is good evidence that it can be ameliorated by systematic phonic teaching combined with phonological awareness training. The other major form of reading difficulty is reading comprehension impairment. These children read aloud accurately and fluently, but have difficulty understanding what they have read. Reading comprehension impairment appears to arise from weaknesses in a range of oral language skills including poor vocabulary knowledge, weak grammatical skills and difficulties in oral language comprehension. We suggest that the omission of reading comprehension impairment from DSM-5 is a serious one that should be remedied. Both dyslexia and reading comprehension impairment are dimensional in nature, and show strong continuities with other disorders of language. We argue that recognizing the continuities between reading and language disorders has important implications for assessment and treatment, and we note that the high rates of comorbidity between reading disorders and other seemingly disparate disorders (including ADHD and motor disorders) raises important challenges for understanding these disorders. PMID:22141434

  10. Neural Timing is Linked to Speech Perception in Noise

    PubMed Central

    Anderson, Samira; Skoe, Erika; Chandrasekaran, Bharath; Kraus, Nina

    2010-01-01

    Understanding speech in background noise is challenging for every listener, including those with normal peripheral hearing. This difficulty is due in part to the disruptive effects of noise on neural synchrony, resulting in degraded representation of speech at cortical and subcortical levels as reflected by electrophysiological responses. These problems are especially pronounced in clinical populations such as children with learning impairments. Given the established effects of noise on evoked responses, we hypothesized that listening-in-noise problems are associated with degraded processing of timing information at the brainstem level. Participants (66 children, ages 8 to 14 years, 22 females) were divided into groups based on their performance on clinical measures of speech-in-noise perception (SIN) and reading. We compared brainstem responses to speech syllables between top and bottom SIN and reading groups in the presence and absence of competing multi-talker babble. In the quiet condition, neural response timing was equivalent between groups. In noise, however, the bottom groups exhibited greater neural delays relative to the top groups. Group-specific timing delays occurred exclusively in response to the noise-vulnerable formant transition, not to the more perceptually-robust, steady-state portion of the stimulus. These results demonstrate that neural timing is disrupted by background noise and that greater disruptions are associated with the inability to perceive speech in challenging listening conditions. PMID:20371812

  11. Auditory perceptual simulation: Simulating speech rates or accents?

    PubMed

    Zhou, Peiyun; Christianson, Kiel

    2016-07-01

    When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rates, rather than the difficulty of simulating a non-native accent, are the primary mechanism underlying auditory perceptual simulation effects.

  12. Teacher Candidates' Mastery of Phoneme-Grapheme Correspondence: Massed versus Distributed Practice in Teacher Education

    ERIC Educational Resources Information Center

    Sayeski, Kristin L.; Earle, Gentry A.; Eslinger, R. Paige; Whitenton, Jessy N.

    2017-01-01

    Matching phonemes (speech sounds) to graphemes (letters and letter combinations) is an important aspect of decoding (translating print to speech) and encoding (translating speech to print). Yet, many teacher candidates do not receive explicit training in phoneme-grapheme correspondence. Difficulty with accurate phoneme production and/or lack of…

  13. Intensive Dysarthria Therapy for Older Children with Cerebral Palsy: Findings from Six Cases

    ERIC Educational Resources Information Center

    Pennington, Lindsay; Smallman, Claire; Farrier, Faith

    2006-01-01

    Children with cerebral palsy often have speech, language and communication difficulties that affect their access to social and educational activities. Speech and language therapy to improve the intelligibility of the speech of children with cerebral palsy has long been advocated, but there is a dearth of research investigating therapy…

  14. The Relationship between Speech Impairment, Phonological Awareness and Early Literacy Development

    ERIC Educational Resources Information Center

    Harris, Judy; Botting, Nicola; Myers, Lucy; Dodd, Barbara

    2011-01-01

    Although children with speech impairment are at increased risk for impaired literacy, many learn to read and spell without difficulty. Around half the children with speech impairment have delayed acquisition, making errors typical of a normally developing younger child (e.g. reducing consonant clusters so that "spoon" is pronounced as…

  15. A study of sound balances for the hard of hearing

    NASA Astrophysics Data System (ADS)

    Mathers, C. D.

    Over a period of years, complaints have been received from television viewers, especially those who are hard of hearing, that background sound (e.g., audience laughter, crowd noise, mood music) is often transmitted at too high a level with respect to speech, so that information essential to the understanding of the program is lost. To consider possible solutions to the problem, a working party was set up representing both broadcasters and organizations for the hard of hearing. At early meetings, it was resolved that a series of subjective tests should be carried out to determine what reduction of background levels would be needed to provide a significant improvement in the intelligibility of television speech for viewers with hearing difficulties. The preparation of test tapes and the analysis of results are given.

  16. When Does Speech Sound Disorder Matter for Literacy? The Role of Disordered Speech Errors, Co-Occurring Language Impairment and Family Risk of Dyslexia

    ERIC Educational Resources Information Center

    Hayiou-Thomas, Marianna E.; Carroll, Julia M.; Leavett, Ruth; Hulme, Charles; Snowling, Margaret J.

    2017-01-01

    Background: This study considers the role of early speech difficulties in literacy development, in the context of additional risk factors. Method: Children were identified with speech sound disorder (SSD) at the age of 3½ years, on the basis of performance on the Diagnostic Evaluation of Articulation and Phonology. Their literacy skills were…

  17. Association between muscle power impairment and WHODAS 2.0 in older adults with physical disability in Taiwan.

    PubMed

    Chang, Kwang-Hwa; Liao, Hua-Fang; Yen, Chia-Fan; Hwang, Ai-Wen; Chi, Wen-Chou; Escorpizo, Reuben; Liou, Tsan-Hon

    2015-01-01

    To explore the association between muscle power impairment and each World Health Organization Disability Assessment Schedule second edition (WHODAS 2.0) domain score among subjects with physical disability. Subjects (≥ 60 years) with physical disability related to neurological diseases, including 730 subjects with brain disease (BD) and 126 subjects with non-BD, were enrolled from a data bank of persons with disabilities from 1 July 2011 to 29 February 2012. Standardized WHODAS 2.0 scores ranging from 0 (least difficulty) to 100 (greatest difficulty) points were calculated for each domain. More than 50% of subjects with physical disability had the greatest difficulty in household activities and mobility. Muscle power impairment (adjusted odds ratios range among domains, 2.75-376.42, p < 0.001), age (1.38-4.81, p < 0.05), and speech impairment (1.94-5.80, p < 0.05) were associated with BD subjects experiencing the greatest difficulty in most WHODAS 2.0 domains. By contrast, only a few associated factors were identified for the non-BD group. Although the patterns of difficulty in most daily activities were similar between the BD and non-BD groups, factors associated with the difficulties differed between those two groups. Muscle power impairment, age and speech impairment were important factors associated with difficulties in subjects with BD-related physical disability. Older adults with physical disability often experience difficulties in household activities and mobility. Muscle power impairment is associated with difficulties in daily life in subjects with physical disability related to brain disease. Those subjects with brain disease who had older age, a greater degree of muscle power impairment, and the presence of speech impairment were at higher risk of experiencing difficulties in most daily activities.

  18. Aided and Unaided Speech Perception by Older Hearing Impaired Listeners

    PubMed Central

    Woods, David L.; Arbogast, Tanya; Doss, Zoe; Younus, Masood; Herron, Timothy J.; Yund, E. William

    2015-01-01

    The most common complaint of older hearing impaired (OHI) listeners is difficulty understanding speech in the presence of noise. However, tests of consonant-identification and sentence reception threshold (SeRT) provide different perspectives on the magnitude of impairment. Here we quantified speech perception difficulties in 24 OHI listeners in unaided and aided conditions by analyzing (1) consonant-identification thresholds and consonant confusions for 20 onset and 20 coda consonants in consonant-vowel-consonant (CVC) syllables presented at consonant-specific signal-to-noise (SNR) levels, and (2) SeRTs obtained with the Quick Speech in Noise Test (QSIN) and the Hearing in Noise Test (HINT). Compared to older normal hearing (ONH) listeners, nearly all unaided OHI listeners showed abnormal consonant-identification thresholds, abnormal consonant confusions, and reduced psychometric function slopes. Average elevations in consonant-identification thresholds exceeded 35 dB, correlated strongly with impairments in mid-frequency hearing, and were greater for hard-to-identify consonants. Advanced digital hearing aids (HAs) improved average consonant-identification thresholds by more than 17 dB, with significant HA benefit seen in 83% of OHI listeners. HAs partially normalized consonant-identification thresholds, reduced abnormal consonant confusions, and increased the slope of psychometric functions. Unaided OHI listeners showed much smaller elevations in SeRTs (mean 6.9 dB) than in consonant-identification thresholds and SeRTs in unaided listening conditions correlated strongly (r = 0.91) with identification thresholds of easily identified consonants. HAs produced minimal SeRT benefit (2.0 dB), with only 38% of OHI listeners showing significant improvement. HA benefit on SeRTs was accurately predicted (r = 0.86) by HA benefit on easily identified consonants. Consonant-identification tests can accurately predict sentence processing deficits and HA benefit in OHI listeners. PMID:25730423
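
    For readers unfamiliar with the psychometric-function measures referred to above (thresholds and slopes), the sketch below fits a logistic psychometric function to consonant-identification data. The logistic form, the chance floor of 1/20 (for a 20-alternative consonant set), and all data points are illustrative assumptions, not the study's fitting procedure or results.

    ```python
    # Fitting a logistic psychometric function to hypothetical consonant-
    # identification data, yielding the threshold (SNR at the inflection point)
    # and slope compared between unaided and aided conditions in such studies.
    import numpy as np
    from scipy.optimize import curve_fit

    def psychometric(snr, threshold, slope, chance=0.05):
        """Logistic function scaled from chance (1/20 consonants) to 1."""
        return chance + (1 - chance) / (1 + np.exp(-slope * (snr - threshold)))

    snr = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])           # dB SNR
    prop_correct = np.array([0.06, 0.15, 0.42, 0.71, 0.90, 0.96])  # hypothetical

    (threshold, slope), _ = curve_fit(
        lambda s, t, k: psychometric(s, t, k), snr, prop_correct, p0=[0.0, 0.5]
    )
    print(f"threshold = {threshold:.1f} dB SNR, slope = {slope:.2f} per dB")
    ```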

  19. Evaluation of Speech Recognition of Cochlear Implant Recipients Using Adaptive, Digital Remote Microphone Technology and a Speech Enhancement Sound Processing Algorithm.

    PubMed

    Wolfe, Jace; Morais, Mila; Schafer, Erin; Agrawal, Smita; Koch, Dawn

    2015-05-01

    Cochlear implant recipients often experience difficulty with understanding speech in the presence of noise. Cochlear implant manufacturers have developed sound processing algorithms designed to improve speech recognition in noise, and research has shown these technologies to be effective. Remote microphone technology utilizing adaptive, digital wireless radio transmission has also been shown to provide significant improvement in speech recognition in noise. There are no studies examining the potential improvement in speech recognition in noise when these two technologies are used simultaneously. The goal of this study was to evaluate the potential benefits and limitations associated with the simultaneous use of a sound processing algorithm designed to improve performance in noise (Advanced Bionics ClearVoice) and a remote microphone system that incorporates adaptive, digital wireless radio transmission (Phonak Roger). A two-by-two repeated-measures design was used to examine performance differences obtained without these technologies compared to the use of each technology separately as well as the simultaneous use of both technologies. Eleven Advanced Bionics (AB) cochlear implant recipients, ages 11 to 68 years. AzBio sentence recognition was measured in quiet and in the presence of classroom noise ranging in level from 50 to 80 dBA in 5-dB steps. Performance was evaluated in four conditions: (1) No ClearVoice and no Roger, (2) ClearVoice enabled without the use of Roger, (3) ClearVoice disabled with Roger enabled, and (4) simultaneous use of ClearVoice and Roger. Speech recognition in quiet was better than speech recognition in noise for all conditions. Use of ClearVoice and Roger each provided significant improvement in speech recognition in noise. The best performance in noise was obtained with the simultaneous use of ClearVoice and Roger. ClearVoice and Roger technology each improves speech recognition in noise, particularly when used at the same time. Because ClearVoice does not degrade performance in quiet settings, clinicians should consider recommending ClearVoice for routine, full-time use for AB implant recipients. Roger should be used in all instances in which remote microphone technology may assist the user in understanding speech in the presence of noise.

  20. Don’t speak too fast! Processing of fast rate speech in children with specific language impairment

    PubMed Central

    Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique

    2018-01-01

    Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old) with mainly expressive phonological disorders and with preserved comprehension and 16 age-matched TD children performed a judgment task on sentences produced 1) at normal rate, 2) at fast rate or 3) time-compressed. Sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall, children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally and artificially accelerated speech. The two groups do not significantly differ in normal rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception. PMID:29373610

  1. Nonspeech Oral Movements and Oral Motor Disorders: A Narrative Review.

    PubMed

    Kent, Ray D

    2015-11-01

    Speech and other oral functions such as swallowing have been compared and contrasted with oral behaviors variously labeled quasispeech, paraspeech, speechlike, and nonspeech, all of which overlap to some degree in neural control, muscles deployed, and movements performed. Efforts to understand the relationships among these behaviors are hindered by the lack of explicit and widely accepted definitions. This review article offers definitions and taxonomies for nonspeech oral movements and for diverse speaking tasks, both overt and covert. Review of the literature included searches of Medline, Google Scholar, HighWire Press, and various online sources. Search terms pertained to speech, quasispeech, paraspeech, speechlike, and nonspeech oral movements. Searches also were carried out for associated terms in oral biology, craniofacial physiology, and motor control. Nonspeech movements have a broad spectrum of clinical applications, including developmental speech and language disorders, motor speech disorders, feeding and swallowing difficulties, obstructive sleep apnea syndrome, trismus, and tardive stereotypies. The role and benefit of nonspeech oral movements are controversial in many oral motor disorders. It is argued that the clinical value of these movements can be elucidated through careful definitions and task descriptions such as those proposed in this review article.

  2. Nonspeech Oral Movements and Oral Motor Disorders: A Narrative Review

    PubMed Central

    2015-01-01

    Purpose Speech and other oral functions such as swallowing have been compared and contrasted with oral behaviors variously labeled quasispeech, paraspeech, speechlike, and nonspeech, all of which overlap to some degree in neural control, muscles deployed, and movements performed. Efforts to understand the relationships among these behaviors are hindered by the lack of explicit and widely accepted definitions. This review article offers definitions and taxonomies for nonspeech oral movements and for diverse speaking tasks, both overt and covert. Method Review of the literature included searches of Medline, Google Scholar, HighWire Press, and various online sources. Search terms pertained to speech, quasispeech, paraspeech, speechlike, and nonspeech oral movements. Searches also were carried out for associated terms in oral biology, craniofacial physiology, and motor control. Results and Conclusions Nonspeech movements have a broad spectrum of clinical applications, including developmental speech and language disorders, motor speech disorders, feeding and swallowing difficulties, obstructive sleep apnea syndrome, trismus, and tardive stereotypies. The role and benefit of nonspeech oral movements are controversial in many oral motor disorders. It is argued that the clinical value of these movements can be elucidated through careful definitions and task descriptions such as those proposed in this review article. PMID:26126128

  3. Working towards an inclusive curriculum.

    PubMed

    Wren, Y; Parkhouse, J

    1998-01-01

    The move towards an inclusive model of education presents teachers with the difficulty of differentiating the curriculum for children with speech, language and communication impairments. This paper focuses on the 'WiSaLT Curriculum Appendix', a tool which can be used by teachers and speech and language therapists to help such children access the mainstream curriculum and to promote improvement in their language and communication skills. As well as highlighting potential areas of difficulty within each attainment target for key stage one, the appendix guides users to specific strategies and activities. Thus the speech and language therapist and teacher can identify which attainment targets might prove problematic for any one child and also have access to ideas which can help.

  4. Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning.

    PubMed

    Carey, Daniel; McGettigan, Carolyn

    2017-04-01

    The human vocal system is highly plastic, allowing for the flexible expression of language, mood and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic "talent". In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI - specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to map directly, for the first time, between neural, articulatory and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning, and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions.

  5. A user-operated test of suprathreshold acuity in noise for adult hearing screening: The SUN (Speech Understanding in Noise) test.

    PubMed

    Paglialonga, Alessia; Tognola, Gabriella; Grandori, Ferdinando

    2014-09-01

    A novel, user-operated test of suprathreshold acuity in noise for use in adult hearing screening (AHS) was developed. The Speech Understanding in Noise test (SUN) is a speech-in-noise test that makes use of a list of vowel-consonant-vowel (VCV) stimuli in background noise presented in a three-alternative forced choice (3AFC) paradigm by means of a touch sensitive screen. The test is automated, easy-to-use, and provides self-explanatory results (i.e., 'no hearing difficulties', or 'a hearing check would be advisable', or 'a hearing check is recommended'). The test was developed from its building blocks (VCVs and speech-shaped noise) through two main steps: (i) development of the test list through equalization of the intelligibility of test stimuli across the set and (ii) optimization of the test results through maximization of the test sensitivity and specificity. The test had 82.9% sensitivity and 85.9% specificity compared to conventional pure-tone screening, and 83.8% sensitivity and 83.9% specificity to identify individuals with disabling hearing impairment. Results obtained so far showed that the test could be easily performed by adults and older adults in less than one minute per ear and that its results were not influenced by ambient noise (up to 65 dBA), suggesting that the test might be a viable method for AHS in clinical as well as non-clinical settings.
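
    The sensitivity and specificity figures above come from a standard 2x2 comparison of the screening outcome against a reference test. The sketch below shows the computation; the counts are hypothetical values chosen only so that the resulting percentages match the reported ones, and are not the study's data.

    ```python
    # Minimal sketch of how screening accuracy figures like those above are
    # computed: sensitivity and specificity of the SUN outcome against a
    # reference test (here, conventional pure-tone screening). Counts hypothetical.
    def sensitivity_specificity(tp, fn, tn, fp):
        """Return (sensitivity, specificity) from a 2x2 screening table."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical 2x2 table: SUN "check recommended" vs pure-tone screening fail.
    sens, spec = sensitivity_specificity(tp=58, fn=12, tn=128, fp=21)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 82.9%, 85.9%
    ```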

  6. The Acquisition of Consonant Clusters by Japanese Learners of English: Interactions of Speech Perception and Production

    ERIC Educational Resources Information Center

    Sperbeck, Mieko

    2010-01-01

    The primary aim of this dissertation was to investigate the relationship between speech perception and speech production difficulties among Japanese second language (L2) learners of English, in their learning complex syllable structures. Japanese L2 learners and American English controls were tested in a categorical ABX discrimination task of…

  7. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    ERIC Educational Resources Information Center

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  8. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  9. Phonological Awareness and Types of Sound Errors in Preschoolers with Speech Sound Disorders

    ERIC Educational Resources Information Center

    Preston, Jonathan; Edwards, Mary Louise

    2010-01-01

    Purpose: Some children with speech sound disorders (SSD) have difficulty with literacy-related skills, particularly phonological awareness (PA). This study investigates the PA skills of preschoolers with SSD by using a regression model to evaluate the degree to which PA can be concurrently predicted by types of speech sound errors. Method:…

  10. Speech and Language Deficits in Early-Treated Children with Galactosemia.

    ERIC Educational Resources Information Center

    Waisbren, Susan E.; And Others

    1983-01-01

    Intelligence and speech-language development of eight children (3.6 to 11.6 years old) with classic galactosemia were assessed by standardized tests. Each of the children had early speech difficulties, and all but one had language disorders in at least one area.

  11. Phonological Awareness, Reading Accuracy and Spelling Ability of Children with Inconsistent Phonological Disorder

    ERIC Educational Resources Information Center

    Holm, Alison; Farrier, Faith; Dodd, Barbara

    2008-01-01

    Background: Although children with speech disorder are at increased risk of literacy impairments, many learn to read and spell without difficulty. They are also a heterogeneous population in terms of the number and type of speech errors and their identified speech processing deficits. One problem lies in determining which preschool children with…

  12. Speech, language, and reading skills in 10-year-old children with palatal clefts: The impact of additional conditions.

    PubMed

    Feragen, Kristin Billaud; Aukner, Ragnhild; Særvold, Tone K; Hide, Øydis

    2017-03-01

    This study examined speech (hypernasality and intelligibility), language, and reading skills in children with a cleft palate, specifically investigating additional conditions to the cleft, in order to differentiate challenges related to a cleft only from challenges associated with an additional condition. Cross-sectional data collected during routine assessments of speech and language in a centralised treatment setting. Children born with cleft with palatal involvement from four birth cohorts (n=184), aged 10. Speech: SVANTE-N; Language: Language 6-16; Reading: Word Chain Test and Reading Comprehension Test. Descriptive analyses revealed that 123 of the children had a cleft only (66.8%), while 61 children (33.2%) had a cleft that was associated with an additional condition (syndrome, developmental difficulty, attentional difficulties). Due to close associations with the outcome variables, children with specific language impairments and dyslexia were excluded from the sample (n=14). In the total cleft sample, 33.1% had mild to severe hypernasality, and 27.9% had mild to severe intelligibility deviances. Most children with intelligibility and hypernasality scores within the normal range had a cleft without any other condition. A high number of children with developmental difficulties (63.2%) or AD/HD (45.5%) had problems with intelligibility. Hypernasality scores were also associated with developmental difficulties (58.8%), whereas most children with AD/HD had normal hypernasality scores (83.3%). As could be expected, results demonstrated that children with a cleft and an additional condition had language and reading scores below average. Children with a cleft only had language and reading scores within the normal range. Among the children with scores below average, 33.3-44.7% had no other conditions explaining difficulties with language and reading. The findings highlight the need for routine assessments of language and reading skills, in addition to assessments of speech, in children with a cleft, in order to identify potential problems as early as possible. Study designs need to take additional difficulties into account, so that potential problems with language and reading are not ascribed the cleft diagnosis, and can be followed by appropriate treatment and interventions.

  13. Auditory stream segregation in children with Asperger syndrome

    PubMed Central

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  14. Annual research review: the nature and classification of reading disorders--a commentary on proposals for DSM-5.

    PubMed

    Snowling, Margaret J; Hulme, Charles

    2012-05-01

    This article reviews our understanding of reading disorders in children and relates it to current proposals for their classification in DSM-5. There are two different, commonly occurring, forms of reading disorder in children which arise from different underlying language difficulties. Dyslexia (as defined in DSM-5), or decoding difficulty, refers to children who have difficulty in mastering the relationships between the spelling patterns of words and their pronunciations. These children typically read aloud inaccurately and slowly, and experience additional problems with spelling. Dyslexia appears to arise principally from a weakness in phonological (speech sound) skills, and there is good evidence that it can be ameliorated by systematic phonic teaching combined with phonological awareness training. The other major form of reading difficulty is reading comprehension impairment. These children read aloud accurately and fluently, but have difficulty understanding what they have read. Reading comprehension impairment appears to arise from weaknesses in a range of oral language skills including poor vocabulary knowledge, weak grammatical skills and difficulties in oral language comprehension. We suggest that the omission of reading comprehension impairment from DSM-5 is a serious one that should be remedied. Both dyslexia and reading comprehension impairment are dimensional in nature, and show strong continuities with other disorders of language. We argue that recognizing the continuities between reading and language disorders has important implications for assessment and treatment, and we note that the high rates of comorbidity between reading disorders and other seemingly disparate disorders (including ADHD and motor disorders) raises important challenges for understanding these disorders.

  15. Metaphor in psychosis: on the possible convergence of Lacanian theory and neuro-scientific research

    PubMed Central

    Ribolsi, Michele; Feyaerts, Jasper; Vanheule, Stijn

    2015-01-01

    Starting from the theories of leading psychiatrists, like Kraepelin and de Clérambault, the French psychoanalyst Jacques Lacan (1901–1981) formulated an original theory of psychosis, focusing on the subject and on the structuring role of language. In particular, he postulated that language makes up the experience of subjectivity and that psychosis is marked by the absence of a crucial metaphorization process. Interestingly, in contemporary psychiatry there is growing empirical evidence that schizophrenia is characterized by abnormal interpretation of verbal and non-verbal information, with great difficulty putting such information in the appropriate context. Neuro-scientific contributions have investigated this difficulty, suggesting the possibility of interpreting schizophrenia as a semiotic disorder which makes the patients incapable of understanding the figurative meaning of metaphoric speech, probably due to a dysfunction of certain right hemisphere areas, such as the right temporoparietal junction and the right superior/middle temporal gyrus. In this paper we first review the Lacanian theory of psychosis and neuro-scientific research in the field of symbolization and metaphoric speech. Next, we discuss possible convergences between both approaches, exploring how they might join and inspire one another. Clinical and neurophysiological research implications are discussed. PMID:26089805

  16. Speech and orthodontic appliances: a systematic literature review.

    PubMed

    Chen, Junyu; Wan, Jia; You, Lun

    2018-01-23

    Various types of orthodontic appliances can lead to speech difficulties. However, speech difficulties caused by orthodontic appliances have not been sufficiently investigated by an evidence-based method. The aim of this study is to outline the scientific evidence and mechanism of the speech difficulties caused by orthodontic appliances. Randomized-controlled clinical trials (RCT), controlled clinical trials, and cohort studies focusing on the effect of orthodontic appliances on speech were included. A systematic search was conducted by an electronic search in PubMed, EMBASE, and the Cochrane Library databases, complemented by a manual search. The types of orthodontic appliances, the affected sounds, and duration period of the speech disturbances were extracted. The ROBINS-I tool was applied to evaluate the quality of non-randomized studies, and the bias of RCT was assessed based on the Cochrane Handbook for Systematic Reviews of Interventions. No meta-analyses could be performed due to the heterogeneity in the study designs and treatment modalities. Among 448 screened articles, 13 studies were included (n = 297 patients). Different types of orthodontic appliances such as fixed appliances, orthodontic retainers and palatal expanders could influence the clarity of speech. The /i/, /a/, and /e/ vowels as well as /s/, /z/, /l/, /t/, /d/, /r/, and /ʃ/ consonants could be distorted by appliances. Although most speech impairments could return to normal within weeks, speech distortion of the /s/ sound might last for more than 3 months. The low evidence level grading and heterogeneity were the two main limitations in this systematic review. Lingual fixed appliances, palatal expanders, and Hawley retainers have an evident influence on speech production. The /i/, /s/, /t/, and /d/ sounds are the primarily affected ones. The results of this systematic review should be interpreted with caution and more high-quality RCTs with larger sample sizes and longer follow-up periods are needed. The protocol for this systematic review (CRD42017056573) was registered in the International Prospective Register of Systematic Reviews (PROSPERO).

  17. Speed-difficulty trade-off in speech: Chinese versus English

    PubMed Central

    Sun, Yao; Latash, Elizaveta M.; Mikaelian, Irina L.

    2011-01-01

    This study continues the investigation of the previously described speed-difficulty trade-off in picture description tasks. In particular, we tested the hypothesis that Mandarin Chinese and American English are similar in showing logarithmic dependences between speech time and index of difficulty (ID), while differing significantly in the amount of time needed to describe simple pictures; this difference increases for more complex pictures and is associated with a proportional difference in the number of syllables used. Subjects (eight Chinese speakers and eight English speakers) were tested in pairs. One subject (the Speaker) described simple pictures, while the other subject (the Performer) tried to reproduce the pictures based on the verbal description as quickly as possible with a set of objects. The Chinese speakers initiated speech production significantly faster than the English speakers. Speech time scaled linearly with ln(ID) in all subjects, but the regression coefficient was significantly higher in the English speakers as compared with the Chinese speakers. The number of errors was somewhat lower in the Chinese participants (not significantly). The Chinese pairs also showed a shorter delay between the initiation of speech and initiation of action by the Performer, shorter movement time by the Performer, and shorter overall performance time. The number of syllables scaled with ID, and the Chinese speakers used significantly smaller numbers of syllables. Speech rate was comparable between the two groups, about 3 syllables/s; it dropped for more complex pictures (higher ID). When asked to reproduce the same pictures without speaking, movement time scaled linearly with ln(ID); the Chinese performers were slower than the English performers. We conclude that natural languages show a speed-difficulty trade-off similar to Fitts’ law; the trade-offs in movement and speech production are likely to originate at a cognitive level. The time advantage of the Chinese participants originates neither from similarity of the simple pictures to Chinese written characters nor from sloppier performance. It is linked to using fewer syllables to transmit the same information. We suggest that natural languages may differ by informational density defined as the amount of information transmitted by a given number of syllables. PMID:21479658
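
    The core analysis here is a linear regression of speech time on ln(ID), with the slope compared across language groups. A minimal sketch follows; the data points are hypothetical and chosen only so that the English slope comes out steeper, as the study reports.

    ```python
    # Speed-difficulty trade-off analysis: speech time T regressed on the
    # natural log of the index of difficulty, T = a + b * ln(ID), with the
    # slope b compared between language groups. Hypothetical data.
    import numpy as np

    ID = np.array([2.0, 4.0, 8.0, 16.0, 32.0])          # index of difficulty per picture
    T_english = np.array([2.1, 3.0, 3.8, 4.9, 5.7])     # speech time (s), hypothetical
    T_chinese = np.array([1.8, 2.4, 3.0, 3.7, 4.3])

    b_en, a_en = np.polyfit(np.log(ID), T_english, 1)   # polyfit returns [slope, intercept]
    b_zh, a_zh = np.polyfit(np.log(ID), T_chinese, 1)
    print(f"English: T = {a_en:.2f} + {b_en:.2f} ln(ID)")
    print(f"Chinese: T = {a_zh:.2f} + {b_zh:.2f} ln(ID)  # shallower slope")
    ```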

  18. Speech disorders in neurofibromatosis type 1: a sample survey.

    PubMed

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10,000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. This study serves as a pilot to identify key issues in the speech of NF1 patients; in particular, the aim is to further explore the occurrence and nature of problems associated with speech as perceived by the patients themselves. A questionnaire was sent to 149 patients with NF1 registered at the Department of Genetics, Ghent University Hospital. The questionnaire inquired about articulation, hearing, breathing, voice, resonance and fluency. Sixty individuals ranging in age from 4.5 to 61.3 years returned completed questionnaires, and these served as the database for the study. The results of this sample survey were compared with data from the normal population. About two-thirds of participants experienced at least one speech or speech-related problem. Compared with the normal population, the NF1 group reported more articulation difficulties, hearing impairment, abnormalities in loudness, and stuttering. The results indicate that speech difficulties are an area of interest in the NF1 population. Further research to elucidate these findings is needed.

  19. Acceptable range of speech level in noisy sound fields for young adults and elderly persons.

    PubMed

    Sato, Hayato; Morimoto, Masayuki; Ota, Ryo

    2011-09-01

    The acceptable range of speech level as a function of background noise level was investigated on the basis of word intelligibility scores and listening difficulty ratings. In the present study, the acceptable range is defined as the range that maximizes word intelligibility scores and simultaneously does not cause a significant increase in listening difficulty ratings from the minimum ratings. Listening tests with young adult and elderly listeners demonstrated the following. (1) The acceptable range of speech level for elderly listeners overlapped that for young listeners. (2) The lower limit of the acceptable speech level for both young and elderly listeners was 65 dB (A-weighted) for noise levels of 40 and 45 dB (A-weighted), a level with a speech-to-noise ratio of +15 dB for noise levels of 50 and 55 dB, and a level with a speech-to-noise ratio of +10 dB for noise levels from 60 to 70 dB. (3) The upper limit of the acceptable speech level for both young and elderly listeners was 80 dB for noise levels from 40 to 55 dB and 85 dB or above for noise levels from 55 to 70 dB.
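    The limits in (2) and (3) amount to a piecewise rule mapping background noise level to an acceptable speech-level window. The sketch below encodes that rule literally, assuming A-weighted levels; intermediate noise levels between the reported breakpoints are an interpolation of ours, and the function name is hypothetical.

    ```python
    def acceptable_speech_range(noise_db: float) -> tuple[float, float]:
        """(lower, upper) acceptable speech level in dB(A) for a given
        background noise level, per the breakpoints reported above
        (reported only for noise levels of 40-70 dB)."""
        if not 40.0 <= noise_db <= 70.0:
            raise ValueError("rule reported only for 40-70 dB noise")
        # Lower limit: fixed 65 dB floor at low noise, then a constant
        # speech-to-noise ratio.
        if noise_db < 50.0:
            lower = 65.0              # noise levels of 40 and 45 dB
        elif noise_db < 60.0:
            lower = noise_db + 15.0   # +15 dB speech-to-noise ratio
        else:
            lower = noise_db + 10.0   # +10 dB speech-to-noise ratio
        # Upper limit: 80 dB up to 55 dB noise, then 85 dB (or above).
        upper = 80.0 if noise_db < 55.0 else 85.0
        return lower, upper

    print(acceptable_speech_range(45.0))  # (65.0, 80.0)
    print(acceptable_speech_range(65.0))  # (75.0, 85.0)
    ```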

  20. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    PubMed

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  2. Effects of noise on speech recognition: Challenges for communication by service members.

    PubMed

    Le Prell, Colleen G; Clavier, Odile H

    2017-06-01

    Speech communication often takes place in noisy environments; this is an urgent issue for military personnel who must communicate in high-noise environments. The effects of noise on speech recognition vary significantly according to the sources of noise, the number and types of talkers, and the listener's hearing ability. In this review, speech communication is first described as it relates to current standards of hearing assessment for military and civilian populations. The next section categorizes types of noise (also called maskers) according to their temporal characteristics (steady or fluctuating) and perceptive effects (energetic or informational masking). Next, speech recognition difficulties experienced by listeners with hearing loss and by older listeners are summarized, and questions on the possible causes of speech-in-noise difficulty are discussed, including recent suggestions of "hidden hearing loss". The final section describes tests used by military and civilian researchers, audiologists, and hearing technicians to assess performance of an individual in recognizing speech in background noise, as well as metrics that predict performance based on a listener and background noise profile. This article provides readers with an overview of the challenges associated with speech communication in noisy backgrounds, as well as its assessment and potential impact on functional performance, and provides guidance for important new research directions relevant not only to military personnel, but also to employees who work in high noise environments.

  3. Speech Recognition with the Advanced Combination Encoder and Transient Emphasis Spectral Maxima Strategies in Nucleus 24 Recipients

    ERIC Educational Resources Information Center

    Holden, Laura K.; Vandali, Andrew E.; Skinner, Margaret W.; Fourakis, Marios S.; Holden, Timothy A.

    2005-01-01

    One of the difficulties faced by cochlear implant (CI) recipients is perception of low-intensity speech cues. A. E. Vandali (2001) has developed the transient emphasis spectral maxima (TESM) strategy to amplify short-duration, low-level sounds. The aim of the present study was to determine whether speech scores would be significantly higher with…

  4. The Performance of Preschoolers with Speech/Language Disorders on the McCarthy Scales of Children's Abilities.

    ERIC Educational Resources Information Center

    Morgan, Robert L.; And Others

    1992-01-01

    Administered McCarthy Scales of Children's Abilities to preschool children of normal intelligence with (n=25) and without (n=25) speech/language disorders. Speech/language disorders group had significantly lower scores on all scales except Motor; showed difficulty in short-term auditory memory skills but not in visual memory skills; and had…

  5. Teaching the Tyrants: Perspectives on Freedom of Speech and Undergraduates.

    ERIC Educational Resources Information Center

    Herbeck, Dale A.

    Teaching freedom of speech to undergraduates is a difficult task, in part as a result of the challenging history of free expression in the United States. The difficulty is compounded by the need to teach the topic, in contrast to indoctrinating the students in an ideology of free speech. The Bill of Rights, and specifically the First Amendment,…

  6. The Impact of Adolescent Stuttering and Other Speech Problems on Psychological Well-Being in Adulthood: Evidence from a Birth Cohort Study

    ERIC Educational Resources Information Center

    McAllister, Jan; Collier, Jacqueline; Shepstone, Lee

    2013-01-01

    Background: Developmental stuttering is associated with increased risk of psychological distress and mental health difficulties. Less is known about the impact of other developmental speech problems on psychological outcomes, or the impact of stuttering and speech problems once other predictors have been adjusted for. Aims: To determine the impact…

  7. Right-Ear Advantage for Speech-in-Noise Recognition in Patients with Nonlateralized Tinnitus and Normal Hearing Sensitivity.

    PubMed

    Tai, Yihsin; Husain, Fatima T

    2018-04-01

    Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions than controls do. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, the Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were found only in the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though a bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus affected speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies that take into account both the hearing and cognitive abilities of patients are needed to better parse out the effect of tinnitus in the absence of hearing impairment.
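    For orientation, the QuickSIN expresses performance as an "SNR loss" computed from key words repeated correctly across six sentences presented at decreasing SNRs. A minimal sketch of the standard published scoring convention (25.5 minus key words correct per list); this is background arithmetic, not the study's analysis code.

    ```python
    def quicksin_snr_loss(key_words_correct: int) -> float:
        """SNR loss for one QuickSIN list: 6 sentences x 5 key words,
        presented from +25 down to 0 dB SNR in 5-dB steps.
        Standard scoring: SNR loss = 25.5 - total key words correct."""
        if not 0 <= key_words_correct <= 30:
            raise ValueError("a QuickSIN list has 30 key words")
        return 25.5 - key_words_correct

    # Example: 24/30 key words correct -> 1.5 dB SNR loss.
    print(quicksin_snr_loss(24))
    ```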

  8. Accommodating Variation: Dialects, Idiolects, and Speech Processing

    PubMed Central

    Kraljic, Tanya; Brennan, Susan E.; Samuel, Arthur G.

    2008-01-01

    Listeners are faced with enormous variation in pronunciation, yet they rarely have difficulty understanding speech. Although much research has been devoted to figuring out how listeners deal with variability, virtually none (outside of sociolinguistics) has focused on the source of the variation itself. The current experiments explore whether different kinds of variation lead to different cognitive and behavioral adjustments. Specifically, we compare adjustments to the same acoustic consequence when it is due to context-independent variation (resulting from articulatory properties unique to a speaker) versus context-conditioned variation (resulting from common articulatory properties of speakers who share a dialect). The contrasting results for these two cases show that the source of a particular acoustic-phonetic variation affects how that variation is handled by the perceptual system. We also show that changes in perceptual representations do not necessarily lead to changes in production. PMID:17803986

  10. Perceptual pitch deficits coexist with pitch production difficulties in music but not Mandarin speech

    PubMed Central

    Yang, Wu-xia; Feng, Jie; Huang, Wan-ting; Zhang, Cheng-xiang; Nan, Yun

    2014-01-01

    Congenital amusia is a musical disorder that mainly affects pitch perception. Among Mandarin speakers, some amusics also have difficulties in processing lexical tones (tone agnosics). To examine to what extent these perceptual deficits may be related to pitch production impairments in music and Mandarin speech, eight amusics, eight tone agnosics, and 12 age- and IQ-matched normal native Mandarin speakers were asked to imitate musical note sequences and Mandarin words of comparable lengths. The results indicated that both the amusics and the tone agnosics underperformed the controls on musical pitch production. However, the tone agnosics performed no worse than the amusics, suggesting that lexical tone perception deficits may not aggravate musical pitch production difficulties. Moreover, all three groups were able to imitate lexical tones with perfect intelligibility. Taken together, the current study shows that perceptual musical pitch and lexical tone deficits can coexist with musical pitch production difficulties. At the same time, these perceptual pitch deficits may not affect lexical tone production or the intelligibility of the words produced. The perception-production relationship for pitch among individuals with perceptual pitch deficits may therefore be domain-dependent. PMID:24474944

  11. Understanding neurophobia: Reasons behind impaired understanding and learning of neuroanatomy in cross-disciplinary healthcare students.

    PubMed

    Javaid, Muhammad Asim; Chakraborty, Shelly; Cryan, John F; Schellekens, Harriët; Toulouse, André

    2018-01-01

    Recent studies have highlighted a fear of, or difficulty with, the study and understanding of neuroanatomy among medical and healthcare students. This has been linked with diminished confidence among clinical practitioners and students in managing patients with neurological conditions. The underlying reasons for this difficulty were queried among a broad cohort of medical, dental, occupational therapy, and speech and language sciences students. Direct evidence of the students' perception of the specific difficulties associated with learning neuroanatomy is provided, and some of the measures required to address these issues are identified. Neuroanatomy is perceived as a more difficult subject than other anatomy topics (e.g., reproductive/pelvic anatomy), and not all components of the neuroanatomy curriculum are viewed as equally challenging. The difficulty in understanding neuroanatomical concepts is linked to intrinsic factors, such as the inherently complex nature of the topic, rather than to outside influences (e.g., lecture duration). Participants reporting high levels of interest in the subject reported higher levels of knowledge, suggesting that teaching tools aimed at increasing interest, such as case-based scenarios, could facilitate acquisition of knowledge. Newer pedagogies, including web resources and computer-assisted learning (CAL), are considered important tools for improving neuroanatomy learning, whereas traditional tools such as lecture slides and notes were considered less important. In conclusion, it is suggested that understanding of neuroanatomy could be enhanced, and neurophobia decreased, by purposefully designed CAL resources. These data could help curriculum designers refocus attention and guide educators in developing improved neuroanatomy web resources in the future.

  12. Korean speech-language pathologists' attitudes toward stuttering according to clinical experiences.

    PubMed

    Lee, Kyungjae

    2014-11-01

    Negative attitudes toward stuttering and people who stutter (PWS) are found in various groups of people in many regions. However, the results of previous studies examining the influence of fluency coursework and clinical certification on the attitudes of speech-language pathologists (SLPs) toward PWS are equivocal. Furthermore, there have been few empirical studies on the attitudes of Korean SLPs toward stuttering. The aim was to determine whether the attitudes of Korean SLPs and speech-language pathology students toward stuttering differ according to clinical certification status, stuttering coursework completion, and clinical practicum experience in stuttering. Survey data from 37 certified Korean SLPs and 70 undergraduate students majoring in speech-language pathology were analysed. All participants completed the modified Clinician Attitudes Toward Stuttering (CATS) Inventory. Results showed that the diagnosogenic view was still accepted by many participants. Significant differences were found in seven of the 46 CATS Inventory items according to certification status; significant differences were also found in three items according to stuttering coursework completion and in one item according to clinical practicum experience in stuttering. Clinical and educational experience appears to have mixed influences on SLPs' and students' attitudes toward stuttering. While SLPs and students may demonstrate more appropriate understanding and knowledge in certain areas of stuttering, they may experience difficulty in their clinical work, possibly resulting in low self-efficacy.

  13. The perceptual learning of time-compressed speech: A comparison of training protocols with different levels of difficulty

    PubMed Central

    Gabay, Yafit; Karni, Avi; Banai, Karen

    2017-01-01

    Speech perception can improve substantially with practice (perceptual learning), even in adults. Here we compared the effects of four training protocols that differed in whether and how task difficulty was changed during a training session, in terms of the gains attained and the ability to apply (transfer) these gains to previously un-encountered items (tokens) and to different talkers. Participants trained in judging the semantic plausibility of sentences presented as time-compressed speech and were tested on their ability to reproduce, in writing, the target sentences; trial-by-trial feedback was afforded in all training conditions. In two conditions, task difficulty (low or high compression) was kept constant throughout the training session, whereas in the other two conditions task difficulty was changed in an adaptive manner (incrementally from easy to difficult, or using a staircase procedure). Compared to a control group (no training), all four protocols resulted in significant post-training improvement in the ability to reproduce the trained sentences accurately. However, training in the constant-high-compression protocol elicited the smallest gains in deciphering and reproducing trained items and in reproducing novel, untrained items after training. Overall, these results suggest that training procedures that start off with relatively little signal distortion (“easy” items, not far removed from standard speech) may be advantageous compared to conditions wherein severe distortions are presented to participants from the very beginning of the training session. PMID:28545039
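    The adaptive protocols adjust difficulty trial by trial. A minimal sketch of a one-up-one-down staircase over the compression ratio; `present_trial` is a simulated stand-in for an actual trial, and the step size and rate range are illustrative, not the study's parameters.

    ```python
    import random

    def present_trial(compression: float) -> bool:
        # Stand-in for a real trial: a correct reproduction is simulated
        # as more likely when the speech is less compressed.
        return random.random() < compression

    def staircase(n_trials: int = 40, start: float = 0.9,
                  step: float = 0.05, lo: float = 0.3, hi: float = 1.0):
        """One-up-one-down staircase over the compression ratio
        (1.0 = natural rate; lower values = faster, harder speech)."""
        rate, history = start, []
        for _ in range(n_trials):
            correct = present_trial(rate)
            history.append((rate, correct))
            # Harder (more compression) after a correct response,
            # easier after an error; converges near 50% correct.
            rate = max(lo, rate - step) if correct else min(hi, rate + step)
        return history

    for rate, correct in staircase()[:8]:
        print(f"compression={rate:.2f} correct={correct}")
    ```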

  14. "My Mind Is Doing It All": No "Brake" to Stop Speech Generation in Jargon Aphasia.

    PubMed

    Robinson, Gail A; Butterworth, Brian; Cipolotti, Lisa

    2015-12-01

    To study whether the pressure of speech in jargon aphasia arises from disturbances of core language or executive processes, or at their intersection with conceptual preparation. Conceptual preparation mechanisms for speech have not been well studied. Several mechanisms have been proposed for jargon aphasia, a fluent, well-articulated, logorrheic propositional speech that is almost incomprehensible. We studied the vast quantity of jargon speech produced by patient J.A., who had suffered an infarct after the clipping of a middle cerebral artery aneurysm. We gave J.A. baseline cognitive tests and experimental word- and sentence-generation tasks that we had designed for patients with dynamic aphasia, a severely reduced but otherwise fairly normal propositional speech thought to result from deficits in conceptual preparation. J.A. had cognitive dysfunction, including executive difficulties, and a language profile characterized by poor repetition and naming in the context of relatively intact single-word comprehension. J.A.'s spontaneous speech was fluent but jargon. He had no difficulty generating sentences; in contrast to dynamic aphasia, his sentences were largely meaningless and not significantly affected by the level of stimulus constraint. This patient with jargon aphasia highlights that voluminous speech output can arise from disturbances of both language and executive functions. Our previous studies have identified three conceptual preparation mechanisms for speech: generation of novel thoughts, their sequencing, and their selection. This study raises the possibility that a "brake" to stop message generation may be a fourth conceptual preparation mechanism, one behind the pressure of speech characteristic of jargon aphasia.

  15. Understanding speech in noise after correction of congenital unilateral aural atresia: effects of age in the emergence of binaural squelch but not in use of head-shadow.

    PubMed

    Gray, Lincoln; Kesser, Bradley; Cole, Erika

    2009-09-01

    Unilateral hearing loss causes difficulty hearing in noise (the "cocktail party effect") due to the absence of redundancy, head-shadow, and binaural squelch. This study explores the emergence of the head-shadow and binaural squelch effects in children with unilateral congenital aural atresia undergoing surgery to correct their hearing deficit. Adding patients and data from a similar study previously published, we also evaluate covariates, such as the age of the patient, surgical outcome, and complexity of the task, that might predict the extent of binaural benefit (patients' ability to "use" their new ear) when understanding speech in noise. Patients with unilateral congenital aural atresia were tested for their ability to understand speech in noise before and again 1 month after surgery to repair their atresia. In a sound-attenuating booth, participants faced a speaker that produced speech signals with noise 90 degrees to the side of the normal (non-atretic) ear and again to the side of the atretic ear. The Hearing in Noise Test (HINT for adults or HINT-C for children) was used to estimate the patients' speech reception thresholds. The speech-in-noise test (SPIN) or the Pediatric Speech Intelligibility (PSI) Test was used in the previous study. There was consistent improvement, averaging 5 dB regardless of age, in the ability to take advantage of head-shadow in understanding speech with noise to the side of the non-atretic (normal) ear. There was, in contrast, a strong negative linear effect of age (r² = .78 for patients over 8 years) on the emergence of binaural squelch to understand speech with noise to the side of the atretic ear. In patients over 8 years, this trend replicated over different studies and different tests. Children under 8 years, however, showed less improvement on the HINT-C than on the PSI after surgery with noise toward their atretic ear (effect size = 3). No binaural result was correlated with the degree of hearing improvement after surgery. All patients are able to take advantage of a favorable signal-to-noise ratio in their newly opened ear, that is, with noise toward the side of the normal ear (but this physical, bilateral, head-shadow effect need not involve true central binaural processing). With noise toward the atretic ear, the emergence of binaural squelch replicates between the two studies for all but the youngest patients. Approximately 2 dB of binaural gain is lost for each decade that surgery is delayed, and zero (or poorer) binaural benefit is predicted after 38 years of age. Older adults do more poorly, possibly secondary to their long period of auditory deprivation. At the youngest ages, however, binaural results differ between open- and closed-set speech tests; the more complex hearing tasks may involve a greater cognitive load. Other cognitive abilities (late evoked potentials, grey matter in auditory cortex, and multitasking) show similar effects of age, peaking in the same late-teen/young-adult period. Longer follow-up is likely critical for the understanding of these data. Getting a new ear may be, like multitasking, challenging for the youngest and oldest subjects.
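    The age trend for binaural squelch reads as a simple linear model: roughly 2 dB of benefit lost per decade that surgery is delayed, crossing zero near 38 years. The function below merely restates those reported numbers; it is not a clinical prediction tool.

    ```python
    def predicted_squelch_gain_db(age_at_surgery: float) -> float:
        """Predicted binaural squelch benefit (dB) by age at repair,
        per the reported trend (patients over 8 years): about 2 dB
        lost per decade, zero or poorer benefit by age 38."""
        slope_per_year = 2.0 / 10.0   # 2 dB per decade
        zero_benefit_age = 38.0
        return slope_per_year * (zero_benefit_age - age_at_surgery)

    for age in (8, 18, 28, 38, 48):
        print(age, round(predicted_squelch_gain_db(age), 1), "dB")
    ```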

  16. Facts About Fetal Alcohol Spectrum Disorders (FASDs)

    MedlinePlus

    ... attention; poor memory; difficulty in school (especially with math); learning disabilities; speech and language delays; intellectual disability ... do poorly in school and have difficulties with math, memory, attention, judgment, and poor impulse control. Alcohol- ...

  17. Older adults benefit from music training early in life: biological evidence for long-term training-driven plasticity.

    PubMed

    White-Schwoch, Travis; Woodruff Carr, Kali; Anderson, Samira; Strait, Dana L; Kraus, Nina

    2013-11-06

    Aging results in pervasive declines in nervous system function. In the auditory system, these declines include neural timing delays in response to fast-changing speech elements; this causes older adults to experience difficulty understanding speech, especially in challenging listening environments. These age-related declines are not inevitable, however: older adults with a lifetime of music training do not exhibit neural timing delays. Yet many people play an instrument for a few years without making a lifelong commitment. Here, we examined neural timing in a group of human older adults who had nominal amounts of music training early in life, but who had not played an instrument for decades. We found that a moderate amount (4-14 years) of music training early in life is associated with faster neural timing in response to speech later in life, long after training stopped (>40 years). We suggest that early music training sets the stage for subsequent interactions with sound. These experiences may interact over time to sustain sharpened neural processing in central auditory nuclei well into older age.

  19. Assessment and management of the communication difficulties of children with cerebral palsy: a UK survey of SLT practice.

    PubMed

    Watson, Rose Mary; Pennington, Lindsay

    2015-01-01

    Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental designs and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. The aim was to describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were dysphagia, alternative and augmentative communication (AAC)/interaction and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication; some of the interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year that children and young people with CP and communication disorders received from their service. The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. The lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice.

  20. Rise time and formant transition duration in the discrimination of speech sounds: the Ba-Wa distinction in developmental dyslexia.

    PubMed

    Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Dénes

    2011-01-01

    Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). An alternative auditory temporal hypothesis, however, is that basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the rate of change of frequency information (formant transition duration) versus the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004) and suggest new avenues for the remediation of developmental dyslexia.
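    The rise-time cue can be illustrated by imposing envelopes with different onset ramps on the same carrier, a simplified stand-in for the synthetic ba/wa continuum; the parameters are illustrative, and real stimuli would also manipulate formant transition duration.

    ```python
    import numpy as np

    SR = 16000  # sample rate (Hz)

    def enveloped_tone(rise_ms: float, dur_ms: float = 300.0,
                       freq: float = 220.0) -> np.ndarray:
        """A tone whose amplitude ramps up linearly over rise_ms.
        Short rise times sound abrupt (/ba/-like); long rise times
        sound gradual (/wa/-like)."""
        n = int(SR * dur_ms / 1000)
        t = np.arange(n) / SR
        carrier = np.sin(2 * np.pi * freq * t)
        env = np.ones(n)
        rise_n = int(SR * rise_ms / 1000)
        env[:rise_n] = np.linspace(0.0, 1.0, rise_n)
        return carrier * env

    ba_like = enveloped_tone(rise_ms=15.0)   # fast rise
    wa_like = enveloped_tone(rise_ms=120.0)  # slow rise
    ```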

  1. Do gender differences in audio-visual benefit and visual influence in audio-visual speech perception emerge with age?

    PubMed Central

    Alm, Magnus; Behne, Dawn

    2015-01-01

    Gender and age have been found to affect adults’ audio-visual (AV) speech perception. However, research on adult aging focuses on adults over 60 years, who have an increasing likelihood of cognitive and sensory decline, which may confound positive effects of age-related AV experience and its interaction with gender. Observed age and gender differences in AV speech perception may also depend on measurement sensitivity and AV task difficulty. Consequently, both AV benefit and visual influence were used to measure visual contribution for gender-balanced groups of young (20–30 years) and middle-aged adults (50–60 years), with task difficulty varied using AV syllables from different talkers in alternative auditory backgrounds. Females had better speech-reading performance than males. Whereas no gender differences in AV benefit or visual influence were observed for young adults, visually influenced responses were significantly greater for middle-aged females than for middle-aged males. That speech-reading performance did not influence AV benefit may be explained by visual speech extraction and AV integration constituting independent abilities. Contrastingly, the gender difference in visually influenced responses in middle adulthood may reflect an experience-related shift in females’ general AV perceptual strategy. Although young females’ speech-reading proficiency may not readily contribute to greater visual influence, between young and middle adulthood the recurrent confirmation of the contribution of visual cues induced by speech-reading proficiency may gradually shift females’ AV perceptual strategy toward more visually dominated responses. PMID:26236274
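    Both dependent measures are simple derived scores. The sketch below shows one common way such scores are computed from per-condition trial counts; the formulas are a conventional choice, not necessarily the authors' exact definitions, and the numbers are illustrative.

    ```python
    def av_benefit(av_correct: int, a_correct: int, n_trials: int) -> float:
        """Audio-visual benefit: gain in proportion correct when the
        talker's face is added to the auditory signal."""
        return (av_correct - a_correct) / n_trials

    def visual_influence(visual_responses: int, n_incongruent: int) -> float:
        """Proportion of incongruent AV trials answered according to
        the visual (or fused) percept rather than the audio."""
        return visual_responses / n_incongruent

    print(av_benefit(av_correct=52, a_correct=40, n_trials=60))     # 0.2
    print(visual_influence(visual_responses=21, n_incongruent=60))  # 0.35
    ```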

  2. On pure word deafness, temporal processing, and the left hemisphere.

    PubMed

    Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean

    2005-07-01

    Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.

  3. FOXP2 gene deletion and infant feeding difficulties: a case report.

    PubMed

    Zimmerman, Emily; Maron, Jill L

    2016-01-01

    Forkhead box protein P2 (FOXP2) is a well-studied gene known to play an essential role in normal speech development. Deletions in the gene have been shown to result in developmental speech disorders and in regulatory disruption of downstream gene targets associated with common forms of language impairment. Despite the similarities in motor planning and execution between speech development and oral feeding competence, there have been no reports to date linking deletions within the FOXP2 gene to oral feeding impairments in the newborn. The patient was a nondysmorphic, appropriately and symmetrically grown male infant born at 35-wk gestational age. He had a prolonged neonatal intensive care unit stay because of persistent oral feeding incoordination requiring gastrostomy tube placement. Cardiac and neurological imaging was within normal limits. A microarray analysis found an ∼9-kb loss within chromosome band 7q31.1 that contains exon 2 of FOXP2, demonstrating a single copy of this region instead of the normal two copies per diploid genome. This case study expands our current understanding of the role FOXP2 exerts on the motor planning and coordination necessary for both oral feeding success and speech-language development. It has important consequences for the future diagnosis and treatment of infants with FOXP2 deletions, mutations, and varying levels of gene expression.

  4. The perceptual significance of high-frequency energy in the human voice.

    PubMed

    Monson, Brian B; Hunter, Eric J; Lotto, Andrew J; Story, Brad H

    2014-01-01

    While human vocalizations generate acoustical energy at frequencies up to (and beyond) 20 kHz, the energy at frequencies above about 5 kHz has traditionally been neglected in speech perception research. The intent of this paper is to review (1) the historical reasons for this research trend and (2) the work that continues to elucidate the perceptual significance of high-frequency energy (HFE) in speech and singing. The historical and physical factors reveal that, while HFE was believed to be unnecessary and/or impractical for applications of interest, it was never shown to be perceptually insignificant. Rather, the main causes for focus on low-frequency energy appear to be because the low-frequency portion of the speech spectrum was seen to be sufficient (from a perceptual standpoint), or the difficulty of HFE research was too great to be justifiable (from a technological standpoint). The advancement of technology continues to overcome concerns stemming from the latter reason. Likewise, advances in our understanding of the perceptual effects of HFE now cast doubt on the first cause. Emerging evidence indicates that HFE plays a more significant role than previously believed, and should thus be considered in speech and voice perception research, especially in research involving children and the hearing impaired.

  6. Rate and rhythm control strategies for apraxia of speech in nonfluent primary progressive aphasia.

    PubMed

    Beber, Bárbara Costa; Berbert, Monalise Costa Batista; Grawer, Ruth Siqueira; Cardoso, Maria Cristina de Almeida Freitas

    2018-01-01

    The nonfluent/agrammatic variant of primary progressive aphasia is characterized by apraxia of speech and agrammatism. Apraxia of speech limits patients' communication due to slow speaking rate, sound substitutions, articulatory groping, false starts and restarts, segmentation of syllables, and increased difficulty with increasing utterance length. Speech and language therapy is known to benefit individuals with apraxia of speech due to stroke, but little is known about its effects in primary progressive aphasia. This is a case report of a 72-year-old, illiterate housewife, who was diagnosed with nonfluent primary progressive aphasia and received speech and language therapy for apraxia of speech. Rate and rhythm control strategies for apraxia of speech were trained to improve initiation of speech. We discuss the importance of these strategies to alleviate apraxia of speech in this condition and the future perspectives in the area.

  7. Children's History of Speech-Language Difficulties: Genetic Influences and Associations with Reading-Related Measures

    ERIC Educational Resources Information Center

    DeThorne, Laura Segebart; Hart, Sara A.; Petrill, Stephen A.; Deater-Deckard, Kirby; Thompson, Lee Anne; Schatschneider, Chris; Davison, Megan Dunn

    2006-01-01

    Purpose: This study examined (a) the extent of genetic and environmental influences on children's articulation and language difficulties and (b) the phenotypic associations between such difficulties and direct assessments of reading-related skills during early school-age years. Method: Behavioral genetic analyses focused on parent-report data…

  8. Comparison of speech performance in labial and lingual orthodontic patients: A prospective study

    PubMed Central

    Rai, Ambesh Kumar; Rozario, Joe E.; Ganeshkar, Sanjay V.

    2014-01-01

    Background: The intensity and duration of the speech difficulty inherently associated with lingual therapy is a significant concern in orthodontics. This study was designed to evaluate and compare the duration of changes in speech between labial and lingual orthodontics. Materials and Methods: A prospective longitudinal clinical study was designed to assess the speech of 24 patients undergoing labial or lingual orthodontic treatment. An objective spectrographic evaluation of the /s/ sound was done using the software PRAAT version 5.0.47, a semiobjective auditory evaluation of articulation was done by four speech pathologists, and a subjective assessment of speech was done by four laypersons. The tests were performed before (T1), within 24 h of (T2), 1 week after (T3) and 1 month after (T4) the start of therapy. The Mann-Whitney U-test for independent samples was used to assess the significance of differences between the labial and lingual appliances; a speech alteration with P < 0.05 was considered significant. Results: The objective method showed a significant difference between the two groups for the /s/ sound in the middle position (P < 0.001) at T3. The semiobjective assessment showed the worst speech performance in the lingual group at T3 for vowels and blends (P < 0.01) and at T3 and T4 for alveolar and palatal consonants (P < 0.01). The subjective assessment also showed a significant difference between the two groups at T3 (P < 0.01) and T4 (P < 0.05). Conclusion: Both appliance systems caused comparable speech difficulty immediately after bonding (T2). Although speech recovered within a week in the labial group (T3), the lingual group continued to experience discomfort even after a month (T4). PMID:25540661
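    The between-group comparison at each time point is a Mann-Whitney U-test for independent samples. A minimal sketch with SciPy; the rating arrays are illustrative, not the study's data.

    ```python
    from scipy.stats import mannwhitneyu

    # Illustrative articulation ratings at T3 (higher = worse distortion).
    labial = [1, 2, 1, 0, 2, 1, 1, 0, 2, 1, 1, 2]
    lingual = [3, 4, 2, 3, 5, 4, 3, 2, 4, 3, 4, 3]

    u_stat, p_value = mannwhitneyu(labial, lingual, alternative="two-sided")
    print(f"U = {u_stat}, p = {p_value:.4f}")  # significant if P < 0.05
    ```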

  9. Autonomic Nervous System Responses During Perception of Masked Speech may Reflect Constructs other than Subjective Listening Effort

    PubMed Central

    Francis, Alexander L.; MacPherson, Megan K.; Chandrasekaran, Bharath; Alvar, Ann M.

    2016-01-01

    Typically, understanding speech seems effortless and automatic. However, a variety of factors may, independently or interactively, make listening more effortful. Physiological measures may help to distinguish between the application of different cognitive mechanisms whose operation is perceived as effortful. In the present study, physiological and behavioral measures associated with task demand were collected, along with behavioral measures of performance, while participants listened to and repeated sentences. The goal was to measure psychophysiological reactivity associated with three degraded listening conditions, each of which differed in the source of the difficulty (distortion, energetic masking, or informational masking) and was therefore expected to engage different cognitive mechanisms. These conditions were chosen to be matched for overall performance (keywords correct) and were compared to listening to unmasked speech produced by a natural voice. The three degraded conditions were: (1) unmasked speech produced by a computer speech synthesizer, (2) speech produced by a natural voice and masked by speech-shaped noise, and (3) speech produced by a natural voice and masked by two-talker babble. Both masked conditions were presented at a -8 dB signal-to-noise ratio (SNR), a level shown in previous research to result in comparable levels of performance for these stimuli and maskers. Performance was measured as the proportion of key words identified correctly, and task demand or effort was quantified subjectively by self-report. Measures of psychophysiological reactivity included electrodermal (skin conductance) response frequency and amplitude, blood pulse amplitude, and pulse rate. Results suggest that the two masked conditions evoked stronger psychophysiological reactivity than the two unmasked conditions, even when behavioral measures of listening performance and listeners’ subjective perception of task demand were comparable across the three degraded conditions. PMID:26973564
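    Presenting a masked condition at a fixed SNR means scaling the masker against the target's RMS level before mixing. A minimal NumPy sketch, assuming equal-length mono waveforms; the signals below are synthetic placeholders.

    ```python
    import numpy as np

    def mix_at_snr(speech: np.ndarray, masker: np.ndarray,
                   snr_db: float = -8.0) -> np.ndarray:
        """Scale masker so that 20*log10(rms(speech)/rms(masker))
        equals snr_db, then return the mixture."""
        def rms(x: np.ndarray) -> float:
            return float(np.sqrt(np.mean(x ** 2)))
        gain = rms(speech) / (rms(masker) * 10.0 ** (snr_db / 20.0))
        return speech + gain * masker

    rng = np.random.default_rng(0)
    speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
    masker = rng.standard_normal(16000)   # noise placeholder
    mixture = mix_at_snr(speech, masker, snr_db=-8.0)
    ```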

  10. Dependency distance minimization in understanding of ambiguous structure. Comment on "Dependency distance: A new perspective on syntactic patterns in natural languages" by Haitao Liu et al.

    NASA Astrophysics Data System (ADS)

    Zhao, Yiyi

    2017-07-01

    Dependency Distance, proposed by Hudson [1] and calculated by Liu [2,3], is an important concept in Dependency Theory. It can be used as a measure of syntactic difficulty, and a large body of research [2,4] has testified to the universality of Dependency Distance across languages. Human languages seem to show a preference for short dependency distances, which may be explained in terms of the general cognitive constraint of limited working memory [5]. Psychological experiments in English, German, Russian and Chinese support the hypothesis that Dependency Distance minimization (DDM) drives languages to evolve syntactic patterns that reduce memory burden [6-9]. Psychological studies focus on the process and mechanism of syntactic structure selection in speech comprehension, and in many speech comprehension experiments [10], ambiguous structures are important experimental materials.
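    Liu's measure is the mean of the absolute linear distances between each word and its syntactic head. A minimal sketch, assuming a sentence encoded as (dependent position, head position) pairs with the root arc excluded; the encoding is ours.

    ```python
    def mean_dependency_distance(arcs: list[tuple[int, int]]) -> float:
        """Mean absolute distance between dependents and their heads,
        with word positions counted from 1 and the root arc excluded."""
        return sum(abs(dep - head) for dep, head in arcs) / len(arcs)

    # "The cat sat on the mat":
    # The->cat, cat->sat, on->sat, the->mat, mat->on
    arcs = [(1, 2), (2, 3), (4, 3), (5, 6), (6, 4)]
    print(mean_dependency_distance(arcs))  # 1.2
    ```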

  11. A Read-Aloud Storybook Selection System for Prereaders at the Preschool Language Level: A Pilot Study

    PubMed Central

    van Kleeck, Anne; Beaton, Derek; Horne, Erin; MacKenzie, Heather; Abdi, Hervé

    2015-01-01

    Purpose Many well-accepted systems for determining difficulty level exist for books children read independently, but few are available for determining the wide range of difficulty levels of storybooks read aloud to preschoolers. Also, the available tools list book characteristics only on the basis of parents' or authors' opinions. We created an empirically derived difficulty-level system on the basis of 22 speech-language pathologists' (SLPs) judgments of specific storybooks used in preschooler read-alouds. Method SLPs sorted 11 storybooks into ranked stacks on the basis of how difficult they thought the storybooks would be for preschoolers to understand when read aloud. SLPs described each stack globally as well as why they assigned each storybook to a particular stack. From transcriptions of the explanations, we derived a glossary of book characteristics using content analysis. We created a difficulty-level scale using a multivariate analysis technique that simultaneously analyzed book sorts and glossary terms. Results The book selection system includes a glossary of book characteristics, a 4-level difficulty scale, and exemplar books for each level. Conclusion This empirically derived difficulty-level system created for storybooks read aloud to preschoolers represents a step toward filling a gap in the read-aloud literature. PMID:26089030

  12. [Language observation protocol for teachers in pre-school education. Effectiveness in the detection of semantic and morphosyntactic difficulties].

    PubMed

    Ygual-Fernández, Amparo; Cervera-Merida, José F; Baixauli-Fortea, Inmaculada; Meliá-De Alba, Amanda

    2011-03-01

    A number of studies have shown that teachers are capable of recognising pupils with language difficulties if they have suitable guidelines or guidance. To determine the effectiveness of an observation-based protocol for pre-school education teachers in the detection of phonetic-phonological, semantic and morphosyntactic difficulties. The sample consisted of 175 children from public and state-subsidised schools in Valencia and its surrounding province, together with their teachers. The children were aged between 3 years and 6 months and 5 years and 11 months. The protocol that was used asks for information about pronunciation skills (intelligibility, articulation), conversational skills (with adults, with peers), literal understanding of sentences, grammatical precision, expression through discourse, lexical knowledge and semantics. There was a significant correlation between the teachers' observations and the criterion scores on intelligibility, literal understanding of sentences, grammatical expression and lexical richness, but not in the observations concerning articulation and verbal reasoning, which were more difficult for the teachers to judge. In general, the observation protocol proved to be effective, it guided the teachers in their observations and it asked them suitable questions about linguistic data that were relevant to the determination of difficulties in language development. The use of this protocol can be an effective strategy for collecting information for use by speech therapists and school psychologists in the early detection of children with language development problems.

  13. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input effects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.

  14. Understanding the Relationship between Social Cognition and Word Difficulty. A Language Based Analysis of Individuals with Autism Spectrum Disorder.

    PubMed

    Aramaki, E; Shikata, S; Miyabe, M; Usuda, Y; Asada, K; Ayaya, S; Kumagaya, S

    2015-01-01

    Few quantitative studies have been conducted on the relationship between society and its languages. Individuals with autistic spectrum disorder (ASD) are known to experience social hardships, and a wide range of clinical information about their quality of life has been provided through numerous narrative analyses. However, the narratives of ASD patients have thus far been examined mainly through qualitative approaches. In this study, we analyzed adults with ASD to quantitatively examine the relationship between language abilities and ASD severity scores. We generated phonetic transcriptions of speeches by 16 ASD adults at an ASD workshop, and divided the participants into 2 groups according to their Social Responsiveness Scale(TM), 2nd Edition (SRS(TM)-2) scores (where higher scores represent more severe ASD): Group A comprised high-scoring ASD adults (SRS(TM)-2 score: ≥ 76) and Group B comprised low- and intermediate-scoring ASD adults (SRS(TM)-2 score: < 76). Using natural language processing (NLP)-based analytical methods, the narratives were converted into numerical data according to four language ability indicators, and the relationships between the language ability scores and ASD severity scores were compared. Group A showed a marginally negative correlation with the level of Japanese word difficulty (p < .10), while the "social cognition" subscale of the SRS(TM)-2 score showed a significantly negative correlation (p < .05) with word difficulty. When comparing only male participants, Group A demonstrated a significantly lower correlation with word difficulty level than Group B (p < .10). Social communication was found to be strongly associated with the level of word difficulty in speech. The clinical applications of these findings may be available in the near future, and there is a need for further detailed study on language metrics designed for ASD adults.
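    One way to quantify "word difficulty" in a transcript is to average graded difficulty levels over the tokens found in a graded lexicon and then correlate the result with severity scores. The sketch below is a generic illustration of that pipeline, not the authors' method; the lexicon and all numbers are hypothetical.

    ```python
    from statistics import mean
    from scipy.stats import spearmanr

    # Hypothetical graded lexicon: word -> difficulty level (1 = easiest).
    lexicon = {"eat": 1, "walk": 1, "consider": 3, "ambiguous": 4}

    def transcript_difficulty(tokens: list[str]) -> float:
        """Mean difficulty over in-lexicon tokens; out-of-vocabulary
        tokens are skipped."""
        levels = [lexicon[t] for t in tokens if t in lexicon]
        return mean(levels) if levels else float("nan")

    # Correlate per-participant difficulty with severity scores.
    difficulties = [2.1, 1.8, 3.0, 2.4]   # illustrative values
    severity = [80, 85, 62, 75]           # illustrative SRS-2 scores
    rho, p = spearmanr(difficulties, severity)
    print(f"rho = {rho:.2f}, p = {p:.3f}")
    ```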

  15. Is there a need for a focused health care service for children with autistic spectrum disorders? A keyhole look at this problem in Tripoli, Libya.

    PubMed

    Zeglam, Adel M; Maouna, Ameena

    2012-07-01

    Autism is a global disorder, but relatively little is known about its presentation and occurrence in many developing countries, including Libya. To estimate the prevalence of autistic spectrum disorders in children referred to Al-Khadra hospital (KH). To increase awareness among pediatricians and primary health care providers of the importance of considering autism in children presenting with speech and language disorders. Prospective hospital-based study of all children referred to a neurodevelopment clinic between 2005 and 2009 with the diagnosis of either speech and language difficulties or behavioral difficulties. A total of 38,508 children were seen in the pediatric outpatient clinics of KH, Tripoli, between 2005 and 2009. Of these, 180 children were referred to the neurodevelopment clinic with a history of delayed speech and language and/or behavioral difficulties. A diagnosis of autism was made in 128 children, which gives a prevalence of approximately 1 in 300 (128 of 38,508 ≈ 1/301). The prevalence of autism in Libya is probably similar to that seen in the USA and the UK. No data were available for comparison from either Arab or other developing countries. Autism is an important differential diagnosis in any child presenting with language disorder and behavioral difficulties.

  16. Children and Adults Integrate Talker and Verb Information in Online Processing

    ERIC Educational Resources Information Center

    Borovsky, Arielle; Creel, Sarah C.

    2014-01-01

    Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a…

  17. The Origin of Mathematics and Number Sense in the Cerebellum: with Implications for Finger Counting and Dyscalculia.

    PubMed

    Vandervert, Larry

    2017-01-01

    Mathematicians and scientists have struggled to adequately describe the ultimate foundations of mathematics. Nobel laureates Albert Einstein and Eugene Wigner were perplexed by this issue, with Wigner concluding that the workability of mathematics in the real world is a mystery we cannot explain. In response to this classic enigma, the major purpose of this article is to provide a theoretical model of the ultimate origin of mathematics and "number sense" (as defined by S. Dehaene) that is proposed to involve the learning of inverse dynamics models through the collaboration of the cerebellum and the cerebral cortex (but prominently cerebellum-driven). This model is based upon (1) the modern definition of mathematics as the "science of patterns," (2) cerebellar sequence (pattern) detection, and (3) findings that the manipulation of numbers is automated in the cerebellum. This cerebro-cerebellar approach does not necessarily conflict with mathematics or number sense models that focus on brain functions associated with especially the intraparietal sulcus region of the cerebral cortex. A direct corollary purpose of this article is to offer a cerebellar inner speech explanation for difficulty in developing "number sense" in developmental dyscalculia. It is argued that during infancy the cerebellum learns (1) a first tier of internal models for a primitive physics that constitutes the foundations of visual-spatial working memory, and (2) a second (and more abstract) tier of internal models based on (1) that learns "number" and relationships among dimensions across the primitive physics of the first tier. Within this context it is further argued that difficulty in the early development of the second tier of abstraction (and "number sense") is based on the more demanding attentional requirements imposed on cerebellar inner speech executive control during the learning of cerebellar inverse dynamics models. Finally, it is argued that finger counting improves (does not originate) "number sense" by extending the focus of attention in executive control of silent cerebellar inner speech. It is suggested that (1) the origin of mathematics has historically been an enigma only because it is learned below the level of conscious awareness in cerebellar internal models, and (2) understanding of the development of "number sense" and developmental dyscalculia can be advanced by first understanding that the ultimate foundations of number and mathematics do not simply originate in the cerebral cortex, but rather in cerebro-cerebellar collaboration (predominantly driven by the cerebellum). It is concluded that difficulty with "number sense" results from the extended demands on executive control in learning inverse dynamics models associated with cerebellar inner speech related to the second tier of abstraction (numbers) of the infant's primitive physics.

  18. Multiple System Atrophy (MSA)

    MedlinePlus

    ... coordination, such as unsteady gait and loss of balance; slurred, slow or low-volume speech (dysarthria); visual disturbances, such as blurred or double vision and difficulty focusing your eyes; difficulty swallowing (dysphagia) or chewing. General signs and symptoms: in addition, the primary sign ...

  19. Glossectomy: a case report.

    PubMed

    Dworkin, J P

    1982-04-01

    A 27-year-old man, a law student, underwent partial glossectomy, right hemimandibulectomy and radical neck dissection for recurrent carcinoma of the oral cavity. These surgical procedures resulted in severe swallowing and speech difficulties, which were managed with tube feeding and speech therapy, respectively. Emphasis in therapy was placed on compensatory articulatory techniques to improve speech intelligibility. The adaptive tongue-stump, labial, and palato-pharyngeal compensations employed are discussed. After 9 months of speech therapy, he was judged to have achieved fair-to-good speech intelligibility and was able to continue law school. At the time of this writing, he was practicing law.

  20. Evidence-based interventions for reading and language difficulties: creating a virtuous circle.

    PubMed

    Snowling, Margaret J; Hulme, Charles

    2011-03-01

    Background: Children may experience two very different forms of reading problem: decoding difficulties (dyslexia) and reading comprehension difficulties. Decoding difficulties appear to be caused by problems with phonological (speech sound) processing. Reading comprehension difficulties, in contrast, appear to be caused by 'higher level' language difficulties, including problems with semantics (such as deficient knowledge of word meanings) and grammar (knowledge of morphology and syntax). Aims: We review evidence concerning the nature, causes of, and treatments for children's reading difficulties. We argue that any well-founded educational intervention must be based on a sound theory of the causes of a particular form of learning difficulty, which in turn must be based on an understanding of how a given skill is learned by typically developing children. Such theoretically motivated interventions should in turn be evaluated in randomized controlled trials (RCTs) to establish whether they are effective, and for whom. Results: There is now considerable evidence showing that phonologically based interventions are effective in ameliorating children's word-level decoding difficulties, and a smaller evidence base showing that reading and oral language (OL) comprehension difficulties can be ameliorated by suitable interventions to boost vocabulary and broader OL skills. Conclusions: The process of developing theories about the origins of children's educational difficulties and evaluating theoretically motivated treatments in RCTs produces a 'virtuous circle': theory informs practice, and the evaluation of effective interventions in turn feeds back to inform and refine theories about the nature and causes of children's reading and language difficulties. ©2010 The British Psychological Society.

  1. Some factors underlying individual differences in speech recognition on PRESTO: a first report.

    PubMed

    Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B

    2013-01-01

    Background: Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. Purpose: To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Research Design: Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Study Sample: Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Data Collection and Analysis: Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). Results: The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. Conclusions: HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. American Academy of Audiology.
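
    The extreme-groups design above is easy to sketch: split participants at the quartiles of PRESTO keyword accuracy, then compare a secondary measure across the two tails. The code below is a hedged illustration on simulated data, not the authors' analysis.

      # Extreme-groups sketch (simulated data): upper vs lower quartile on PRESTO,
      # compared on a secondary measure with a nonparametric test. Requires numpy, scipy.
      import numpy as np
      from scipy.stats import mannwhitneyu

      rng = np.random.default_rng(0)
      presto = rng.normal(70, 10, 40)                   # hypothetical keyword accuracy (%)
      digit_span = presto / 10 + rng.normal(0, 1, 40)   # hypothetical correlated measure

      q1, q3 = np.percentile(presto, [25, 75])
      lo = digit_span[presto <= q1]                     # LoPRESTO listeners
      hi = digit_span[presto >= q3]                     # HiPRESTO listeners

      u, p = mannwhitneyu(hi, lo, alternative="two-sided")
      print(f"U = {u:.1f}, p = {p:.3f}")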

  2. A Dynamic Speech Comprehension Test for Assessing Real-World Listening Ability.

    PubMed

    Best, Virginia; Keidser, Gitte; Freeston, Katrina; Buchholz, Jörg M

    2016-07-01

    Many listeners with hearing loss report particular difficulties with multitalker communication situations, but these difficulties are not well predicted using current clinical and laboratory assessment tools. The overall aim of this work is to create new speech tests that capture key aspects of multitalker communication situations and ultimately provide better predictions of real-world communication abilities and the effect of hearing aids. A test of ongoing speech comprehension introduced previously was extended to include naturalistic conversations between multiple talkers as targets, and a reverberant background environment containing competing conversations. In this article, we describe the development of this test and present a validation study. Thirty listeners with normal hearing participated in this study. Speech comprehension was measured for one-, two-, and three-talker passages at three different signal-to-noise ratios (SNRs), and working memory ability was measured using the reading span test. Analyses were conducted to examine passage equivalence, learning effects, and test-retest reliability, and to characterize the effects of number of talkers and SNR. Although we observed differences in difficulty across passages, it was possible to group the passages into four equivalent sets. Using this grouping, we achieved good test-retest reliability and observed no significant learning effects. Comprehension performance was sensitive to the SNR but did not decrease as the number of talkers increased. Individual performance showed associations with age and reading span score. This new dynamic speech comprehension test appears to be valid and suitable for experimental purposes. Further work will explore its utility as a tool for predicting real-world communication ability and hearing aid benefit. American Academy of Audiology.
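
    Test-retest reliability of the kind reported above is often summarized with a simple correlation between session scores (an intraclass correlation is also common). The snippet below shows the correlational version on invented scores; it is illustrative, not the authors' procedure.

      # Test-retest sketch: correlate comprehension scores from two sessions (invented data).
      from scipy.stats import pearsonr

      session1 = [62, 70, 55, 80, 66, 73, 59, 77]  # hypothetical comprehension scores
      session2 = [60, 72, 57, 78, 64, 75, 61, 74]

      r, p = pearsonr(session1, session2)
      print(f"test-retest r = {r:.2f}, p = {p:.4f}")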

  3. The Acquisition of Standard English Speech Habits Using Second-Language Techniques: An Experiment in Speech Modification and Generalization in the Verbal Behavior of Prison Inmates.

    ERIC Educational Resources Information Center

    McKee, John M.; And Others

    Many people take for granted the use of language as a tool for coping with everyday occupational and social problems. However, there are those, such as prison inmates, who have difficulty using language in this manner. Realizing that prison inmates are not always able to communicate effectively through standard patterns of speech and thus are…

  4. The Effective Teacher's Guide to Autism and Communication Difficulties: Practical Strategies, Second Edition. The Effective Teacher's Guides

    ERIC Educational Resources Information Center

    Farrell, Michael

    2011-01-01

    In this welcome second edition of "The Effective Teacher's Guide to Autism and Communication Difficulties", best-selling author Michael Farrell addresses how teachers and others can develop provision for students with autism and students that have difficulties with speech, grammar, meaning, use of language and comprehension. Updated and expanded,…

  5. Heritability of specific language impairment depends on diagnostic criteria.

    PubMed

    Bishop, D V M; Hayiou-Thomas, M E

    2008-04-01

    Heritability estimates for specific language impairment (SLI) have been inconsistent. Four twin studies reported heritability of 0.5 or more, but a recent report from the Twins Early Development Study found negligible genetic influence in 4-year-olds. We considered whether the method of ascertainment influenced results and found substantially higher heritability if SLI was defined in terms of referral to speech and language pathology services than if defined by language test scores. Further analysis showed that presence of speech difficulties played a major role in determining whether a child had contact with services. Childhood language disorders that are identified by population screening are likely to have a different phenotype and different etiology from clinically referred cases. Genetic studies are more likely to find high heritability if they focus on cases who have speech difficulties and who have been referred for intervention.
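
    As a point of reference for the heritability figures discussed above, Falconer's classic twin-study estimate is h² = 2(rMZ − rDZ), twice the difference between identical- and fraternal-twin correlations. The one-liner below uses invented correlations; the studies cited fit more elaborate structural models.

      # Falconer's heritability estimate from twin correlations (invented values,
      # a simpler method than the models used in the cited studies).
      def falconer_h2(r_mz: float, r_dz: float) -> float:
          """h^2 = 2 * (r_MZ - r_DZ)."""
          return 2.0 * (r_mz - r_dz)

      print(falconer_h2(0.80, 0.55))  # 2 * 0.25 = 0.50, i.e. heritability of about 0.5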

  6. Pre-Lexical Disorders in Repetition Conduction Aphasia

    ERIC Educational Resources Information Center

    Sidiropoulos, Kyriakos; de Bleser, Ria; Ackermann, Hermann; Preilowski, Bruno

    2008-01-01

    At the level of clinical speech/language evaluation, the repetition type of conduction aphasia is characterized by repetition difficulties concomitant with reduced short-term memory capacities, in the presence of fluent spontaneous speech as well as unimpaired naming and reading abilities. It is still unsettled which dysfunctions of the…

  7. Conversational Responsiveness of Speech- and Language-Impaired Preschoolers.

    ERIC Educational Resources Information Center

    Hadley, Pamela A.; Rice, Mabel L.

    1991-01-01

    This study of 18 preschoolers' conversational responsiveness in an integrated classroom setting during free play found that language-impaired and speech-impaired children were ignored by their peers and responded less often when a peer initiated to them. Results suggest that peer interaction difficulties may be concomitant consequences of early…

  8. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    PubMed

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

    In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. To characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties, with an emphasis on manifestations of stuttering and cluttering, a distinction intended to help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies, of which 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific for this population. Together, the results suggest that in this specific group of dysfluent speakers interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific for this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determiner of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc. All rights reserved.
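
    The two quantities driving the diagnostic assignment above, mean articulatory rate and its variability, can be derived directly from syllable durations. The sketch below uses invented durations and is not the study's software.

      # Articulatory rate and rate variability from syllable durations (invented data).
      import numpy as np

      durations = np.array([0.18, 0.22, 0.15, 0.30, 0.16, 0.25])  # seconds per syllable

      rates = 1.0 / durations              # instantaneous rate, syllables per second
      mean_rate = rates.mean()             # mean articulatory rate
      variability = rates.std(ddof=1)      # articulatory-rate variability (SD)

      print(f"mean rate = {mean_rate:.2f} syll/s, SD = {variability:.2f}")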

  9. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    PubMed

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, while retaining the ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury. We describe such a patient. He was able to clearly distinguish between language and nonverbal sounds, and he had no difficulty identifying environmental sounds. However, he did not follow oral commands and could not repeat or write words to dictation. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. Diagnosis: Verbal auditory agnosia. Intervention: He received speech therapy and cognitive rehabilitation during his hospitalization, practicing comprehension of spoken language with written sentences provided alongside. Two months after hospitalization, he had regained the ability to understand some spoken words. Six months after hospitalization, his comprehension of spoken language had improved to an understandable level when the speaker spoke slowly and face to face, but it remained at the word level, not the sentence level. This case teaches that the evaluation of auditory function, in addition to cognition and language function, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia is easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  10. Speech and language delay in two children: an unusual presentation of hyperthyroidism.

    PubMed

    Sohal, Aman P S; Dasarathi, Madhuri; Lodh, Rajib; Cheetham, Tim; Devlin, Anita M

    2013-01-01

    Hyperthyroidism is rare in pre-school children. Untreated, it can have a profound effect on normal growth and development, particularly in the first 2 years of life. Although neurological manifestations of dysthyroid states are well known, specific expressive speech and language disorder as a presentation of hyperthyroidism is rarely documented. Case reports of two children with hyperthyroidism presenting with speech and language delay. We report two pre-school children with hyperthyroidism, who presented with expressive speech and language delay, and demonstrated a significant improvement in their language skills following treatment with anti-thyroid medication. Hyperthyroidism must be considered in all children presenting with speech and language difficulties, particularly expressive speech delay. Prompt recognition and early treatment are likely to improve outcome.

  11. Cognitive abilities relate to self-reported hearing disability.

    PubMed

    Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E

    2013-10-01

    In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
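
    The regression analyses described above have a straightforward structure: each self-reported hearing factor is regressed on age, PTA, SRTN, and the cognitive measures. Below is a hedged sketch with simulated data; the variable names are assumptions, not the authors' code.

      # Multiple regression sketch: predict a self-reported hearing factor from
      # age, PTA, SRTN, and TRT. All data simulated; requires numpy, statsmodels.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 32
      age = rng.normal(65, 8, n)
      pta = rng.normal(45, 10, n)            # pure-tone average, dB HL
      srtn = rng.normal(-3, 2, n)            # speech reception threshold in noise, dB
      trt = rng.normal(60, 5, n)             # text reception threshold, %
      hearing_factor = 0.05 * pta + 0.3 * srtn + rng.normal(0, 1, n)  # simulated outcome

      X = sm.add_constant(np.column_stack([age, pta, srtn, trt]))
      print(sm.OLS(hearing_factor, X).fit().summary())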

  12. A comparison of two treatments for childhood apraxia of speech: methods and treatment protocol for a parallel group randomised control trial

    PubMed Central

    2012-01-01

    Background: Childhood Apraxia of Speech is an impairment of speech motor planning that manifests as difficulty producing the sounds (articulation) and melody (prosody) of speech. These difficulties may persist through life and are detrimental to academic, social, and vocational development. A number of published single subject and case series studies of speech treatments are available. There are currently no randomised control trials or other well designed group trials available to guide clinical practice. Methods/Design: A parallel group, fixed size randomised control trial will be conducted in Sydney, Australia to determine the efficacy of two treatments for Childhood Apraxia of Speech: 1) Rapid Syllable Transition Treatment and the 2) Nuffield Dyspraxia Programme – Third edition. Eligible children will be English speaking, aged 4–12 years with a diagnosis of suspected CAS, normal or adjusted hearing and vision, and no comprehension difficulties or other developmental diagnoses. At least 20 children will be randomised to receive one of the two treatments in parallel. Treatments will be delivered by trained and supervised speech pathology clinicians using operationalised manuals. Treatment will be administered in 1-hour sessions, 4 times per week for 3 weeks. The primary outcomes are speech sound and prosodic accuracy on a customised 292 item probe and the Diagnostic Evaluation of Articulation and Phonology inconsistency subtest administered prior to treatment and 1 week, 1 month and 4 months post-treatment. All post assessments will be completed by blinded assessors. Our hypotheses are: 1) treatment effects at 1 week post will be similar for both treatments, 2) maintenance of treatment effects at 1 and 4 months post will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment, and 3) generalisation of treatment effects to untrained related speech behaviours will be greater for Rapid Syllable Transition Treatment than Nuffield Dyspraxia Programme treatment. This protocol was approved by the Human Research Ethics Committee, University of Sydney (#12924). Discussion: This will be the first randomised control trial to test treatment for CAS. It will be valuable for clinical decision-making and providing evidence-based services for children with CAS. Trial Registration: Australian New Zealand Clinical Trials Registry: ACTRN12612000744853 PMID:22863021
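
    The protocol does not spell out its randomisation scheme, but the basic step, allocating at least 20 children to two parallel arms, can be illustrated as follows. This is a simple balanced shuffle, purely illustrative and not the trial's actual procedure.

      # Illustrative balanced randomisation to two arms (not the trial's scheme).
      import random

      random.seed(42)                          # fixed seed for a reproducible allocation list
      children = [f"P{i:02d}" for i in range(1, 21)]
      arms = ["Rapid Syllable Transition", "Nuffield Dyspraxia Programme"] * 10
      random.shuffle(arms)

      for child, arm in zip(children, arms):
          print(child, arm)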

  13. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    DTIC Science & Technology

    2017-01-05

    Chien, Yu-Ren; Mehta, Daryush D.; Guðnason, Jón; Zañartu, Matías; Quatieri, Thomas F. Glottal inverse filtering aims to ... of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are ...

  14. Supporting Children with Genetic Syndromes in the Classroom: The Example of 22q Deletion Syndrome

    ERIC Educational Resources Information Center

    Reilly, Colin; Stedman, Lindsey

    2013-01-01

    An increasing number of children are likely to have a known genetic cause for their special educational needs. One such genetic condition is 22q11.2 deletion syndrome (22qDS), a genetic syndrome associated with early speech and language difficulties, global and specific cognitive impairments, difficulties with attention and difficulties with…

  15. Musically Tone-Deaf Individuals Have Difficulty Discriminating Intonation Contours Extracted from Speech

    ERIC Educational Resources Information Center

    Patel, Aniruddh D.; Foxton, Jessica M.; Griffiths, Timothy D.

    2005-01-01

    Musically tone-deaf individuals have psychophysical deficits in detecting pitch changes, yet their discrimination of intonation contours in speech appears to be normal. One hypothesis for this dissociation is that intonation contours use coarse pitch contrasts which exceed the pitch-change detection thresholds of tone-deaf individuals (Peretz &…

  16. Perception and Production of Prosody by Speakers with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Paul, Rhea; Augustyn, Amy; Klin, Ami; Volkmar, Fred R.

    2005-01-01

    Speakers with autism spectrum disorders (ASD) show difficulties in suprasegmental aspects of speech production, or "prosody," those aspects of speech that accompany words and sentences and create what is commonly called "tone of voice." However, little is known about the perception of prosody, or about the specific aspects of…

  17. Speech Perception in Noise by Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Caldwell, Amanda; Nittrouer, Susan

    2013-01-01

    Purpose: Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs.…

  18. Communication Deficits and the Motor System: Exploring Patterns of Associations in Autism Spectrum Disorder (ASD)

    ERIC Educational Resources Information Center

    Mody, M.; Shui, A. M.; Nowinski, L. A.; Golas, S. B.; Ferrone, C.; O'Rourke, J. A.; McDougle, C. J.

    2017-01-01

    Many children with autism spectrum disorder (ASD) have notable difficulties in motor, speech and language domains. The connection between motor skills (oral-motor, manual-motor) and speech and language deficits reported in other developmental disorders raises important questions about a potential relationship between motor skills and…

  19. Delivering the Lee Silverman Voice Treatment (LSVT) by Web Camera: A Feasibility Study

    ERIC Educational Resources Information Center

    Howell, Susan; Tripoliti, Elina; Pring, Tim

    2009-01-01

    Background: Speech disorders are a feature of Parkinson's disease, typically worsening as the disease progresses. The Lee Silverman Voice Treatment (LSVT) was developed to address these difficulties. It targets vocal loudness as a means of increasing vocal effort and improving coordination across the subsystems of speech. Aims: Currently LSVT is…

  1. Psychosocial Outcomes at 15 Years of Children with a Preschool History of Speech-Language Impairment

    ERIC Educational Resources Information Center

    Snowling, Margaret J.; Bishop, D. V. M.; Stothard, Susan E.; Chipchase, Barry; Kaplan, Carole

    2006-01-01

    Background: Evidence suggests there is a heightened risk of psychiatric disorder in children with speech-language impairments. However, not all forms of language impairment are strongly associated with psychosocial difficulty, and some psychiatric disorders (e.g., attention deficit/hyperactivity disorder (ADHD)) are more prevalent than others in…

  2. Language and Motor Speech Skills in Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Pirila, Silja; van der Meere, Jaap; Pentikainen, Taina; Ruusu-Niemi, Pirjo; Korpela, Raija; Kilpinen, Jenni; Nieminen, Pirkko

    2007-01-01

    The aim of the study was to investigate associations between the severity of motor limitations, cognitive difficulties, language and motor speech problems in children with cerebral palsy. Also, the predictive power of neonatal cranial ultrasound findings on later outcome was investigated. For this purpose, 36 children (age range 1 year 10 months…

  3. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  4. Communication after laryngectomy: an assessment of quality of life.

    PubMed

    Carr, M M; Schmidbauer, J A; Majaess, L; Smith, R L

    2000-01-01

    The purpose of this study was to examine quality of life in laryngectomees using different methods of communication. A survey was mailed to all the living laryngectomees in Nova Scotia. Patients were asked to rate their ability to communicate in a number of common situations, to rate their difficulty with several communication problems, and to complete the EORTC QLQ-C30 quality-of-life assessment tool. Sixty-two patients responded (return rate of 84%); 57% were using electrolaryngeal speech, 19% esophageal speech, and 8.5% tracheoesophageal speech. These groups were comparable with respect to age, sex, first language, education level, and years since laryngectomy. There were very few differences between these groups in ability to communicate in social situations and no difference in overall quality of life as measured by these scales. The most commonly cited problem was difficulty being heard in a noisy environment. Although tracheoesophageal speech is objectively the most intelligible, it does not seem to confer a measurable improvement in quality of life or in the ability to communicate in everyday situations over electrolaryngeal or esophageal speech.

  5. Speech production in children with Down's syndrome: The effects of reading, naming and imitation.

    PubMed

    Knight, Rachael-Anne; Kurtz, Scilla; Georgiadou, Ioanna

    2015-01-01

    People with Down's syndrome (DS) are known to have difficulties with expressive language, and often have difficulties with intelligibility. They often have stronger visual than verbal short-term memory skills and, therefore, reading has often been suggested as an intervention for speech and language in this population. However, there is as yet no firm evidence that reading can improve speech outcomes. This study aimed to compare reading, picture naming and repetition for the same 10 words, to identify whether the speech of eight children with DS (aged 11-14 years) was more accurate, consistent and intelligible when reading. Results show that the children were slightly, yet significantly, more accurate and intelligible when they read words compared with when they produced those words in the naming or imitation conditions, although the reduction in inconsistency was non-significant. The results of this small-scale study provide tentative support for previous claims about the benefits of reading for children with DS. The mechanisms behind a facilitatory effect of reading are considered, and directions are identified for future research.

  6. Hearing aid and hearing assistance technology use in Aotearoa/New Zealand.

    PubMed

    Kelly-Campbell, Rebecca J; Lessoway, Kamea

    2015-05-01

    The purpose of this study was to describe factors that are related to hearing aid and hearing assistance technology ownership and use in Aotearoa/New Zealand. Adults with hearing impairment living in New Zealand were surveyed regarding health-related quality of life and device usage. Audiometric data (hearing sensitivity and speech in noise) were collected. Data were obtained from 123 adults with hearing impairment: 73 reported current hearing-aid use, 81 reported current hearing assistance technology use. In both analyses, device users had more difficulty understanding speech in background noise, had poor hearing in both their better and worse hearing ears, and perceived more consequences of hearing impairment in their everyday lives (both emotionally and socially) than non-hearing-aid users. Discriminant analyses showed that the social consequences of hearing impairment and the better ear hearing best classified hearing aid users from non-users but social consequences and worse ear hearing best classified hearing assistance technology users from non-users. Quality of life measurements and speech-in-noise assessments provide useful clinical information. Hearing-impaired adults in New Zealand who use hearing aids also tend to use hearing assistance technology, which has important clinical implications.
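
    The discriminant analyses reported above can be sketched with a standard linear discriminant classifier: predict device use from the two best-classifying variables. The data below are simulated and the sketch is only an assumed implementation, not the authors' analysis.

      # Discriminant-analysis sketch: classify hearing-aid users from social
      # consequences and better-ear hearing level. Simulated data; requires numpy, sklearn.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(3)
      n = 123
      social = rng.normal(50, 10, n)           # perceived social consequences score
      better_ear = rng.normal(40, 12, n)       # better-ear hearing level, dB HL
      user = (0.04 * social + 0.03 * better_ear + rng.normal(0, 1, n)) > 3.2  # simulated labels

      X = np.column_stack([social, better_ear])
      lda = LinearDiscriminantAnalysis().fit(X, user)
      print(f"in-sample accuracy = {lda.score(X, user):.2f}")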

  7. Understanding speaker attitudes from prosody by adults with Parkinson's disease.

    PubMed

    Monetta, Laura; Cheang, Henry S; Pell, Marc D

    2008-09-01

    The ability to interpret vocal (prosodic) cues during social interactions can be disrupted by Parkinson's disease (PD), with notable effects on how emotions are understood from speech. This study investigated whether PD patients who have emotional prosody deficits exhibit further difficulties decoding the attitude of a speaker from prosody. Vocally inflected but semantically nonsensical 'pseudo-utterances' were presented to listener groups with and without PD in two separate rating tasks. Task 1 required participants to rate how confident a speaker sounded from their voice, and Task 2 required listeners to rate how polite the speaker sounded for a comparable set of pseudo-utterances. The results showed that PD patients were significantly less able than healthy control (HC) participants to use prosodic cues to differentiate intended levels of speaker confidence in speech, although the patients could accurately detect the polite/impolite attitude of the speaker from prosody in most cases. Our data suggest that many PD patients fail to use vocal cues to effectively infer a speaker's emotions as well as certain attitudes in speech such as confidence, consistent with the idea that the basal ganglia play a role in the meaningful processing of prosodic sequences in spoken language (Pell & Leonard, 2003).

  8. Auditory and cognitive factors underlying individual differences in aided speech-understanding among older adults

    PubMed Central

    Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.

    2013-01-01

    This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. PMID:24098273
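
    The two-stage strategy above, factor-reduce the predictors and then regress the speech-understanding factor on them, can be sketched with PCA standing in for the principal-components factor analysis. Everything below is simulated and only illustrative.

      # Dimension-reduction-then-regression sketch (PCA as a stand-in for
      # principal-components factor analysis). Simulated data; requires numpy, sklearn.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      X = rng.normal(size=(98, 23))                    # 98 subjects; 6 cognitive + 17 psychophysical measures
      y = X[:, :3].sum(axis=1) + rng.normal(0, 1, 98)  # simulated speech-understanding factor

      factors = PCA(n_components=6).fit_transform(X)   # six predictor factors, as in the study
      r2 = LinearRegression().fit(factors, y).score(factors, y)
      print(f"variance explained: R^2 = {r2:.2f}")     # with real, structured data this is far higher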

  9. Speech perception in noise in the elderly: interactions between cognitive performance, depressive symptoms, and education.

    PubMed

    de Carvalho, Laura Maria Araújo; Gonsalez, Elisiane Crestani de Miranda; Iorio, Maria Cecília Martineli

    The difficulty the elderly experience in understanding speech may be related to several factors including cognitive and perceptual performance. To evaluate the influence of cognitive performance, depressive symptoms, and education on speech perception in noise of elderly hearing aids users. The sample consisted of 25 elderly hearing aids users in bilateral adaptation, both sexes, mean age 69.7 years. Subjects underwent cognitive assessment using the Mini-Mental State Examination and the Alzheimer's Disease Assessment Scale-cognitive and depressive symptoms evaluation using the Geriatric Depression Scale. The assessment of speech perception in noise (S/N ratio) was performed in free field using the Portuguese Sentence List test. Statistical analysis included the Spearman correlation calculation and multiple linear regression model, with 95% confidence level and 0.05 significance level. In the study of speech perception in noise (S/N ratio), there was statistically significant correlation between education scores (p=0.018), as well as with the Mini-Mental State Examination (p=0.002), Alzheimer's Disease Assessment Scale-cognitive (p=0.003), and Geriatric Depression Scale (p=0.022) scores. We found that for a one-unit increase in Alzheimer's Disease Assessment Scale-cognitive score, the S/N ratio increased on average 0.15dB, and for an increase of one year in education, the S/N ratio decreased on average 0.40dB. Level of education, cognitive performance, and depressive symptoms influence the speech perception in noise of elderly hearing aids users. The better the cognitive level and the higher the education, the better is the elderly communicative performance in noise. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
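
    The reported coefficients lend themselves to a worked example: each additional ADAS-cog point raises the required S/N ratio by about 0.15 dB (worse performance), while each additional year of education lowers it by about 0.40 dB (better performance). The snippet below applies just these two terms, ignoring the rest of the model.

      # Worked use of the coefficients reported in the abstract (0.15 dB per
      # ADAS-cog point, -0.40 dB per year of education); other terms ignored.
      def snr_change(delta_adas_cog: float, delta_education: float) -> float:
          """Implied change in the speech-in-noise S/N ratio, in dB."""
          return 0.15 * delta_adas_cog - 0.40 * delta_education

      # Four more ADAS-cog points, two more years of schooling:
      print(snr_change(4, 2))  # 0.15*4 - 0.40*2 = -0.2 dB (slightly better)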

  10. Understanding the abstract role of speech in communication at 12 months.

    PubMed

    Martin, Alia; Onishi, Kristine H; Vouloumanos, Athena

    2012-04-01

    Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the understanding that speech permits the transfer of information about a Communicator's target object to a Recipient. Initially, the Communicator selectively grasped one of two objects. In test, the Communicator could no longer reach the objects. She then turned to the Recipient and produced speech (a nonsense word) or non-speech (coughing). Infants looked longer when the Recipient selected the non-target than the target object when the Communicator had produced speech but not coughing (Experiment 1). Looking time patterns differed from the speech condition when the Recipient rather than the Communicator produced the speech (Experiment 2), and when the Communicator produced a positive emotional vocalization (Experiment 3), but did not differ when the Recipient had previously received information about the target by watching the Communicator's selective grasping (Experiment 4). Thus infants understand the information-transferring properties of speech and recognize some of the conditions under which others' information states can be updated. These results suggest that infants possess an abstract understanding of the communicative function of speech, providing an important potential mechanism for language and knowledge acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. [Dichotic digit test. Case].

    PubMed

    Zenker Castro, Franz; Fernández Belda, Rafael; Barajas de Prat, José Juan

    2008-12-01

    In this study we present a case of a 71-year-old female patient with sensorineural hearing loss and fitted with bilateral hearing aids. The patient complained of scant benefit from the hearing aid fitting with difficulties in understanding speech with background noise. The otolaryngology examination was normal. Audiological tests revealed bilateral sensorineural hearing loss with threshold values of 51 and 50 dB HL in the right and left ear. The Dichotic Digit Test was administered in a divided attention mode and focalizing the attention to each ear. Results in this test are consistent with a Central Auditory Processing Disorder.

  12. [Peripheral nervous system and speech disorders].

    PubMed

    Ferri, Lluís

    2014-02-24

    Disorders affecting the lower motor neurons in childhood, whether of congenital or acquired aetiology, give rise to difficulties in neuromotor response and, therefore, to motor disorders affecting speech in a period that is especially critical for the development of language. The low incidence of this pathology, its comorbidity with other brain conditions and its uncertain prognosis make it a particularly interesting area of study. The purpose of this work is to review the motor disorders affecting speech in flaccid dysarthria, together with its functional evaluation and speech therapy interventions. The study aims to carry out the clinical characterisation of the disorders of verbal production of peripheral origin, and more specifically flaccid dysarthria and its respiratory, phonatory, resonance, articulatory and prosodic manifestations. The analysis then outlines the functional evaluation, and lines of intervention for treatment are proposed. The clinical manifestations of flaccid dysarthria are very heterogeneous and range from very slight difficulties in articulation to severe disorders that seriously limit the capacity for verbal expression. In most cases, a functional examination yields valuable findings for identification and classification, for determining the need for complementary evaluations and for establishing the most suitable programme of speech therapy. The guided participation of the family and an interdisciplinary approach play a decisive role in improving these processes.

  13. Student diversity and implications for clinical competency development amongst domestic and international speech-language pathology students.

    PubMed

    Attrill, Stacie; Lincoln, Michelle; McAllister, Sue

    2012-06-01

    International students graduating from speech-language pathology university courses must achieve the same minimum competency standards as domestic students. This study aimed to collect descriptive information about the number, origin, and placement performance of international students, as well as perceptions of the performance of international students on placement. University Clinical Education Coordinators (CECs), who manage clinical placements in eight undergraduate and six graduate entry programs across the 10 participating universities in Australia and New Zealand, completed a survey about 3455 international and domestic speech-language pathology students. Survey responses were analysed quantitatively and qualitatively with non-parametric statistics and thematic analysis. Results indicated that international students came from a variety of countries, but with a regional focus on the countries of Central and Southern Asia. Although domestic students were noted to experience significantly less placement failure, fewer supplementary placements, and reduced additional placement support than international students, the effect size of these relationships was consistently small and therefore weak. CECs rated international students as more frequently experiencing difficulties with communication competencies on placement. However, CECs' qualitative comments revealed that culturally and linguistically diverse (CALD) students may experience more difficulties with speech-language pathology competency development than international students. Students' CALD status should be included in future investigations of factors influencing speech-language pathology competency development.

  14. When does speech sound disorder matter for literacy? The role of disordered speech errors, co-occurring language impairment and family risk of dyslexia.

    PubMed

    Hayiou-Thomas, Marianna E; Carroll, Julia M; Leavett, Ruth; Hulme, Charles; Snowling, Margaret J

    2017-02-01

    This study considers the role of early speech difficulties in literacy development, in the context of additional risk factors. Children were identified with speech sound disorder (SSD) at the age of 3½ years, on the basis of performance on the Diagnostic Evaluation of Articulation and Phonology. Their literacy skills were assessed at the start of formal reading instruction (age 5½), using measures of phoneme awareness, word-level reading and spelling; and 3 years later (age 8), using measures of word-level reading, spelling and reading comprehension. The presence of early SSD conferred a small but significant risk of poor phonemic skills and spelling at the age of 5½ and of poor word reading at the age of 8. Furthermore, within the group with SSD, the persistence of speech difficulties to the point of school entry was associated with poorer emergent literacy skills, and children with 'disordered' speech errors had poorer word reading skills than children whose speech errors indicated 'delay'. In contrast, the initial severity of SSD was not a significant predictor of reading development. Beyond the domain of speech, the presence of a co-occurring language impairment was strongly predictive of literacy skills and having a family risk of dyslexia predicted additional variance in literacy at both time-points. Early SSD alone has only modest effects on literacy development but when additional risk factors are present, these can have serious negative consequences, consistent with the view that multiple risks accumulate to predict reading disorders. © 2016 The Authors. Journal of Child Psychology and Psychiatry published by John Wiley & Sons Ltd on behalf of Association for Child and Adolescent Mental Health.

  15. The development of co-speech gesture in the communication of children with autism spectrum disorders.

    PubMed

    Sowden, Hannah; Clegg, Judy; Perkins, Michael

    2013-12-01

    Co-speech gestures have a close semantic relationship to speech in adult conversation. In typically developing children co-speech gestures which give additional information to speech facilitate the emergence of multi-word speech. A difficulty with integrating audio-visual information is known to exist for individuals with Autism Spectrum Disorder (ASD), which may affect development of the speech-gesture system. A longitudinal observational study was conducted with four children with ASD, aged 2;4 to 3;5 years. Participants were video-recorded for 20 min every 2 weeks during their attendance on an intervention programme. Recording continued for up to 8 months, thus affording a rich analysis of gestural practices from pre-verbal to multi-word speech across the group. All participants combined gesture with either speech or vocalisations. Co-speech gestures providing additional information to speech were observed to be either absent or rare. Findings suggest that children with ASD do not make use of the facilitating communicative effects of gesture in the same way as typically developing children.

  16. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    PubMed

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task were coded, on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while the balance leans more toward speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry in gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable regarding the higher understanding levels. Gestures and speech are more synchronized in time as children are older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and speech. The picture that emerges from our analyses suggests that the relation between gestures, speech and cognition is more complex than previously thought. We suggest that temporal differences and asymmetry in influence between gestures and speech arise from simultaneous coordination of synergies.
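
    At its core, the cross recurrence quantification analysis above starts from a cross-recurrence matrix over the two coded time series. The sketch below builds that matrix for invented skill-level codes and reports the recurrence rate; the full CRQA measures used in the study (and the dedicated CRQA packages) go considerably further.

      # Minimal cross-recurrence sketch for two coded skill-level series (invented data).
      import numpy as np

      gesture = np.array([1, 2, 2, 3, 3, 4, 4, 5])   # coded gesture complexity over time
      speech = np.array([1, 1, 2, 2, 3, 3, 4, 4])    # coded speech complexity over time

      # Two time points "recur" when the coded levels match exactly
      cross_recurrence = gesture[:, None] == speech[None, :]
      print(f"recurrence rate = {cross_recurrence.mean():.2f}")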

  17. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity

    PubMed Central

    Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh

    2016-01-01

    Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814

  18. Association of Velopharyngeal Insufficiency With Quality of Life and Patient-Reported Outcomes After Speech Surgery.

    PubMed

    Bhuskute, Aditi; Skirko, Jonathan R; Roth, Christina; Bayoumi, Ahmed; Durbin-Johnson, Blythe; Tollefson, Travis T

    2017-09-01

    Patients with cleft palate and other causes of velopharyngeal insufficiency (VPI) suffer adverse effects on social interactions and communication. Measurement of these patient-reported outcomes is needed to help guide surgical and nonsurgical care. To further validate the VPI Effects on Life Outcomes (VELO) instrument, measure the change in quality of life (QOL) after speech surgery, and test the association of change in speech with change in QOL. Prospective descriptive cohort including children and young adults undergoing speech surgery for VPI in a tertiary academic center. Participants completed the validated VELO instrument before and after surgical treatment. The main outcome measures were preoperative and postoperative VELO scores and the perceptual speech assessment of speech intelligibility. The VELO scores are divided into subscale domains. Changes in VELO scores after surgery were analyzed using linear regression models. VELO scores were analyzed as a function of speech intelligibility, adjusting for age and cleft type. The correlation between speech intelligibility rating and VELO scores was estimated using the polyserial correlation. Twenty-nine patients (13 males and 16 females) were included. Mean (SD) age was 7.9 (4.1) years (range, 4-20 years). Pharyngeal flap was used in 14 cases (48%), Furlow palatoplasty in 12 (41%), and sphincter pharyngoplasty in 1 (3%). The mean (SD) preoperative speech intelligibility rating was 1.71 (1.08), which decreased postoperatively to 0.79 (0.93) in the 24 patients who completed the protocol (P < .01). The VELO scores improved after surgery (P < .001), as did most subscale scores. Caregiver impact did not change after surgery (P = .36). Speech intelligibility was correlated with preoperative and postoperative total VELO scores (P < .01), with the preoperative subscale domains of situational difficulty (VELO-SiD, P = .005) and perception by others (VELO-PO, P = .05), and with the postoperative subscale domains (VELO-SiD, P = .03; VELO-PO, P = .003). Neither the change in VELO total score nor in subscale scores after surgery was correlated with change in speech intelligibility. Speech surgery improves VPI-specific quality of life. We confirmed validation in a population of untreated patients with VPI and included pharyngeal flap surgery, which had not previously been included in validation studies. The VELO instrument provides patient-specific outcomes, which allows a broader understanding of the social, emotional, and physical effects of VPI. Level of evidence: 2.

  19. Individual Differences in Speech and Language Ability Profiles in Areas of High Deprivation

    ERIC Educational Resources Information Center

    Jordan, Julie-Ann; Coulter, Lorraine

    2017-01-01

    Speech and language ability is not a unitary concept; rather, it is made up of multiple abilities such as grammar, articulation and vocabulary. Young children from socio-economically deprived areas are more likely to experience language difficulties than those living in more affluent areas. However, less is known about individual differences in…

  20. A Multivariate Analytic Approach to the Differential Diagnosis of Apraxia of Speech

    ERIC Educational Resources Information Center

    Basilakos, Alexandra; Yourganov, Grigori; den Ouden, Dirk-Bart; Fogerty, Daniel; Rorden, Chris; Feenaughty, Lynda; Fridriksson, Julius

    2017-01-01

    Purpose: Apraxia of speech (AOS) is a consequence of stroke that frequently co-occurs with aphasia. Its study is limited by difficulties with its perceptual evaluation and dissociation from co-occurring impairments. This study examined the classification accuracy of several acoustic measures for the differential diagnosis of AOS in a sample of…

  1. JPRS Report, East Europe.

    DTIC Science & Technology

    1987-09-23

    Excerpts concern the assessment of national economic impact, noting that the cooperation of specialists from research institutes is required and that determining the applicability area and applicability volume presents repeated practical difficulties. Contents include speeches by Constantin Dascalescu and Constantin Olteanu, and the Defense Minister's Order of the Day by Vasile Milea.

  2. Age-Related Changes in Objective and Subjective Speech Perception in Complex Listening Environments

    ERIC Educational Resources Information Center

    Helfer, Karen S.; Merchant, Gabrielle R.; Wasiuk, Peter A.

    2017-01-01

    Purpose: A frequent complaint by older adults is difficulty communicating in challenging acoustic environments. The purpose of this work was to review and summarize information about how speech perception in complex listening situations changes across the adult age range. Method: This article provides a review of age-related changes in speech…

  3. Conduction Aphasia, Sensory-Motor Integration, and Phonological Short-Term Memory--An Aggregate Analysis of Lesion and fMRI Data

    ERIC Educational Resources Information Center

    Buchsbaum, Bradley R.; Baldo, Juliana; Okada, Kayoko; Berman, Karen F.; Dronkers, Nina; D'Esposito, Mark; Hickok, Gregory

    2011-01-01

    Conduction aphasia is a language disorder characterized by frequent speech errors, impaired verbatim repetition, a deficit in phonological short-term memory, and naming difficulties in the presence of otherwise fluent and grammatical speech output. While traditional models of conduction aphasia have typically implicated white matter pathways,…

  4. What Factors Place Children with Speech Sound Disorders at Risk for Reading Problems?

    ERIC Educational Resources Information Center

    Anthony, Jason L.; Aghara, Rachel Greenblatt; Dunkelberger, Martha J.; Anthony, Teresa I.; Williams, Jeffrey M.; Zhang, Zhou

    2011-01-01

    Purpose: To identify weaknesses in print awareness and phonological processing that place children with speech sound disorders (SSDs) at increased risk for reading difficulties. Method: Language, literacy, and phonological skills of 3 groups of preschool-age children were compared: a group of 68 children with SSDs, a group of 68 peers with normal…

  5. Children's Recognition of Their Own Recorded Voice: Influence of Age and Phonological Impairment

    ERIC Educational Resources Information Center

    Strombergsson, Sofia

    2013-01-01

    Children with phonological impairment (PI) often have difficulties perceiving insufficiencies in their own speech. The use of recordings has been suggested as a way of directing the child's attention toward his/her own speech, despite a lack of evidence that children actually recognize their recorded voice as their own. We present two studies of…

  6. Interaction matters: A perceived social partner alters the neural processing of human speech.

    PubMed

    Rice, Katherine; Redcay, Elizabeth

    2016-04-01

    Mounting evidence suggests that social interaction changes how communicative behaviors (e.g., spoken language, gaze) are processed, but the precise neural bases by which social-interactive context may alter communication remain unknown. Various perspectives suggest that live interactions are more rewarding, more attention-grabbing, or require increased mentalizing, that is, thinking about the thoughts of others. Dissociating between these possibilities is difficult because most extant neuroimaging paradigms examining social interaction have not directly compared live paradigms to conventional "offline" (or recorded) paradigms. We developed a novel fMRI paradigm to assess whether and how an interactive context changes the processing of speech matched in content and vocal characteristics. Participants listened to short vignettes, which contained no reference to people or mental states, believing that some vignettes were prerecorded and that others were presented over a real-time audio feed by a live social partner. In actuality, all speech was prerecorded. Simply believing that speech was live increased activation in each participant's own mentalizing regions, defined using a functional localizer. Contrasting live to recorded speech did not reveal significant differences in attention or reward regions. Further, higher levels of autistic-like traits were associated with altered neural specialization for live interaction. These results suggest that humans engage in ongoing mentalizing about social partners, even when such mentalizing is not explicitly required, illustrating how social context shapes social cognition. Understanding communication in social context has important implications for typical and atypical social processing, especially for disorders like autism where social difficulties are more acute in live interaction.

  7. Acute stress reduces speech fluency.

    PubMed

    Buchanan, Tony W; Laures-Gore, Jacqueline S; Duff, Melissa C

    2014-03-01

    People often report word-finding difficulties and other language disturbances when put in a stressful situation. There is, however, scant empirical evidence to support the claim that stress affects speech productivity. To address this issue, we measured speech and language variables during a stressful Trier Social Stress Test (TSST) as well as during a less stressful "placebo" TSST (Het et al., 2009). Participants showed higher word productivity during the stressful TSST than during the non-stressful speech. By contrast, participants paused more during the stressful TSST, an effect that was especially pronounced in participants who produced a larger cortisol and heart rate response to the stressor. Findings support anecdotal evidence of stress-impaired speech production abilities.

  8. Contributions of speech science to the technology of man-machine voice interactions

    NASA Technical Reports Server (NTRS)

    Lea, Wayne A.

    1977-01-01

    Research in speech understanding was reviewed. Plans which include prosodics research, phonological rules for speech understanding systems, and continued interdisciplinary phonetics research are discussed. Improved acoustic phonetic analysis capabilities in speech recognizers are suggested.

  9. Awareness and reactions of young stuttering children aged 2-7 years old towards their speech disfluency.

    PubMed

    Boey, Ronny A; Van de Heyning, Paul H; Wuyts, Floris L; Heylen, Louis; Stoop, Reinhard; De Bodt, Marc S

    2009-01-01

    Awareness has been an important factor in theories of the onset and development of stuttering. It has been suggested that even young children may be aware of their speech difficulty. The purpose of the present study was to investigate (a) the number of stuttering children aware of their speech difficulty, (b) the reported behavioural expressions of awareness, and (c) the relationship of awareness with age-related variables and with stuttering severity. For a total group of 1122 children with a mean age of 4 years 7 months (range 2-7 years), parent-reported unambiguous verbal and non-verbal reactions to stuttering were available. In the present study, awareness was observed in 56.7% of the youngest children (i.e., 2 years old) and gradually increased with age, up to 89.7% of the children at the age of seven. All considered age-related factors (i.e., chronological age, age at onset and time since onset) and stuttering severity were statistically significantly related to awareness. Readers will be able to: (1) describe findings on awareness of speech disfluency in stuttering children based on an overview of the literature; (2) describe methodological aspects of studies on awareness; (3) report the present study's data on awareness of speech disfluency in young stuttering children; (4) describe the relationship of awareness of speech disfluency with chronological age, age at onset, time since onset, gender and stuttering severity.
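
    The reported link between awareness and age can be illustrated with a simple logistic regression of the binary awareness judgement on age; a minimal sketch with hypothetical data, not the study's actual analysis.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical data: child age in years, awareness (1 = parent-reported
      # unambiguous reaction to stuttering, 0 = none)
      age   = np.array([[2], [2], [2], [3], [3], [4], [4], [5], [5], [6], [6], [7]])
      aware = np.array([ 0,   1,   1,   0,   1,   1,   1,   0,   1,   1,   1,   1])

      model = LogisticRegression().fit(age, aware)
      for a in range(2, 8):                 # predicted P(aware) at each age
          print(a, round(model.predict_proba([[a]])[0, 1], 2))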

  10. Procedures for Obtaining and Analyzing Writing Samples of School-Age Children and Adolescents

    ERIC Educational Resources Information Center

    Price, Johanna R.; Jackson, Sandra C.

    2015-01-01

    Purpose: Many students' writing skills are below grade-level expectations, and students with oral language difficulties are at particular risk for writing difficulties. Speech-language pathologists' (SLPs') expertise in language applies to both the oral and written modalities, yet evidence suggests that SLPs' confidence regarding writing…

  11. Are Communication Strategies Teachable?

    ERIC Educational Resources Information Center

    Lewis, Samantha

    2011-01-01

    This article discusses the teachability of communication strategies in the EFL classroom. As well as reflecting on the nature of speech production in the mother tongue, it looks at some of the difficulties encountered when speaking in a foreign language and the inherent difficulties in "teaching" speaking as a skill. It focuses on different types…

  12. Orthography Influences the Perception and Production of Speech

    ERIC Educational Resources Information Center

    Rastle, Kathleen; McCormick, Samantha F.; Bayliss, Linda; Davis, Colin J.

    2011-01-01

    One intriguing question in language research concerns the extent to which orthographic information impacts on spoken word processing. Previous research has faced a number of methodological difficulties and has not reached a definitive conclusion. Our research addresses these difficulties by capitalizing on recent developments in the area of word…

  13. Clinical Reasoning Skills of Speech and Language Therapy Students

    ERIC Educational Resources Information Center

    Hoben, Kirsty; Varley, Rosemary; Cox, Richard

    2007-01-01

    Background: Difficulties experienced by novices in clinical reasoning have been well documented in many professions, especially medicine (Boshuizen and Schmidt 1992, 2000; Elstein, Shulman and Sprafka 1978; Patel and Groen 1986; Rikers, Loyens and Schmidt 2004). These studies have shown that novice clinicians have difficulties with both knowledge…

  14. Assistive Software Tools for Secondary-Level Students with Literacy Difficulties

    ERIC Educational Resources Information Center

    Lange, Alissa A.; McPhillips, Martin; Mulhern, Gerry; Wylie, Judith

    2006-01-01

    The present study assessed the compensatory effectiveness of four assistive software tools (speech synthesis, spellchecker, homophone tool, and dictionary) on literacy. Secondary-level students (N = 93) with reading difficulties completed computer-based tests of literacy skills. Training on their respective software followed for those assigned to…

  15. Parent-child interaction in motor speech therapy.

    PubMed

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool-age children with speech sound disorders were provided either high-intensity (2×/week/10 weeks) or low-intensity (1×/week/10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item, followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items; a greater number of sessions positively influenced both parents' learning of treatment skills and children's behaviors. The adapted PCIOs is reliable and sensitive for monitoring the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation Parent-centered therapy is considered a cost-effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of parent-child interactions during intervention. In developmental motor speech interventions, high-intensity treatment (2×/week/10 weeks) facilitates greater changes in parent-child interactions than low-intensity treatment (1×/week/10 weeks). On the one hand, parents may need to attend more than five sessions with the clinician to learn how to observe and address their child's speech difficulties. On the other hand, children with speech sound disorders may need more than 10 sessions to adapt to structured play settings, even when activities and therapy materials are age-appropriate.
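
    The two statistics named in the abstract are straightforward to reproduce in outline: Cohen's kappa for inter-rater agreement and a Friedman test (with Wilcoxon follow-ups) for sensitivity to change. A minimal sketch with hypothetical ratings, not the study's data.

      from sklearn.metrics import cohen_kappa_score
      from scipy.stats import friedmanchisquare, wilcoxon

      # Hypothetical PCIO item ratings for the same videotaped sessions
      clinician   = [3, 2, 4, 4, 1, 3, 2, 4]
      independent = [3, 2, 3, 4, 1, 2, 2, 4]
      print("kappa =", cohen_kappa_score(clinician, independent))

      # Hypothetical scores on one item at the beginning, middle, and end of
      # treatment (one value per child); Friedman tests for overall change
      begin, middle, end = [1, 2, 1, 2, 1], [2, 2, 2, 3, 2], [3, 3, 2, 4, 3]
      stat, p = friedmanchisquare(begin, middle, end)
      print("Friedman:", stat, p)
      if p < 0.05:                          # pairwise follow-up comparison
          print("begin vs end:", wilcoxon(begin, end))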

  16. Speech-to-Speech Relay Service

    MedlinePlus

    ... are specifically trained in understanding a variety of speech disorders, which enables them to repeat what the caller says in a manner that makes the caller’s words clear and understandable to the ... people with speech disabilities cannot communicate by telephone because the parties ...

  17. Speech Understanding with a New Implant Technology: A Comparative Study with a New Nonskin Penetrating Baha System

    PubMed Central

    Caversaccio, Marco

    2014-01-01

    Objective. To compare hearing and speech understanding between a new, nonskin-penetrating Baha system (Baha Attract) and the current Baha system using a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz and increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314

  18. Generalized auditory agnosia with spared music recognition in a left-hander. Analysis of a case with a right temporal stroke.

    PubMed

    Mendez, M F

    2001-02-01

    After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed a greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single-syllable words, and minimal written cues greatly facilitated his verbal comprehension. In identifying environmental sounds, his errors were predominantly acoustic. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbycusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.

  19. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    NASA Astrophysics Data System (ADS)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two to three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.

  20. Effects of context and word class on lexical retrieval in Chinese speakers with anomic aphasia.

    PubMed

    Law, Sam-Po; Kong, Anthony Pak-Hin; Lai, Loretta Wing-Shan; Lai, Christy

    2015-01-01

    Differences in processing nouns and verbs have been investigated intensely in psycholinguistics and neuropsychology in past decades. However, the majority of studies examining retrieval of these word classes have involved tasks of single word stimuli or responses. While the results have provided rich information for addressing issues about grammatical class distinctions, it is unclear whether they have adequate ecological validity for understanding lexical retrieval in connected speech which characterizes daily verbal communication. Previous investigations comparing retrieval of nouns and verbs in single word production and connected speech have reported either discrepant performance between the two contexts with presence of word class dissociation in picture naming but absence in connected speech, or null effects of word class. In addition, word finding difficulties have been found to be less severe in connected speech than picture naming. However, these studies have failed to match target stimuli of the two word classes and between tasks on psycholinguistic variables known to affect performance in response latency and/or accuracy. The present study compared lexical retrieval of nouns and verbs in picture naming and connected speech from picture description, procedural description, and story-telling among 19 Chinese speakers with anomic aphasia and their age, gender, and education matched healthy controls, to understand the influence of grammatical class on word production across speech contexts when target items were balanced for confounding variables between word classes and tasks. Elicitation of responses followed the protocol of the AphasiaBank consortium (http://talkbank.org/AphasiaBank/). Target words for confrontation naming were based on well-established naming tests, while those for narrative were drawn from a large database of normal speakers. Selected nouns and verbs in the two contexts were matched for age-of-acquisition (AoA) and familiarity. Influence of imageability was removed through statistical control. When AoA and familiarity were balanced, nouns were retrieved better than verbs, and performance was higher in picture naming than connected speech. When imageability was further controlled for, only the effect of task remained significant. The absence of word class effects when confounding variables are controlled for is similar to many previous reports; however, the pattern of better word retrieval in naming is rare but compatible with the account that processing demands are higher in narrative than naming. The overall findings have strongly suggested the importance of including connected speech tasks in any language assessment and evaluation of language rehabilitation of individuals with aphasia.

  1. Effects of context and word class on lexical retrieval in Chinese speakers with anomic aphasia

    PubMed Central

    Law, Sam-Po; Kong, Anthony Pak-Hin; Lai, Loretta Wing-Shan; Lai, Christy

    2014-01-01

    Background Differences in processing nouns and verbs have been investigated intensely in psycholinguistics and neuropsychology in past decades. However, the majority of studies examining retrieval of these word classes have involved tasks of single word stimuli or responses. While the results have provided rich information for addressing issues about grammatical class distinctions, it is unclear whether they have adequate ecological validity for understanding lexical retrieval in connected speech which characterizes daily verbal communication. Previous investigations comparing retrieval of nouns and verbs in single word production and connected speech have reported either discrepant performance between the two contexts with presence of word class dissociation in picture naming but absence in connected speech, or null effects of word class. In addition, word finding difficulties have been found to be less severe in connected speech than picture naming. However, these studies have failed to match target stimuli of the two word classes and between tasks on psycholinguistic variables known to affect performance in response latency and/or accuracy. Aims The present study compared lexical retrieval of nouns and verbs in picture naming and connected speech from picture description, procedural description, and story-telling among 19 Chinese speakers with anomic aphasia and their age, gender, and education matched healthy controls, to understand the influence of grammatical class on word production across speech contexts when target items were balanced for confounding variables between word classes and tasks. Methods & Procedures Elicitation of responses followed the protocol of the AphasiaBank consortium (http://talkbank.org/AphasiaBank/). Target words for confrontation naming were based on well-established naming tests, while those for narrative were drawn from a large database of normal speakers. Selected nouns and verbs in the two contexts were matched for age-of-acquisition (AoA) and familiarity. Influence of imageability was removed through statistical control. Outcomes & Results When AoA and familiarity were balanced, nouns were retrieved better than verbs, and performance was higher in picture naming than connected speech. When imageability was further controlled for, only the effect of task remained significant. Conclusions The absence of word class effects when confounding variables are controlled for is similar to many previous reports; however, the pattern of better word retrieval in naming is rare but compatible with the account that processing demands are higher in narrative than naming. The overall findings have strongly suggested the importance of including connected speech tasks in any language assessment and evaluation of language rehabilitation of individuals with aphasia. PMID:25505810

  2. Speech Communication and Communication Processes: Abstracts of Doctoral Dissertations Published in "Dissertation Abstracts International," April and May 1978 (Vol. 38 Nos. 10 and 11).

    ERIC Educational Resources Information Center

    ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.

    This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 25 titles deal with a variety of topics, including the following: the nature of creativity in advertising communication; speech communication difficulties of international professors; rhetorical arguments regarding the…

  3. Meeting the Needs of Children and Young People with Speech, Language and Communication Difficulties

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie; Desforges, Martin; Law, James; Peacey, Nick

    2010-01-01

    Background: The UK government set up a review of provision for children and young people with the full range of speech, language and communication needs led by a Member of Parliament, John Bercow. A research study was commissioned to provide empirical evidence to inform the Bercow Review. Aims: To examine the efficiency and effectiveness of…

  4. Comparing the Impact of Rates of Text-to-Speech Software on Reading Fluency and Comprehension for Adults with Reading Difficulties

    ERIC Educational Resources Information Center

    Coleman, Mari Beth; Killdare, Laura K.; Bell, Sherry Mee; Carter, Amanda M.

    2014-01-01

    The purpose of this study was to determine the impact of text-to-speech software on reading fluency and comprehension for four postsecondary students with below average reading fluency and comprehension including three students diagnosed with learning disabilities and concomitant conditions (e.g., attention deficit hyperactivity disorder, seizure…

  5. A Letter to the Parent(s) of a Child with Developmental Apraxia of Speech. Part III: Other Problems Often Associated with the Disorder.

    ERIC Educational Resources Information Center

    Hall, Penelope K.

    2000-01-01

    One of a series of letters to parents of children with developmental apraxia of speech, this letter discusses other problems associated with the disorder including language development problems, academic problems, motor skill problems, and chewing and swallowing difficulties. An annotated bibliography of two further readings for parents is…

  6. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two parts. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we focus in particular on talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
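
    The CSP coefficient at the heart of the proposed DOA estimator is the same quantity as the GCC-PHAT cross-correlation. The sketch below (an illustration, not the authors' code) estimates the inter-microphone time delay from the phase of the cross-power spectrum and converts it to a far-field arrival angle; the microphone spacing d is an assumed parameter.

      import numpy as np

      def csp_time_delay(x, y, fs):
          # Whitened (phase-only) cross-power spectrum, a.k.a. CSP / GCC-PHAT
          n = len(x) + len(y)
          R = np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n))
          R /= np.abs(R) + 1e-12          # keep phase, discard magnitude
          cc = np.fft.irfft(R, n)         # CSP coefficients vs. lag
          shift = int(np.argmax(np.abs(cc)))
          if shift > n // 2:              # map wrap-around bins to negative lags
              shift -= n
          return shift / fs               # delay in seconds

      def doa_degrees(tau, d=0.1, c=343.0):
          # Far-field model: tau = d * sin(theta) / c
          return float(np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0))))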

  7. Speech intelligibility index predictions for young and old listeners in automobile noise: Can the index be improved by incorporating factors other than absolute threshold?

    NASA Astrophysics Data System (ADS)

    Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don

    2003-10-01

    While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, older subjects have more difficulty than younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics used to measure speech intelligibility, including the Speech Intelligibility Index (SII), consider only an absolute threshold when accounting for age-related hearing loss. These metrics therefore tend to overestimate performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
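
    In its simplest form, the SII is a band-importance-weighted sum of band audibilities; the sketch below follows that general ANSI S3.5 shape, mapping each band's SNR linearly from [-15, +15] dB onto [0, 1]. The band-importance weights and SNRs shown are illustrative placeholders, not the standard's values; incorporating factors beyond absolute threshold, as proposed above, would amount to modifying the audibility term.

      import numpy as np

      def sii(snr_db, importance):
          # Band audibility: SNR clipped to [-15, +15] dB, rescaled to [0, 1]
          audibility = np.clip((np.asarray(snr_db, float) + 15.0) / 30.0, 0.0, 1.0)
          w = np.asarray(importance, float)
          return float(np.sum(w / w.sum() * audibility))

      # Hypothetical octave-band SNRs for speech in automobile noise
      snr_db  = [-5.0, 0.0, 5.0, 10.0, 12.0]    # 250 Hz ... 4 kHz
      weights = [0.15, 0.25, 0.25, 0.20, 0.15]  # illustrative, not ANSI values
      print(sii(snr_db, weights))               # 0 = inaudible, 1 = fully audible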

  8. Motor Speech Disorders Associated with Primary Progressive Aphasia

    PubMed Central

    Duffy, Joseph R.; Strand, Edythe A.; Josephs, Keith A.

    2014-01-01

    Background Primary progressive aphasia (PPA) and conditions that overlap with it can be accompanied by motor speech disorders. Recognition and understanding of motor speech disorders can contribute to a fuller clinical understanding of PPA and its management as well as its localization and underlying pathology. Aims To review the types of motor speech disorders that may occur with PPA, its primary variants, and its overlap syndromes (progressive supranuclear palsy syndrome, corticobasal syndrome, motor neuron disease), as well as with primary progressive apraxia of speech. Main Contribution The review should assist clinicians' and researchers' understanding of the relationship between motor speech disorders and PPA and its major variants. It also highlights the importance of recognizing neurodegenerative apraxia of speech as a condition that can occur with little or no evidence of aphasia. Conclusion Motor speech disorders can occur with PPA. Their recognition can contribute to clinical diagnosis and management of PPA and to understanding and predicting the localization and pathology associated with PPA variants and conditions that can overlap with them. PMID:25309017

  9. Social dominance orientation, nonnative accents, and hiring recommendations.

    PubMed

    Hansen, Karolina; Dovidio, John F

    2016-10-01

    Discrimination against nonnative speakers is widespread and largely socially acceptable. Nonnative speakers are evaluated negatively because accent is a sign that they belong to an outgroup and because understanding their speech requires unusual effort from listeners. The present research investigated intergroup bias, based on stronger support for hierarchical relations between groups (social dominance orientation [SDO]), as a predictor of hiring recommendations of nonnative speakers. In an online experiment using an adaptation of the thin-slices methodology, 65 U.S. adults (54% women; 80% White; mean age = 35.91 years, range = 18-67) heard a recording of a job applicant speaking with an Asian (Mandarin Chinese) or a Latino (Spanish) accent. Participants indicated how likely they would be to recommend hiring the speaker, answered questions about the text, and indicated how difficult it was to understand the applicant. Independent of objective comprehension, participants high in SDO reported that it was more difficult to understand a Latino speaker than an Asian speaker. SDO predicted hiring recommendations of the speakers, but this relationship was mediated by the perception that nonnative speakers were difficult to understand. This effect was stronger for speakers from lower status groups (Latinos relative to Asians) and was not related to objective comprehension. These findings suggest a cycle of prejudice toward nonnative speakers: not only do perceptions of difficulty in understanding cause prejudice toward them, but prejudice toward low-status groups can also lead to perceived difficulty in understanding members of these groups.

  10. The Relationship Between Spectral Modulation Detection and Speech Recognition: Adult Versus Pediatric Cochlear Implant Recipients

    PubMed Central

    Noble, Jack H.; Camarata, Stephen M.; Sunderhaus, Linsey W.; Dwyer, Robert T.; Dawant, Benoit M.; Dietrich, Mary S.; Labadie, Robert F.

    2018-01-01

    Adult cochlear implant (CI) recipients demonstrate a reliable relationship between spectral modulation detection and speech understanding. Prior studies documenting this relationship have focused on postlingually deafened adult CI recipients—leaving an open question regarding the relationship between spectral resolution and speech understanding for adults and children with prelingual onset of deafness. Here, we report CI performance on the measures of speech recognition and spectral modulation detection for 578 CI recipients including 477 postlingual adults, 65 prelingual adults, and 36 prelingual pediatric CI users. The results demonstrated a significant correlation between spectral modulation detection and various measures of speech understanding for 542 adult CI recipients. For 36 pediatric CI recipients, however, there was no significant correlation between spectral modulation detection and speech understanding in quiet or in noise nor was spectral modulation detection significantly correlated with listener age or age at implantation. These findings suggest that pediatric CI recipients might not depend upon spectral resolution for speech understanding in the same manner as adult CI recipients. It is possible that pediatric CI users are making use of different cues, such as those contained within the temporal envelope, to achieve high levels of speech understanding. Further investigation is warranted to investigate the relationship between spectral and temporal resolution and speech recognition to describe the underlying mechanisms driving peripheral auditory processing in pediatric CI users. PMID:29716437
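
    The adult-group result is a simple bivariate correlation between a psychophysical threshold and a speech score; as a sketch with hypothetical numbers, not the study's data:

      from scipy.stats import pearsonr

      # Hypothetical paired measures per adult CI user: spectral modulation
      # detection threshold (dB, lower = better) and sentence score (% correct)
      smd_threshold = [8.5, 10.2, 6.1, 12.4, 7.3, 9.8, 11.0, 5.9]
      sentence_pct  = [72, 61, 85, 48, 80, 66, 55, 88]
      r, p = pearsonr(smd_threshold, sentence_pct)
      print(f"r = {r:.2f}, p = {p:.3f}")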

  11. Temporal and speech processing skills in normal hearing individuals exposed to occupational noise.

    PubMed

    Kumar, U Ajith; Ameenudin, Syed; Sangamanatha, A V

    2012-01-01

    Prolonged exposure to high levels of occupational noise can cause damage to hair cells in the cochlea and result in permanent noise-induced cochlear hearing loss. The consequences of cochlear hearing loss for speech perception and psychophysical abilities have been well documented. The primary goal of this research was to explore temporal processing and speech perception skills in individuals who are exposed to occupational noise of more than 80 dBA and have not yet incurred clinically significant threshold shifts. The contribution of temporal processing skills to speech perception in adverse listening situations was also evaluated. A total of 118 participants took part in this research. Participants comprised three groups of train drivers, aged 30-40 (n = 13), 41-50 (n = 9), and 51-60 (n = 6) years, and their non-noise-exposed counterparts (n = 30 in each age group). Participants in all groups, including the train drivers, had hearing sensitivity within 25 dB HL at the octave frequencies between 250 Hz and 8 kHz. Temporal processing was evaluated using gap detection, modulation detection, and duration pattern tests. Speech recognition was tested in the presence of multi-talker babble at -5 dB SNR. Differences between experimental and control groups were analyzed using ANOVA and independent-sample t-tests. Results showed a trend of reduced temporal processing skills in individuals with noise exposure. These deficits were observed despite normal peripheral hearing sensitivity. Speech recognition scores in the presence of noise were also significantly poorer in the noise-exposed group. Furthermore, poor temporal processing skills partially accounted for the speech recognition difficulties exhibited by the noise-exposed individuals. These results suggest that noise can cause significant distortions in the processing of suprathreshold temporal cues, which may add to difficulties in hearing in adverse listening conditions.

  12. Awareness of Rhythm Patterns in Speech and Music in Children with Specific Language Impairments

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Leong, Victoria; Colling, Lincoln J.; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of language, and also show impairments in perceiving auditory cues to rhythm [amplitude rise time (ART) and sound duration] and in tapping to a rhythmic beat. Here we explore potential links between language development and rhythm perception in 45 children with SLI and 50 age-matched controls. We administered three rhythmic tasks, a musical beat detection task, a tapping-to-music task, and a novel music/speech task, which varied rhythm and pitch cues independently or together in both speech and music. Via low-pass filtering, the music sounded as though it was played from a low-quality radio and the speech sounded as though it was muffled (heard “behind the door”). We report data for all of the SLI children (N = 45, IQ varying), as well as for two independent subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. When IQ varied (all SLI children), we found significant group differences in all the rhythmic tasks. For the Pure SLI group, there were rhythmic impairments in the tapping task only. For children with SLI and poor phonology (SLI PPR), group differences were found in all of the filtered speech/music AXB tasks. We conclude that difficulties with rhythmic cues in both speech and music are present in children with SLIs, but that some rhythmic measures are more sensitive than others. The data are interpreted within a “prosodic phrasing” hypothesis, and we discuss the potential utility of rhythmic and musical interventions in remediating speech and language difficulties in children. PMID:26733848

  13. Awareness of Rhythm Patterns in Speech and Music in Children with Specific Language Impairments.

    PubMed

    Cumming, Ruth; Wilson, Angela; Leong, Victoria; Colling, Lincoln J; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of language, and also show impairments in perceiving auditory cues to rhythm [amplitude rise time (ART) and sound duration] and in tapping to a rhythmic beat. Here we explore potential links between language development and rhythm perception in 45 children with SLI and 50 age-matched controls. We administered three rhythmic tasks, a musical beat detection task, a tapping-to-music task, and a novel music/speech task, which varied rhythm and pitch cues independently or together in both speech and music. Via low-pass filtering, the music sounded as though it was played from a low-quality radio and the speech sounded as though it was muffled (heard "behind the door"). We report data for all of the SLI children (N = 45, IQ varying), as well as for two independent subgroupings with intact IQ. One subgroup, "Pure SLI," had intact phonology and reading (N = 16), the other, "SLI PPR" (N = 15), had impaired phonology and reading. When IQ varied (all SLI children), we found significant group differences in all the rhythmic tasks. For the Pure SLI group, there were rhythmic impairments in the tapping task only. For children with SLI and poor phonology (SLI PPR), group differences were found in all of the filtered speech/music AXB tasks. We conclude that difficulties with rhythmic cues in both speech and music are present in children with SLIs, but that some rhythmic measures are more sensitive than others. The data are interpreted within a "prosodic phrasing" hypothesis, and we discuss the potential utility of rhythmic and musical interventions in remediating speech and language difficulties in children.

  14. Children with Autism Understand Indirect Speech Acts: Evidence from a Semi-Structured Act-Out Task

    PubMed Central

    Kissine, Mikhail; Cano-Chervel, Julie; Carlier, Sophie; De Brabanter, Philippe; Ducenne, Lesley; Pairon, Marie-Charlotte; Deconinck, Nicolas; Delvenne, Véronique; Leybaert, Jacqueline

    2015-01-01

    Children with Autism Spectrum Disorder are often said to present a global pragmatic impairment. However, there is some observational evidence that context-based comprehension of indirect requests may be preserved in autism. In order to provide experimental confirmation of this hypothesis, indirect speech act comprehension was tested in a group of 15 children with autism between 7 and 12 years and a group of 20 typically developing children between 2;7 and 3;6 years. The aim of the study was to determine whether children with autism can display genuinely contextual understanding of indirect requests. The experiment consisted of a three-phase semi-structured task involving Mr Potato Head. In the first phase a declarative sentence was uttered by one adult as an instruction to put a garment on a Mr Potato Head toy; in the second the same sentence was uttered as a comment on a picture by another speaker; in the third phase the same sentence was uttered as a comment on a picture by the first speaker. Children with autism complied with the indirect request in the first phase and demonstrated the capacity to inhibit the directive interpretation in phases 2 and 3. The typically developing children had some difficulty understanding the indirect instruction in phase 1. These results call for a more nuanced view of pragmatic dysfunction in autism. PMID:26551648

  15. Cortical activation patterns correlate with speech understanding after cochlear implantation

    PubMed Central

    Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S.; Oghalai, John S.

    2015-01-01

    Objectives Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, we used functional near-infrared spectroscopy (fNIRS) to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design We studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. We used fNIRS to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). We also used environmental sounds as a control stimulus. Behavioral measures consisted of the Speech Reception Threshold, CNC words, and AzBio Sentence tests measured in quiet. Results Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the CNC Words and AzBio Sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation. Turning off the implant reduced cortical activations in all implanted participants. Conclusions Together, these data indicate that the responses we measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation. PMID:26709749

  16. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    PubMed

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control.

  17. Evidence-Based Practice, Response to Intervention, and the Prevention of Reading Difficulties

    ERIC Educational Resources Information Center

    Justice, Laura M.

    2006-01-01

    Purpose: This article provides an evidence-based perspective on what school communities can do to lower the prevalence of reading difficulties among their pupils through preventive interventions. It also delineates the roles that speech-language pathologists (SLPs) might play in these interventions. Method: This article is organized to first…

  18. Communication Interventions and Their Impact on Behaviour in the Young Child: A Systematic Review

    ERIC Educational Resources Information Center

    Law, James; Plunkett, Charlene C.; Stringer, Helen

    2012-01-01

    Speech, language and communication needs (SLCN) and social, emotional and behaviour difficulties (SEBD) commonly overlap, yet we know relatively little about the mechanism linking the two, specifically the extent to which it is possible to reduce behaviour difficulties by targeting communication skills. The EPPI Centre systematic review methodology was…

  19. Assessment and Management of the Communication Difficulties of Children with Cerebral Palsy: A UK Survey of SLT Practice

    ERIC Educational Resources Information Center

    Watson, Rose Mary; Pennington, Lindsay

    2015-01-01

    Background: Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. Aims:…

  20. Improving Comprehension in Adolescents with Severe Receptive Language Impairments: A Randomized Control Trial of Intervention for Coordinating Conjunctions

    ERIC Educational Resources Information Center

    Ebbels, Susan H.; Maric, Nataša; Murphy, Aoife; Turner, Gail

    2014-01-01

    Background: Little evidence exists for the effectiveness of therapy for children with receptive language difficulties, particularly those whose difficulties are severe and persistent. Aims: To establish the effectiveness of explicit speech and language therapy with visual support for secondary school-aged children with language impairments…

  1. How Does Fragile X Syndrome Affect Speech and Language Skills? FPG Snapshot. Number 51. January 2008

    ERIC Educational Resources Information Center

    FPG Child Development Institute, 2008

    2008-01-01

    Children with fragile X syndrome (FXS), the most common known inherited cause of intellectual disability, typically experience communication difficulties. Children with other intellectual disabilities such as Down syndrome also experience communication difficulties. Further, many boys with FXS (some estimates are as high as 35 percent) also are…

  2. Family Histories of Children with SLI Who Show Extended Optional Infinitives.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Haney, Karla R.; Wexler, Kenneth

    1998-01-01

    A study examined family histories of 31 children with specific language impairments who were known to have particular grammatical limitations in a core feature of grammatical acquisition, a stage known as Extended Optional Infinitives. The families had significantly more speech and language difficulties, as well as language-related difficulties,…

  3. Cognitive Load Reduces Perceived Linguistic Convergence Between Dyads.

    PubMed

    Abel, Jennifer; Babel, Molly

    2017-09-01

    Speech convergence is the tendency of talkers to become more similar to someone they are listening to or talking to, whether that person is a conversational partner or merely a voice heard repeating words. To elucidate the nature of the mechanisms underlying convergence, this study examines the effect of different levels of task difficulty on speech convergence within dyads collaborating on a task. Dyad members had to build identical LEGO® constructions without being able to see each other's construction, and with each member having half of the instructions required to complete the construction. Three levels of task difficulty were created, with five dyads at each level (30 participants total). Task difficulty was also measured using completion time and error rate. Listeners who heard pairs of utterances from each dyad judged convergence to be occurring in the Easy condition and to a lesser extent in the Medium condition, but not in the Hard condition. Amplitude envelope acoustic similarity analyses of the same utterance pairs showed that convergence occurred in dyads with shorter completion times and lower error rates. Together, these results suggest that while speech convergence is a highly variable behavior, it may occur more in contexts of low cognitive load. The relevance of these results for the current automatic and socially-driven models of convergence is discussed.
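
    The "amplitude envelope acoustic similarity" analysis mentioned above can be illustrated with a short signal-processing sketch. The code below is a minimal, hypothetical reconstruction, not the authors' published procedure; the function names and the 10-Hz smoothing cutoff are assumptions. It extracts a low-pass-filtered Hilbert envelope from each utterance and reports the Pearson correlation of the two envelopes.

        # Minimal sketch of an amplitude-envelope similarity measure.
        # Assumptions: two mono recordings at the same sample rate; the
        # correlation of smoothed Hilbert envelopes stands in for the
        # similarity metric actually used in the study.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def amplitude_envelope(x, fs, cutoff_hz=10.0):
            """Low-pass-filtered Hilbert envelope of a mono signal."""
            env = np.abs(hilbert(x))                # instantaneous amplitude
            b, a = butter(4, cutoff_hz / (fs / 2))  # 4th-order low-pass filter
            return filtfilt(b, a, env)              # zero-phase smoothing

        def envelope_similarity(x, y, fs):
            """Pearson correlation of two envelopes, after trimming both
            signals to the shorter length."""
            n = min(len(x), len(y))
            ex = amplitude_envelope(x[:n], fs)
            ey = amplitude_envelope(y[:n], fs)
            return np.corrcoef(ex, ey)[0, 1]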

  4. Polysyllable Speech Accuracy and Predictors of Later Literacy Development in Preschool Children With Speech Sound Disorders.

    PubMed

    Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen

    2017-07-12

    The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables (words of three or more syllables) are important to consider because, unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different based on 2 measures of phonological awareness and measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group, they were at risk of literacy difficulties but that there was a cluster of participants at greater risk: those with both low polysyllable accuracy and poor phonological processing.

  5. Are there more adverse effects with lingual orthodontics?

    PubMed

    Madurantakam, Parthasarathy; Kumar, Satish

    2017-12-22

    Data sources: PubMed, Embase, Cochrane Library and LILACS databases, review of references cited in included articles and a manual search of leading orthodontic journals. No language restrictions were imposed in the search. Study authors were contacted when necessary. Study selection: Randomised controlled trials (RCTs) and controlled clinical trials (CCTs) in healthy patients that directly compared the adverse effects following treatment using buccal and lingual appliances. Studies involving single arch or dual arch appliances were considered. Studies on patients with systemic diseases, animal studies and in vitro studies were excluded. The primary outcomes of interest to the authors were a list of adverse effects: pain, caries, eating and speech difficulties and oral hygiene. Data extraction and synthesis: Two authors reviewed the titles and abstracts of all studies identified through the search without blinding to names of authors or publication dates. Selected articles from searches were evaluated independently by two authors against established inclusion criteria; disagreements were resolved by consensus or by consulting a third author. Two authors independently assessed the risk of bias using the Cochrane Collaboration's tool (randomised trials) and the Newcastle-Ottawa Scale for non-randomised studies. The level of agreement between the authors was assessed using the Cohen kappa statistic. A meta-analysis was performed to provide a pooled effect estimate (expressed as an odds ratio) with a 95% confidence interval. The outcomes of interest were pain, caries, eating difficulties, speech difficulties and deficient oral hygiene. Heterogeneity was quantified using the I² statistic and potential causes explored. Publication bias was assessed using a funnel plot. Results: Eight articles were included: three RCTs and five CCTs. One RCT was considered to be at high risk of bias, one at moderate risk and one at low risk. Of the non-randomised studies, four were at low risk and one at high risk of bias. Six studies involving a total of 131 patients were included in a meta-analysis. The lingual appliance was associated with significant pain in the tongue (OR = 28.32, 95% CI 8.6-93.28), difficulty in maintaining oral hygiene (OR = 3.49, 95% CI 1.02-11.95) and greater speech difficulty (OR = 9.39, 95% CI 3.78-23.33) compared to buccal appliances. On the other hand, patients with lingual appliances had decreased pain in the lips and cheeks. There was no difference between the two appliances with regard to caries risk. Conclusions: Limited available evidence indicates that lingual orthodontic appliances are associated with increased pain in the tongue, speech difficulties and difficulty in maintaining oral hygiene.
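
    For readers unfamiliar with how pooled odds ratios such as those above are produced, the sketch below shows fixed-effect inverse-variance pooling on the log-odds scale, one standard way such meta-analytic estimates are computed. The 2x2 study counts are hypothetical, invented purely to demonstrate the arithmetic; they are not data from the review.

        # Fixed-effect inverse-variance pooling of odds ratios on the log scale.
        # The per-study 2x2 counts are hypothetical, purely illustrative.
        import math

        # (events_lingual, n_lingual, events_buccal, n_buccal) per study
        studies = [(18, 25, 6, 25), (14, 20, 5, 22), (9, 15, 4, 18)]

        num, den = 0.0, 0.0
        for a, n1, c, n2 in studies:
            b, d = n1 - a, n2 - c                 # non-events in each arm
            log_or = math.log((a * d) / (b * c))  # per-study log odds ratio
            var = 1/a + 1/b + 1/c + 1/d           # Woolf variance estimate
            weight = 1 / var                      # inverse-variance weight
            num += weight * log_or
            den += weight

        pooled_log_or = num / den
        se = math.sqrt(1 / den)
        lo, hi = pooled_log_or - 1.96 * se, pooled_log_or + 1.96 * se
        print(f"pooled OR = {math.exp(pooled_log_or):.2f} "
              f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")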

  6. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    PubMed

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic Speech Sound Disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic and neuroradiological investigation aimed to gather information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male to female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Hearing rehabilitation with single-stage bilateral vibroplasty in a child with Franceschetti syndrome.

    PubMed

    Sargsyan, Sona; Rahne, Torsten; Kösling, Sabrina; Eichler, Gerburg; Plontke, Stefan K

    2014-05-01

    Hearing is of utmost importance for normal speech and social development. Even children who have mild or unilateral permanent hearing loss may experience difficulties with understanding speech, as well as problems with educational and psycho-social development. The increasing advantages of middle-ear implant technologies are opening new perspectives for restoring hearing. Active middle-ear implants can be used in children and adolescents with hearing loss. In addition to the well-documented results for improving speech intelligibility and quality of hearing in sensorineural hearing loss, active middle-ear implants are now successfully used in patients with conductive and mixed hearing loss. In this article we present a case of successful single-stage vibroplasty, with fixation of the FMT on the stapes on the right side and a PORP CLiP vibroplasty on the left side, in a 6-year-old girl with bilateral mixed hearing loss and multiple dyslalia associated with Franceschetti syndrome (mandibulofacial dysostosis). CT revealed bilateral middle-ear malformations as well as an atretic right and a stenotic left external auditory canal. Because of the craniofacial dysmorphia, airway and (post)operative management is significantly more difficult in patients with Franceschetti syndrome, which in this case favoured a single-stage bilateral procedure. No intra- or postoperative surgical complications occurred. The middle-ear implants were activated 4 weeks after surgery. At the audiological examination 6 months after surgery, the child showed 100% speech intelligibility with activated implants on each side.

  8. Inferring Speaker Affect in Spoken Natural Language Communication

    ERIC Educational Resources Information Center

    Pon-Barry, Heather Roberta

    2013-01-01

    The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

  9. Do 6-Month-Olds Understand That Speech Can Communicate?

    ERIC Educational Resources Information Center

    Vouloumanos, Athena; Martin, Alia; Onishi, Kristine H.

    2014-01-01

    Adults and 12-month-old infants recognize that even unfamiliar speech can communicate information between third parties, suggesting that they can separate the communicative function of speech from its lexical content. But do infants recognize that speech can communicate due to their experience understanding and producing language, or do they…

  10. Neural Mechanisms Underlying Musical Pitch Perception and Clinical Applications including Developmental Dyslexia

    PubMed Central

    Yuskaitis, Christopher J.; Parviz, Mahsa; Loui, Psyche; Wan, Catherine Y.; Pearl, Phillip L.

    2017-01-01

    Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensory-motor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing, behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia. PMID:26092314

  11. A CAI System for Visually Impaired Children to Improve Abilities of Orientation and Mobility

    NASA Astrophysics Data System (ADS)

    Yoneda, Takahiro; Kudo, Hiroaki; Minagawa, Hiroki; Ohnishi, Noboru; Matsubara, Shizuya

    Some visually impaired children have difficulty with simple locomotion and need orientation and mobility training. We developed a computer-assisted instruction system that supports this training. The user receives a task via a tactile map and synthesized speech and then walks around a room according to the task. After the task ends, the system conveys the deviation of the walked path from the target path through both auditory and tactile feedback, so the user can understand how well he or she walked. We describe the proposed system and task in detail, together with experimental results from three visually impaired children.

  12. Merbromin poisoning

    MedlinePlus

    Merbromin is found in some antiseptics. A common brand name is Mercurochrome, which contains mercury. Compounds like ... Reported symptoms include problems with balance and coordination, speech difficulties, tremor, mood or personality changes, and insomnia.

  13. Underlying neurological dysfunction in children with language, speech or learning difficulties and a verbal IQ--performance IQ discrepancy.

    PubMed

    Meulemans, J; Goeleven, A; Zink, I; Loyez, L; Lagae, L; Debruyne, F

    2012-01-01

    We investigated the relationship between possible underlying neurological dysfunction and a significant discrepancy between verbal IQ/performance IQ (VIQ-PIQ) in children with language, speech or learning difficulties. In a retrospective study, we analysed data obtained from intelligence testing and neurological evaluation in 49 children with a significant VIQ-PIQ discrepancy (≥25 points) who were referred because of language, speech or learning difficulties to the Multidisciplinary University Centre for Logopedics and Audiology (MUCLA) of the University Hospitals of Leuven, Belgium. The children were divided into a group of 35 children with PIQ > VIQ and a group of 14 children with VIQ > PIQ. In the first group, neurological data were present for 24 children. The neurological history and clinical neurological examination were normal in all cases. Brain MRI was performed in 15 cases and proved to be normal in all children. Brain activity was assessed with long-term video EEG monitoring in ten children. In two children, the EEG results were abnormal: there was an epileptic focus in one child and a manifest alteration in the EEG typical of Landau-Kleffner syndrome in the other. In the second group of 14 children whose VIQ was higher than the PIQ, neurological data were available for ten children. Neurological history and clinical neurological examination were normal in all cases. Brain MRI was performed in five cases and was normal in all children. EEG monitoring was performed in one child. This revealed benign childhood epilepsy with centrotemporal spikes. In a small number of children (9%) with speech, language and learning difficulties and a discrepancy between VIQ and PIQ, an underlying neurological abnormality is present. We recommend referring children with a significant VIQ-PIQ mismatch to a paediatric neurologist. As an epileptic disorder seems to be the most common underlying neurological pathology in this specific group of children, EEG monitoring should be recommended in these children. Neuro-imaging should only be used in selected patients.

  14. Audibility-based predictions of speech recognition for children and adults with normal hearing.

    PubMed

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2011-12-01

    This study investigated the relationship between audibility and predictions of speech recognition for children and adults with normal hearing. The Speech Intelligibility Index (SII) is used to quantify the audibility of speech signals and can be applied to transfer functions to predict speech recognition scores. Although the SII is used clinically with children, relatively few studies have evaluated SII predictions of children's speech recognition directly. Children have required more audibility than adults to reach maximum levels of speech understanding in previous studies. Furthermore, children may require greater bandwidth than adults for optimal speech understanding, which could influence frequency-importance functions used to calculate the SII. Speech recognition was measured for 116 children and 19 adults with normal hearing. Stimulus bandwidth and background noise level were varied systematically in order to evaluate speech recognition as predicted by the SII and derive frequency-importance functions for children and adults. Results suggested that children required greater audibility to reach the same level of speech understanding as adults. However, differences in performance between adults and children did not vary across frequency bands. © 2011 Acoustical Society of America
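
    At its core, the SII referred to in this record is a band-importance-weighted sum of audibility (ANSI S3.5): each frequency band contributes its importance weight multiplied by the proportion of the speech dynamic range that is audible in that band. The sketch below illustrates the idea with made-up band weights and levels rather than the standardized tables.

        # Minimal Speech Intelligibility Index sketch: SII = sum_i I_i * A_i,
        # where I_i is the band-importance weight and A_i the band audibility
        # (0..1). Weights and levels below are illustrative, not the ANSI
        # S3.5 tables; the (SNR + 15) / 30 audibility mapping follows the
        # standard's convention of a 30-dB speech dynamic range.
        def band_audibility(speech_db, noise_db, dynamic_range=30.0):
            """Fraction of the speech dynamic range above the noise floor."""
            snr = speech_db - noise_db
            return max(0.0, min(1.0, (snr + 15.0) / dynamic_range))

        importance = [0.10, 0.20, 0.25, 0.25, 0.20]  # hypothetical weights (sum to 1)
        speech_db  = [50.0, 48.0, 45.0, 40.0, 35.0]  # band speech levels (dB)
        noise_db   = [40.0, 42.0, 44.0, 46.0, 48.0]  # band noise levels (dB)

        sii = sum(w * band_audibility(s, n)
                  for w, s, n in zip(importance, speech_db, noise_db))
        print(f"SII = {sii:.2f}")  # 0 = nothing audible, 1 = fully audible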

  15. Variables associated with communicative participation in Parkinson's disease and its relationship to measures of health-related quality-of-life.

    PubMed

    McAuliffe, Megan J; Baylor, Carolyn R; Yorkston, Kathryn M

    2017-08-01

    Communication disorders associated with Parkinson's disease (PD) often lead to restricted participation in life roles, yet there is a limited understanding of influencing factors and few quantitative measurement tools available. This study aimed to identify variables associated with communicative participation in PD and to examine the relationship between the Communicative Participation Item Bank (CPIB) and existing health-related quality-of-life (HRQoL) measures. Self-report data from 378 participants with PD from the US and New Zealand were analysed. Data included responses to the CPIB, PD Questionnaire-8, sub-scales of the Global Health instrument from the Patient Reported Outcomes Measurement Information System (PROMIS) and additional self-report instruments. Greater perceived speech disorder, lower levels of speech usage, fatigue, cognitive and emotional problems and swallowing difficulties were associated with lower levels of communicative participation. Participants' age significantly influenced findings, interacting with country of residence, sex and speech usage. Scores on the CPIB were moderately correlated with HRQoL measures. Communicative participation in PD is complex and influenced by both demographic and disease-based variables, necessitating a broader view of the communicative experiences of those with PD. Measurement of communicative participation as a separate construct to existing HRQoL measures is recommended.

  16. Teacher candidates' mastery of phoneme-grapheme correspondence: massed versus distributed practice in teacher education.

    PubMed

    Sayeski, Kristin L; Earle, Gentry A; Eslinger, R Paige; Whitenton, Jessy N

    2017-04-01

    Matching phonemes (speech sounds) to graphemes (letters and letter combinations) is an important aspect of decoding (translating print to speech) and encoding (translating speech to print). Yet, many teacher candidates do not receive explicit training in phoneme-grapheme correspondence. Difficulty with accurate phoneme production and/or lack of understanding of sound-symbol correspondence can make it challenging for teachers to (a) identify student errors on common assessments and (b) serve as a model for students when teaching beginning reading or providing remedial reading instruction. For students with dyslexia, lack of teacher proficiency in this area is particularly problematic. This study examined differences between two learning conditions (massed and distributed practice) on teacher candidates' development of phoneme-grapheme correspondence knowledge and skills. An experimental, pretest-posttest-delayed test design was employed with teacher candidates (n = 52) to compare a massed practice condition (one, 60-min session) to a distributed practice condition (four, 15-min sessions distributed over 4 weeks) for learning phonemes associated with letters and letter combinations. Participants in the distributed practice condition significantly outperformed participants in the massed practice condition on their ability to correctly produce phonemes associated with different letters and letter combinations. Implications for teacher preparation are discussed.

  17. Development of a test of suprathreshold acuity in noise in Brazilian Portuguese: a new method for hearing screening and surveillance.

    PubMed

    Vaez, Nara; Desgualdo-Pereira, Liliane; Paglialonga, Alessia

    2014-01-01

    This paper describes the development of a speech-in-noise test for hearing screening and surveillance in Brazilian Portuguese based on the evaluation of suprathreshold acuity performances. The SUN test (Speech Understanding in Noise) consists of a list of intervocalic consonants in noise presented in a multiple-choice paradigm by means of a touch screen. The test provides one out of three possible results: "a hearing check is recommended" (red light), "a hearing check would be advisable" (yellow light), and "no hearing difficulties" (green light) (Paglialonga et al., Comput. Biol. Med. 2014). This novel test was developed in a population of 30 normal hearing young adults and 101 adults with varying degrees of hearing impairment and handicap, including normal hearing. The test had 84% sensitivity and 76% specificity compared to conventional pure-tone screening and 83% sensitivity and 86% specificity to detect disabling hearing impairment. The test outcomes were in line with the degree of self-perceived hearing handicap. The results found here paralleled those reported in the literature for the SUN test and for conventional speech-in-noise measures. This study showed that the proposed test might be a viable method to identify individuals with hearing problems to be referred to further audiological assessment and intervention.
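
    Sensitivity and specificity figures like those reported for the SUN test follow directly from a 2x2 screening table. A small worked sketch, with hypothetical counts chosen only to roughly reproduce the reported percentages:

        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
        # Counts are hypothetical, not the study's data.
        tp, fn = 42, 8   # impaired listeners flagged / missed by the screen
        tn, fp = 61, 19  # unimpaired listeners passed / wrongly flagged

        sensitivity = tp / (tp + fn)  # 42/50 = 0.84
        specificity = tn / (tn + fp)  # 61/80 = 0.76
        print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")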

  18. The roles of family history of dyslexia, language, speech production and phonological processing in predicting literacy progress.

    PubMed

    Carroll, Julia M; Mundy, Ian R; Cunningham, Anna J

    2014-09-01

    It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines what the relationships are between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.

  19. Early hearing loss and language abilities in children with Down syndrome.

    PubMed

    Laws, Glynis; Hall, Amanda

    2014-01-01

    Although many children with Down syndrome experience hearing loss, there has been little research to investigate its impact on speech and language development. Studies that have investigated the association give inconsistent results. These have often been based on samples where children with the most severe hearing impairments have been excluded and so results do not generalize to the wider population with Down syndrome. Also, measuring children's hearing at the time of a language assessment does not take into account the fluctuating nature of hearing loss in children with Down syndrome or possible effects of losses in their early years. To investigate the impact of early hearing loss on language outcomes for children with Down syndrome. Retrospective audiology clinic records and parent report for 41 children were used to categorize them as either having had hearing difficulties from 2 to 4 years or more normal hearing. Differences between the groups on measures of language expression and comprehension, receptive vocabulary, a narrative task and speech accuracy were investigated. After accounting for the contributions of chronological age and nonverbal mental age to children's scores, there were significant differences between the groups on all measures. Early hearing loss has a significant impact on the speech and language development of children with Down syndrome. Results suggest that speech and language therapy should be provided when children are found to have ongoing hearing difficulties and that joint audiology and speech and language therapy clinics could be considered for preschool children. © 2014 Royal College of Speech and Language Therapists.

  20. Effects of Age and Hearing Loss on the Relationship between Discrimination of Stochastic Frequency Modulation and Speech Perception

    PubMed Central

    Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin

    2012-01-01

    Objective: The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design: Data were collected from 18 normal-hearing young adults, and 18 participants who were at least 60 years old, nine normal-hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF), both in quiet and with a speech-babble masker present; stimulus duration; and signal-to-noise ratio (SNRFM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results: Results showed a significant effect of age, but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNRFM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNRFM were also the two conditions for which performance was significantly correlated with listener age when controlling for the effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNRFM condition were significantly correlated with QuickSIN performance. Conclusions: Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. Furthermore, the relationship between QuickSIN performance and the SNRFM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may in part relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory processing abilities. PMID:22790319
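
    The stochastic FM stimuli described in this record can be synthesized by low-pass filtering noise at 5 Hz, scaling it to the desired frequency excursion, and integrating it into the phase of a 1-kHz carrier. The sketch below is an illustrative reconstruction under those assumptions; parameter values and names are not taken from the study.

        # Stochastic frequency modulation: a 1-kHz carrier whose instantaneous
        # frequency wanders according to 5-Hz lowpass noise. Parameter choices
        # (duration, excursion) are illustrative.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def stochastic_fm(dur=1.0, fs=44100, fc=1000.0, mod_cutoff=5.0, delta_f=40.0):
            n = int(dur * fs)
            noise = np.random.randn(n)
            b, a = butter(4, mod_cutoff / (fs / 2))    # 5-Hz lowpass modulator
            mod = filtfilt(b, a, noise)
            mod = mod / np.max(np.abs(mod)) * delta_f  # scale to +/- delta_f Hz
            inst_freq = fc + mod                       # instantaneous frequency
            phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # integrate to phase
            return np.sin(phase)

        stimulus = stochastic_fm()  # regenerate for a new random FM pattern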

  1. Processing of Audiovisually Congruent and Incongruent Speech in School-Age Children with a History of Specific Language Impairment: A Behavioral and Event-Related Potentials Study

    ERIC Educational Resources Information Center

    Kaganovich, Natalya; Schumaker, Jennifer; Macias, Danielle; Gustafson, Dana

    2015-01-01

    Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we…

  2. Emerging Realities of Text-to-Speech Software for Nonnative-English-Speaking Community College Students in the Freshman Year

    ERIC Educational Resources Information Center

    Baker, Fiona S.

    2015-01-01

    This study explores the expectations and early and subsequent realities of text-to-speech software for 24 nonnative-English-speaking college students who were experiencing reading difficulties in their freshman year of college. The study took place over two semesters in one academic year (from September to June) at a community college on the…

  3. Understanding the Abstract Role of Speech in Communication at 12 Months

    ERIC Educational Resources Information Center

    Martin, Alia; Onishi, Kristine H.; Vouloumanos, Athena

    2012-01-01

    Adult humans recognize that even unfamiliar speech can communicate information between third parties, demonstrating an ability to separate communicative function from linguistic content. We examined whether 12-month-old infants understand that speech can communicate before they understand the meanings of specific words. Specifically, we test the…

  4. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  5. Cingulo-opercular activity affects incidental memory encoding for speech in noise.

    PubMed

    Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2017-08-15

    Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Variations in Articulatory Movement with Changes in Speech Task.

    ERIC Educational Resources Information Center

    Tasko, Stephen M.; McClean, Michael D.

    2004-01-01

    Studies of normal and disordered articulatory movement often rely on the use of short, simple speech tasks. However, the severity of speech disorders can be observed to vary markedly with task. Understanding task-related variations in articulatory kinematic behavior may allow for an improved understanding of normal and disordered speech motor…

  7. Influence of musical training on understanding voiced and whispered speech in noise.

    PubMed

    Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J

    2014-01-01

    This study tested the hypothesis that the previously reported advantage of musicians over non-musicians in understanding speech in noise arises from more efficient or robust coding of periodic voiced speech, particularly in fluctuating backgrounds. Speech intelligibility was measured in listeners with extensive musical training, and in those with very little musical training or experience, using normal (voiced) or whispered (unvoiced) grammatically correct nonsense sentences in noise that was spectrally shaped to match the long-term spectrum of the speech, and was either continuous or gated with a 16-Hz square wave. Performance was also measured in clinical speech-in-noise tests and in pitch discrimination. Musicians exhibited enhanced pitch discrimination, as expected. However, no systematic or statistically significant advantage for musicians over non-musicians was found in understanding either voiced or whispered sentences in either continuous or gated noise. Musicians also showed no statistically significant advantage in the clinical speech-in-noise tests. Overall, the results provide no evidence for a significant difference between young adult musicians and non-musicians in their ability to understand speech in noise.
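
    The masker described here, noise spectrally shaped to match the long-term spectrum of the speech and gated with a 16-Hz square wave, can be approximated as follows. This is a hedged sketch rather than the authors' stimulus code: it randomizes the phase of the speech spectrum to obtain speech-shaped noise and applies a 50% duty-cycle gate.

        # Speech-shaped noise via spectral phase randomization, with optional
        # square-wave gating. Illustrative reconstruction, not the study's code.
        import numpy as np

        def speech_shaped_noise(speech, fs, gate_hz=None):
            mag = np.abs(np.fft.rfft(speech))        # long-term magnitude spectrum
            phase = np.exp(2j * np.pi * np.random.rand(len(mag)))
            noise = np.fft.irfft(mag * phase, n=len(speech))
            noise *= np.std(speech) / np.std(noise)  # match overall level
            if gate_hz is not None:                  # e.g. gate_hz=16.0
                t = np.arange(len(noise)) / fs
                noise *= (np.sin(2 * np.pi * gate_hz * t) > 0).astype(float)
            return noise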

  8. Speech and communication in Parkinson’s disease: a cross-sectional exploratory study in the UK

    PubMed Central

    Barnish, Maxwell S; Horton, Simon M C; Butterfint, Zoe R; Clark, Allan B; Atkinson, Rachel A; Deane, Katherine H O

    2017-01-01

    Objective: To assess associations between cognitive status, intelligibility, acoustics and functional communication in PD. Design: Cross-sectional exploratory study of functional communication, including a within-participants experimental design for listener assessment. Setting: A major academic medical centre in the East of England, UK. Participants: Questionnaire data were assessed for 45 people with Parkinson's disease (PD), who had self-reported speech or communication difficulties and did not have clinical dementia. Acoustic and listener analyses were conducted on read and conversational speech for 20 people with PD and 20 familiar conversation partner controls without speech, language or cognitive difficulties. Main outcome measures: Functional communication assessed by the Communicative Participation Item Bank (CPIB) and Communicative Effectiveness Survey (CES). Results: People with PD had lower intelligibility than controls for both the read (mean difference 13.7%, p=0.009) and conversational (mean difference 16.2%, p=0.04) sentences. Intensity and pause were statistically significant predictors of intelligibility in read sentences. Listeners were less accurate identifying the intended emotion in the speech of people with PD (14.8 percentage-point difference across conditions, p=0.02) and this was associated with worse speaker cognitive status (16.7 percentage-point difference, p=0.04). Cognitive status was a significant predictor of functional communication using the CPIB (F=8.99, p=0.005, η²=0.15) but not the CES. Intelligibility in conversational sentences was a statistically significant predictor of the CPIB (F=4.96, p=0.04, η²=0.19) and the CES (F=13.65, p=0.002, η²=0.43). Read sentence intelligibility was not a significant predictor of either outcome. Conclusions: Cognitive status was an important predictor of functional communication; the role of intelligibility was modest and limited to conversational rather than read speech. Our results highlight the importance of focusing on functional communication as well as physical speech impairment in speech and language therapy (SLT) for PD. Our results could inform future trials of SLT techniques for PD. PMID:28554918

  9. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    PubMed Central

    Preston, Jonathan L.; Seki, Ayumi

    2012-01-01

    Purpose: The purposes are to (1) describe the assessment of residual speech sound disorders (SSD) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations, and (2) describe how assessment of domains such as speech motor control and phonological awareness can provide a more complete understanding of SSDs in bilinguals. Method: A review of Japanese phonology is provided to offer a context for understanding the transfer of Japanese to English productions. A case study of an 11-year-old is presented, demonstrating parallel speech assessments in English and Japanese. Speech motor and phonological awareness tasks were conducted in both languages. Results: Several patterns were observed in the participant's English that could be plausibly explained by the influence of Japanese phonology. However, errors indicating a residual SSD were observed in both Japanese and English. A speech motor assessment suggested possible speech motor control problems, and phonological awareness was judged to be within the typical range of performance in both languages. Conclusion: Understanding the phonological characteristics of L1 can help clinicians recognize speech patterns in L2 associated with transfer. Once these differences are understood, patterns associated with a residual SSD can be identified. Supplementing a relational speech analysis with measures of speech motor control and phonological awareness can provide a more comprehensive understanding of a client's strengths and needs. PMID:21386046

  10. [Computer-assisted application of Mandarin speech test materials].

    PubMed

    Zhang, Hua; Wang, Shuo; Chen, Jing; Deng, Jun-Min; Yang, Xiao-Lin; Guo, Lian-Sheng; Zhao, Xiao-Yan; Shao, Guang-Yu; Han, De-Min

    2008-06-01

    To design a reliable and convenient intelligent speech test system using computer software and to evaluate this system. First, the intelligent system was designed in the Delphi programming language. Second, the seven monosyllabic word lists recorded on CD were segmented with Cool Edit Pro v2.1 software and loaded into the system as test materials. Finally, the intelligent system was used to evaluate the equivalence of difficulty across the seven lists. Fifty-five college students with normal hearing participated in the study. The seven monosyllabic word lists were equivalent in difficulty (F = 1.582, P > 0.05), and the system proved reliable and convenient. The intelligent system is feasible for clinical practice.

  11. Restoration of facial symmetry in a patient with bell palsy using a modified maxillary complete denture: a case report.

    PubMed

    Bagchi, Gautam; Nath, Dilip Kumar

    2012-01-01

    Permanent facial paralysis can be devastating for a patient. Modern society's emphasis on appearance and physical beauty contributes to this problem and often leads to isolation of patients embarrassed by their appearance. Lagophthalmos with ocular exposure, loss of oral competence with resultant drooling, alar collapse with nasal airway obstruction, and difficulties with mastication and speech production are all potential consequences of facial paralysis. Affected patients are confronted with both a cosmetic defect and the functional deficits associated with loss of facial nerve function. In this case history report, a modified maxillary complete denture permitted a patient with Bell palsy to carry on daily activities with minimal facial distortion, pain, speech difficulty, and associated emotional trauma.

  12. Children with Word Finding Difficulties: Continuities and Profiles of Abilities

    ERIC Educational Resources Information Center

    Messer, David; Dockrell, Julie E.

    2013-01-01

    Word finding difficulties (WFDs) occur in more than a quarter of children who are receiving speech and language therapy. This study provides the first investigation of the continuity in WFDs and investigates whether WFDs are associated with phonological or semantically related abilities. Thirty-eight children with WFDs were seen at age 7;0 and at…

  13. Grammatical Gender in L2: A Production or a Real-Time Processing Problem?

    ERIC Educational Resources Information Center

    Gruter, Theres; Lew-Williams, Casey; Fernald, Anne

    2012-01-01

    Mastery of grammatical gender is difficult to achieve in a second language (L2). This study investigates whether persistent difficulty with grammatical gender often observed in the speech of otherwise highly proficient L2 learners is best characterized as a production-specific performance problem, or as difficulty with the retrieval of gender…

  14. Exploring the Acceptability of Innovative Technology: A Pilot Study Using LENA with Parents of Young Deaf Children in the UK

    ERIC Educational Resources Information Center

    Allen, Sarah; Crawford, Paul; Mulla, Imran

    2017-01-01

    Early intervention is widely recommended for children at risk of difficulties with speech, language and communication. Evidence for effective practice remains limited due in part to inherent difficulties in defining complex interventions and measuring change. The innovative Language Environment Analysis (LENA) system has exciting potential for…

  15. Natural history of Sanfilippo syndrome type A.

    PubMed

    Buhrman, Dakota; Thakkar, Kavita; Poe, Michele; Escolar, Maria L

    2014-05-01

    To describe the natural history of Sanfilippo syndrome type A. We performed a retrospective review of 46 children (21 boys, 25 girls) with Sanfilippo syndrome type A evaluated between January 2000 and April 2013. Assessments included neurodevelopmental evaluations, audiologic testing, and assessment of growth, adaptive behavior, cognitive behavior, motor function, and speech/language skills. Only the baseline evaluation was included for patients who received hematopoietic stem cell transplantation. Median age at diagnosis was 35 months, with a median delay of 24 months from initial symptoms to diagnosis. The most common initial symptoms were speech/language delay (48%), dysmorphology (22%), and hearing loss (20%). Early behavioral problems included perseverative chewing and difficulty with toilet training. All children developed sleep difficulties and behavioral changes (e.g., hyperactivity, aggression). More than 93% of the children experienced somatic symptoms such as hepatomegaly (67%), abnormal dentition (39%), enlarged tongue (37%), coarse facial features (76%), and protuberant abdomen (43%). Kaplan-Meier analysis showed a 60% probability of surviving past 17 years of age. Sanfilippo type A is characterized by severe hearing loss and speech delay, followed by a rapid decline in cognitive skills by 3 years of age. Significant somatic disease occurs in more than half of patients. Behavioral difficulties presented between 2 and 4 years of age during a rapid period of cognitive decline. Gross motor abilities are maintained during this period, which results in an active child with impaired cognition. Sleep difficulties are concurrent with the period of cognitive degeneration. There is currently an unacceptable delay in diagnosis, highlighting the need to increase awareness of this disease among clinicians.

  16. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    PubMed

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings in mainly English-speaking populations. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They rated their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per year per SLP. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  17. Speech planning happens before speech execution: online reaction time methods in the study of apraxia of speech.

    PubMed

    Maas, Edwin; Mailend, Marja-Liisa

    2012-10-01

    The purpose of this article is to present an argument for the use of online reaction time (RT) methods in the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Following a brief description of the limitations of offline perceptual methods, we provide a narrative review of various types of RT paradigms from the (speech) motor programming and psycholinguistic literatures and their (thus far limited) application to AOS. On the basis of this review, we conclude that, with careful consideration of potential challenges and caveats, RT approaches hold great promise to advance our understanding of AOS, in particular with respect to the speech planning processes that generate the speech signal before initiation. A deeper understanding of the nature and time course of speech planning and its disruptions in AOS may enhance diagnosis and treatment for AOS. Only a handful of published studies on apraxia of speech have used reaction time methods. However, these studies have provided deeper insight into speech planning impairments in AOS based on a variety of experimental paradigms.

  18. Role of contextual cues on the perception of spectrally reduced interrupted speech.

    PubMed

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2016-08-01

    Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs making it difficult to utilize the contextual evidence effectively. To address these issues, 20 normal hearing adults listened to speech that was spectrally reduced and spectrally reduced interrupted in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions, but became beneficial as the spectral resolution improved. These results suggest top-down processing facilitates speech perception up to a point, and it fails to facilitate speech understanding when the speech signals are significantly degraded.
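
    The "spectrally reduced" processing described in this record corresponds to noise vocoding, the usual way of simulating cochlear implant processing: split the speech into a few frequency bands, extract each band's amplitude envelope, use the envelopes to modulate band-limited noise, and sum the bands. A minimal sketch under assumed parameters; the band count, band edges, envelope extraction, and interruption rate are illustrative, not the study's settings.

        # Noise vocoder (CI-style spectral reduction) plus periodic interruption.
        # Assumes fs is high enough for the chosen band edges (e.g. 44.1 kHz).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def vocode(speech, fs, n_bands=4, lo=100.0, hi=7000.0):
            edges = np.geomspace(lo, hi, n_bands + 1)  # log-spaced band edges
            out = np.zeros_like(speech)
            for f1, f2 in zip(edges[:-1], edges[1:]):
                b, a = butter(3, [f1 / (fs / 2), f2 / (fs / 2)], btype="band")
                band = filtfilt(b, a, speech)
                env = np.abs(hilbert(band))            # band amplitude envelope
                carrier = filtfilt(b, a, np.random.randn(len(speech)))
                out += env * carrier                   # envelope-modulated noise
            return out

        def interrupt(x, fs, rate_hz=2.0):
            """Silence the signal with a 50% duty-cycle square-wave gate."""
            t = np.arange(len(x)) / fs
            return x * (np.sin(2 * np.pi * rate_hz * t) > 0)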

  19. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    PubMed

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps relying on inner speech because of perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  20. Developing Professional Learning for Staff Working with Children with Speech, Language and Communication Needs Combined with Moderate-Severe Learning Difficulties

    ERIC Educational Resources Information Center

    Anderson, Carolyn

    2011-01-01

    This article presents research undertaken as part of a PhD by Carolyn Anderson who is a senior lecturer on the BSc (Hons) in Speech and Language Pathology at the University of Strathclyde. The study explores the professional learning experiences of 49 teachers working in eight schools and units for children with additional support needs in…

  1. The Effects of a Peer-Tutoring Intervention on the Text Production of Students with Learning and Speech Problems: A Case Report

    ERIC Educational Resources Information Center

    Grünke, Matthias; Janning, Andriana Maria; Sperling, Marko

    2016-01-01

    The purpose of this single-case study was to evaluate the effects of a peer-tutoring intervention on the text production skills of three third graders with severe learning and speech difficulties. All tutees were initially able to produce only very short stories. During the course of the treatment, higher performing classmates taught them how to…

  2. Standard operating procedure: implementation, critical analysis, and validation in the Audiology Department at CESTEH/Fiocruz.

    PubMed

    Freitas, Anelisse Vasco Mascarenhas de; Quixabeiro, Elinaldo Leite; Luz, Geórgia Rosangela Soares; Franco, Viviane Moreira; Santos, Viviane Fontes Dos

    2016-01-01

    To evaluate, through a questionnaire, three standard operating procedures (SOPs) for the application of the brainstem auditory evoked potential (BAEP) test implemented by the Audiology Department of the Center for Studies in Occupational Health and Human Ecology (CESTEH), to verify whether the SOPs are effective, and to assess the need for improvement. The study was conducted in three phases: in the first phase, eight speech-language pathologists and seven physicians, none with BAEP experience, were instructed to read and perform each SOP, and all participants then evaluated the SOPs by responding to a questionnaire; in the second phase, the questionnaires were analyzed and the three SOP texts were revised; in the third phase, nine speech-language pathologists and six physicians, also without BAEP experience, read and re-evaluated the revised SOPs through a questionnaire. In the first phase, difficulties in understanding the texts were found, raising doubts about the procedures; nevertheless, every participant was able to perform the procedure as a whole. In the third phase, after the revision, all individuals were able to perform the procedures appropriately and continuously, without any doubts. The assessment of the SOPs by questionnaire showed a need to adapt the texts. After the texts were revised according to the health professionals' suggestions, the SOPs assisted in the execution of the task, which was carried out without difficulties or doubts; they were therefore regarded as effective and as ensuring the quality of the service offered.

  3. Effect of 24 hours of sleep deprivation on auditory and linguistic perception: a comparison among young controls, sleep-deprived participants, dyslexic readers, and aging adults.

    PubMed

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-06-01

    To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Fifty-five sleep-deprived young adults were compared with 29 aging adults (older than 60 years) and with 18 young controls on auditory temporal order judgment (TOJ) and on speech perception tasks (Experiment 1). The sleep-deprived participants were also compared with 51 dyslexic readers and with the young controls on TOJ and phonological awareness tasks (One-Minute Test for Pseudowords, Phoneme Deletion, Pig Latin, and Spoonerism; Experiment 2). Sleep deprivation resulted in longer TOJ thresholds, poorer speech perception, and poorer nonword reading compared with controls. The TOJ thresholds of the sleep-deprived participants were comparable to those of the aging adults, but their pattern of speech performance differed. They also performed better on TOJ and phonological awareness than dyslexic readers. A variety of linguistic skills are affected by sleep deprivation. The comparison of sleep-deprived individuals with other groups with known difficulties in these linguistic skills might suggest that different groups exhibit common difficulties.

  4. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults

    PubMed Central

    Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.

    2017-01-01

    Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities such as working memory, verbal skills, and attention. Although clinically highly relevant, no consensus has yet been reached about which cognitive factors predict the intelligibility of noise-vocoded speech in healthy subjects or of speech in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills, as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables significantly predicted vocoded-speech performance: the ability to perceive visually degraded speech, tested by the Text Reception Threshold; vocabulary size, assessed with the Multiple Choice Word Test; working memory, gauged with the Operation Span Test; verbal learning and recall, measured with the Verbal Learning and Retention Test; and task-switching ability, tested by the Comprehensive Trail-Making Test. These cognitive abilities thus explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329
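
    As a rough illustration of how such stimuli are built, here is a minimal noise-vocoder sketch in Python (numpy/scipy assumed); the band count, band edges, and envelope cutoff are illustrative values, not the study's actual parameters:

    ```python
    # Minimal noise-vocoder sketch: split speech into log-spaced bands, extract
    # each band's envelope, and use it to modulate band-limited noise -- a common
    # simulation of the spectral degradation heard through a cochlear implant.
    # All parameter values are assumptions; fs must exceed 2*f_hi.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def noise_vocode(speech, fs, n_bands=6, f_lo=100.0, f_hi=7000.0, env_cutoff=50.0):
        edges = np.geomspace(f_lo, f_hi, n_bands + 1)      # log-spaced band edges
        noise = np.random.default_rng(0).standard_normal(len(speech))
        out = np.zeros(len(speech))
        b_env, a_env = butter(2, env_cutoff / (fs / 2))    # envelope smoother
        for lo, hi in zip(edges[:-1], edges[1:]):
            b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            band = filtfilt(b, a, speech)                  # analysis band
            env = filtfilt(b_env, a_env, np.abs(band))     # rectify + low-pass
            out += np.clip(env, 0, None) * filtfilt(b, a, noise)
        return out / (np.max(np.abs(out)) + 1e-12)         # peak-normalize
    ```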

  5. Factors contributing to speech perception scores in long-term pediatric cochlear implant users.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Blamey, Peter J; Tobey, Emily A; Brenner, Christine A

    2011-02-01

    The objectives of this report are to (1) describe the speech perception abilities of long-term pediatric cochlear implant (CI) recipients by comparing scores obtained at elementary school (CI-E, 8 to 9 yrs) with scores obtained at high school (CI-HS, 15 to 18 yrs); (2) evaluate speech perception abilities in demanding listening conditions (i.e., noise and lower intensity levels) at adolescence; and (3) examine the relation of speech perception scores to speech and language development over this longitudinal timeframe. All 112 teenagers were part of a previous nationwide study of 8- and 9-yr-olds (N = 181) who received a CI between 2 and 5 yrs of age. The test battery included (1) the Lexical Neighborhood Test (LNT; hard and easy word lists); (2) the Bamford-Kowal-Bench sentence test; (3) the Children's Auditory-Visual Enhancement Test; (4) the Test of Auditory Comprehension of Language at CI-E; (5) the Peabody Picture Vocabulary Test at CI-HS; and (6) the McGarr sentences (consonants correct) at CI-E and CI-HS. CI-HS speech perception was measured in both optimal and demanding listening conditions (i.e., background noise and low-intensity level). Speech perception scores were compared based on age at test, lexical difficulty of stimuli, listening environment (optimal and demanding), input mode (visual and auditory-visual), and language age. All group mean scores significantly increased with age across the two test sessions. Scores of adolescents significantly decreased in demanding listening conditions. The effect of lexical difficulty on the LNT scores, as evidenced by the difference in performance between easy versus hard lists, increased with age and decreased for adolescents in challenging listening conditions. Calculated curves for percent correct speech perception scores (LNT and Bamford-Kowal-Bench) and consonants correct on the McGarr sentences plotted against age-equivalent language scores on the Test of Auditory Comprehension of Language and Peabody Picture Vocabulary Test achieved asymptote at similar ages, around 10 to 11 yrs. On average, children receiving CIs between 2 and 5 yrs of age exhibited significant improvement on tests of speech perception, lipreading, speech production, and language skills measured between primary grades and adolescence. Evidence suggests that improvement in speech perception scores with age reflects increased spoken language level up to a language age of about 10 yrs. Speech perception performance significantly decreased with softer stimulus intensity level and with introduction of background noise. Upgrades to newer speech processing strategies and greater use of frequency-modulated systems may be beneficial for ameliorating performance under these demanding listening conditions.

  6. Speech Understanding in Noise by Patients with Cochlear Implants Using a Monaural Adaptive Beamformer

    ERIC Educational Resources Information Center

    Dorman, Michael F.; Natale, Sarah; Spahr, Anthony; Castioni, Erin

    2017-01-01

    Purpose: The aim of this experiment was to compare, for patients with cochlear implants (CIs), the improvement for speech understanding in noise provided by a monaural adaptive beamformer and for two interventions that produced bilateral input (i.e., bilateral CIs and hearing preservation [HP] surgery). Method: Speech understanding scores for…

  7. GraphoGame – a catalyst for multi-level promotion of literacy in diverse contexts

    PubMed Central

    Ojanen, Emma; Ronimus, Miia; Ahonen, Timo; Chansa-Kabali, Tamara; February, Pamela; Jere-Folotiya, Jacqueline; Kauppinen, Karri-Pekka; Ketonen, Ritva; Ngorosho, Damaris; Pitkänen, Mikko; Puhakka, Suzanne; Sampa, Francis; Walubita, Gabriel; Yalukanda, Christopher; Pugh, Ken; Richardson, Ulla; Serpell, Robert; Lyytinen, Heikki

    2015-01-01

    GraphoGame (GG) was originally developed as a technology-based intervention method for supporting children with reading difficulties. It is now known that children who face problems in reading acquisition have difficulties in learning to differentiate and manipulate speech sounds and, consequently, in connecting these sounds to corresponding letters. GG was developed to provide intensive training in matching speech sounds and larger units of speech to their written counterparts. GG has been shown to benefit children with reading difficulties, and the game is now available to all Finnish school children for literacy support. Presently, millions of children in Africa fail to learn to read despite years of primary school education. As many African languages have transparent writing systems similar in structure to Finnish, it was hypothesized that GG-based training of letter-sound correspondences could also be effective in supporting children's learning in African countries. In this article we describe how GG has developed from a Finnish dyslexia prevention game into an intervention method that can be used not only to improve children's reading performance but also to raise teachers' and parents' awareness of the development of reading skill and of effective reading instruction methods. We also provide an overview of GG activities in Zambia, Kenya, Tanzania, and Namibia, and of the potential to promote education for all through a combination of scientific research and mobile learning. PMID:26113825

  8. The effects of noise exposure and musical training on suprathreshold auditory processing and speech perception in noise.

    PubMed

    Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey

    2017-09-01

    Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing, by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high-frequency hearing thresholds and medial olivocochlear suppression strength are important factors related to the ability to process speech in noise. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  9. A study on nonlinear characteristics of speech sound with reference to some languages of North East region

    NASA Astrophysics Data System (ADS)

    Dutta, Rashmi

    INTRODUCTION: Speech science is, in fact, a sub-discipline of nonlinear dynamical systems theory [2, 104]. There are two different types of dynamical system. A continuous dynamical system may be defined, for the continuous-time case, by the equation ẋ = F(x), where x is a vector of length d defining a point in a d-dimensional space, F is some function (linear or nonlinear) operating on x, and ẋ is the time derivative of x. This system is deterministic, in that it is possible to completely specify its evolution, or flow of trajectories, in the d-dimensional space, given the initial starting conditions. A discrete dynamical system can be defined as a map, obtained by the process of iteration: X_{n+1} = G(X_n), where X_n is again a d-length vector at time step n, and G is an operator function. Given an initial state X_0, it is possible to calculate the value of X_n for any n > 0. Speech has evolved as a primary form of communication between humans; speech and hearing are man's most used means of communication [104, 114]. Analysis of human speech has been a goal of research during the last few decades [105, 108]. With the rapid development of information technology (IT), human-machine communication using natural speech has received wide attention from both academic and business communities. One highly quantitative approach to characterizing the communications potential of speech is in terms of information theory, as introduced by Shannon [C. E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal, Vol. 27, pp. 623-656, October 1948]. According to information theory, speech can be represented in terms of its message content, or information. An alternative way of characterizing speech is in terms of the signal carrying the message information, i.e., the acoustic waveform. Although information-theoretic ideas have played a major role in sophisticated communications systems, it is the speech representation based on the waveform, or some parametric model, which has been most useful in practical applications. Developing a system that can understand natural language has been a continuing goal of speech researchers. Fully automatic, high-quality machine translation systems are extremely difficult to build. The difficulty arises from the following reason: in any natural language text, only part of the information to be conveyed is explicitly expressed; it is the human mind which fills in and supplements the details using contextual knowledge.
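
    Purely to make the discrete map X_{n+1} = G(X_n) above concrete, the sketch below (Python; the logistic map is a stock textbook example, not taken from this abstract) iterates such a system and shows the sensitivity to initial conditions that motivates nonlinear analyses of speech:

    ```python
    # Iterate the discrete dynamical system X_{n+1} = G(X_n) with the logistic
    # map G(x) = r*x*(1-x) in its chaotic regime (r = 3.9). Two trajectories
    # started 1e-6 apart diverge to order-one differences within ~50 steps.
    def iterate(g, x0, n):
        xs = [x0]
        for _ in range(n):
            xs.append(g(xs[-1]))
        return xs

    r = 3.9
    g = lambda x: r * x * (1.0 - x)
    a = iterate(g, 0.400000, 50)
    b = iterate(g, 0.400001, 50)       # slightly perturbed initial state
    print(abs(a[-1] - b[-1]))          # large despite the tiny perturbation
    ```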

  10. Difficulties Generated by Allergies.

    ERIC Educational Resources Information Center

    Baker, Barbara M.; Baker, Claude D.

    1980-01-01

    Allergies have recently been related to the development of speech, language, and hearing problems in students. Diagnosis and treatment are compounded by multiple complaints or by the absence of complaints. (Authors/CJ)

  11. An Analysis of Difficulties of Children with Stuttering Enrolled in Turkish Primary Inclusive Classes Who Encounter in Academic and Social Activities: From Their Perspectives

    ERIC Educational Resources Information Center

    Sari, Hakan; Gökdag, Hatice

    2017-01-01

    Stuttering means that children have difficulties with repetitions of sounds, syllables, words and phrases, or with interruptions to the flow of speech in the form of prolongations or blocks. In the "International Classification of Diseases-10" ("ICD-10"; 1992), stuttering was defined as speech…

  12. Upholding the human right of children in New Zealand experiencing communication difficulties to voice their needs and dreams.

    PubMed

    Doell, Elizabeth; Clendon, Sally

    2018-02-01

    New Zealand Ministry of Education's proposal for an updated service to support children experiencing communication difficulties provides an opportunity to consider the essential criteria required for children to express their opinion, information and ideas as outlined under Article 19 of the Universal Declaration of Human Rights. This commentary begins with a summary of key policies that provide strategic direction for enhancing children's rights to be actively involved in the development of services designed to support them and to communicate and participate in inclusive environments. The authors use a human rights lens to inform the development of speech-language pathology services that facilitate individuals' contribution and engagement and are responsive to their needs. A review of international literature describing the lived experience of children and young people identifies key factors related to accessible information, service coordination, holistic practice, and partnerships that facilitate co-constructed understanding and decision-making. The commentary concludes with suggested recommendations for structuring services, establishing partnership models, and capability building.

  13. An algorithm to improve speech recognition in noise for hearing-impaired listeners

    PubMed Central

    Healy, Eric W.; Yoho, Sarah E.; Wang, Yuxuan; Wang, DeLiang

    2013-01-01

    Despite considerable effort, monaural (single-microphone) algorithms capable of increasing the intelligibility of speech in noise have remained elusive. Successful development of such an algorithm is especially important for hearing-impaired (HI) listeners, given their particular difficulty in noisy backgrounds. In the current study, an algorithm based on binary masking was developed to separate speech from noise. Unlike the ideal binary mask, which requires prior knowledge of the premixed signals, the masks used to segregate speech from noise in the current study were estimated by training the algorithm on speech not used during testing. Sentences were mixed with speech-shaped noise and with babble at various signal-to-noise ratios (SNRs). Testing using normal-hearing and HI listeners indicated that intelligibility increased following processing in all conditions. These increases were larger for HI listeners, for the modulated background, and for the least-favorable SNRs. They were also often substantial, allowing several HI listeners to improve intelligibility from scores near zero to values above 70%. PMID:24116438
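
    For contrast with the trained estimator described above, the sketch below computes the *ideal* binary mask, which does require the premixed signals; the 0 dB local criterion and STFT settings are assumed values:

    ```python
    # Ideal binary mask: keep a time-frequency unit when the local SNR of the
    # premixed speech and noise exceeds a criterion (here 0 dB), then apply the
    # mask to the mixture's STFT and resynthesize. Parameters are illustrative.
    import numpy as np
    from scipy.signal import stft, istft

    def ideal_binary_mask(speech, noise, fs, lc_db=0.0, nperseg=512):
        _, _, S = stft(speech, fs, nperseg=nperseg)
        _, _, N = stft(noise, fs, nperseg=nperseg)
        snr_db = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
        mask = (snr_db > lc_db).astype(float)              # 1 = speech-dominated
        _, _, M = stft(speech + noise, fs, nperseg=nperseg)
        _, resynth = istft(M * mask, fs, nperseg=nperseg)  # masked mixture
        return mask, resynth
    ```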

  14. Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.

    PubMed

    Ting, H; Yunus, J; Mohd Nordin, M Z

    2005-01-01

    The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by speech-language pathologists. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. We therefore propose a computer-based Malay auditory discrimination test to automate the whole assessment process and to customize the test according to the client's specific speech error sounds. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds, except for differentiating the /g/-/k/ contrast. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.

  15. Talking to children matters: Early language experience strengthens processing and builds vocabulary

    PubMed Central

    Weisleder, Adriana; Fernald, Anne

    2016-01-01

    Infants differ substantially in their rates of language growth, and slower growth predicts later academic difficulties. This study explored how the amount of speech to infants in Spanish-speaking families low in socioeconomic status (SES) influenced the development of children's skill in real-time language processing and vocabulary learning. All-day recordings of parent-infant interactions at home revealed striking variability among families in how much speech caregivers addressed to their child. Infants who experienced more child-directed speech became more efficient in processing familiar words in real time and had larger expressive vocabularies by 24 months, although speech simply overheard by the child was unrelated to vocabulary outcomes. Mediation analyses showed that the effect of child-directed speech on expressive vocabulary was explained by infants’ language-processing efficiency, suggesting that richer language experience strengthens processing skills that facilitate language growth. PMID:24022649

  16. Perceptual consequences of changes in vocoded speech parameters in various reverberation conditions.

    PubMed

    Drgas, Szymon; Blaszak, Magdalena A

    2009-08-01

    To study the perceptual consequences of changes in parameters of vocoded speech in various reverberation conditions. The three controlled variables were the number of vocoder bands, the instantaneous frequency change rate, and the reverberation conditions. The effects were quantified in terms of (a) nonsense-word recognition scores for young normal-hearing listeners, (b) ease of listening based on the time of response (response delay), and (c) a subjective measure of difficulty (10-point scale). It was shown that the fine structure of a signal is a relevant cue for speech perception in reverberation conditions. The results obtained for different numbers of bands, frequency-modulation cutoff frequencies, and reverberation conditions showed that all of these parameters are important for speech perception in reverberation. Only slow variations in the instantaneous frequency (<50 Hz) seem to play a critical role in speech intelligibility in anechoic conditions. In reverberant enclosures, however, fast fluctuations of instantaneous frequency are also significant.
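
    The two per-band quantities manipulated here, the envelope and the instantaneous frequency, can be extracted with a Hilbert transform; a minimal sketch follows (the 50 Hz cutoff mirrors the abstract, but the filter orders are assumptions):

    ```python
    # Extract a band's Hilbert envelope and its instantaneous frequency (IF),
    # then low-pass the IF so that only slow variations (e.g. <50 Hz) remain.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def envelope_and_slow_if(band, fs, if_cutoff=50.0):
        analytic = hilbert(band)
        env = np.abs(analytic)                            # Hilbert envelope
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)     # IF in Hz, length N-1
        b, a = butter(2, if_cutoff / (fs / 2))
        return env, filtfilt(b, a, inst_freq)             # env, slow IF
    ```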

  17. Dysarthria and broader motor speech deficits in Dravet syndrome.

    PubMed

    Turner, Samantha J; Brown, Amy; Arpone, Marta; Anderson, Vicki; Morgan, Angela T; Scheffer, Ingrid E

    2017-02-21

    To analyze the oral motor, speech, and language phenotype in 20 children and adults with Dravet syndrome (DS) associated with mutations in SCN1A. Fifteen verbal and 5 minimally verbal DS patients with SCN1A mutations (aged 15 months-28 years) underwent a tailored assessment battery. Speech was characterized by imprecise articulation; abnormal nasal resonance, voice, and pitch; and prosodic errors. Half of the verbal patients had moderately to severely impaired conversational speech intelligibility. Oral motor impairment, motor planning/programming difficulties, and poor postural control were typical. Nonverbal individuals had intentional communication. Cognitive skills varied markedly, with intellectual functioning ranging from the low average range to severe intellectual disability. Language impairment was congruent with cognition. We describe a distinctive speech, language, and oral motor phenotype in children and adults with DS associated with mutations in SCN1A. Recognizing this phenotype will guide therapeutic intervention in patients with DS. © 2017 American Academy of Neurology.

  18. Dysarthria and broader motor speech deficits in Dravet syndrome

    PubMed Central

    Turner, Samantha J.; Brown, Amy; Arpone, Marta; Anderson, Vicki; Morgan, Angela T.

    2017-01-01

    Objective: To analyze the oral motor, speech, and language phenotype in 20 children and adults with Dravet syndrome (DS) associated with mutations in SCN1A. Methods: Fifteen verbal and 5 minimally verbal DS patients with SCN1A mutations (aged 15 months-28 years) underwent a tailored assessment battery. Results: Speech was characterized by imprecise articulation; abnormal nasal resonance, voice, and pitch; and prosodic errors. Half of the verbal patients had moderately to severely impaired conversational speech intelligibility. Oral motor impairment, motor planning/programming difficulties, and poor postural control were typical. Nonverbal individuals had intentional communication. Cognitive skills varied markedly, with intellectual functioning ranging from the low average range to severe intellectual disability. Language impairment was congruent with cognition. Conclusions: We describe a distinctive speech, language, and oral motor phenotype in children and adults with DS associated with mutations in SCN1A. Recognizing this phenotype will guide therapeutic intervention in patients with DS. PMID:28148630

  19. Comparison of adults who stutter with and without social anxiety disorder.

    PubMed

    Iverach, Lisa; Jones, Mark; Lowe, Robyn; O'Brian, Susan; Menzies, Ross G; Packman, Ann; Onslow, Mark

    2018-06-01

    Social anxiety disorder is a debilitating anxiety disorder associated with significant life impairment. The purpose of the present study is to evaluate overall functioning for adults who stutter with and without a diagnosis of social anxiety disorder. Participants were 275 adults who stuttered (18-80 years), including 219 males (79.6%) and 56 females (20.4%), who were enrolled to commence speech treatment for stuttering. Comparisons were made between participants diagnosed with social anxiety disorder (n = 82, 29.8%) and those without that diagnosis (n = 193, 70.2%). Although the socially anxious group was significantly younger than the non-socially anxious group, no other demographic differences were found. When compared to the non-socially anxious group, the socially anxious group did not demonstrate significantly higher self-reported stuttering severity or percentage of syllables stuttered. Yet the socially anxious group reported more speech dissatisfaction and avoidance of speaking situations, significantly more psychological problems, and a greater negative impact of stuttering. Significant differences in speech and psychological variables between groups suggest that, despite not demonstrating more severe stuttering, socially anxious adults who stutter demonstrate more psychological difficulties and have a more negative view of their speech. The present findings suggest that the demographic status of adults who stutter is not worse for those with social anxiety disorder. These findings pertain to a clinical sample, and cannot be generalized to the wider population of adults who stutter from the general community. Further research is needed to understand the longer-term impact of social anxiety disorder for those who stutter. Copyright © 2018. Published by Elsevier Inc.

  20. Assessment and management of the communication difficulties of children with cerebral palsy: a UK survey of SLT practice

    PubMed Central

    Mary Watson, Rose; Pennington, Lindsay

    2015-01-01

    Background: Communication difficulties are common in cerebral palsy (CP) and are frequently associated with motor, intellectual and sensory impairments. Speech and language therapy research comprises single-case experimental design and small group studies, limiting evidence-based intervention and possibly exacerbating variation in practice. Aims: To describe the assessment and intervention practices of speech and language therapists (SLTs) in the UK in their management of communication difficulties associated with CP in childhood. Methods & Procedures: An online survey of the assessments and interventions employed by UK SLTs working with children and young people with CP was conducted. The survey was publicized via NHS trusts, the Royal College of Speech and Language Therapists (RCSLT) and private practice associations using a variety of social media. The survey was open from 5 December 2011 to 30 January 2012. Outcomes & Results: Two hundred and sixty-five UK SLTs who worked with children and young people with CP in England (n = 199), Wales (n = 13), Scotland (n = 36) and Northern Ireland (n = 17) completed the survey. SLTs reported using a wide variety of published, standardized tests, but most commonly reported assessing oromotor function, speech, receptive and expressive language, and communication skills by observation or using assessment schedules they had developed themselves. The most highly prioritized areas for intervention were dysphagia, augmentative and alternative communication (AAC)/interaction, and receptive language. SLTs reported using a wide variety of techniques to address difficulties in speech, language and communication. Some interventions used have no supporting evidence. Many SLTs felt unable to estimate the hours of therapy per year children and young people with CP and communication disorders received from their service. Conclusions & Implications: The assessment and management of communication difficulties associated with CP in childhood varies widely in the UK. Lack of standard assessment practices prevents comparisons across time or services. The adoption of a standard set of agreed clinical measures would enable benchmarking of service provision, permit the development of large-scale research studies using routine clinical data and facilitate the identification of potential participants for research studies in the UK. Some interventions provided lack evidence. Recent systematic reviews could guide intervention, but robust evidence is needed in most areas addressed in clinical practice. PMID:25652139

  1. Speech processing and production in two-year-old children acquiring isiXhosa: A tale of two children

    PubMed Central

    Rossouw, Kate; Fish, Laura; Jansen, Charne; Manley, Natalie; Powell, Michelle; Rosen, Loren

    2016-01-01

    We investigated the speech processing and production of 2-year-old children acquiring isiXhosa in South Africa. Two children (2 years, 5 months; 2 years, 8 months) are presented as single cases. Speech input processing, stored phonological knowledge and speech output are described, based on data from auditory discrimination, naming, and repetition tasks. Both children were approximating adult levels of accuracy in their speech output, although naming was constrained by vocabulary. Performance across tasks was variable: one child showed a relative strength in repetition and experienced most difficulty with auditory discrimination; the other performed equally well in naming and repetition, and obtained 100% on her auditory discrimination task. There are limited data regarding the typical development of isiXhosa, and the focus has mainly been on speech production. This exploratory study describes typical development of isiXhosa using a variety of tasks understood within a psycholinguistic framework. We describe some ways in which speech and language therapists can devise and carry out assessment with children in situations where few formal assessments exist, and also detail the challenges of such work. PMID:27245131

  2. Effects of Within-Talker Variability on Speech Intelligibility in Mandarin-Speaking Adult and Pediatric Cochlear Implant Patients

    PubMed Central

    Su, Qiaotong; Galvin, John J.; Zhang, Guoping; Li, Yongxin

    2016-01-01

    Cochlear implant (CI) speech performance is typically evaluated using well-enunciated speech produced at a normal rate by a single talker. CI users often have greater difficulty with variations in speech production encountered in everyday listening. Within a single talker, speaking rate, amplitude, duration, and voice pitch information may be quite variable, depending on the production context. The coarse spectral resolution afforded by the CI limits perception of voice pitch, which is an important cue for speech prosody and for tonal languages such as Mandarin Chinese. In this study, sentence recognition from the Mandarin speech perception database was measured in adult and pediatric Mandarin-speaking CI listeners for a variety of speaking styles: voiced speech produced at slow, normal, and fast speaking rates; whispered speech; voiced emotional speech; and voiced shouted speech. Recognition of Mandarin Hearing in Noise Test sentences was also measured. Results showed that performance was significantly poorer with whispered speech relative to the other speaking styles and that performance was significantly better with slow speech than with fast or emotional speech. Results also showed that adult and pediatric performance was significantly poorer with Mandarin Hearing in Noise Test than with Mandarin speech perception sentences at the normal rate. The results suggest that adult and pediatric Mandarin-speaking CI patients are highly susceptible to whispered speech, due to the lack of lexically important voice pitch cues and perhaps other qualities associated with whispered speech. The results also suggest that test materials may contribute to differences in performance observed between adult and pediatric CI users. PMID:27363714

  3. Evaluating the Effort Expended to Understand Speech in Noise Using a Dual-Task Paradigm: The Effects of Providing Visual Speech Cues

    ERIC Educational Resources Information Center

    Fraser, Sarah; Gagne, Jean-Pierre; Alepins, Majolaine; Dubois, Pascale

    2010-01-01

    Purpose: Using a dual-task paradigm, 2 experiments (Experiments 1 and 2) were conducted to assess differences in the amount of listening effort expended to understand speech in noise in audiovisual (AV) and audio-only (A-only) modalities. Experiment 1 had equivalent noise levels in both modalities, and Experiment 2 equated speech recognition…

  4. A Near-Infrared Spectroscopy Study on Cortical Hemodynamic Responses to Normal and Whispered Speech in 3- to 7-Year-Old Children

    ERIC Educational Resources Information Center

    Remijn, Gerard B.; Kikuchi, Mitsuru; Yoshimura, Yuko; Shitamichi, Kiyomi; Ueno, Sanae; Tsubokawa, Tsunehisa; Kojima, Haruyuki; Higashida, Haruhiro; Minabe, Yoshio

    2017-01-01

    Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for…

  5. Done Wrong or Said Wrong? Young Children Understand the Normative Directions of Fit of Different Speech Acts

    ERIC Educational Resources Information Center

    Rakoczy, Hannes; Tomasello, Michael

    2009-01-01

    Young children use and comprehend different kinds of speech acts from the beginning of their communicative development. But it is not clear how they understand the conventional and normative structure of such speech acts. In particular, imperative speech acts have a world-to-word direction of fit, such that their fulfillment means that the world…

  6. Auditory and Visual Sustained Attention in Children with Speech Sound Disorder

    PubMed Central

    Murphy, Cristina F. B.; Pagan-Neves, Luciana O.; Wertzner, Haydée F.; Schochat, Eliane

    2014-01-01

    Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD, mean age 8.11 ± 1.231 years, and 37 typically developing children, mean age 8.76 ± 1.461 years) were invited to participate in this study. Auditory and visual sustained-attention tasks were administered. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications. PMID:24675815

  7. Processing of speech and non-speech stimuli in children with specific language impairment

    NASA Astrophysics Data System (ADS)

    Basu, Madhavi L.; Surprenant, Aimee M.

    2003-10-01

    Specific language impairment (SLI) is a developmental language disorder in which children demonstrate varying degrees of difficulty in acquiring a spoken language. One possible underlying cause is that children with SLI have deficits in processing sounds that are of short duration or that are presented rapidly. Studies so far have compared their performance on speech and nonspeech sounds of unequal complexity. Hence, it is still unclear whether the deficit is specific to the perception of speech sounds or whether it affects auditory function more generally. The current study aims to answer this question by comparing the performance of children with SLI on speech and nonspeech sounds synthesized from sine-wave stimuli. The children will be tested using the classic categorical perception paradigm, which includes both the identification and discrimination of stimuli along a continuum. If there is a deficit in performance on both speech and nonspeech tasks, it will show that these children have a deficit in processing complex sounds. Poor performance on only the speech sounds will indicate that the deficit is more related to language. The findings will offer insights into the exact nature of the speech perception deficits in children with SLI. [Work supported by ASHF.]

  8. Speech fluency profile in Williams-Beuren syndrome: a preliminary study.

    PubMed

    Rossi, Natalia Freitas; Souza, Deise Helena de; Moretti-Ferreira, Danilo; Giacheti, Célia Maria

    2009-01-01

    The speech fluency pattern attributed to individuals with Williams-Beuren syndrome (WBS) is supported by the effectiveness of the phonological loop. Some studies have reported the occurrence of speech disruptions caused by lexical and semantic deficits; however, the type and frequency of such speech disruptions have not been well elucidated. The aims were to determine the speech fluency profile of individuals with WBS and to compare the speech performance of these individuals with that of a control group matched by gender and mental age. Twelve subjects with Williams-Beuren syndrome, chronologically aged between 6.6 and 23.6 years and with mental age ranging from 4.8 to 14.3 years, were evaluated. They were compared with another group consisting of 12 subjects of similar mental age and with no speech or learning difficulties. Speech fluency parameters were assessed according to the ABFW Language Test: type and frequency of speech disruptions, and speech rate. The obtained results were compared between the groups. In comparison with individuals of similar mental age and typical speech and language development, the group with Williams-Beuren syndrome showed a greater percentage of speech discontinuity and an increased frequency of common hesitations and word repetitions. The speech fluency profile presented by individuals with WBS in this study suggests that the presence of disfluencies may be caused by deficits in the lexical, semantic, and syntactic processing of verbal information. The authors stress that further systematic investigations of the subject are warranted.

  9. Lateralized electrical brain activity reveals covert attention allocation during speaking.

    PubMed

    Rommers, Joost; Meyer, Antje S; Praamstra, Peter

    2017-01-27

    Speakers usually begin to speak while only part of the utterance has been planned. Earlier work has shown that speech planning processes are reflected in speakers' eye movements as they describe visually presented objects. However, to-be-named objects can be processed to some extent before they have been fixated upon, presumably because attention can be allocated to objects covertly, without moving the eyes. The present study investigated whether EEG could track speakers' covert attention allocation as they produced short utterances to describe pairs of objects (e.g., "dog and chair"). The processing difficulty of each object was varied by presenting it in upright orientation (easy) or in upside-down orientation (difficult). Background squares flickered at different frequencies in order to elicit steady-state visual evoked potentials (SSVEPs). The N2pc component, associated with the focusing of attention on an item, was detectable not only prior to speech onset, but also during speaking. The time course of the N2pc showed that attention shifted to each object in the order of mention prior to speech onset. Furthermore, greater processing difficulty increased the time speakers spent attending to each object. This demonstrates that the N2pc can track covert attention allocation in a naming task. In addition, an effect of processing difficulty at around 200-350 ms after stimulus onset revealed early attention allocation to the second to-be-named object. The flickering backgrounds elicited SSVEPs, but SSVEP amplitude was not influenced by processing difficulty. These results help complete the picture of the coordination of visual information uptake and motor output during speaking. Copyright © 2016 Elsevier Ltd. All rights reserved.
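
    A minimal sketch of the SSVEP readout implied here: the EEG amplitude spectrum is sampled at the known flicker frequency of the background square (the sampling rate, epoch length, and 12 Hz flicker below are assumed values, not those of the study):

    ```python
    # Quantify an SSVEP as the amplitude of the EEG spectrum at the flicker
    # frequency. The simulated epoch below embeds a 12 Hz drive in noise.
    import numpy as np

    def ssvep_amplitude(epoch, fs, flicker_hz):
        spec = np.abs(np.fft.rfft(epoch)) / len(epoch)
        freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
        return spec[np.argmin(np.abs(freqs - flicker_hz))]

    fs = 500.0
    t = np.arange(0, 2.0, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 12 * t) + np.random.default_rng(0).normal(0, 1, t.size)
    print(ssvep_amplitude(eeg, fs, flicker_hz=12.0))   # peaks at the 12 Hz drive
    ```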

  10. Assessing the Importance of Lexical Tone Contour to Sentence Perception in Mandarin-Speaking Children With Normal Hearing.

    PubMed

    Zhu, Shufeng; Wong, Lena L N; Wang, Bin; Chen, Fei

    2017-07-12

    The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal contour, flat contour, or a tone contour randomly selected from the four Mandarin lexical tone contours. A convenience sample of 75 Mandarin-speaking participants with normal hearing, ages 7, 9, and 11 years (25 participants in each age group), was selected. Participants were asked to repeat the synthesized speech in quiet and in speech spectrum-shaped noise at 0 dB signal-to-noise ratio. In quiet, sentence recognition by the 11-year-old children was similar to that of adults, and misrepresented lexical tone contours did not have a detrimental effect. However, the performance of children ages 9 and 7 years was significantly poorer. The performance of all three age groups, especially the younger children, declined significantly in noise. The present research suggests that lexical tone contour plays an important role in Mandarin sentence recognition, and misrepresented tone contours result in greater difficulty in sentence recognition in younger children. These results imply that maturation and/or language use experience play a role in the processing of tone contours for Mandarin speech understanding, particularly in noise.

  11. Seven- to Nine-Year-Olds' Understandings of Speech Marks: Some Issues and Problems

    ERIC Educational Resources Information Center

    Hall, Nigel; Sing, Sue

    2011-01-01

    At first sight the speech mark would seem to be one of the easiest to use of all punctuation marks. After all, all one has to do is take the piece of speech or written language and surround it with the appropriately shaped marks. But, are speech marks as easy to understand and use as suggested above, especially for young children beginning their…

  12. The effect of presentation level and stimulation rate on speech perception and modulation detection for cochlear implant users.

    PubMed

    Brochier, Tim; McDermott, Hugh J; McKay, Colette M

    2017-06-01

    In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes for different presentation levels. Speech recognition in quiet and noise, and acoustic amplitude modulation detection thresholds (AMDTs) were measured with acoustic stimuli presented to speech processors via direct audio input (DAI). With the 500 pps processor, results showed significantly better performance for consonant-vowel nucleus-consonant words in quiet, and a reduced effect of noise on sentence recognition. However, no rate or level effect was found for AMDTs, perhaps partly because of amplitude compression in the sound processor. AMDTs were found to be strongly correlated with the effect of noise on sentence perception at low levels. These results indicate that AMDTs, at least when measured with the CP910 Freedom speech processor via DAI, explain between-subject variance of speech understanding, but do not explain within-subject variance for different rates and levels.
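
    For readers unfamiliar with AMDTs, the sketch below generates the type of stimulus involved: a noise carrier multiplied by (1 + m·sin(2π·fm·t)), with the modulation depth m commonly expressed in dB as 20·log10(m); all parameter values are illustrative, not those of the study:

    ```python
    # Sinusoidally amplitude-modulated noise for a modulation-detection task.
    # depth_db = -12 dB corresponds to m = 10**(-12/20), roughly 0.25.
    import numpy as np

    def am_noise(duration_s, fs, fm_hz, depth_db):
        m = 10 ** (depth_db / 20.0)                      # modulation index
        t = np.arange(0, duration_s, 1.0 / fs)
        carrier = np.random.default_rng(0).standard_normal(t.size)
        return carrier * (1.0 + m * np.sin(2 * np.pi * fm_hz * t))

    stim = am_noise(0.5, 44100, fm_hz=8.0, depth_db=-12.0)
    ```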

  13. They Still Can't Count: Assessing and Supporting Children's Counting Difficulties in the Early Years of Schooling

    ERIC Educational Resources Information Center

    van Klinken, Eduarda; Juleff, Emma

    2015-01-01

    In this article, the authors describe their efforts to teach counting skills to their class of 5- to 7-year-olds at the Glenleighden School, located in a suburb of Brisbane. As Glenleighden early childhood teachers, they work in collaboration with a multi-disciplinary team that supports children with speech and language difficulties.…

  14. Effect of Parkinson's disease on the production of structured and unstructured speaking tasks: Respiratory physiologic and linguistic considerations

    PubMed Central

    Huber, Jessica E.; Darling, Meghan

    2012-01-01

    Purpose: The purpose of the present study was to examine the effects of cognitive-linguistic deficits and respiratory physiologic changes on respiratory support for speech in Parkinson's disease (PD), using two speech tasks: reading and extemporaneous speech. Methods: Five women with PD, nine men with PD, and 14 age- and sex-matched control participants read a passage and spoke extemporaneously on a topic of their choice at comfortable loudness. Sound pressure level, syllables per breath group, speech rate, and lung volume parameters were measured. Numbers of formulation errors, disfluencies, and filled pauses were counted. Results: Individuals with PD produced shorter utterances than control participants. The relationships between utterance length and lung volume initiation and inspiratory duration were weaker in individuals with PD than in control participants, particularly for the extemporaneous speech task. These results suggest less consistent planning for utterance length by individuals with PD in extemporaneous speech. Individuals with PD produced more formulation errors in both tasks and significantly fewer filled pauses in extemporaneous speech. Conclusions: Both respiratory physiologic and cognitive-linguistic issues affected speech production by individuals with PD. Overall, individuals with PD had difficulty planning or coordinating language formulation and respiratory support, in particular during extemporaneous speech. PMID:20844256

  15. Communicative and psychological dimensions of the KiddyCAT

    PubMed Central

    Clark, Chagit E.; Conture, Edward G.; Frankel, Carl B.; Walden, Tedra A.

    2012-01-01

    Purpose The purpose of the present study was to investigate the underlying constructs of the Communication Attitude Test for Preschool and Kindergarten Children Who Stutter (KiddyCAT; Vanryckeghem & Brutten, 2007), especially those related to awareness of stuttering and negative speech-associated attitudes. Method Participants were 114 preschool-age children who stutter (CWS; n = 52; 15 females) and children who do not stutter (CWNS; n = 62; 31 females). Their scores on the KiddyCAT were assessed to determine whether they differed with respect to talker group (CWS vs. CWNS), chronological age, younger versus older age groups, and gender. A categorical data principal components factor analysis (CATPCA) assessed the quantity and quality of the KiddyCAT dimensions. Results Findings indicated that preschool-age CWS scored significantly higher than CWNS on the KiddyCAT, regardless of age or gender. Additionally, the extraction of a single factor from the CATPCA indicated that one dimension—speech difficulty—appears to underlie the KiddyCAT items. Conclusions As reported by its test developers, the KiddyCAT differentiates between CWS and CWNS. Furthermore, one factor, which appears related to participants’ attitudes towards speech difficulty, underlies the questionnaire. Findings were taken to suggest that children’s responses to the KiddyCAT are related to their perception that speech is difficult, which, for CWS, may be associated with relatively frequent experiences with their speaking difficulties (i.e., stuttering). PMID:22333753

  16. The Role of Visual Speech Information in Supporting Perceptual Learning of Degraded Speech

    ERIC Educational Resources Information Center

    Wayne, Rachel V.; Johnsrude, Ingrid S.

    2012-01-01

    Following cochlear implantation, hearing-impaired listeners must adapt to speech as heard through their prosthesis. Visual speech information (VSI; the lip and facial movements of speech) is typically available in everyday conversation. Here, we investigate whether learning to understand a popular auditory simulation of speech as transduced by a…

  17. TEACHER'S GUIDE TO HIGH SCHOOL SPEECH.

    ERIC Educational Resources Information Center

    JENKINSON, EDWARD B., ED.

    THIS GUIDE TO HIGH SCHOOL SPEECH FOCUSES ON SPEECH AS ORAL COMPOSITION, STRESSING THE IMPORTANCE OF CLEAR THINKING AND COMMUNICATION. THE PROPOSED 1-SEMESTER BASIC COURSE IN SPEECH ATTEMPTS TO IMPROVE THE STUDENT'S ABILITY TO COMPOSE AND DELIVER SPEECHES, TO THINK AND LISTEN CRITICALLY, AND TO UNDERSTAND THE SOCIAL FUNCTION OF SPEECH. IN ADDITION…

  18. The occurrence of 'what', 'where', 'what house' and other repair initiations in the home environment of hearing-impaired individuals.

    PubMed

    Pajo, Kati

    2013-01-01

    Even though research has increasingly focused on the qualitative features of natural conversations, which has improved communication therapy for hearing-impaired individuals (HI) and familiar partners (FP), very little is known about the interactions that occur outside clinical settings. This study investigated qualitatively how both HI and FP initiated repair due to misperceptions or difficulty in understanding during conversations conducted at home. The HI participants' multimodal production style was examined in the present analysis, and frequencies were calculated for the different types of verbal repair initiations. Participants with acquired hearing loss (43-69 years) and their familiar partners (24-67 years) were video recorded (total time approximately 9 h) in their homes. The data consisted of eight conversational dyads. The transcription and analysis utilized Conversation Analysis. A total of 209 (HI 164/FP 45) verbal repair initiations were identified. The five major types of initiation found in the data (used by both HI and FP) were: open repair initiation, targeting question word, question word with repetition, repetition, and candidate understanding. HI participants rarely verbalized their difficulty hearing explicitly, but their production style, which included a fast speech rate and a 'trouble posture', indicated a sensitive routine that was visible particularly in clear misperceptions. Furthermore, the alerting action of turn taking that overlapped with the FP participant's turn could be seen to reveal the depth of a misperception. Individual differences between HI participants were found predominantly in the frequency of their repair initiations, but also in how they used the different types of repair initiation. Through deeper qualitative analysis, conversational research can provide extended knowledge of the occurrence and style of ordinary repair initiations and highlight their relationship to particular conversational environments. A robust starting point in communication therapy is increasing awareness of HI individuals' existing skills. © 2012 Royal College of Speech and Language Therapists.

  19. Evidence of degraded representation of speech in noise, in the aging midbrain and cortex

    PubMed Central

    Simon, Jonathan Z.; Anderson, Samira

    2016-01-01

    Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single-competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and in the cortex. Specifically, midbrain frequency following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity that may impair their ability to encode speech efficiently. PMID:27535374
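
    A highly simplified sketch of the envelope-tracking logic described above: extract the low-frequency speech envelope and correlate it with a cortical signal sampled at the same rate and length. Actual studies fit regularized encoding/decoding models; the 1-8 Hz band and the plain Pearson correlation here are simplifying assumptions:

    ```python
    # Correlate the slow (1-8 Hz) speech envelope with a cortical recording of
    # the same length and rate, as a crude index of envelope synchronization.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt
    from scipy.stats import pearsonr

    def envelope_tracking(speech, cortical, fs, lo=1.0, hi=8.0):
        env = np.abs(hilbert(speech))                     # broadband envelope
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        slow_env = filtfilt(b, a, env)                    # 1-8 Hz envelope
        r, _ = pearsonr(slow_env, cortical)               # synchronization index
        return r
    ```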

  20. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    ERIC Educational Resources Information Center

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  1. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    PubMed

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.
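
    Stimuli like these can be generated with platform text-to-speech voices; a sketch using the cross-platform pyttsx3 wrapper follows (which voices are installed, and the rate value used, depend on the system and are assumptions here):

    ```python
    # Generate a sentence stimulus with a system text-to-speech voice, in the
    # spirit of the Alex (macOS) and David (Windows) conditions described above.
    import pyttsx3

    engine = pyttsx3.init()
    for v in engine.getProperty("voices"):
        print(v.id, v.name)                    # list the installed system voices

    engine.setProperty("rate", 150)            # speaking rate (assumed value)
    engine.save_to_file("The boy is riding the bicycle.", "stimulus_01.wav")
    engine.runAndWait()                        # flush the queued synthesis
    ```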

  2. Machado-Joseph Disease

    MedlinePlus

    ... for drunkenness, difficulty with speech and swallowing, involuntary eye movements, double vision, and frequent urination. Some individuals also ... color and/or contrast, and inability to control eye movements. ... Machado-Joseph disease (MJD), which is also ...

  3. Severity-Based Adaptation with Limited Data for ASR to Aid Dysarthric Speakers

    PubMed Central

    Mustafa, Mumtaz Begum; Salim, Siti Salwah; Mohamed, Noraini; Al-Qatab, Bassam; Siong, Chng Eng

    2014-01-01

    Automatic speech recognition (ASR) is currently used in many assistive technologies, such as helping individuals with speech impairment in their communication ability. One challenge in ASR for speech-impaired individuals is the difficulty of obtaining a good speech database of impaired speakers for building an effective speech acoustic model. Because there are very few existing databases of impaired speech, which are also limited in size, the obvious solution for building a speech acoustic model of impaired speech is to employ adaptation techniques. However, two issues have not been addressed in existing studies in the area of adaptation for speech impairment: (1) identifying the most effective adaptation technique for impaired speech; and (2) the use of suitable source models to build an effective impaired-speech acoustic model. This research investigates these two issues for dysarthria, a type of speech impairment affecting millions of people. We applied both unimpaired and impaired speech as the source model with well-known adaptation techniques, namely maximum likelihood linear regression (MLLR) and constrained MLLR (C-MLLR). The recognition accuracy of each impaired-speech acoustic model was measured in terms of word error rate (WER), with further assessments including phoneme insertion, substitution and deletion rates. Unimpaired speech, when combined with limited high-quality impaired-speech data, improves the performance of ASR systems in recognising severely impaired dysarthric speech. The C-MLLR adaptation technique was also found to be better than MLLR in recognising mildly and moderately impaired speech, based on statistical analysis of the WER. Phoneme substitution was found to be the biggest contributing factor to WER in dysarthric speech at all levels of severity. The results show that speech acoustic models derived from suitable adaptation techniques improve the performance of ASR systems in recognising impaired speech with limited adaptation data. PMID:24466004
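
    To make the MLLR idea concrete, here is a deliberately simplified sketch: a single global affine transform W = [A b] is fit by unweighted least squares to map extended Gaussian means toward the adaptation frames, whereas proper MLLR weights each pair by state occupancy and inverse covariance:

    ```python
    # Simplified global MLLR-style mean adaptation via least squares.
    # frames: (T, d) adaptation features; means: (G, d) source-model means;
    # assign: length-T array giving the Gaussian index aligned to each frame
    # (e.g. from a forced alignment). All names here are illustrative.
    import numpy as np

    def adapt_means(frames, means, assign):
        xi = np.hstack([means[assign], np.ones((len(frames), 1))])  # (T, d+1)
        W, *_ = np.linalg.lstsq(xi, frames, rcond=None)             # (d+1, d)
        ext = np.hstack([means, np.ones((len(means), 1))])          # (G, d+1)
        return ext @ W                                              # adapted means
    ```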

  4. 'All the better for not seeing you': effects of communicative context on the speech of an individual with acquired communication difficulties.

    PubMed

    Bruce, Carolyn; Braidwood, Ursula; Newton, Caroline

    2013-01-01

    Evidence shows that speakers adjust their speech depending on the demands of the listener. However, it is unclear whether people with acquired communication disorders can and do make similar adaptations. This study investigated the impact of different conversational settings on the intelligibility of a speaker with acquired communication difficulties. Twenty-eight assessors listened to recordings of the speaker reading aloud 40 words and 32 sentences to a listener who was either face-to-face or unseen. The speaker's ability to convey information was measured by the accuracy of assessors' orthographic transcriptions of the words and sentences. Assessors' scores were significantly higher in the unseen condition for the single-word task, particularly if they had heard the face-to-face condition first. Scores for the sentence task were significantly higher in the second presentation regardless of the condition. The results from this study suggest that therapy conducted in situations where the client is not able to see their conversation partner may encourage them to perform at a higher level and increase the clarity of their speech. Readers will be able to describe: (1) the range of conversational adjustments made by speakers without communication difficulties; (2) differences between these tasks in offering contextual information to the listener; and (3) the potential for using challenging communicative situations to improve the performance of adults with communication disorders. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Speech after Mao: Literature and Belonging

    ERIC Educational Resources Information Center

    Hsieh, Victoria Linda

    2012-01-01

    This dissertation aims to understand the apparent failure of speech in post-Mao literature to fulfill its conventional functions of representation and communication. In order to understand this pattern, I begin by looking back on the utility of speech for nation-building in modern China. In addition to literary analysis of key authors and works,…

  6. Modulations of 'late' event-related brain potentials in humans by dynamic audiovisual speech stimuli.

    PubMed

    Lebib, Riadh; Papo, David; Douiri, Abdel; de Bode, Stella; Gillon Dowens, Margaret; Baudonnière, Pierre-Marie

    2004-11-30

    Lipreading reliably improves speech perception during face-to-face conversation. Within the range of good dubbing, however, adults tolerate some audiovisual (AV) discrepancies, and lipreading can then give rise to confusion. We used event-related brain potentials (ERPs) to study the perceptual strategies governing the intermodal processing of dynamic and bimodal speech stimuli, either congruently dubbed or not. Electrophysiological analyses revealed that non-coherent audiovisual dubbings modulated in amplitude an endogenous ERP component, the N300, which we compared to an 'N400-like effect' reflecting the difficulty of integrating these conflicting pieces of information. This result adds further support for the existence of a cerebral system underlying 'integrative processes' lato sensu. Further studies should take advantage of this 'N400-like effect' with AV speech stimuli to open new perspectives in the domain of psycholinguistics.

  7. Sound frequency affects speech emotion perception: results from congenital amusia

    PubMed Central

    Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
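
    A rough sketch of the low-pass filtering manipulation described above is given below, assuming a Butterworth filter from SciPy; the 500 Hz cutoff and the input file name are illustrative assumptions, not the study's actual parameters.

```python
# Sketch: low-pass filter a (mono) speech recording to remove high-frequency,
# non-pitch cues, leaving mostly low-frequency/pitch information.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.io import wavfile

fs, speech = wavfile.read("emotional_statement.wav")  # hypothetical file
speech = speech.astype(np.float64)

# 4th-order low-pass Butterworth; sosfiltfilt gives zero-phase filtering
sos = butter(4, 500, btype="lowpass", fs=fs, output="sos")
low_passed = sosfiltfilt(sos, speech)

wavfile.write("emotional_statement_lp.wav", fs, low_passed.astype(np.int16))
```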

  8. Five Lectures on Artificial Intelligence

    DTIC Science & Technology

    1974-09-01

    large systems The current projects on speech understanding (which I will describe later) are an exception to this, dealing explicitly with the problem...learns that "Fred lives in Sydney", we must find some new fact to resolve the tension — perhaps he lives in a zoo. It is...possible Speech Understanding Systems Most of the problems described above might be characterized as relating to the chunking of knowledge. Such ideas are

  9. Effects of sensorineural hearing loss on temporal coding of narrowband and broadband signals in the auditory periphery

    PubMed Central

    Henry, Kenneth S.; Heinz, Michael G.

    2013-01-01

    People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018
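
    The envelope/TFS distinction reviewed above is commonly operationalized with the analytic signal. The sketch below shows this standard decomposition for a single toy narrowband channel (in practice the signal is first split by an auditory filterbank); it is a generic illustration, not the authors' analysis code.

```python
# Envelope / temporal-fine-structure (TFS) decomposition via the Hilbert
# transform: envelope = magnitude of the analytic signal, TFS = cosine of
# its instantaneous phase.
import numpy as np
from scipy.signal import hilbert

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
# toy narrowband signal: 1 kHz carrier with a 40 Hz amplitude modulation
x = (1 + 0.8 * np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 1000 * t)

analytic = hilbert(x)
envelope = np.abs(analytic)        # slowly varying envelope
tfs = np.cos(np.angle(analytic))   # rapidly varying fine structure

reconstruction = envelope * tfs    # approximately recovers x
```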

  10. Longer-term needs of stroke survivors with communication difficulties living in the community: a systematic review and thematic synthesis of qualitative studies

    PubMed Central

    Clarke, David

    2017-01-01

    Objective: To review and synthesise qualitative literature relating to the longer-term needs of community-dwelling stroke survivors with communication difficulties, including aphasia, dysarthria and apraxia of speech. Design: Systematic review and thematic synthesis. Method: We included studies employing qualitative methodology which focused on the perceived or expressed needs, views or experiences of stroke survivors with communication difficulties in relation to the day-to-day management of their condition following hospital discharge. We searched MEDLINE, EMBASE, PsycINFO, CINAHL, The Cochrane Library, International Bibliography of the Social Sciences and AMED, and undertook grey literature searches. Studies were assessed for methodological quality by two researchers independently, and the findings were combined using thematic synthesis. Results: Thirty-two studies were included in the thematic synthesis. The synthesis reveals the ongoing difficulties stroke survivors can experience in coming to terms with the loss of communication and in adapting to life with a communication difficulty. While some were able to adjust, others struggled to maintain their social networks and to participate in activities which were meaningful to them. The challenges experienced by stroke survivors with communication difficulties persisted for many years poststroke. Four themes relating to longer-term need were developed: managing communication outside of the home, creating a meaningful role, creating or maintaining a support network, and taking control and actively moving forward with life. Conclusions: Understanding the experiences of stroke survivors with communication difficulties is vital for ensuring that longer-term care is designed according to their needs. Wider psychosocial factors must be considered in the rehabilitation of people with poststroke communication difficulties. Self-management interventions may be appropriate to help this subgroup of stroke survivors manage their condition in the longer term; however, such approaches must be designed to help survivors manage the unique psychosocial consequences of poststroke communication difficulties. PMID:28988185

  11. SPEEDY babies: A putative new behavioral syndrome of unbalanced motor-speech development

    PubMed Central

    Haapanen, Marja-Leena; Aro, Tuomo; Isotalo, Elina

    2008-01-01

    Even though difficulties in motor development in children with speech and language disorders are widely known, hardly any attention has been paid to the association between atypically early unassisted walking and delayed speech development. The four children described here presented with a developmental behavioral triad: 1) atypically speedy motor development, 2) impaired expressive speech, and 3) tongue carriage dysfunction resulting in related misarticulations. These characteristics might be phenotypically or genetically clustered. The children did not have impaired cognition, neurological or mental disease, defective sense organs, craniofacial dysmorphology, or susceptibility to upper respiratory infections, particularly recurrent otitis media. Attention should be paid to discordant and unbalanced achievement of developmental milestones. The children described here are termed SPEEDY babies, where SPEEDY refers to rapid independent walking, and SPEE and DY to dyspractic or dysfunctional speech development and lingual dysfunction resulting in linguoalveolar misarticulations. SPEEDY babies require health care that recognizes and respects their motor skills and supports their need for motor activities and, on the other hand, includes treatment for impaired speech. The parents may need advice and support with these children. PMID:19337462

  12. Trajectory and outcomes of speech language therapy in the Prader-Willi syndrome (PWS): case report.

    PubMed

    Misquiatti, Andréa Regina Nunes; Cristovão, Melina Pavini; Brito, Maria Claudia

    2011-03-01

    The aim of this study was to describe the trajectory and outcomes of speech-language therapy in Prader-Willi syndrome through a longitudinal case study of an 8-year-old boy followed over four years of speech-language therapy. The therapy sessions were filmed, and documental analysis was carried out on information from the child's records regarding anamnesis, evaluation and speech-language therapy reports, as well as multidisciplinary evaluations. The child presented typical characteristics of Prader-Willi syndrome, such as obesity, hyperphagia, anxiety, behavioral problems and episodes of self-aggression. Speech-language pathology evaluation showed orofacial hypotonia, sialorrhea, hypernasal voice, cognitive deficits, oral comprehension difficulties, and communication using gestures and unintelligible isolated words. Initially, speech-language therapy aimed to promote language development, emphasizing social interaction through recreational activities. As the case evolved, the main focus became the development of conversation and narrative abilities. Improvements were observed in attention, symbolic play, social contact and behavior. Moreover, there was an increase in vocabulary and progress in oral comprehension and narrative abilities. Hence, the speech-language pathology intervention in this case was effective at different linguistic levels, encompassing phonological, syntactic, lexical and pragmatic abilities.

  13. Correlations between self-assessed hearing handicap and standard audiometric tests in elderly persons.

    PubMed

    Pedersen, K; Rosenhall, U

    1991-01-01

    The relationship between self-assessed hearing handicap and audiometric measures using pure-tone and speech audiometry was studied in a group of elderly persons representative of an urban Swedish population. The study population consisted of two cohorts, one of which was followed longitudinally. Significant correlations between measured and self-assessed hearing were found. Speech discrimination scores showed lower correlations with the self-estimated hearing than pure-tone averages and speech reception threshold. Questions concerning conversation with one person and concerning difficulty in hearing the doorbell showed lower correlations with measured hearing than the other questions. The discrimination score test is an inadequate tool for measuring hearing handicap.

  14. Development of a test battery for evaluating speech perception in complex listening environments.

    PubMed

    Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R

    2014-08-01

    In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
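
    As a schematic illustration of how a 50% speech reception threshold can be read off behavioral data, the sketch below fits a logistic psychometric function to percent-correct scores at several SNRs; the data points are invented, and the QuickSIN procedure itself uses a simpler fixed-level scoring rule.

```python
# Fit a logistic psychometric function to percent-correct vs SNR data and
# read off the SNR giving 50% correct (SRT50). Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

snr = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])          # dB SNR
p_correct = np.array([0.05, 0.20, 0.55, 0.85, 0.98])   # proportion correct

def logistic(x, srt50, slope):
    # srt50 is the 50% point; slope controls steepness (per dB)
    return 1.0 / (1.0 + np.exp(-slope * (x - srt50)))

(srt50, slope), _ = curve_fit(logistic, snr, p_correct, p0=[0.0, 1.0])
print(f"SRT50 ~ {srt50:.1f} dB SNR, slope ~ {slope:.2f} per dB")
```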

  15. Speech Patterns and Racial Wage Inequality

    ERIC Educational Resources Information Center

    Grogger, Jeffrey

    2011-01-01

    Speech patterns differ substantially between whites and many African Americans. I collect and analyze speech data to understand the role that speech may play in explaining racial wage differences. Among blacks, speech patterns are highly correlated with measures of skill such as schooling and AFQT scores. They are also highly correlated with the…

  16. The Interpersonal Metafunction Analysis of Barack Obama's Victory Speech

    ERIC Educational Resources Information Center

    Ye, Ruijuan

    2010-01-01

    This paper carries out a tentative interpersonal metafunction analysis of Barack Obama's victory speech, aiming to help readers understand and evaluate the speech regarding its suitability and thus to provide some guidance for readers in making better speeches. This study has promising implications for speeches as…

  17. The Effectiveness of Clear Speech as a Masker

    ERIC Educational Resources Information Center

    Calandruccio, Lauren; Van Engen, Kristin; Dhar, Sumitrajit; Bradlow, Ann R.

    2010-01-01

    Purpose: It is established that speaking clearly is an effective means of enhancing intelligibility. Because any signal-processing scheme modeled after known acoustic-phonetic features of clear speech will likely affect both target and competing speech, it is important to understand how speech recognition is affected when a competing speech signal…

  18. Children with Williams Syndrome: Language, Cognitive, and Behavioral Characteristics and their Implications for Intervention

    PubMed Central

    Mervis, Carolyn B.; Velleman, Shelley L.

    2012-01-01

    Williams syndrome (WS) is a rare genetic disorder characterized by heart disease, failure to thrive, hearing loss, intellectual or learning disability, speech and language delay, gregariousness, and non-social anxiety. The WS psycholinguistic profile is complex, including relative strengths in concrete vocabulary, phonological processing, and verbal short-term memory and relative weaknesses in relational/conceptual language, reading comprehension, and pragmatics. Many children evidence difficulties with finiteness marking and complex grammatical constructions. Speech-language intervention, support, and advocacy are crucial. PMID:22754603

  19. Detecting the Difficulty Level of Foreign Language Texts

    DTIC Science & Technology

    2010-02-01

    continuous tenses), as well as part-of-speech labels for words. The authors used a k-Nearest Neighbor (kNN) classifier (Cover and Hart, 1967; Mitchell, 1997...anticipate, and influence these situations and to operate in them is found in foreign language speech and text. For this reason, military linguists are...the language model system, LGR is the prediction of one of the grammar-based classifiers, and CkNN is a confidence value of the kNN prediction for the
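
    A toy sketch of kNN-based difficulty classification in the spirit of the snippet above (not the authors' system) is given below; the features and labels are invented placeholders.

```python
# Hypothetical kNN difficulty classifier over simple per-text features:
# [avg sentence length, avg word length, rare-word rate] -> difficulty 1-3.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[8.2, 4.1, 0.02], [14.5, 4.9, 0.06], [22.3, 5.6, 0.12],
           [9.0, 4.0, 0.03], [15.1, 5.0, 0.07], [21.0, 5.8, 0.11]]
y_train = [1, 2, 3, 1, 2, 3]   # invented difficulty labels

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print(clf.predict([[13.0, 4.8, 0.05]]))   # predicted difficulty level
```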

  20. The Cleft Care UK study. Part 4: perceptual speech outcomes

    PubMed Central

    Sell, D; Mildinhall, S; Albery, L; Wills, A K; Sandy, J R; Ness, A R

    2015-01-01

    Objectives: To describe the perceptual speech outcomes from the Cleft Care UK (CCUK) study and compare them to the 1998 Clinical Standards Advisory Group (CSAG) audit. Setting and sample population: A cross-sectional study of 248 children born with complete unilateral cleft lip and palate between 1 April 2005 and 31 March 2007, who underwent speech assessment. Materials and methods: Centre-based specialist speech and language therapists (SLT) took speech audio-video recordings according to nationally agreed guidelines. Two independent listeners undertook the perceptual analysis using the CAPS-A Audit tool. Intra- and inter-rater reliability were tested. Results: For each speech parameter of intelligibility/distinctiveness, hypernasality, palatal/palatalization, backed to velar/uvular, glottal, weak and nasalized consonants, and nasal realizations, there was strong evidence that speech outcomes were better in the CCUK children compared to CSAG children. The parameters which did not show improvement were nasal emission, nasal turbulence, hyponasality and lateral/lateralization. Conclusion: These results suggest that centralization of cleft care into high-volume centres has resulted in improvements in UK speech outcomes in five-year-olds with unilateral cleft lip and palate. This may be associated with the development of a specialized workforce. Nevertheless, there still remains a group of children with significant difficulties at school entry. PMID:26567854

  1. Speech and Communication Changes Reported by People with Parkinson's Disease.

    PubMed

    Schalling, Ellika; Johansson, Kerstin; Hartelius, Lena

    2017-01-01

    Changes in communicative functions are common in Parkinson's disease (PD), but there are only limited data provided by individuals with PD on how these changes are perceived, what their consequences are, and what type of intervention is provided. This study aimed to present self-reported information about speech and communication, the impact on communicative participation, and the amount and type of speech-language pathology services received by people with PD. Respondents with PD recruited via the Swedish Parkinson's Disease Society filled out a questionnaire accessed via a Web link or provided in a paper version. Of 188 respondents, 92.5% reported at least one symptom related to communication; the most common symptoms were weak voice, word-finding difficulties, imprecise articulation, and getting off topic in conversation. The speech and communication problems resulted in restricted communicative participation for between a quarter and a third of the respondents, and their speech caused embarrassment sometimes or more often to more than half. Forty-five percent of the respondents had received speech-language pathology services. Most respondents reported both speech and language symptoms, and many experienced restricted communicative participation. Access to speech-language pathology services is still inadequate. Services should also address cognitive/linguistic aspects to meet the needs of people with PD. © 2018 S. Karger AG, Basel.

  2. The Cleft Care UK study. Part 4: perceptual speech outcomes.

    PubMed

    Sell, D; Mildinhall, S; Albery, L; Wills, A K; Sandy, J R; Ness, A R

    2015-11-01

    To describe the perceptual speech outcomes from the Cleft Care UK (CCUK) study and compare them to the 1998 Clinical Standards Advisory Group (CSAG) audit. A cross-sectional study of 248 children born with complete unilateral cleft lip and palate between 1 April 2005 and 31 March 2007, who underwent speech assessment. Centre-based specialist speech and language therapists (SLT) took speech audio-video recordings according to nationally agreed guidelines. Two independent listeners undertook the perceptual analysis using the CAPS-A Audit tool. Intra- and inter-rater reliability were tested. For each speech parameter of intelligibility/distinctiveness, hypernasality, palatal/palatalization, backed to velar/uvular, glottal, weak and nasalized consonants, and nasal realizations, there was strong evidence that speech outcomes were better in the CCUK children compared to CSAG children. The parameters which did not show improvement were nasal emission, nasal turbulence, hyponasality and lateral/lateralization. These results suggest that centralization of cleft care into high-volume centres has resulted in improvements in UK speech outcomes in five-year-olds with unilateral cleft lip and palate. This may be associated with the development of a specialized workforce. Nevertheless, there still remains a group of children with significant difficulties at school entry. © The Authors. Orthodontics & Craniofacial Research Published by John Wiley & Sons Ltd.

  3. Speech-language pathology teletherapy in rural and remote educational settings: Decreasing service inequities.

    PubMed

    Fairweather, Glenn Craig; Lincoln, Michelle Ann; Ramsden, Robyn

    2016-12-01

    The objectives of this study were to investigate the efficacy of a speech-language pathology teletherapy program for children attending schools and early childcare settings in rural New South Wales, Australia, and their parents' views on the program's feasibility and acceptability. Nineteen children received speech-language pathology sessions delivered via Adobe Connect®, FaceTime© or Skype© web-conferencing software. During semi-structured interviews, parents (n = 5) described factors that promoted or threatened the program's feasibility and acceptability. Participation in a speech-language pathology teletherapy program using low-bandwidth videoconferencing improved the speech and language skills of children in both early childhood settings and primary school. Emergent themes related to (a) practicality and convenience, (b) learning, (c) difficulties and (d) communication. Treatment outcome data and parental reports verified that the teletherapy service delivery was feasible and acceptable. However, it was also evident that regular discussion and communication between the various stakeholders involved in teletherapy programs may promote increased parental engagement and acceptability.

  4. A multimedia PDA/PC speech and language therapy tool for patients with aphasia.

    PubMed

    Reeves, Nina; Jefferies, Laura; Cunningham, Sally-Jo; Harris, Catherine

    2007-01-01

    Aphasia is a speech disorder usually caused by stroke or head injury and may involve a variety of communication difficulties. As 30% of stroke sufferers have a persisting speech and language disorder and therapy resources are low, there is clear scope for the development of technology to support patients between therapy sessions. This paper reports on an empirical study which evaluated SoundHelper, a multimedia application that demonstrates how to pronounce target speech sounds. Two prototypes, involving either video or animation, were developed and evaluated with 20 Speech and Language Therapists. Participants responded positively to both, with the video being preferred because of the perceived extra information provided. The potential for use on portable devices, since internet access is limited in hospitals, is explored in light of the opinions of Augmented and Alternative Communication (AAC) device users in the UK and Europe, who have expressed a strong desire for more use of internet services.

  5. The demand for speech pathology services for children: Do we need more or just different?

    PubMed

    Reilly, Sheena; Harper, Megan; Goldfeld, Sharon

    2016-12-01

    An inability or difficulty communicating can have a profound impact on a child's future ability to participate in society as a productive adult. Over the past few years the number of interventions for children with speech and language problems has almost doubled; the majority are targeted interventions delivered by speech pathologists. In this paper we examine the distribution of speech pathology services in metropolitan Melbourne and how these are aligned with need as defined by vulnerability in language and social disadvantage. We identified three times as many private sector services compared to public services for the 0-5 year age group. Overall there was poorer availability of services in some of the most vulnerable areas. The profound and long-term impact of impoverished childhood language, coupled with the considerable limitations on public spending, provide a strong impetus to deliver more equitably distributed speech pathology services. © 2016 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  6. Genetics Home Reference: hypomyelination and congenital cataract

    MedlinePlus

    ... have reduced sensation in their arms and legs (peripheral neuropathy). In addition, affected individuals typically have speech difficulties ( ... need support, and they usually do not have peripheral neuropathy. ...

  7. Genetics Home Reference: HIVEP2-related intellectual disability

    MedlinePlus

    ... to have difficulty with this activity; their walking style (gait) is often unbalanced and wide-based. Speech ...

  8. Genetics Home Reference: 48,XXYY syndrome

    MedlinePlus

    ... degree of difficulty with speech and language development. Learning disabilities, especially those that are language-based, are very ...

  9. The Atlanta Motor Speech Disorders Corpus: Motivation, Development, and Utility.

    PubMed

    Laures-Gore, Jacqueline; Russell, Scott; Patel, Rupal; Frankel, Michael

    2016-01-01

    This paper describes the design and collection of a comprehensive spoken language dataset from speakers with motor speech disorders in Atlanta, Ga., USA. This collaborative project aimed to gather a spoken database consisting of nonmainstream American English speakers residing in the Southeastern US in order to provide a more diverse perspective of motor speech disorders. Ninety-nine adults with an acquired neurogenic disorder resulting in a motor speech disorder were recruited. Stimuli include isolated vowels, single words, sentences with contrastive focus, sentences with emotional content and prosody, sentences with acoustic and perceptual sensitivity to motor speech disorders, as well as 'The Caterpillar' and 'The Grandfather' passages. Utility of this data in understanding the potential interplay of dialect and dysarthria was demonstrated with a subset of the speech samples existing in the database. The Atlanta Motor Speech Disorders Corpus will enrich our understanding of motor speech disorders through the examination of speech from a diverse group of speakers. © 2016 S. Karger AG, Basel.

  10. Contribution of speech and language difficulties to health-related quality-of-life in Australian children: A longitudinal analysis.

    PubMed

    Feeney, Rachel; Desha, Laura; Khan, Asaduzzaman; Ziviani, Jenny

    2017-04-01

    The trajectory of health-related quality-of-life (HRQoL) for children aged 4-9 years and its relationship with speech and language difficulties (SaLD) were examined using data from the Longitudinal Study of Australian Children (LSAC). Generalized linear latent and mixed modelling was used to analyse data from three waves of the LSAC across four HRQoL domains (physical, emotional, social and school functioning). The four domains of HRQoL, measured using the Paediatric Quality-of-Life Inventory (PedsQL™), were examined to determine the contribution of SaLD while accounting for child-specific factors (e.g. gender, ethnicity, temperament) and family characteristics (social ecological considerations and psychosocial stressors). In multivariable analyses, one measure of SaLD, namely parent concern about receptive language, was negatively associated with all HRQoL domains. Covariates positively associated with all HRQoL domains included the child's general health, maternal mental health, parental warmth and the primary caregiver's engagement in the labour force. Findings suggest that SaLD are associated with reduced HRQoL. For most LSAC study children, having typical speech/language skills was a protective factor positively associated with HRQoL.
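
    The generalized linear latent and mixed models used in the study are a richer framework than can be shown briefly, but the repeated-measures idea can be sketched with a simpler linear mixed model; all column and file names below are hypothetical.

```python
# Rough sketch (not the study's GLLAMM analysis): a linear mixed model with a
# random intercept per child to account for repeated measures across waves.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lsac_waves.csv")   # hypothetical long-format file:
                                     # child_id, wave, hrqol, sald_concern, ...

model = smf.mixedlm("hrqol ~ sald_concern + wave + general_health",
                    data=df, groups=df["child_id"])
result = model.fit()
print(result.summary())
```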

  11. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
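
    For context on the HMM baseline this abstract proposes to improve on (not on Malcom itself, whose details are not given here), the sketch below computes an observation-sequence likelihood with the standard forward algorithm; all parameter values are toy numbers.

```python
# Forward algorithm for a discrete 2-state HMM: P(observations | model).
import numpy as np

A = np.array([[0.7, 0.3],        # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],        # emission probabilities per state
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])        # initial state distribution
obs = [0, 1, 1, 0]               # toy observation sequence

alpha = pi * B[:, obs[0]]        # forward probabilities at t = 0
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission

print("P(observations | model) =", alpha.sum())
```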

  12. Speech in 10-Year-Olds Born With Cleft Lip and Palate: What Do Peers Say?

    PubMed

    Nyberg, Jill; Havstam, Christina

    2016-09-01

    The aim of this study was to explore how 10-year-olds describe, in their own words, the speech and communicative participation of children born with unilateral cleft lip and palate, whether they perceive signs of velopharyngeal insufficiency (VPI) and articulation errors of different degrees, and if so, which terminology they use. Methods/Participants: Nineteen 10-year-olds participated in three focus group interviews where they listened to 10 to 12 speech samples with different types of cleft speech characteristics assessed by speech and language pathologists (SLPs) and described what they heard. The interviews were transcribed and analyzed with qualitative content analysis. The analysis resulted in three interlinked categories encompassing different aspects of speech, personality, and social implications: descriptions of speech, thoughts on causes and consequences, and emotional reactions and associations. Each category contains four subcategories exemplified with quotes from the children's statements. More pronounced signs of VPI were perceived but referred to in terms relevant to 10-year-olds. Articulatory difficulties, even minor ones, were noted. Peers reflected on the risk of teasing and bullying and on how children with impaired speech might experience their situation. The SLPs and peers did not agree on minor signs of VPI, but they were unanimous in their analysis of clinically normal and more severely impaired speech. Based on what peers say, articulatory impairments may be more important to treat than minor signs of VPI.

  13. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in the consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and the slope of the consonant identification curve) and individuals' oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less-categorized speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential to account for difficulties in learning to read in Mandarin-speaking children. PMID:29312031

  14. The Performance-Perceptual Test (PPT) and its relationship to aided reported handicap and hearing aid satisfaction.

    PubMed

    Saunders, Gabrielle H; Forsline, Anna

    2006-06-01

    Results of objective clinical tests (e.g., measures of speech understanding in noise) often conflict with subjective reports of hearing aid benefit and satisfaction. The Performance-Perceptual Test (PPT) is an outcome measure in which objective and subjective evaluations are made by using the same test materials, testing format, and unit of measurement (signal-to-noise ratio, S/N), permitting a direct comparison between measured and perceived ability to hear. Two variables are measured: a Performance Speech Reception Threshold in Noise (SRTN) for 50% correct performance and a Perceptual SRTN, which is the S/N at which listeners perceive that they can understand the speech material. A third variable is computed: the Performance-Perceptual Discrepancy (PPDIS); it is the difference between the Performance and Perceptual SRTNs and measures the extent to which listeners "misjudge" their hearing ability. Saunders et al. in 2004 examined the relation between PPT scores and unaided hearing handicap. In this publication, the relations between the PPT, residual aided handicap, and hearing aid satisfaction are described. Ninety-four individuals between the ages of 47 and 86 yr participated. All had symmetrical sensorineural hearing loss and had worn binaural hearing aids for at least 6 wk before participating. All subjects underwent routine audiological examination and completed the PPT, the Hearing Handicap Inventory for the Elderly/Adults (HHIE/A), and the Satisfaction with Amplification in Daily Life questionnaire. Sixty-five subjects attended one research visit for participation in this study, and 29 attended a second visit to complete the PPT a second time. Performance and Perceptual SRTN and PPDIS scores were normally distributed and showed excellent test-retest reliability. Aided SRTNs were significantly better than unaided SRTNs; aided and unaided PPDIS values did not differ. Stepwise multiple linear regression showed that the PPDIS, the Performance SRTN, and age were significant predictors of scores on the HHIE/A such that greater reported handicap is associated with underestimating hearing ability, poorer aided ability to understand speech in noise, and being younger. Scores on the Satisfaction with Amplification in Daily Life were not well explained by the PPT, age, or audiometric thresholds. When individuals were grouped by their HHIE/A scores, it was seen that individuals who report more handicap than expected based on their audiometric thresholds have a more negative PPDIS, i.e., underestimate their hearing ability, relative to individuals who report expected handicap, who in turn have a more negative PPDIS than individuals who report less handicap than expected. No such patterns were apparent for the Performance SRTN. The study showed the PPT to be a reliable outcome measure that can provide more information than a performance measure and/or a questionnaire measure alone, in that the PPDIS can provide the clinician with an explanation for discrepant objective and subjective reports of hearing difficulties. The finding that self-reported handicap is affected independently by both actual ability to hear and the (mis)perception of ability to hear underscores the difficulty clinicians encounter when trying to interpret outcomes questionnaires. We suggest that this variable should be measured and taken into account when interpreting questionnaires and counseling patients.
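
    The PPT arithmetic above reduces to a single subtraction; the toy values in the sketch below are invented for illustration.

```python
# PPDIS = Performance SRTN - Perceptual SRTN (both in dB S/N).
# Positive: listener thinks they hear better than they do (overestimate);
# negative: listener underestimates their hearing ability.
performance_srtn = 2.5   # dB S/N actually needed for 50% correct (measured)
perceptual_srtn = -1.0   # dB S/N at which the listener believes they understand

ppdis = performance_srtn - perceptual_srtn
if ppdis > 0:
    print(f"PPDIS = {ppdis:+.1f} dB: listener overestimates hearing ability")
else:
    print(f"PPDIS = {ppdis:+.1f} dB: listener underestimates hearing ability")
```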

  15. Contemporary Reflections on Speech-Based Language Learning

    ERIC Educational Resources Information Center

    Gustafson, Marianne

    2009-01-01

    In "The Relation of Language to Mental Development and of Speech to Language Teaching," S.G. Davidson displayed several timeless insights into the role of speech in developing language and reasons for using speech as the basis for instruction for children who are deaf and hard of hearing. His understanding that speech includes more than merely…

  16. Drama to promote social and personal well-being in six- and seven-year-olds with communication difficulties: the Speech Bubbles project.

    PubMed

    Barnes, Jonathan

    2014-03-01

    This paper focuses on an innovative intersection between education, health and the arts. Taking a broad definition of health, it examines some social and psychological well-being impacts of extended collaborations between a theatre company and children with communication difficulties. It seeks to test aspects of Fredrickson's(1) broaden-and-build theory of positive emotions in a primary school curriculum context. The researcher participated in a project called Speech Bubbles. The programme was devised by theatre practitioners and aimed at six- and seven-year-olds with difficulties in speech, language and communication. Sessions were observed, videoed and analysed for levels of child well-being using an established scale. In addition, responses regarding perceived improvements in speech, language and communication were gathered from school records and teachers, teaching assistants, practitioners and parents. Data were captured using still images and videos, children's recorded commentaries, conversations, written feedback and observation. Using grounded research methods, themes and categories arose directly from the collected data. Fluency, vocabulary, inventiveness and concentration were enhanced in the large majority of referred children. The research also found significant positive developments in motivation and confidence. Teachers and their assistants credited the drama intervention with notable improvements in attitude, behaviour and relationships over the year. Aspects of many children's psychological well-being also showed marked signs of progress when measured against original reasons for referral and normal expectations over a year. An unexpected outcome was evidence of heightened well-being of the teaching assistants involved. Findings compared well with expectations based upon Fredrickson's theory and also the theatre company's view that theatre-making promotes emotional awareness and empathy. Improvements in both children's well-being and communication were at least in part related to the sustained and playful emphases on the processes and practice of drama, clear values and an inclusive environment.

  17. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers.

    PubMed

    Meister, Hartmut; Schreitmüller, Stefan; Ortmann, Magdalene; Rählmann, Sebastian; Walger, Martin

    2016-01-01

    Everyday communication frequently comprises situations with more than one talker speaking at a time. These situations are challenging since they pose high attentional and memory demands, placing cognitive load on the listener. Hearing impairment additionally exacerbates communication problems under these circumstances. We examined the effects of hearing loss and attention tasks on speech recognition with competing talkers in older adults with and without hearing impairment. We hypothesized that hearing loss would affect word identification, talker separation and word recall, and that the difficulties experienced by the hearing impaired listeners would be especially pronounced in a task with high attentional and memory demands. Two listener groups closely matched for their age and neuropsychological profile but differing in hearing acuity were examined regarding their speech recognition with competing talkers in two different tasks. One task required repeating back words from one target talker (1TT) while ignoring the competing talker, whereas the other required repeating back words from both talkers (2TT). The competing talkers differed with respect to their voice characteristics. Moreover, sentences either with low or high context were used in order to consider linguistic properties. Compared to their normal hearing peers, listeners with hearing loss revealed limited speech recognition in both tasks. Their difficulties were especially pronounced in the more demanding 2TT task. In order to shed light on the underlying mechanisms, different error sources, namely having misunderstood, confused, or omitted words, were investigated. Misunderstanding and omitting words were more frequently observed in the hearing impaired than in the normal hearing listeners. In line with common speech perception models, it is suggested that these effects are related to impaired object formation and taxed working memory capacity (WMC). In a post-hoc analysis, the listeners were further separated with respect to their WMC. Higher capacity appeared to act as a compensatory mechanism against the adverse effects of hearing loss, especially with low-context speech.

  18. Voice technology and BBN

    NASA Technical Reports Server (NTRS)

    Wolf, Jared J.

    1977-01-01

    The following research was discussed: (1) speech signal processing; (2) automatic speech recognition; (3) continuous speech understanding; (4) speaker recognition; (5) speech compression; (6) subjective and objective evaluation of speech communication systems; (7) measurement of the intelligibility and quality of speech when degraded by noise or other masking stimuli; (8) speech synthesis; (9) instructional aids for second-language learning and for training of the deaf; and (10) investigation of speech correlates of psychological stress. Experimental psychology, control systems, and human factors engineering, which are often relevant to the proper design and operation of speech systems, are also described.

  19. Learning Paths and Learning Styles in Dyslexia: Possibilites and Effectiveness--Case Study of Two Elementary School Students Aged 7 Years Old

    ERIC Educational Resources Information Center

    Tsampalas, Evangelos; Dimitrios, Sarris; Papadimitropoulou, Panagoula; Vergou, Maria; Zakopoulou, Victoria

    2018-01-01

    Difficulty in reading and writing, spelling mistakes and poor speech are considered the main elements that characterize students with dyslexia. Given that most classroom activity is based on writing and reading, it is important that such a learning difficulty is recognized as soon as possible and with appropriate…

  20. Speech, Language, and Reading in 10-Year-Olds With Cleft: Associations With Teasing, Satisfaction With Speech, and Psychological Adjustment.

    PubMed

    Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie

    2017-03-01

    Objective: Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment. Design: Cross-sectional data collected during routine, multidisciplinary assessments in a centralized treatment setting, including speech and language therapists and clinical psychologists. Participants: Children with cleft with palatal involvement aged 10 years from three birth cohorts (N = 170) and their parents. Measures: Speech: SVANTE-N. Language: Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness). Reading: Word Chain Test and Reading Comprehension Test. Psychological measures: Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and Child Experience Questionnaire. Results: Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was found to be associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft was found to affect speech, language, and reading significantly. Conclusions: Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.

  1. Assessment of voice and speech symptoms in early Parkinson's disease by the Robertson dysarthria profile.

    PubMed

    Defazio, Giovanni; Guerrieri, Marta; Liuzzi, Daniele; Gigante, Angelo Fabio; di Nicola, Vincenzo

    2016-03-01

    Changes in voice and speech are thought to affect 75-90% of people with PD, but the impact of PD progression on voice/speech parameters is not well defined. In this study, we assessed voice/speech symptoms in 48 parkinsonian patients staging <3 on the modified Hoehn and Yahr scale and in 37 healthy subjects using the Robertson dysarthria profile (a clinical-perceptual method exploring all components potentially involved in speech difficulties), the Voice Handicap Index (a validated measure of the impact of voice symptoms on quality of life) and the speech evaluation parameter contained in the Unified Parkinson's Disease Rating Scale part III (UPDRS-III). The accuracy and metric properties of the Robertson dysarthria profile were also measured. On the Robertson dysarthria profile, all parkinsonian patients yielded lower scores than healthy control subjects. In contrast, the Voice Handicap Index and the speech evaluation parameter of the UPDRS-III detected speech/voice disturbances in only 10% and 75% of PD patients, respectively. The validation procedure in Parkinson's disease patients showed that the Robertson dysarthria profile has acceptable reliability, satisfactory internal consistency and scaling assumptions, lack of floor and ceiling effects, and partial correlations with the UPDRS-III and the Voice Handicap Index. We concluded that the Robertson dysarthria profile widely identifies speech/voice disturbances in early parkinsonian patients, even when the disturbances do not carry a significant level of disability. The Robertson dysarthria profile may be a valuable tool for detecting speech/voice disturbances in Parkinson's disease.

  2. Miller Fisher Syndrome

    MedlinePlus

    ... weeks earlier. Slurred speech, difficulty swallowing and abnormal facial expression with inability to smile or whistle may also occur. Examination shows poor balance and coordination of the hands as well as loss of ... Facial weakness, enlarged or dilated pupils and a decreased ...

  3. Genetics Home Reference: Wilson disease

    MedlinePlus

    ... individuals diagnosed in adulthood and commonly occur in young adults with Wilson disease. Signs and symptoms of these problems can include clumsiness, tremors, difficulty walking, speech problems, impaired thinking ability, depression, anxiety, and mood swings. In many individuals with ...

  4. Mental states and activities in Danish narratives: children with autism and children with language impairment.

    PubMed

    Engberg-Pedersen, Elisabeth; Christensen, Rikke Vang

    2017-09-01

    This study focuses on the relationship between content elements and mental-state language in narratives from twenty-seven children with autism (ASD), twelve children with language impairment (LI), and thirty typically developing children (TD). The groups did not differ on chronological age (10;6-14;0) and non-verbal cognitive skills, and the groups with ASD and TD did not differ on language measures. The children with ASD and LI had fewer content elements of the storyline than the TD children. Compared with the TD children, the children with ASD used fewer subordinate clauses about the characters' thoughts, and preferred talking about mental states as reported speech, especially in the form of direct speech. The children with LI did not differ from the TD children on these measures. The results are discussed in the context of difficulties with socio-cognition in children with ASD and of language difficulties in children with LI.

  5. Sleep and Native Language Interference Affect Non-Native Speech Sound Learning

    PubMed Central

    Earle, F. Sayako; Myers, Emily B.

    2015-01-01

    Adults learning a new language are faced with a significant challenge: non-native speech sounds that are perceptually similar to sounds in one’s native language can be very difficult to acquire. Sleep and native language interference, two factors that may help to explain this difficulty in acquisition, are addressed in three studies. Results of Experiment 1 showed that participants trained on a non-native contrast at night improved in discrimination 24 hours after training, while those trained in the morning showed no such improvement. Experiments 2 and 3 addressed the possibility that incidental exposure to perceptually similar native language speech sounds during the day interfered with maintenance in the morning group. Taken together, results show that the ultimate success of non-native speech sound learning depends not only on the similarity of learned sounds to the native language repertoire, but also to interference from native language sounds before sleep. PMID:26280264

  6. Sleep and native language interference affect non-native speech sound learning.

    PubMed

    Earle, F Sayako; Myers, Emily B

    2015-12-01

    Adults learning a new language are faced with a significant challenge: non-native speech sounds that are perceptually similar to sounds in one's native language can be very difficult to acquire. Sleep and native language interference, 2 factors that may help to explain this difficulty in acquisition, are addressed in 3 studies. Results of Experiment 1 showed that participants trained on a non-native contrast at night improved in discrimination 24 hr after training, while those trained in the morning showed no such improvement. Experiments 2 and 3 addressed the possibility that incidental exposure to perceptually similar native language speech sounds during the day interfered with maintenance in the morning group. Taken together, results show that the ultimate success of non-native speech sound learning depends not only on the similarity of learned sounds to the native language repertoire, but also to interference from native language sounds before sleep. (c) 2015 APA, all rights reserved.

  7. Atypical coordination of cortical oscillations in response to speech in autism

    PubMed Central

    Jochaut, Delphine; Lehongre, Katia; Saitovitch, Ana; Devauchelle, Anne-Dominique; Olasagasti, Itsaso; Chabane, Nadia; Zilbovicius, Monica; Giraud, Anne-Lise

    2015-01-01

    Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations. PMID:25870556

  8. Auditory-motor interactions in pediatric motor speech disorders: neurocomputational modeling of disordered development.

    PubMed

    Terband, H; Maassen, B; Guenther, F H; Brumberg, J

    2014-01-01

    Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
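
    The simulation logic described above can be illustrated with a deliberately toy feedback-learning loop (this is not the DIVA model itself, whose architecture is far richer). In the sketch below, a "motor processing deficit" is simulated as execution noise and an added "auditory processing deficit" as noisier feedback; all parameter values and function names are invented for illustration.

    ```python
    # Toy sketch: a feedforward command corrected by auditory feedback.
    # MPD = motor processing deficit (execution noise);
    # APD = auditory processing deficit (noisy feedback). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)

    def learn_target(target, motor_noise, feedback_noise, trials=200, rate=0.3):
        """Mean absolute production error over the final 50 learning trials."""
        command = 0.0
        errors = []
        for _ in range(trials):
            produced = command + rng.normal(0, motor_noise)        # noisy execution
            heard_error = (target - produced) + rng.normal(0, feedback_noise)
            command += rate * heard_error                          # feedback update
            errors.append(abs(target - produced))
        return float(np.mean(errors[-50:]))

    print("MPD only: ", learn_target(1.0, motor_noise=0.2, feedback_noise=0.05))
    print("MPD + APD:", learn_target(1.0, motor_noise=0.2, feedback_noise=0.40))
    ```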

  9. Speech and nonspeech: What are we talking about?

    PubMed

    Maas, Edwin

    2017-08-01

    Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

  10. The effect of noise-induced hearing loss on the intelligibility of speech in noise

    NASA Astrophysics Data System (ADS)

    Smoorenburg, G. F.; Delaat, J. A. P. M.; Plomp, R.

    1981-06-01

    Speech reception thresholds, both in quiet and in noise, and tone audiograms were measured for 14 normal ears (7 subjects) and 44 ears (22 subjects) with noise-induced hearing loss. Maximum hearing loss in the 4-6 kHz region equalled 40 to 90 dB (losses exceeded by 90% and 10%, respectively). Hearing loss for speech in quiet measured with respect to the median speech reception threshold for normal ears ranged from 1.8 dB to 13.4 dB. For speech in noise, the corresponding values are 1.2 dB to 7.0 dB, which means that the subjects with noise-induced hearing loss need a 1.2 to 7.0 dB higher signal-to-noise ratio than normal to understand sentences equally well. A hearing loss for speech of 1 dB corresponds to a decrease in sentence intelligibility of 15 to 20%. The relation between hearing handicap, conceived as a reduced ability to understand speech, and the tone audiogram is discussed. The higher signal-to-noise ratio needed by people with noise-induced hearing loss to understand speech in noisy environments is shown to be due partly to the decreased bandwidth of their hearing caused by the noise dip.
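
    The reported trade-off invites a quick back-of-the-envelope calculation. The sketch below converts an elevated speech reception threshold in noise into an estimated loss in sentence intelligibility, assuming the roughly 15-20% per dB slope reported above; the midpoint slope and the function name are illustrative choices, not the study's analysis.

    ```python
    # Convert an SRT elevation (dB) into an estimated intelligibility drop,
    # using an assumed slope of ~17.5% per dB (midpoint of the reported 15-20%).

    def intelligibility_drop(srt_elevation_db, slope_pct_per_db=17.5):
        """Estimated percentage-point loss in sentence intelligibility."""
        return min(100.0, srt_elevation_db * slope_pct_per_db)

    # The study's reported range of SRT elevation in noise: 1.2 to 7.0 dB.
    for elevation in (1.2, 7.0):
        print(f"SRT +{elevation} dB -> ~{intelligibility_drop(elevation):.0f}% loss")
    ```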

  11. Prevalence and Predictors of Persistent Speech Sound Disorder at Eight Years Old: Findings From a Population Cohort Study

    PubMed Central

    Miller, Laura L.; Peters, Tim J.; Emond, Alan; Roulstone, Sue

    2016-01-01

    Purpose The purpose of this study was to determine prevalence and predictors of persistent speech sound disorder (SSD) in children aged 8 years after disregarding children presenting solely with common clinical distortions (i.e., residual errors). Method Data from the Avon Longitudinal Study of Parents and Children (Boyd et al., 2012) were used. Children were classified as having persistent SSD on the basis of percentage of consonants correct measures from connected speech samples. Multivariable logistic regression analyses were performed to identify predictors. Results The estimated prevalence of persistent SSD was 3.6%. Children with persistent SSD were more likely to be boys and from families who were not homeowners. Early childhood predictors identified as important were weak sucking at 4 weeks, not often combining words at 24 months, limited use of word morphology at 38 months, and being unintelligible to strangers at age 38 months. School-age predictors identified as important were maternal report of difficulty pronouncing certain sounds and hearing impairment at age 7 years, tympanostomy tube insertion at any age up to 8 years, and a history of suspected coordination problems. The contribution of these findings to our understanding of risk factors for persistent SSD and the nature of the condition is considered. Conclusion Variables identified as predictive of persistent SSD suggest that factors across motor, cognitive, and linguistic processes may place a child at risk. PMID:27367606
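
    For readers unfamiliar with the method, the sketch below shows the general shape of a multivariable logistic regression of the kind used to identify such predictors. The predictor names echo the abstract, but the data are synthetic and the model is far simpler than the actual cohort analysis.

    ```python
    # Hedged sketch: logistic regression with odds ratios obtained by
    # exponentiating coefficients. Synthetic data; illustrative predictors.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(0, 2, n),   # male sex (0/1)
        rng.integers(0, 2, n),   # weak sucking at 4 weeks (0/1)
        rng.integers(0, 2, n),   # unintelligible to strangers at 38 months (0/1)
    ])
    # A baseline log-odds of about -3.3 gives ~3.6% prevalence, as in the study.
    logit = -3.3 + 0.5 * X[:, 0] + 0.8 * X[:, 1] + 1.1 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    print(np.exp(model.params))  # odds ratios: intercept, then each predictor
    ```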

  12. Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures.

    PubMed

    Rowbotham, Samantha; Wardy, April J; Lloyd, Donna M; Wearden, Alison; Holler, Judith

    2014-01-01

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
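
    The comparisons above are within-subjects t-tests. As a brief refresher, the sketch below runs such a paired test on invented word counts; only the design (N = 26 participants, hence t(25)) mirrors the study.

    ```python
    # Paired (within-subjects) t-test on synthetic word counts for the same
    # 26 participants under low- and high-intensity pain.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    low_pain_words = rng.normal(80, 15, size=26)
    high_pain_words = low_pain_words + rng.normal(12, 10, size=26)

    t, p = stats.ttest_rel(high_pain_words, low_pain_words)
    print(f"t(25) = {t:.2f}, p = {p:.3f}")
    ```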

  13. Effects of hearing loss on speech recognition under distracting conditions and working memory in the elderly.

    PubMed

    Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook

    2017-01-01

    The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, sentences time-compressed by 30% revealed a clear differentiation between moderate and moderate-to-severe hearing loss. 3) Although all the groups showed a longer span on the forward-digit span test than the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects the speech recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to inform rehabilitation strategies for hearing-impaired elderly people who experience difficulty in communication.

  14. Communicating by Language: The Speech Process.

    ERIC Educational Resources Information Center

    House, Arthur S., Ed.

    This document reports on a conference focused on speech problems. The main objective of these discussions was to facilitate a deeper understanding of human communication through interaction of conference participants with colleagues in other disciplines. Topics discussed included speech production, feedback, speech perception, and development of…

  15. Auditory agnosia.

    PubMed

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  16. Communication Capacity Research in the Majority World: Supporting the human right to communication specialist services.

    PubMed

    Hopf, Suzanne C

    2018-02-01

    Receipt of accessible and appropriate specialist services and resources by all people with communication and/or swallowing disability is a human right; however, it is a right rarely achieved in either Minority or Majority World contexts. This paper considers communication specialists' efforts to provide sustainable services for people with communication difficulties living in Majority World countries. The commentary draws on human rights literature, particularly Article 19 of the Universal Declaration of Human Rights and the Communication Capacity Research program that includes: (1) gathering knowledge from policy and literature; (2) gathering knowledge from the community; (3) understanding speech, language and literacy use and proficiency; and (4) developing culturally and linguistically appropriate resources and assessments. To inform the development of resources and assessments that could be used by speech-language pathologists as well as other communication specialists in Fiji, the Communication Capacity Research program involved collection and analysis of data from multiple sources including 144 community members, 75 school students and their families, and 25 teachers. The Communication Capacity Research program may be applicable for achieving the development of evidence-based, culturally and linguistically sustainable SLP services in similar contexts.

  17. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test that assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale that assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile that assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores in the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
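
    The trial reports effect sizes from 0.76 to 1.7. One common way to compute such a standardized effect size is Cohen's d with a pooled standard deviation, sketched below on invented outcome data; the trial's own effect-size formula is not specified in the abstract.

    ```python
    # Cohen's d with a pooled SD, on synthetic change scores for an
    # intervention group and a control group.
    import numpy as np

    def cohens_d(group_a, group_b):
        """Standardized mean difference using the pooled standard deviation."""
        na, nb = len(group_a), len(group_b)
        pooled_var = ((na - 1) * group_a.var(ddof=1)
                      + (nb - 1) * group_b.var(ddof=1)) / (na + nb - 2)
        return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

    rng = np.random.default_rng(2)
    trained = rng.normal(-3.0, 2.0, 20)  # SRT change in dB (lower is better)
    control = rng.normal(-0.5, 2.0, 19)
    print(f"d = {cohens_d(control, trained):.2f}")
    ```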

  18. Measuring and Modeling Sound Interference and Reverberation Time in Classrooms

    NASA Astrophysics Data System (ADS)

    Gumina, Kaitlyn; Martell, Eric

    2015-04-01

    Research shows that children, even those without hearing difficulties, are affected by poor classroom acoustics; children with hearing loss, learning disabilities, speech delay, and attention problems are especially affected. Poor acoustics can come in a variety of forms, including destructive interference causing "dead spots" and extended reverberation times (RT), where echoes persist too long and interfere with further speech. In this research, I measured sound intensity at locations throughout three different types of classrooms at frequencies commonly associated with human speech to see what effect seating position has on intensity. I also used a program called Wave Cloud to model the time necessary for intensity to decrease by 60 decibels (RT60), both in idealized classrooms and in classrooms modeled on the ones I studied.
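
    Wave-based simulation is one way to obtain reverberation times; a much simpler closed-form estimate often used for classrooms is the Sabine equation, RT60 = 0.161 V / A, where V is room volume in cubic metres and A is total absorption in square-metre sabins. The sketch below is that standard textbook estimate, not the Wave Cloud model used in the study, and the room dimensions and absorption coefficients are invented.

    ```python
    # Sabine estimate of RT60 for a rectangular classroom.

    def rt60_sabine(volume_m3, surface_absorptions):
        """RT60 = 0.161 * V / A; surface_absorptions = [(area_m2, alpha), ...]."""
        total_absorption = sum(area * alpha for area, alpha in surface_absorptions)
        return 0.161 * volume_m3 / total_absorption

    # A 7 m x 9 m x 3 m room: walls, ceiling, and floor with typical coefficients.
    surfaces = [
        (2 * (7 + 9) * 3, 0.05),  # painted walls
        (7 * 9, 0.03),            # plaster ceiling
        (7 * 9, 0.15),            # tiled floor with some furniture
    ]
    print(f"RT60 ~ {rt60_sabine(7 * 9 * 3, surfaces):.2f} s")
    ```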

  19. An exploratory trial of the effectiveness of an enhanced consultative approach to delivering speech and language intervention in schools.

    PubMed

    Mecrow, Carol; Beckwith, Jennie; Klee, Thomas

    2010-01-01

    Increased demand for access to specialist services for providing support to children with speech, language and communication needs prompted a local service review of how best to allocate limited resources. This study arose as a consequence of a wish to evaluate the effectiveness of an enhanced consultative approach to delivering speech and language intervention in local schools. The purpose was to evaluate an intensive speech and language intervention for children in mainstream schools delivered by specialist teaching assistants. A within-subjects, quasi-experimental exploratory trial was conducted, with each child serving as his or her own control with respect to the primary outcome measure. Thirty-five children between the ages of 4;2 and 6;10 (years; months) received speech and/or language intervention for an average of four 1-hour sessions per week over 10 weeks. The primary outcome measure consisted of change between pre- and post-intervention scores on probe tasks of treated and untreated behaviours summed across the group of children, and maintenance probes of treated behaviours. Secondary outcome measures included standardized tests (Clinical Evaluation of Language Fundamentals - Preschool UK (CELF-P UK); Diagnostic Evaluation of Articulation and Phonology (DEAP)) and questionnaires completed by parents/carers and school staff before and after the intervention period. The primary outcome measure showed improvement over the intervention period, with target behaviours showing a significantly larger increase than control behaviours. The gains made on the target behaviours as a result of intervention were sustained when reassessed 3-12 months later. These findings were replicated on a second set of targets and controls. Significant gains were also observed on CELF-P UK receptive and expressive language standard scores from pre- to post-intervention. However, DEAP standard scores of speech ability did not increase over the intervention period, although improvements in raw scores were observed. Questionnaires completed before and after intervention showed some significant differences relating to how much the child's speech and language difficulties affected him/her at home and at school. This exploratory study demonstrates the benefit of an intensive therapy delivered by specialist teaching assistants for remediating speech and language difficulties experienced by young children in mainstream schools. The service delivery model was perceived by professionals as offering an inclusive and effective practice and provides empirical support for using both direct and indirect intervention in the school setting.

  20. Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise.

    PubMed

    Eckert, Mark A; Matthews, Lois J; Dubno, Judy R

    2017-01-01

    Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.

  1. Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise

    PubMed Central

    Matthews, Lois J.; Dubno, Judy R.

    2017-01-01

    Purpose Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. Method We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. Results One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. Conclusion The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function. PMID:28060993

  2. Speech Perception in Noise by Children With Cochlear Implants

    PubMed Central

    Caldwell, Amanda; Nittrouer, Susan

    2013-01-01

    Purpose Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs. Method Three groups of kindergartners (NH, hearing aid wearers, and CI users) were tested on speech recognition in quiet and noise and on tasks thought to underlie the abilities that fit into the domains of phonological awareness, general language, and cognitive skills. These last measures were used as predictor variables in regression analyses with speech-in-noise scores as dependent variables. Results Compared to children with NH, children with CIs did not perform as well on speech recognition in noise or on most other measures, including recognition in quiet. Two surprising results were that (a) noise effects were consistent across groups and (b) scores on other measures did not explain any group differences in speech recognition. Conclusions Limitations of implant processing take their primary toll on recognition in quiet and account for poor speech recognition and language/phonological deficits in children with CIs. Implications are that teachers/clinicians need to teach language/phonology directly and maximize signal-to-noise levels in the classroom. PMID:22744138

  3. Teachers' perceptions of students with speech sound disorders: a quantitative and qualitative analysis.

    PubMed

    Overby, Megan; Carrell, Thomas; Bernthal, John

    2007-10-01

    This study examined 2nd-grade teachers' perceptions of the academic, social, and behavioral competence of students with speech sound disorders (SSDs). Forty-eight 2nd-grade teachers listened to 2 groups of sentences differing by intelligibility and pitch but spoken by a single 2nd grader. For each sentence group, teachers rated the speaker's academic, social, and behavioral competence using an adapted version of the Teacher Rating Scale of the Self-Perception Profile for Children (S. Harter, 1985) and completed 3 open-ended questions. The matched-guise design controlled for confounding speaker and stimuli variables that were inherent in prior studies. Statistically significant differences in teachers' expectations of children's academic, social, and behavioral performances were found between moderately intelligible and normal intelligibility speech. Teachers associated moderately intelligible low-pitched speech with more behavior problems than moderately intelligible high-pitched speech or either pitch with normal intelligibility. One third of the teachers reported that they could not accurately predict a child's school performance based on the child's speech skills, one third of the teachers causally related school difficulty to SSD, and one third of the teachers made no comment. Intelligibility and speaker pitch appear to be speech variables that influence teachers' perceptions of children's school performance.

  4. The influence of target-masker similarity on across-ear interference in dichotic listening

    NASA Astrophysics Data System (ADS)

    Brungart, Douglas; Simpson, Brian

    2004-05-01

    In most dichotic listening tasks, the comprehension of a target speech signal presented in one ear is unaffected by the presence of irrelevant speech in the opposite ear. However, recent results have shown that contralaterally presented interfering speech signals do influence performance when a second interfering speech signal is present in the same ear as the target speech. In this experiment, we examined the influence of target-masker similarity on this effect by presenting ipsilateral and contralateral masking phrases spoken by the same talker, a different same-sex talker, or a different-sex talker than the one used to generate the target speech. The results show that contralateral target-masker similarity has the greatest influence on performance when an easily segregated different-sex masker is presented in the target ear, and the least influence when a difficult-to-segregate same-talker masker is presented in the target ear. These results indicate that across-ear interference in dichotic listening is not directly related to the difficulty of the segregation task in the target ear, and suggest that contralateral maskers are least likely to interfere with dichotic speech perception when the same general strategy could be used to segregate the target from the masking voices in the ipsilateral and contralateral ears.

  5. Speech and Communication Disorders

    MedlinePlus

    ... to being completely unable to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, ... or those caused by cleft lip or palate Speech problems like stuttering Developmental disabilities Learning disorders Autism ...

  6. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers.

    PubMed

    Schaadt, Gesa; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Männel, Claudia

    2018-01-17

    During information processing, individuals benefit from bimodally presented input, as has been demonstrated for speech perception (i.e., printed letters and speech sounds) or the perception of emotional expressions (i.e., facial expression and voice tuning). While typically developing individuals show this bimodal benefit, school children with dyslexia do not. Currently, it is unknown whether the bimodal processing deficit in dyslexia also occurs for visual-auditory speech processing that is independent of reading and spelling acquisition (i.e., no letter-sound knowledge is required). Here, we tested school children with and without spelling problems on their bimodal perception of video-recorded mouth movements pronouncing syllables. We analyzed the event-related potential Mismatch Response (MMR) to visual-auditory speech information and compared this response to the MMR to monomodal speech information (i.e., auditory-only, visual-only). We found a reduced MMR with later onset to visual-auditory speech information in children with spelling problems compared to children without spelling problems. Moreover, when comparing bimodal and monomodal speech perception, we found that children without spelling problems showed significantly larger responses in the visual-auditory experiment compared to the visual-only response, whereas children with spelling problems did not. Our results suggest that children with dyslexia exhibit general difficulties in bimodal speech perception independently of letter-speech sound knowledge, as apparent in altered bimodal speech perception and lacking benefit from bimodal information. This general deficit in children with dyslexia may underlie the previously reported reduced bimodal benefit for letter-speech sound combinations and similar findings in emotion perception. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Development of Trivia Game for speech understanding in background noise.

    PubMed

    Schwartz, Kathryn; Ringleb, Stacie I; Sandberg, Hilary; Raymer, Anastasia; Watson, Ginger S

    2015-01-01

    Listening in noise is an everyday activity and poses a challenge for many people. To improve the ability to understand speech in noise, a computerized auditory rehabilitation game was developed. In Trivia Game players are challenged to answer trivia questions spoken aloud. As players progress through the game, the level of background noise increases. A study using Trivia Game was conducted as a proof-of-concept investigation in healthy participants. College students with normal hearing were randomly assigned to a control (n = 13) or a treatment (n = 14) group. Treatment participants played Trivia Game 12 times over a 4-week period. All participants completed objective (auditory-only and audiovisual formats) and subjective listening in noise measures at baseline and 4 weeks later. There were no statistical differences between the groups at baseline. At post-test, the treatment group significantly improved their overall speech understanding in noise in the audiovisual condition and reported significant benefits in their functional listening abilities. Playing Trivia Game improved speech understanding in noise in healthy listeners. Significant findings for the audiovisual condition suggest that participants improved face-reading abilities. Trivia Game may be a platform for investigating changes in speech understanding in individuals with sensory, linguistic and cognitive impairments.
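
    The core game mechanic described above, holding speech level fixed while the background noise becomes harder as the player advances, can be sketched as a simple level-to-SNR mapping. The starting point, step size, and floor below are assumptions for illustration; the paper does not specify its exact progression.

    ```python
    # Map a game level to a speech-to-noise ratio (dB); illustrative values.

    def snr_for_level(level, start_snr_db=15.0, step_db=3.0, floor_db=-6.0):
        """Harder (lower) SNR as the player progresses, down to a floor."""
        return max(floor_db, start_snr_db - step_db * (level - 1))

    for level in range(1, 9):
        print(f"level {level}: speech at {snr_for_level(level):+.0f} dB SNR")
    ```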

  8. Pronunciation difficulty, temporal regularity, and the speech-to-song illusion.

    PubMed

    Margulis, Elizabeth H; Simchy-Gross, Rhimmon; Black, Justin L

    2015-01-01

    The speech-to-song illusion (Deutsch et al., 2011) tracks the perceptual transformation from speech to song across repetitions of a brief spoken utterance. Because it involves no change in the stimulus itself, but a dramatic change in its perceived affiliation to speech or to music, it presents a unique opportunity to comparatively investigate the processing of language and music. In this study, native English-speaking participants were presented with brief spoken utterances that were subsequently repeated ten times. The utterances were drawn either from languages that are relatively difficult for a native English speaker to pronounce, or languages that are relatively easy for a native English speaker to pronounce. Moreover, the repetition could occur at regular or irregular temporal intervals. Participants rated the utterances before and after the repetitions on a 5-point Likert-like scale ranging from "sounds exactly like speech" to "sounds exactly like singing." The difference in ratings before and after was taken as a measure of the strength of the speech-to-song illusion in each case. The speech-to-song illusion occurred regardless of whether the repetitions were spaced at regular temporal intervals or not; however, it occurred more readily if the utterance was spoken in a language difficult for a native English speaker to pronounce. Speech circuitry seemed more liable to capture native and easy-to-pronounce languages, and more reluctant to relinquish them to perceived song across repetitions.

  9. Adverse effects of lingual and buccal orthodontic techniques: A systematic review and meta-analysis.

    PubMed

    Ata-Ali, Fadi; Ata-Ali, Javier; Ferrer-Molina, Marcela; Cobo, Teresa; De Carlos, Felix; Cobo, Juan

    2016-06-01

    The aim of this systematic review was to assess the prevalence of adverse effects associated with lingual and buccal fixed orthodontic techniques. Two authors searched the PubMed, EMBASE, Cochrane Library, and LILACS databases up to October 2014. Agreement between the authors was quantified by the Cohen kappa statistic. The following variables were analyzed: pain, caries, eating and speech difficulties, and oral hygiene. The Newcastle-Ottawa scale was used to assess risk of bias in nonrandomized studies, and the Cochrane Collaboration's tool for assessing risk of bias was used for randomized controlled trials. Eight articles were included in this systematic review. Meta-analysis showed a statistically greater risk of pain in the tongue (odds ratio [OR], 28.32; 95% confidence interval [95% CI], 8.60-93.28; P < 0.001), cheeks (OR, 0.087; 95% CI, 0.036-0.213; P < 0.001), and lips (OR, 0.13; 95% CI, 0.04-0.39; P < 0.001), as well as for the variables of speech difficulties (OR, 9.39; 95% CI, 3.78-23.33; P < 0.001) and oral hygiene (OR, 3.49; 95% CI, 1.02-11.95; P = 0.047) with lingual orthodontics. However, no statistical difference was found with respect to eating difficulties (OR, 3.74; 95% CI, 0.86-16.28; P = 0.079) and caries (OR, 1.15; 95% CI, 0.17-7.69; P = 0.814 [Streptococcus mutans] and OR, 0.67; 95% CI, 0.20-2.23; P = 0.515 [Lactobacillus]). This systematic review suggests that patients wearing lingual appliances have more pain, speech difficulties, and problems in maintaining adequate oral hygiene, although no differences for eating and caries risk were identified. Further prospective studies involving larger sample sizes and longer follow-up periods are needed to confirm these results. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
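
    As a reminder of where these intervals come from, the sketch below computes an odds ratio and its Wald 95% confidence interval from a single 2x2 table. The counts are invented, not taken from the included trials.

    ```python
    # Odds ratio and Wald 95% CI from a 2x2 table:
    # rows = lingual vs. buccal group, columns = event yes/no.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # e.g., speech difficulties in 30/40 lingual vs. 10/40 buccal patients
    print(odds_ratio_ci(30, 10, 10, 30))  # -> (9.0, ~3.3, ~24.8)
    ```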

  10. The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers

    PubMed Central

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374

  11. Neural encoding of the speech envelope by children with developmental dyslexia.

    PubMed

    Power, Alan J; Colling, Lincoln J; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2016-09-01

    Developmental dyslexia is consistently associated with difficulties in processing phonology (linguistic sound structure) across languages. One view is that dyslexia is characterised by a cognitive impairment in the "phonological representation" of word forms, which arises long before the child presents with a reading problem. Here we investigate a possible neural basis for developmental phonological impairments. We assess the neural quality of speech encoding in children with dyslexia by measuring the accuracy of low-frequency speech envelope encoding using EEG. We tested children with dyslexia and chronological age-matched (CA) and reading-level matched (RL) younger children. Participants listened to semantically-unpredictable sentences in a word report task. The sentences were noise-vocoded to increase reliance on envelope cues. Envelope reconstruction for envelopes between 0 and 10 Hz showed that the children with dyslexia had significantly poorer speech encoding in the 0-2 Hz band compared to both CA and RL controls. These data suggest that impaired neural encoding of low frequency speech envelopes, related to speech prosody, may underpin the phonological deficit that causes dyslexia across languages. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
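
    The "0-2 Hz envelope" measure rests on a standard signal-processing step: extract a broadband amplitude envelope from the speech audio and low-pass it into the slow, prosody-related band. The sketch below shows that step on a synthetic amplitude-modulated signal; the sampling rates and the toy stimulus are placeholders for real recordings, and the study's actual reconstruction analysis (mapping EEG back to the envelope) is not reproduced here.

    ```python
    # Extract a speech envelope and isolate its 0-2 Hz band.
    import numpy as np
    from scipy.signal import butter, hilbert, resample, sosfiltfilt

    fs = 16000
    t = np.arange(0, 3, 1 / fs)
    audio = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 1.5 * t))  # toy "speech"

    envelope = np.abs(hilbert(audio))                      # broadband envelope
    env_fs = 100                                           # EEG-like rate
    envelope = resample(envelope, int(len(envelope) * env_fs / fs))
    sos = butter(4, 2, btype="low", fs=env_fs, output="sos")
    slow_envelope = sosfiltfilt(sos, envelope)             # the 0-2 Hz band
    print(slow_envelope[:5])
    ```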

  12. Children's views of communication and speech-language pathology.

    PubMed

    Merrick, Rosalind; Roulstone, Sue

    2011-08-01

    Children have the right to express their views and influence decisions in matters that affect them. Yet decisions regarding speech-language pathology are often made on their behalf, and research into the perspectives of children who receive speech-language pathology intervention is currently limited. This paper reports a qualitative study which explored experiences of communication and of speech-language pathology from the perspectives of children with speech, language, and communication needs (SLCN). The aim was to explore their perspectives of communication, communication impairment, and assistance. Eleven schoolchildren aged 7-10 years participated in the study. They were recruited through a speech-language pathology service in south west England, to include a range of ages and severity of difficulties. The study used open-ended interviews within which non-verbal activities such as drawing, taking photographs, and compiling a scrapbook were used to create a context for supported conversations. Findings were analysed according to the principles of grounded theory. Three ways of talking about communication emerged. These were in terms of impairment, learning, and behaviour. Findings offer insight into dialogue between children with SLCN and adults; the way communication is talked about has implications for children's view of themselves, their skills, and their participation.

  13. Emerging technologies with potential for objectively evaluating speech recognition skills.

    PubMed

    Rawool, Vishakha Waman

    2016-01-01

    Work-related exposure to noise and other ototoxins can cause damage to the cochlea, the synapses between the inner hair cells and the auditory nerve fibers, and higher auditory pathways, leading to difficulties in recognizing speech. Procedures designed to determine speech recognition scores (SRS) in an objective manner can be helpful in disability compensation cases where the worker claims to have poor speech perception due to exposure to noise or ototoxins. Such measures can also be helpful in determining SRS in individuals who cannot provide reliable responses to speech stimuli, including patients with Alzheimer's disease, traumatic brain injuries, and infants with and without hearing loss. Cost-effective neural monitoring hardware and software are being rapidly refined due to the high demand for neurogaming (games involving the use of brain-computer interfaces), health, and other applications. More specifically, two related advances in neuro-technology include relative ease in recording neural activity and availability of sophisticated analysing techniques. These techniques are reviewed in the current article and their applications for developing objective SRS procedures are proposed. Issues related to neuroaudioethics (ethics related to collection of neural data evoked by auditory stimuli including speech) and neurosecurity (preservation of a person's neural mechanisms and free will) are also discussed.

  14. The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers.

    PubMed

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results.

  15. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    PubMed

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same, as opposed to different, responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz, as opposed to az, responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
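
    The "bias-free measure" referred to above is d' from signal detection theory. The sketch below shows the standard computation from hit and false-alarm rates, with a common correction that keeps rates away from 0 and 1; the example rates and trial counts are invented.

    ```python
    # d' = z(hit rate) - z(false-alarm rate), with a 1/(2n) rate correction.
    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate, n):
        clip = lambda p: min(max(p, 1 / (2 * n)), 1 - 1 / (2 * n))
        return norm.ppf(clip(hit_rate)) - norm.ppf(clip(fa_rate))

    # A child who answers "different" on 85% of truly different pairs but
    # also on 30% of identical pairs, with 40 trials of each type:
    print(f"d' = {d_prime(0.85, 0.30, 40):.2f}")  # ~1.56
    ```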

  16. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    PubMed Central

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003

  17. Is complex signal processing for bone conduction hearing aids useful?

    PubMed

    Kompis, Martin; Kurz, Anja; Pfiffner, Flurin; Senn, Pascal; Arnold, Andreas; Caversaccio, Marco

    2014-05-01

    To establish whether complex signal processing is beneficial for users of bone anchored hearing aids. Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist-accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. Both studies show a small, statistically non-significant improvement of speech understanding in quiet with the complex digital signal processing. The average improvement for speech in noise is +0.9 dB, if speech and noise are both emitted from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing as opposed to those with basic signal processing increases, on average, to +3.2 dB (range +2.3 … +5.1 dB, p ≤ 0.0032). Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding is supported by another study recently published by a different research group. When compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone anchored hearing aids. The benefit is most significant for speech understanding in noise.

  18. Normal Aspects of Speech, Hearing, and Language.

    ERIC Educational Resources Information Center

    Minifie, Fred. D., Ed.; And Others

    This book is written as a guide to the understanding of the processes involved in human speech communication. Ten authorities contributed material to provide an introduction to the physiological aspects of speech production and reception, the acoustical aspects of speech production and transmission, the psychophysics of sound reception, the nature…

  19. The evolution of primary progressive apraxia of speech

    PubMed Central

    Duffy, Joseph R.; Strand, Edythe A.; Machulda, Mary M.; Senjem, Matthew L.; Gunter, Jeffrey L.; Schwarz, Christopher G.; Reid, Robert I.; Spychalla, Anthony J.; Lowe, Val J.; Jack, Clifford R.; Whitwell, Jennifer L.

    2014-01-01

    Primary progressive apraxia of speech is a recently described neurodegenerative disorder in which patients present with an isolated apraxia of speech and show focal degeneration of superior premotor cortex. Little is known about how these individuals progress over time, making it difficult to provide prognostic estimates. Thirteen subjects with primary progressive apraxia of speech underwent two serial comprehensive clinical and neuroimaging evaluations 2.4 years apart [median age of onset = 67 years (range: 49–76), seven females]. All underwent detailed speech and language, neurological and neuropsychological assessments, and magnetic resonance imaging, diffusion tensor imaging and 18F-fluorodeoxyglucose positron emission tomography at both baseline and follow-up. Rates of change of whole brain, ventricle, and midbrain volumes were calculated using the boundary-shift integral and atlas-based parcellation, and rates of regional grey matter atrophy were assessed using tensor-based morphometry. White matter tract degeneration was assessed on diffusion-tensor imaging at each time-point. Patterns of hypometabolism were assessed at the single subject-level. Neuroimaging findings were compared with a cohort of 20 age, gender, and scan-interval matched healthy controls. All subjects developed extrapyramidal signs. In eight subjects the apraxia of speech remained the predominant feature. In the other five there was a striking progression of symptoms that had evolved into a progressive supranuclear palsy-like syndrome; they showed a combination of severe parkinsonism, near mutism, dysphagia with choking, vertical supranuclear gaze palsy or slowing, balance difficulties with falls and urinary incontinence, and one was wheelchair bound. Rates of whole brain atrophy (1.5% per year; controls = 0.4% per year), ventricular expansion (8.0% per year; controls = 3.3% per year) and midbrain atrophy (1.5% per year; controls = 0.1% per year) were elevated (P ≤ 0.001) in all 13, compared to controls. Increased rates of brain atrophy over time were observed throughout the premotor cortex, as well as prefrontal cortex, motor cortex, basal ganglia and midbrain, while white matter tract degeneration spread into the splenium of the corpus callosum and motor cortex white matter. Hypometabolism progressed over time in almost all subjects. These findings demonstrate that some subjects with primary progressive apraxia of speech will rapidly evolve and develop a devastating progressive supranuclear palsy-like syndrome ∼ 5 years after onset, perhaps related to progressive involvement of neocortex, basal ganglia and midbrain. These findings help improve our understanding of primary progressive apraxia of speech and provide some important prognostic guidelines. PMID:25113789

  20. An international perspective: supporting adolescents with speech, language, and communication needs in the United Kingdom.

    PubMed

    Joffe, Victoria

    2015-02-01

    This article provides an overview of the education system in the United Kingdom, with a particular focus on the secondary school context and supporting older children and young people with speech, language, and communication needs (SLCNs). Despite the pervasive nature of speech, language, and communication difficulties and their long-term impact on academic performance, mental health, and well-being, evidence suggests that there is limited support available to older children and young people with SLCNs in the United Kingdom, relative to what is available in the early years. Focus in secondary schools is predominantly on literacy, with little attention to supporting oral language. The article provides a synopsis of the working practices of pediatric speech and language therapists working with adolescents in the United Kingdom and the type and level of speech and language therapy support provided for older children and young people with SLCNs in secondary and further education. Implications for the nature and type of specialist support to adolescents and adults with SLCNs are discussed.

  1. Aging-related gains and losses associated with word production in connected speech.

    PubMed

    Dennis, Paul A; Hess, Thomas M

    2016-11-01

    Older adults have been observed to use more nonnormative, or atypical, words than younger adults in connected speech. We examined whether aging-related losses in word-finding abilities or gains in language expertise underlie these age differences. Sixty younger and 60 older adults described two neutral photographs. These descriptions were processed into word types, and textual analysis was used to identify interrupted speech (e.g., pauses), reflecting word-finding difficulty. Word types were assessed for normativeness, with nonnormative word types defined as those used by six (5%) or fewer participants to describe a particular picture. Accuracy and precision ratings were provided by another sample of 48 high-vocabulary younger and older adults. Older adults produced more interrupted speech and, as predicted, more nonnormative words than younger adults. Older adults were more likely than younger adults to use nonnormative language via interrupted speech, suggesting a compensatory process. However, older adults' nonnormative words were more precise and trended toward higher accuracy, reflecting expertise. In tasks offering response flexibility, like connected speech, older adults may be able to offset instances of aging-related deficits by maximizing their expertise in other instances.
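    The operational definition of nonnormativeness above (a word type used by six, i.e., 5%, or fewer of the 120 participants for a given picture) is easy to make concrete. A minimal sketch, assuming each description has already been reduced to a set of word types; the function name and toy data are illustrative, not from the study:

        from collections import Counter

        def nonnormative_types(descriptions, threshold=0.05):
            """Return word types used by <= threshold of participants.

            descriptions: one set of word types per participant, for one picture.
            """
            n = len(descriptions)
            counts = Counter(t for types in descriptions for t in types)
            return {t for t, c in counts.items() if c <= threshold * n}

        # Toy data: 4 of 120 descriptions use "verdant", so it is nonnormative.
        descriptions = [{"tree", "park"}] * 116 + [{"tree", "verdant"}] * 4
        print(nonnormative_types(descriptions))  # -> {'verdant'}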

  2. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    ERIC Educational Resources Information Center

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for applying online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  3. Narrative Processing in Typically Developing Children and Children with Early Unilateral Brain Injury: Seeing Gesture Matters

    PubMed Central

    Demir, Özlem Ece; Fisher, Joan A.; Goldin-Meadow, Susan; Levine, Susan C.

    2014-01-01

    Narrative skill in kindergarteners has been shown to be a reliable predictor of later reading comprehension and school achievement. However, we know little about how to scaffold children’s narrative skill. Here we examine whether the quality of kindergarten children’s narrative retellings depends on the kind of narrative elicitation they are given. We asked this question in typically developing (TD) kindergarten children and in children with pre- or perinatal unilateral brain injury (PL), a group that has been shown to have difficulty with narrative production. We compared children’s skill in story retellings under four different elicitation formats: (1) wordless cartoons, (2) stories told by a narrator through the auditory modality, (3) stories told by a narrator through the audiovisual modality without co-speech gestures, and (4) stories told by a narrator in the audiovisual modality with co-speech gestures. We found that children told better structured narratives in the fourth, audiovisual + gesture elicitation format than in the other three elicitation formats, consistent with findings that co-speech gestures can scaffold other aspects of language and memory. The audiovisual + gesture elicitation format was particularly beneficial to children who had the most difficulty telling a well-structured narrative, a group that included children with larger lesions associated with cerebrovascular infarcts. PMID:24127729

  4. Age and measurement time-of-day effects on speech recognition in noise.

    PubMed

    Veneman, Carrie E; Gordon-Salant, Sandra; Matthews, Lois J; Dubno, Judy R

    2013-01-01

    The purpose of this study was to determine the effect of measurement time of day on speech recognition in noise and the extent to which time-of-day effects differ with age. Older adults tend to have more difficulty understanding speech in noise than younger adults, even when hearing is normal. Two possible contributors to this age difference in speech recognition may be measurement time of day and inhibition. Most younger adults are "evening-type," showing peak circadian arousal in the evening, whereas most older adults are "morning-type," with circadian arousal peaking in the morning. Tasks that require inhibition of irrelevant information have been shown to be affected by measurement time of day, with maximum performance attained at one's peak time of day. The authors hypothesized that a change in inhibition would be associated with measurement time of day and would therefore affect speech recognition in noise, with better performance in the morning for older adults and in the evening for younger adults. Fifteen younger evening-type adults (20-28 years) and 15 older morning-type adults with normal hearing (66-78 years) completed the Hearing in Noise Test (HINT) and the Quick Speech-in-Noise (QuickSIN) test in the morning and evening (peak and off-peak times). Time-of-day preference was assessed using the Morningness-Eveningness Questionnaire. Sentences and noise were presented binaurally through insert earphones. During morning and evening sessions, participants solved word-association problems within the visual-distraction task (VDT), which was used as an estimate of inhibition. After each session, participants rated perceived mental demand of the tasks using a revised version of the NASA Task Load Index. Younger adults performed significantly better on the speech-in-noise tasks and rated themselves as requiring significantly less mental demand when tested at their peak (evening) than off-peak (morning) time of day. In contrast, time-of-day effects were not observed for the older adults on the speech recognition or rating tasks. Although older adults required significantly more advantageous signal-to-noise ratios than younger adults for equivalent speech-recognition performance, a significantly larger younger versus older age difference in speech recognition was observed in the evening than in the morning. Older adults performed significantly more poorly than younger adults on the VDT, but performance was not affected by measurement time of day. VDT performance for misleading distracter items was significantly correlated with HINT and QuickSIN test performance at the peak measurement time of day. Although all participants had normal hearing, speech recognition in noise was significantly poorer for older than younger adults, with larger age-related differences in the evening (an off-peak time for older adults) than in the morning. The significant effect of measurement time of day suggests that this factor may impact the clinical assessment of speech recognition in noise for all individuals. It appears that inhibition, as estimated by a visual distraction task for misleading visual items, is a cognitive mechanism that is related to speech-recognition performance in noise, at least at a listener's peak time of day.
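    For readers unfamiliar with the speech-in-noise measures used here: the QuickSIN presents six sentences at signal-to-noise ratios descending from 25 to 0 dB, five key words each, and is conventionally scored as SNR loss = 25.5 minus the number of key words repeated correctly. A sketch of that published scoring arithmetic, offered as background rather than as the authors' analysis code:

        def quicksin_snr_loss(key_words_correct):
            """SNR loss (dB) for one QuickSIN list of six sentences.

            key_words_correct: how many of the 30 key words (5 per sentence,
            at 25, 20, 15, 10, 5 and 0 dB SNR) were repeated correctly.
            """
            if not 0 <= key_words_correct <= 30:
                raise ValueError("expected 0..30 key words correct")
            return 25.5 - key_words_correct

        print(quicksin_snr_loss(24))  # -> 1.5 dB SNR loss (near-normal)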

  5. Understanding the Oral Mind: Implications for Speech Education.

    ERIC Educational Resources Information Center

    Cocetti, Robert A.

    The primary goal of the basic course in speech should be to investigate oral communication rather than public speaking. Fundamental to understanding oral communication is some understanding of the oral mind, which operates when orality is the primary means of expression. Since narrative invites action rather than leisurely analysis, the oral mind…

  6. Noise-induced cochlear synaptopathy: Past findings and future studies.

    PubMed

    Kobel, Megan; Le Prell, Colleen G; Liu, Jennifer; Hawks, John W; Bao, Jianxin

    2017-06-01

    For decades, we have presumed that the death of hair cells and spiral ganglion neurons is the main cause of hearing loss and difficulties understanding speech in noise, but new findings suggest synapse loss may be the key contributor. Specifically, recent preclinical studies suggest that the synapses between inner hair cells and spiral ganglion neurons with low spontaneous rates and high thresholds are the most vulnerable subcellular structures with respect to insults during aging and noise exposure. This cochlear synaptopathy can be "hidden" because the synaptic loss can occur without permanent hearing threshold shifts. This new discovery of synaptic loss opens doors to new research directions. Here, we review a number of recent studies and make suggestions in two critical future research directions. First, based on solid evidence of cochlear synaptopathy in animal models, it is time to apply molecular approaches to identify the underlying molecular mechanisms; improved understanding is necessary for developing rational, effective therapies against this cochlear synaptopathy. Second, in human studies, the data supporting cochlear synaptopathy are indirect, although rapid progress has been made. To fully identify changes in function that are directly related to this hidden synaptic damage, we argue that a battery of tests, including both electrophysiological and behavioral tests, should be combined for the diagnosis of "hidden hearing loss" in clinical studies. This new approach may provide a direct link between cochlear synaptopathy and perceptual difficulties.

  7. Visually Impaired Persons' Comprehension of Text Presented with Speech Synthesis.

    ERIC Educational Resources Information Center

    Hjelmquist, E.; And Others

    1992-01-01

    This study of 48 individuals with visual impairments (16 middle-aged with experience in synthetic speech, 16 middle-aged inexperienced, and 16 older inexperienced) found that speech synthesis, compared to natural speech, generally yielded poorer memory and understanding of texts. Experience had no effect on performance.…

  8. Five-year speech and language outcomes in children with cleft lip-palate.

    PubMed

    Prathanee, Benjamas; Pumnum, Tawitree; Seepuaham, Cholada; Jaiyong, Pechcharat

    2016-10-01

    To investigate 5-year speech and language outcomes in children with cleft lip/palate (CLP). Thirty-eight children aged 4 years to 7 years 8 months were recruited for this study. Speech abilities, including articulation, resonance, voice, and intelligibility, were assessed based on the Thai Universal Parameters of Speech Outcomes. Language ability was assessed by the Language Screening Test. The findings revealed the following rates of deficits among children with clefts, as percentages with 95% confidence intervals: speech and language delay, 8.33 (1.75, 22.47); abnormal understandability, 50.00 (32.92, 67.08); resonance abnormality, 36.11 (20.82, 53.78); voice disturbance, 30.56 (16.35, 48.11); and articulation defects, 94.44 (81.34, 99.32). Articulation errors were thus the most common speech and language defects in children with clefts, followed by abnormal understandability, resonance abnormality, and voice disturbance. These results should be of critical concern. Protocol review and early intervention programs are needed for improved speech outcomes.
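    The parenthesized ranges read as 95% confidence intervals on the prevalence percentages, and they are numerically consistent with exact (Clopper-Pearson) binomial intervals computed on 36 assessable children (e.g., 8.33% = 3/36). A minimal sketch of that computation; the underlying counts are an inference, not stated in the abstract:

        from scipy.stats import beta

        def clopper_pearson(k, n, alpha=0.05):
            """Exact binomial confidence interval for k/n, as percentages."""
            lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return 100 * lo, 100 * hi

        # Inferred count behind "8.33 (1.75, 22.47)": 3 of 36 children.
        print(clopper_pearson(3, 36))  # -> approximately (1.75, 22.47)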

  9. 'Insuring' a correct differential diagnosis--a 'forensic' collaborative experience.

    PubMed

    Abudarham, S; White, A

    2001-01-01

    Mr. J was referred to a speech and language therapist (SLT) by a consultant psychiatrist. He had sustained an industrial accident which he claimed was responsible for a range of problems, including a speech and language problem. Some three years after his accident, he brought an action for damages arising out of the accident. His solicitor, on the recommendation of the consultant psychiatrist, contacted the SLT requesting his views as to whether Mr. J's speech difficulties were due to the injuries sustained and requesting recommendations for further treatment. The SLT saw Mr. J and concluded that he had problems at all communication levels, the greatest being an articulatory impairment. Some reports suggested a psychological basis for his problems; others, including the psychiatrist's, suggested an organic basis.

  10. Objective support for subjective reports of successful inner speech in two people with aphasia.

    PubMed

    Hayward, William; Snider, Sarah F; Luta, George; Friedman, Rhonda B; Turkeltaub, Peter E

    2016-01-01

    People with aphasia frequently report being able to say a word correctly in their heads, even if they are unable to say that word aloud. It is difficult to know what is meant by these reports of "successful inner speech". We probe the experience of successful inner speech in two people with aphasia. We show that these reports are associated with correct overt speech and phonologically related nonword errors, that they relate to word characteristics associated with ease of lexical access but not ease of production, and that they predict whether or not individual words are relearned during anomia treatment. These findings suggest that reports of successful inner speech are meaningful and may be useful to study self-monitoring in aphasia, to better understand anomia, and to predict treatment outcomes. Ultimately, the study of inner speech in people with aphasia could provide critical insights that inform our understanding of normal language.

  11. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

    PubMed Central

    Rauschecker, Josef P; Scott, Sophie K

    2010-01-01

    Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271

  12. The Role of Sensorimotor Difficulties in Autism Spectrum Conditions

    PubMed Central

    Hannant, Penelope; Tavassoli, Teresa; Cassidy, Sarah

    2016-01-01

    In addition to difficulties in social communication, current diagnostic criteria for autism spectrum conditions (ASC) also incorporate sensorimotor difficulties, repetitive motor movements, and atypical reactivity to sensory input (1). This paper explores whether sensorimotor difficulties are associated with the development and maintenance of symptoms in ASC. First, studies have shown difficulties coordinating sensory input into planning and executing movement effectively in ASC. Second, studies have shown associations between sensory reactivity and motor coordination with core ASC symptoms, suggesting these areas each strongly influence the development of social and communication skills. Third, studies have begun to demonstrate that sensorimotor difficulties in ASC could account for reduced social attention early in development, with a cascading effect on later social, communicative and emotional development. These results suggest that sensorimotor difficulties not only contribute to non-social difficulties such as narrow circumscribed interests, but also to the development of social behaviors such as effectively coordinating eye contact with speech and gesture, interpreting others’ behavior, and responding appropriately. Further research is needed to explore the link between sensory and motor difficulties in ASC and their contribution to the development and maintenance of ASC. PMID:27559329

  13. My speech problem, your listening problem, and my frustration: the experience of living with childhood speech impairment.

    PubMed

    McCormack, Jane; McLeod, Sharynne; McAllister, Lindy; Harrison, Linda J

    2010-10-01

    The purpose of this article was to understand the experience of speech impairment (speech sound disorders) in everyday life as described by children with speech impairment and their communication partners. Interviews were undertaken with 13 preschool children with speech impairment (mild to severe) and 21 significant others (family members and teachers). A phenomenological analysis of the interview transcripts revealed 2 global themes regarding the experience of living with speech impairment for these children and their families. The first theme encompassed the problems experienced by participants, namely (a) the child's inability to "speak properly," (b) the communication partner's failure to "listen properly," and (c) frustration caused by the speaking and listening problems. The second theme described the solutions participants used to overcome the problems. Solutions included (a) strategies to improve the child's speech accuracy (e.g., home practice, speech-language pathology) and (b) strategies to improve the listener's understanding (e.g., using gestures, repetition). Both short- and long-term solutions were identified. Successful communication is dependent on the skills of speakers and listeners. Intervention with children who experience speech impairment needs to reflect this reciprocity by supporting both the speaker and the listener and by addressing the frustration they experience.

  14. Vowel reduction across tasks for male speakers of American English.

    PubMed

    Kuo, Christina; Weismer, Gary

    2016-07-01

    This study examined acoustic variation of vowels within speakers across speech tasks. The overarching goal of the study was to understand within-speaker variation as one index of the range of normal speech motor behavior for American English vowels. Ten male speakers of American English performed four speech tasks including citation form sentence reading with a clear-speech style (clear-speech), citation form sentence reading (citation), passage reading (reading), and conversational speech (conversation). Eight monophthong vowels in a variety of consonant contexts were studied. Clear-speech was operationally defined as the reference point for describing variation. Acoustic measures associated with the conventions of vowel targets were obtained and examined. These included temporal midpoint formant frequencies for the first three formants (F1, F2, and F3) and the derived Euclidean distances in the F1-F2 and F2-F3 planes. Results indicated that reduction toward the center of the F1-F2 and F2-F3 planes increased in magnitude across the tasks in the order of clear-speech, citation, reading, and conversation. The cross-task variation was comparable for all speakers despite fine-grained individual differences. The characteristics of systematic within-speaker acoustic variation across tasks have potential implications for the understanding of the mechanisms of speech motor control and motor speech disorders.
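    The derived distance measures referred to above are ordinary Euclidean distances between a token's midpoint formants and a reference point in the F1-F2 (or F2-F3) plane; reduction shows up as shrinking distances from the center of the vowel space. A minimal sketch using the vowel-space centroid as the reference; the formant values are illustrative only:

        import numpy as np

        def distances_to_center(tokens_hz):
            """Euclidean distance of each (F1, F2) token from the centroid.

            tokens_hz: array-like of shape (n_tokens, 2), in Hz. Smaller
            distances indicate reduction toward the center of the space.
            """
            tokens = np.asarray(tokens_hz, dtype=float)
            center = tokens.mean(axis=0)
            return np.linalg.norm(tokens - center, axis=1)

        # Illustrative /i/, /a/, /u/ midpoint formants (F1, F2) in Hz.
        print(distances_to_center([(300, 2300), (700, 1200), (350, 900)]))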

  15. Redistribution of neural phase coherence reflects establishment of feedforward map in speech motor adaptation

    PubMed Central

    Sengupta, Ranit

    2015-01-01

    Despite recent progress in our understanding of sensorimotor integration in speech learning, a comprehensive framework to investigate its neural basis is lacking at behaviorally relevant timescales. Structural and functional imaging studies in humans have helped us identify brain networks that support speech but fail to capture the precise spatiotemporal coordination within the networks that takes place during speech learning. Here we use neuronal oscillations to investigate interactions within speech motor networks in a paradigm of speech motor adaptation under altered feedback, with continuous EEG recording, in which subjects adapted to real-time auditory perturbation of a target vowel sound. As subjects adapted to the task, concurrent changes were observed in theta-gamma phase coherence during speech planning at several distinct scalp regions, consistent with the establishment of a feedforward map. In particular, there was an increase in coherence over the central region and a decrease over the fronto-temporal regions, revealing a redistribution of coherence over an interacting network of brain regions that could be a general feature of error-based motor learning. Our findings have implications for understanding the neural basis of speech motor learning and could elucidate how transient breakdown of neuronal communication within speech networks relates to speech disorders. PMID:25632078
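    The paper's metric is theta-gamma phase coherence; one common way to quantify such cross-frequency coupling in EEG is a phase-locking value between the theta phase and the phase of the theta-rate fluctuations of the gamma amplitude envelope. The sketch below illustrates that general approach, not the authors' exact pipeline, and all filter bands and parameters are assumptions:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def theta_gamma_plv(x, fs):
            """Phase locking between theta phase and the gamma envelope's phase."""
            theta_phase = np.angle(hilbert(bandpass(x, 4, 8, fs)))
            gamma_env = np.abs(hilbert(bandpass(x, 30, 50, fs)))
            env_phase = np.angle(hilbert(bandpass(gamma_env, 4, 8, fs)))
            return np.abs(np.mean(np.exp(1j * (theta_phase - env_phase))))

        fs = 500.0
        t = np.arange(0, 10, 1 / fs)
        x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)  # toy signal
        print(theta_gamma_plv(x, fs))  # values near 1 indicate strong locking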

  16. Longer-term needs of stroke survivors with communication difficulties living in the community: a systematic review and thematic synthesis of qualitative studies.

    PubMed

    Wray, Faye; Clarke, David

    2017-10-06

    To review and synthesise qualitative literature relating to the longer-term needs of community-dwelling stroke survivors with communication difficulties including aphasia, dysarthria and apraxia of speech. Systematic review and thematic synthesis. We included studies employing qualitative methodology which focused on the perceived or expressed needs, views or experiences of stroke survivors with communication difficulties in relation to the day-to-day management of their condition following hospital discharge. We searched MEDLINE, EMBASE, PsycINFO, CINAHL, The Cochrane Library, International Bibliography of the Social Sciences and AMED and undertook grey literature searches. Studies were assessed for methodological quality by two researchers independently and the findings were combined using thematic synthesis. Thirty-two studies were included in the thematic synthesis. The synthesis reveals the ongoing difficulties stroke survivors can experience in coming to terms with the loss of communication and in adapting to life with a communication difficulty. While some were able to adjust, others struggled to maintain their social networks and to participate in activities which were meaningful to them. The challenges experienced by stroke survivors with communication difficulties persisted for many years poststroke. Four themes relating to longer-term need were developed: managing communication outside of the home, creating a meaningful role, creating or maintaining a support network and taking control and actively moving forward with life. Understanding the experiences of stroke survivors with communication difficulties is vital for ensuring that longer-term care is designed according to their needs. Wider psychosocial factors must be considered in the rehabilitation of people with poststroke communication difficulties. Self-management interventions may be appropriate to help this subgroup of stroke survivors manage their condition in the longer term; however, such approaches must be designed to help survivors manage the unique psychosocial consequences of poststroke communication difficulties.

  17. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    ERIC Educational Resources Information Center

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  18. Family-Centered Services for Children with ASD and Limited Speech: The Experiences of Parents and Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Mandak, Kelsey; Light, Janice

    2018-01-01

    Although family-centered services have long been discussed as essential in providing successful services to families of children with autism spectrum disorder (ASD), ideal implementation is often lacking. This study aimed to increase understanding of how families with children with ASD and limited speech receive services from speech-language…

  19. Association of Orofacial Muscle Activity and Movement during Changes in Speech Rate and Intensity

    ERIC Educational Resources Information Center

    McClean, Michael D.; Tasko, Stephen M.

    2003-01-01

    Understanding how orofacial muscle activity and movement covary across changes in speech rate and intensity has implications for the neural control of speech production and the use of clinical procedures that manipulate speech prosody. The present study involved a correlation analysis relating average lower-lip and jaw-muscle activity to lip and…

  20. Refinement of Speech Breathing in Healthy 4- to 6-Year-Old Children

    ERIC Educational Resources Information Center

    Boliek, Carol A.; Hixon, Thomas J.; Watson, Peter J.; Jones, Patricia B.

    2009-01-01

    Purpose: The purpose of this study was to offer a better understanding of the development of neuromotor control for speech breathing and provide a normative data set that can serve as a useful standard for clinical evaluation and management of young children with speech disorders involving the breathing subsystem. Method: Speech breathing was…
