Sample records for completed speech samples

  1. Relations among questionnaire and experience sampling measures of inner speech: a smartphone app study

    PubMed Central

    Alderson-Day, Ben; Fernyhough, Charles

    2015-01-01

    Inner speech is often reported to be a common and central part of inner experience, but its true prevalence is unclear. Many questionnaire-based measures appear to lack convergent validity and it has been claimed that they overestimate inner speech in comparison to experience sampling methods (which involve collecting data at random timepoints). The present study compared self-reporting of inner speech collected via a general questionnaire and experience sampling, using data from a custom-made smartphone app (Inner Life). Fifty-one university students completed a generalized self-report measure of inner speech (the Varieties of Inner Speech Questionnaire, VISQ) and responded to at least seven random alerts to report on incidences of inner speech over a 2-week period. Correlations and pairwise comparisons were used to compare generalized endorsements and randomly sampled scores for each VISQ subscale. Significant correlations were observed between general and randomly sampled measures for only two of the four VISQ subscales, and endorsements of inner speech with evaluative or motivational characteristics did not correlate at all across different measures. Endorsement of inner speech items was significantly lower for random sampling compared to generalized self-report, for all VISQ subscales. Exploratory analysis indicated that specific inner speech characteristics were also related to anxiety and future-oriented thinking. PMID:25964773
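
The comparison above comes down to pairwise Pearson correlations between generalized questionnaire scores and aggregated experience-sampling scores for each VISQ subscale. As a minimal sketch of that computation (the scores below are invented for illustration, not data from the study):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: generalized VISQ subscale vs. mean of random alerts
questionnaire = [3.0, 4.5, 2.0, 5.0, 3.5]
sampled       = [2.5, 4.0, 1.5, 4.5, 3.0]
r = pearson_r(questionnaire, sampled)
```

A correlation near 1.0 would indicate that generalized self-report tracks momentary experience closely; the study's finding is that only two of the four subscales showed significant correlations of this kind.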

  2. Describing Speech Usage in Daily Activities in Typical Adults.

    PubMed

    Anderson, Laine; Baylor, Carolyn R; Eadie, Tanya L; Yorkston, Kathryn M

    2016-01-01

    "Speech usage" refers to what people want or need to do with their speech to meet communication demands in life roles. The purpose of this study was to contribute to validation of the Levels of Speech Usage scale by providing descriptive data from a sample of adults without communication disorders, comparing this scale to a published Occupational Voice Demands scale and examining predictors of speech usage levels. The study used a survey design. Adults aged ≥25 years without reported communication disorders were recruited nationally to complete an online questionnaire. The questionnaire included the Levels of Speech Usage scale, questions about relevant occupational and nonoccupational activities (e.g., socializing, hobbies, childcare, and so forth), and demographic information. Participants were also categorized according to the Koufman and Isaacson occupational voice demands scale. A total of 276 participants completed the questionnaires. People who worked for pay tended to report higher levels of speech usage than those who did not work for pay. Regression analyses showed employment to be the major contributor to speech usage; however, considerable variance left unaccounted for suggests that determinants of speech usage and the relationship between speech usage, employment, and other life activities are not yet fully defined. The Levels of Speech Usage scale may be a viable instrument to systematically rate speech usage because it captures both occupational and nonoccupational speech demands. These data from a sample of typical adults may provide a reference to help in interpreting the impact of communication disorders on speech usage patterns. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  3. Speech and language development in 2-year-old children with cerebral palsy.

    PubMed

    Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2014-06-01

    We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.

  4. Speech Characteristics and Intelligibility in Adults with Mild and Moderate Intellectual Disabilities

    PubMed Central

    Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.

    2017-01-01

    Purpose Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637

  5. Speech and Language Development in 2 Year Old Children with Cerebral Palsy

    PubMed Central

    Hustad, Katherine C.; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2013-01-01

    Objective We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Methods Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Results Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. Conclusion 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment to identify and treat those with delays at or before 2 years of age. PMID:23627373

  6. The Influence of Native Language on Auditory-Perceptual Evaluation of Vocal Samples Completed by Brazilian and Canadian SLPs.

    PubMed

    Chaves, Cristiane Ribeiro; Campbell, Melanie; Côrtes Gama, Ana Cristina

    2017-03-01

    This study aimed to determine the influence of native language on the auditory-perceptual assessment of voice, as completed by Brazilian and Anglo-Canadian listeners using Brazilian vocal samples and the grade, roughness, breathiness, asthenia, strain (GRBAS) scale. This is an analytical, observational, comparative, and transversal study conducted at the Speech Language Pathology Department of the Federal University of Minas Gerais in Brazil, and at the Communication Sciences and Disorders Department of the University of Alberta in Canada. The GRBAS scale, connected speech, and a sustained vowel were used in this study. The vocal samples were drawn randomly from a database of recorded speech of Brazilian adults, some with healthy voices and some with voice disorders. The database is housed at the Federal University of Minas Gerais. Forty-six samples of connected speech (recitation of days of the week), produced by 35 women and 11 men, and 46 samples of the sustained vowel /a/, produced by 37 women and 9 men, were used in this study. The listeners were divided into two groups of three speech therapists, according to nationality: Brazilian or Anglo-Canadian. The groups were matched according to the years of professional experience of participants. Weighted kappa, with 95% confidence intervals, was used to calculate intra- and inter-rater agreement. An analysis of the intra-rater agreement showed that Brazilians and Canadians had similar results in auditory-perceptual evaluation of sustained vowel and connected speech. The results of the inter-rater agreement for connected speech and sustained vowel indicated that Brazilians and Canadians had, respectively, moderate agreement on the overall severity (0.57 and 0.50), breathiness (0.45 and 0.45), and asthenia (0.50 and 0.46); poor agreement on roughness (0.19 and 0.007); and, for strain, weak agreement on connected speech (0.22) but moderate agreement on the sustained vowel (0.50). In general, auditory-perceptual evaluation was not influenced by the listeners' native language for most perceptual parameters of the GRBAS scale. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
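
The agreement values reported above are weighted kappa coefficients. A minimal pure-Python sketch of linearly weighted Cohen's kappa, assuming an ordinal 0-3 GRBAS-style scale (the ratings below are hypothetical, and the study does not specify its weighting scheme beyond "weighted kappa"):

```python
def weighted_kappa(rater1, rater2, n_cats):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale 0..n_cats-1."""
    n = len(rater1)
    # Observed agreement matrix as proportions, plus each rater's marginals
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(n_cats)) for i in range(n_cats)]
    p2 = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]
    # Linear disagreement weights; expected proportions assume independent raters
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = abs(i - j) / (n_cats - 1)
            num += w * obs[i][j]
            den += w * p1[i] * p2[j]
    return 1.0 - num / den

# Hypothetical GRBAS "grade" ratings (0-3) from two listeners
k = weighted_kappa([0, 1, 2, 3, 2, 1], [0, 1, 2, 3, 1, 1], 4)
```

Perfect agreement yields 1.0 and systematic maximal disagreement approaches -1.0; values around 0.4-0.6, as in the study, are conventionally read as moderate agreement.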

  7. Intelligibility assessment in developmental phonological disorders: accuracy of caregiver gloss.

    PubMed

    Kwiatkowski, J; Shriberg, L D

    1992-10-01

    Fifteen caregivers each glossed a simultaneously videotaped and audiotaped sample of their child with speech delay engaged in conversation with a clinician. One of the authors generated a reference gloss for each sample, aided by (a) prior knowledge of the child's speech-language status and error patterns, (b) glosses from the child's clinician and the child's caregiver, (c) unlimited replays of the taped sample, and (d) the information gained from completing a narrow phonetic transcription of the sample. Caregivers glossed an average of 78% of the utterances and 81% of the words. A comparison of their glosses to the reference glosses suggested that they accurately understood an average of 58% of the utterances and 73% of the words. Discussion considers the implications of such findings for methodological and theoretical issues underlying children's moment-to-moment intelligibility breakdowns during speech-language processing.

  8. Articulatory-acoustic vowel space: application to clear speech in individuals with Parkinson's disease.

    PubMed

    Whitfield, Jason A; Goberman, Alexander M

    2014-01-01

    Individuals with Parkinson disease (PD) often exhibit decreased range of movement secondary to the disease process, which has been shown to affect articulatory movements. A number of investigations have failed to find statistically significant differences between control and disordered groups, and between speaking conditions, using traditional vowel space area measures. The purpose of the current investigation was to evaluate both between-group (PD versus control) and within-group (habitual versus clear) differences in articulatory function using a novel vowel space measure, the articulatory-acoustic vowel space (AAVS). The novel AAVS is calculated from continuously sampled formant trajectories of connected speech. In the current study, habitual and clear speech samples from twelve individuals with PD along with habitual control speech samples from ten neurologically healthy adults were collected and acoustically analyzed. In addition, a group of listeners completed perceptual rating of speech clarity for all samples. Individuals with PD were perceived to exhibit decreased speech clarity compared to controls. Similarly, the novel AAVS measure was significantly lower in individuals with PD. In addition, the AAVS measure significantly tracked changes between the habitual and clear conditions that were confirmed by perceptual ratings. In the current study, the novel AAVS measure is shown to be sensitive to disease-related group differences and within-person changes in articulatory function of individuals with PD. Additionally, these data confirm that individuals with PD can modulate the speech motor system to increase articulatory range of motion and speech clarity when given a simple prompt. The reader will be able to (i) describe articulatory behavior observed in the speech of individuals with Parkinson disease; (ii) describe traditional measures of vowel space area and how they relate to articulation; (iii) describe a novel measure of vowel space, the articulatory-acoustic vowel space and its relationship to articulation and the perception of speech clarity. Copyright © 2014 Elsevier Inc. All rights reserved.
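
Whitfield and Goberman's AAVS is computed from continuously sampled F1/F2 trajectories rather than from corner vowels. As a rough illustration of the general idea only, not their exact formula, the spread of a formant trajectory can be summarized as the area of a dispersion ellipse over the F1-F2 plane (the formant values below are invented):

```python
import math

def dispersion_ellipse_area(f1, f2, scale=5.991):
    """Area of the ~95% dispersion ellipse of paired F1/F2 samples:
    pi * scale * sqrt(det(covariance)), with scale = chi-square(2) cutoff 5.991."""
    n = len(f1)
    m1, m2 = sum(f1) / n, sum(f2) / n
    v1 = sum((a - m1) ** 2 for a in f1) / (n - 1)
    v2 = sum((b - m2) ** 2 for b in f2) / (n - 1)
    c12 = sum((a - m1) * (b - m2) for a, b in zip(f1, f2)) / (n - 1)
    det = v1 * v2 - c12 ** 2  # generalized variance of the trajectory
    return math.pi * scale * math.sqrt(det)

# Hypothetical formant samples (Hz) from a stretch of connected speech
f1 = [300, 450, 600, 700, 500, 350]
f2 = [2200, 1900, 1500, 1200, 1700, 2100]
area = dispersion_ellipse_area(f1, f2)
```

On this kind of measure, reduced articulatory excursion (as reported for the PD group) shrinks the formant spread and therefore the computed area, while clear speech enlarges it.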

  9. Validation of the Focus on the Outcomes of Communication under Six outcome measure

    PubMed Central

    Thomas-Stonell, Nancy; Oddson, Bruce; Robertson, Bernadette; Rosenbaum, Peter

    2013-01-01

    Aim The aim of this study was to establish the construct validity of the Focus on the Outcomes of Communication Under Six (FOCUS©), a tool designed to measure changes in communication skills in preschool children. Method Participating families' children (n=97; 68 males, 29 females; mean age 2y 8mo; SD 1.04y, range 10mo–4y 11mo) were recruited through eight Canadian organizations. The children were on a waiting list for speech and language intervention. Parents completed the Ages and Stages Questionnaire – Social/Emotional (ASQ-SE) and the FOCUS three times: at assessment and at the start and end of treatment. A second sample (n=28; 16 males, 12 females) was recruited from another organization to correlate the FOCUS scores with speech, intelligibility and language measures. Second sample participants ranged in age from 3 years 1 month to 4 years 9 months (mean 3y 11mo; SD 0.41y). At the start and end of treatment, children were videotaped to obtain speech and language samples. Parents and speech–language pathologists (SLPs) independently completed the FOCUS tool. SLPs who were blind to the pre/post order of the videotapes analysed the samples. Results The FOCUS measured significantly more change (p<0.01) during treatment than during the waiting list period. It demonstrated both convergent and discriminant validity against the ASQ-SE. The FOCUS change corresponded to change measured by a combination of clinical speech and language measures (κ=0.31, p<0.05). Conclusion The FOCUS shows strong construct validity as a change-detecting instrument. PMID:23461266

  10. Speech disorders in neurofibromatosis type 1: a sample survey.

    PubMed

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10,000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. This study serves as a pilot to identify key issues in the speech of NF1 patients. In particular, the aim is to explore further the occurrence and nature of problems associated with speech as perceived by the patients themselves. A questionnaire was sent to 149 patients with NF1 registered at the Department of Genetics, Ghent University Hospital. The questionnaire inquired about articulation, hearing, breathing, voice, resonance and fluency. Sixty individuals ranging in age from 4.5 to 61.3 years returned completed questionnaires and these served as the database for the study. The results of this sample survey were compared with data of the normal population. About two-thirds of participants experienced at least one speech or speech-related problem of any type. Compared with the normal population, the NF1 group indicated more articulation difficulties, hearing impairment, abnormalities in loudness, and stuttering. The results indicate that speech difficulties are an area of interest in the NF1 population. Further research to elucidate these findings is needed.

  11. Speech outcome in unilateral complete cleft lip and palate patients: a descriptive study.

    PubMed

    Rullo, R; Di Maggio, D; Addabbo, F; Rullo, F; Festa, V M; Perillo, L

    2014-09-01

    In this study, resonance and articulation disorders were examined in a group of patients surgically treated for cleft lip and palate, considering family social background and children's ability to self-monitor their speech output while speaking. Fifty children (32 males and 18 females), mean age 6.5 ± 1.6 years, affected by non-syndromic complete unilateral cleft of the lip and palate, underwent the same surgical protocol. The speech level was evaluated using the Accordi's speech assessment protocol that focuses on intelligibility, nasality, nasal air escape, pharyngeal friction, and glottal stop. Pearson product-moment correlation analysis was used to detect significant associations between analysed parameters. A total of 16% (8 children) of the sample had a moderate-to-severe degree of nasality and nasal air escape, with presence of pharyngeal friction and glottal stop, which obviously compromise speech intelligibility. Ten children (20%) showed a barely acceptable phonological outcome: nasality and nasal air escape were mild to moderate, but the intelligibility remained poor. Thirty-two children (64%) had normal speech. Statistical analysis revealed a significant correlation between the severity of nasal resonance and nasal air escape (p ≤ 0.05). No statistically significant correlation was found between final intelligibility and the patients' social background, nor between final intelligibility and the age of the patients. The differences in speech outcome could be explained by a specific, subjective, and inborn ability, different for each child, in self-monitoring their speech output.

  12. Exposure reduces negative bias in self-rated performance in public speaking fearful participants.

    PubMed

    Cheng, Joyce; Niles, Andrea N; Craske, Michelle G

    2017-03-01

    Individuals with public speaking anxiety (PSA) under-rate their performance compared to objective observers. The present study examined whether exposure reduces the discrepancy between self and observer performance ratings and improves observer-rated performance in individuals with PSA. PSA participants gave a speech in front of a small audience and rated their performance using a questionnaire before and after completing repeated exposures to public speaking. Non-anxious control participants gave a speech and completed the questionnaire one time only. Objective observers watched videos of the speeches and rated performance using the same questionnaire. PSA participants underrated their performance to a greater degree than did controls prior to exposure, but also performed significantly more poorly than did controls when rated objectively. Bias significantly decreased and objective-rated performance significantly increased following completion of exposure in PSA participants, and on one performance measure, anxious participants no longer showed a greater discrepancy between self and observer performance ratings compared to controls. The study employed a non-clinical student sample, so the results should be replicated in clinical anxiety samples. These findings indicate that exposure alone significantly reduces negative performance bias among PSA individuals, but additional exposure or additional interventions may be necessary to fully correct bias and performance deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity

    PubMed Central

    Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh

    2016-01-01

    Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814

  14. Modification and preliminary use of the five-minute speech sample in the postpartum: associations with postnatal depression and posttraumatic stress.

    PubMed

    Iles, Jane; Spiby, Helen; Slade, Pauline

    2014-10-01

    Little is known about what constitutes key components of partner support during the childbirth experience. This study modified the five-minute speech sample, a measure of expressed emotion (EE), for use with new parents in the immediate postpartum. A coding framework was developed to rate the speech samples on dimensions of couple support. Associations were explored between these codes and subsequent symptoms of postnatal depression and posttraumatic stress. A total of 372 couples were recruited in the early postpartum and individually provided short speech samples. Posttraumatic stress and postnatal depression symptoms were assessed via questionnaire measures at six and thirteen weeks. Two hundred and twelve couples completed all time-points. Key elements of supportive interactions were identified and reliably categorised. Mothers' posttraumatic stress was associated with criticisms of the partner during childbirth, general relationship criticisms and men's perception of helplessness. Postnatal depression was associated with absence of partner empathy and of any positive comments regarding the partner's support. The content of new parents' descriptions of labour and childbirth, of their partner during labour and birth, and of their relationship within the immediate postpartum may have significant implications for later psychological functioning. Interventions to enhance specific supportive elements between couples during the antenatal period merit development and evaluation.

  15. Speech intelligibility enhancement after maxillary denture treatment and its impact on quality of life.

    PubMed

    Knipfer, Christian; Riemann, Max; Bocklet, Tobias; Noeth, Elmar; Schuster, Maria; Sokol, Biljana; Eitner, Stephan; Nkenke, Emeka; Stelzle, Florian

    2014-01-01

    Tooth loss and its prosthetic rehabilitation significantly affect speech intelligibility. However, little is known about the influence of speech deficiencies on oral health-related quality of life (OHRQoL). The aim of this study was to investigate whether speech intelligibility enhancement through prosthetic rehabilitation significantly influences OHRQoL in patients wearing complete maxillary dentures. Speech intelligibility by means of an automatic speech recognition system (ASR) was prospectively evaluated and compared with subjectively assessed Oral Health Impact Profile (OHIP) scores. Speech was recorded in 28 edentulous patients 1 week prior to the fabrication of new complete maxillary dentures and 6 months thereafter. Speech intelligibility was computed based on the word accuracy (WA) by means of an ASR and compared with a matched control group. One week before and 6 months after rehabilitation, patients assessed themselves for OHRQoL. Speech intelligibility improved significantly after 6 months. Subjects reported a significantly higher OHRQoL after maxillary rehabilitation with complete dentures. No significant correlation was found between the OHIP sum score or its subscales to the WA. Speech intelligibility enhancement achieved through the fabrication of new complete maxillary dentures might not be in the forefront of the patients' perception of their quality of life. For the improvement of OHRQoL in patients wearing complete maxillary dentures, food intake and mastication as well as freedom from pain play a more prominent role.
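
Word accuracy (WA), the intelligibility measure above, is the standard ASR metric derived from a word-level Levenshtein alignment of recognizer output against a reference transcript: WA = 1 − (S + D + I)/N. A self-contained sketch (the sentences are invented, and the study's ASR system itself is not reproduced here):

```python
def word_accuracy(reference, hypothesis):
    """WA = 1 - edit_distance/ref_length, where the edit distance counts
    word-level substitutions, deletions, and insertions."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming Levenshtein distance over words (rolling row)
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution or match
        prev = cur
    return 1.0 - prev[-1] / len(ref)

wa = word_accuracy("the patient reads the test sentence",
                   "the patient reads a test sentence")  # one substitution
```

A rising WA across recording sessions, as in the denture study above, means the recognizer decodes more of the reference words correctly, which is the sense in which speech intelligibility "improved significantly after 6 months."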

  16. Speech-recognition interfaces for music information retrieval

    NASA Astrophysics Data System (ADS)

    Goto, Masataka

    2005-09-01

    This paper describes two hands-free music information retrieval (MIR) systems that enable a user to retrieve and play back a musical piece by saying its title or the artist's name. Although various interfaces for MIR have been proposed, speech-recognition interfaces suitable for retrieving musical pieces have not been studied. Our MIR-based jukebox systems employ two different speech-recognition interfaces for MIR, speech completion and speech spotter, which exploit intentionally controlled nonverbal speech information in original ways. The first is a music retrieval system with the speech-completion interface that is suitable for music stores and car-driving situations. When a user only remembers part of the name of a musical piece or an artist and utters only a remembered fragment, the system helps the user recall and enter the name by completing the fragment. The second is a background-music playback system with the speech-spotter interface that can enrich human-human conversation. When a user is talking to another person, the system allows the user to enter voice commands for music playback control by spotting a special voice-command utterance in face-to-face or telephone conversations. Experimental results from use of these systems have demonstrated the effectiveness of the speech-completion and speech-spotter interfaces. (Video clips: http://staff.aist.go.jp/m.goto/MIR/speech-if.html)

  17. THE COMPREHENSION OF RAPID SPEECH BY THE BLIND, PART III.

    ERIC Educational Resources Information Center

    FOULKE, EMERSON

    A REVIEW OF THE RESEARCH ON THE COMPREHENSION OF RAPID SPEECH BY THE BLIND IDENTIFIES FIVE METHODS OF SPEECH COMPRESSION--SPEECH CHANGING, ELECTROMECHANICAL SAMPLING, COMPUTER SAMPLING, SPEECH SYNTHESIS, AND FREQUENCY DIVIDING WITH THE HARMONIC COMPRESSOR. THE SPEECH CHANGING AND ELECTROMECHANICAL SAMPLING METHODS AND THE NECESSARY APPARATUS HAVE…

  18. The Cleft Care UK study. Part 4: perceptual speech outcomes

    PubMed Central

    Sell, D; Mildinhall, S; Albery, L; Wills, A K; Sandy, J R; Ness, A R

    2015-01-01

    Structured Abstract Objectives To describe the perceptual speech outcomes from the Cleft Care UK (CCUK) study and compare them to the 1998 Clinical Standards Advisory Group (CSAG) audit. Setting and sample population A cross-sectional study of 248 children born with complete unilateral cleft lip and palate, between 1 April 2005 and 31 March 2007 who underwent speech assessment. Materials and methods Centre-based specialist speech and language therapists (SLT) took speech audio–video recordings according to nationally agreed guidelines. Two independent listeners undertook the perceptual analysis using the CAPS-A Audit tool. Intra- and inter-rater reliability were tested. Results For each speech parameter of intelligibility/distinctiveness, hypernasality, palatal/palatalization, backed to velar/uvular, glottal, weak and nasalized consonants, and nasal realizations, there was strong evidence that speech outcomes were better in the CCUK children compared to CSAG children. The parameters which did not show improvement were nasal emission, nasal turbulence, hyponasality and lateral/lateralization. Conclusion These results suggest that centralization of cleft care into high volume centres has resulted in improvements in UK speech outcomes in five-year-olds with unilateral cleft lip and palate. This may be associated with the development of a specialized workforce. Nevertheless, there still remains a group of children with significant difficulties at school entry. PMID:26567854

  19. Data-Driven Subclassification of Speech Sound Disorders in Preschool Children

    PubMed Central

    Vick, Jennell C.; Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Truemper, Klaus; Rusiewicz, Heather Leavy; Moore, Christopher A.

    2015-01-01

    Purpose The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control. Method Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP. Results Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%–84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%–16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a). Conclusion Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder—not otherwise specified. The authors identified candidate measures to identify children in each of these groups. PMID:25076005

  20. Validation of the Intelligibility in Context Scale for Jamaican Creole-Speaking Preschoolers.

    PubMed

    Washington, Karla N; McDonald, Megan M; McLeod, Sharynne; Crowe, Kathryn; Devonish, Hubert

    2017-08-15

To describe validation of the Intelligibility in Context Scale (ICS; McLeod, Harrison, & McCormack, 2012a) and ICS-Jamaican Creole (ICS-JC; McLeod, Harrison, & McCormack, 2012b) in a sample of typically developing 3- to 6-year-old Jamaicans. One hundred forty-five preschooler-parent dyads participated in the study. Parents completed the 7-item ICS (n = 145) and ICS-JC (n = 98) to rate children's speech intelligibility (5-point scale) across communication partners (parents, immediate family, extended family, friends, acquaintances, strangers). Preschoolers completed the Diagnostic Evaluation of Articulation and Phonology (DEAP; Dodd, Hua, Crosbie, Holm, & Ozanne, 2006) in English and Jamaican Creole to establish speech-sound competency. For this sample, we examined validity and reliability (interrater, test-retest, internal consistency) evidence using measures of speech-sound production: (a) percentage of consonants correct, (b) percentage of vowels correct, and (c) percentage of phonemes correct. ICS and ICS-JC ratings showed preschoolers were always (5) to usually (4) understood across communication partners (ICS, M = 4.43; ICS-JC, M = 4.50). Both tools demonstrated excellent internal consistency (α = .91), high interrater, and test-retest reliability. Significant correlations between the two tools and between each measure and language-specific percentage of consonants correct, percentage of vowels correct, and percentage of phonemes correct provided criterion-validity evidence. A positive correlation between the ICS and age further strengthened validity evidence for that measure. Both tools show promising evidence of reliability and validity in describing functional speech intelligibility for this group of typically developing Jamaican preschoolers.
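The internal consistency the abstract reports (α = .91) for a multi-item scale such as the 7-item ICS is conventionally computed as Cronbach's alpha. A minimal sketch, with invented ratings rather than the study's data:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of ratings
    (rows = respondents, columns = items)."""
    k = len(item_scores[0])                       # number of items (7 for the ICS)
    items = list(zip(*item_scores))               # transpose to per-item columns
    item_vars = [statistics.pvariance(col) for col in items]
    total_scores = [sum(row) for row in item_scores]
    total_var = statistics.pvariance(total_scores)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# hypothetical 5-point ratings from four parents on the 7 ICS items
ratings = [
    [5, 5, 4, 4, 4, 3, 3],
    [5, 4, 4, 4, 3, 3, 2],
    [4, 4, 4, 3, 3, 2, 2],
    [5, 5, 5, 4, 4, 4, 3],
]
alpha = cronbach_alpha(ratings)
```

High alpha here reflects that respondents who rate one item high tend to rate the others high, which is exactly the between-item consistency the formula measures.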

  1. Method and apparatus for obtaining complete speech signals for speech recognition applications

    NASA Technical Reports Server (NTRS)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
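The circular-buffer scheme described in the first embodiment can be sketched as follows; the class and method names are illustrative, not taken from the patent:

```python
from collections import deque

class CircularAudioBuffer:
    """Keep only the most recent `capacity` frames of a continuous audio stream."""

    def __init__(self, capacity):
        self.frames = deque(maxlen=capacity)   # oldest frames drop off automatically

    def push(self, frame):
        self.frames.append(frame)

    def augmented_signal(self, n_before):
        """Return the last n_before frames recorded *before* a user command,
        so recognition can include speech that began before the button press."""
        return list(self.frames)[-n_before:]

buf = CircularAudioBuffer(capacity=100)
for i in range(250):            # simulate a continuous stream of frame indices
    buf.push(i)
# user issues a "recognize" command: grab the 30 frames preceding it
pre_roll = buf.augmented_signal(30)
```

The augmented signal would then be passed to endpoint detection (e.g., an HMM-based endpointer, as the patent describes) rather than used directly.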

  2. Extensions to the Speech Disorders Classification System (SDCS)

    PubMed Central

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for pediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three subtypes of motor speech disorders. Part II describes the Madison Speech Assessment Protocol (MSAP), an approximately two-hour battery of 25 measures that includes 15 speech tests and tasks. Part III describes the Competence, Precision, and Stability Analytics (CPSA) framework, a current set of approximately 90 perceptual- and acoustic-based indices of speech, prosody, and voice used to quantify and classify subtypes of Speech Sound Disorders (SSD). A companion paper, Shriberg, Fourakis, et al. (2010) provides reliability estimates for the perceptual and acoustic data reduction methods used in the SDCS. The agreement estimates in the companion paper support the reliability of SDCS methods and illustrate the complementary roles of perceptual and acoustic methods in diagnostic analyses of SSD of unknown origin. Examples of research using the extensions to the SDCS described in the present report include diagnostic findings for a sample of youth with motor speech disorders associated with galactosemia (Shriberg, Potter, & Strand, 2010) and a test of the hypothesis of apraxia of speech in a group of children with autism spectrum disorders (Shriberg, Paul, Black, & van Santen, 2010). All SDCS methods and reference databases running in the PEPPER (Programs to Examine Phonetic and Phonologic Evaluation Records; [Shriberg, Allen, McSweeny, & Wilson, 2001]) environment will be disseminated without cost when complete. PMID:20831378

  3. Influence of speech sample on perceptual rating of hypernasality.

    PubMed

    Medeiros, Maria Natália Leite de; Fukushiro, Ana Paula; Yamashita, Renata Paciello

    2016-07-07

To investigate the influence of speech sample type (spontaneous conversation vs. sentence repetition) on intra- and inter-rater reliability of hypernasality ratings. One hundred twenty audio-recorded speech samples (60 of spontaneous conversation and 60 of repeated sentences) from individuals with repaired cleft palate ± lip, of both genders, aged between 6 and 52 years (mean = 21 ± 10), were selected and edited. Three experienced speech-language pathologists rated hypernasality according to their own criteria on a 4-point scale: 1 = absence of hypernasality, 2 = mild hypernasality, 3 = moderate hypernasality, and 4 = severe hypernasality, first in the spontaneous conversation samples and, 30 days later, in the sentence repetition samples. Intra- and inter-rater agreements were calculated for both sample types and compared statistically by the Z test at a significance level of 5%. Comparison of intra-rater agreements between the two sample types showed higher coefficients for sentence repetition than for spontaneous conversation. Comparison of inter-rater agreement showed no significant difference among the three raters for the two sample types. Sentence repetition improved intra-rater reliability of the perceptual judgment of hypernasality. However, sample type had no influence on reliability among different raters.
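The abstract does not specify which Z test was used; a common choice for comparing two independent correlation-type agreement coefficients is Fisher's r-to-z transform. A hypothetical sketch (the coefficients and sample sizes below are invented, not the study's values):

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform for a correlation-type coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_test_two_coefficients(r1, n1, r2, n2):
    """Approximate Z statistic for comparing two independent agreement
    coefficients (e.g., intra-rater agreement on two sample types)."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# invented example: agreement of 0.80 on sentence repetition (n = 60 samples)
# vs. 0.60 on spontaneous conversation (n = 60 samples)
z = z_test_two_coefficients(0.80, 60, 0.60, 60)
```

A |z| above 1.96 would indicate a significant difference between the two coefficients at the 5% level used in the study.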

  4. Monitoring Progress in Vocal Development in Young Cochlear Implant Recipients: Relationships between Speech Samples and Scores from the Conditioned Assessment of Speech Production (CASP)

    PubMed Central

    Ertmer, David J.; Jung, Jongmin

    2012-01-01

Background Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6 – 24 months). Conclusions The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients’ first 2 years of device experience. PMID:22628109

  5. Quantification and Systematic Characterization of Stuttering-Like Disfluencies in Acquired Apraxia of Speech.

    PubMed

    Bailey, Dallin J; Blomgren, Michael; DeLong, Catharine; Berggren, Kiera; Wambaugh, Julie L

    2017-06-22

    The purpose of this article is to quantify and describe stuttering-like disfluencies in speakers with acquired apraxia of speech (AOS), utilizing the Lidcombe Behavioural Data Language (LBDL). Additional purposes include measuring test-retest reliability and examining the effect of speech sample type on disfluency rates. Two types of speech samples were elicited from 20 persons with AOS and aphasia: repetition of mono- and multisyllabic words from a protocol for assessing AOS (Duffy, 2013), and connected speech tasks (Nicholas & Brookshire, 1993). Sampling was repeated at 1 and 4 weeks following initial sampling. Stuttering-like disfluencies were coded using the LBDL, which is a taxonomy that focuses on motoric aspects of stuttering. Disfluency rates ranged from 0% to 13.1% for the connected speech task and from 0% to 17% for the word repetition task. There was no significant effect of speech sampling time on disfluency rate in the connected speech task, but there was a significant effect of time for the word repetition task. There was no significant effect of speech sample type. Speakers demonstrated both major types of stuttering-like disfluencies as categorized by the LBDL (fixed postures and repeated movements). Connected speech samples yielded more reliable tallies over repeated measurements. Suggestions are made for modifying the LBDL for use in AOS in order to further add to systematic descriptions of motoric disfluencies in this disorder.

  6. The persuasiveness of synthetic speech versus human speech.

    PubMed

    Stern, S E; Mullennix, J W; Dyson, C; Wilson, S J

    1999-12-01

    Is computer-synthesized speech as persuasive as the human voice when presenting an argument? After completing an attitude pretest, 193 participants were randomly assigned to listen to a persuasive appeal under three conditions: a high-quality synthesized speech system (DECtalk Express), a low-quality synthesized speech system (Monologue), and a tape recording of a human voice. Following the appeal, participants completed a posttest attitude survey and a series of questionnaires designed to assess perceptions of speech qualities, perceptions of the speaker, and perceptions of the message. The human voice was generally perceived more favorably than the computer-synthesized voice, and the speaker was perceived more favorably when the voice was a human voice than when it was computer synthesized. There was, however, no evidence that computerized speech, as compared with the human voice, affected persuasion or perceptions of the message. Actual or potential applications of this research include issues that should be considered when designing synthetic speech systems.

  7. Profiles of verbal working memory growth predict speech and language development in children with cochlear implants.

    PubMed

    Kronenberger, William G; Pisoni, David B; Harris, Michael S; Hoen, Helena M; Xu, Huiping; Miyamoto, Richard T

    2013-06-01

    Verbal short-term memory (STM) and working memory (WM) skills predict speech and language outcomes in children with cochlear implants (CIs) even after conventional demographic, device, and medical factors are taken into account. However, prior research has focused on single end point outcomes as opposed to the longitudinal process of development of verbal STM/WM and speech-language skills. In this study, the authors investigated relations between profiles of verbal STM/WM development and speech-language development over time. Profiles of verbal STM/WM development were identified through the use of group-based trajectory analysis of repeated digit span measures over at least a 2-year time period in a sample of 66 children (ages 6-16 years) with CIs. Subjects also completed repeated assessments of speech and language skills during the same time period. Clusters representing different patterns of development of verbal STM (digit span forward scores) were related to the growth rate of vocabulary and language comprehension skills over time. Clusters representing different patterns of development of verbal WM (digit span backward scores) were related to the growth rate of vocabulary and spoken word recognition skills over time. Different patterns of development of verbal STM/WM capacity predict the dynamic process of development of speech and language skills in this clinical population.

8. [Effects of acoustic adaptation of classrooms on the quality of verbal communication].

    PubMed

    Mikulski, Witold

    2013-01-01

Voice organ disorders among teachers are caused by excessive voice strain. One measure to reduce this strain is to decrease background noise during teaching; increasing the acoustic absorption of the room is a technical means of achieving this. Greater absorption also improves speech intelligibility, as rated by two parameters: room reverberation time and the speech transmission index (STI). This article presents the effects of acoustic adaptation of classrooms on the quality of verbal communication, aimed at achieving good or excellent speech intelligibility. The article lists the criteria for evaluating classrooms in terms of the quality of verbal communication. The parameters were measured according to PN-EN ISO 3382-2:2010 and PN-EN 60268-16:2011. Acoustic adaptations were completed in two classrooms. After the adaptations, the reverberation time at 1 kHz was reduced in room no. 1 from 1.45 s to 0.44 s and in room no. 2 from 1.03 s to 0.37 s (maximum 0.65 s). At the same time, the speech transmission index increased in room no. 1 from 0.55 (satisfactory speech intelligibility) to 0.75 (close to excellent) and in room no. 2 from 0.63 (good) to 0.80 (excellent). Thus, before the adaptations room no. 1 did not comply, and room no. 2 barely complied, with the criterion (speech transmission index of 0.62); after the adaptations both rooms meet the requirements.
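The link between added absorption and shorter reverberation time can be illustrated with the classical Sabine formula (RT60 ≈ 0.161·V/A), which the abstract does not cite explicitly; the room volume and absorption values below are hypothetical, chosen only to reproduce the order of magnitude of room no. 1's measured times:

```python
def sabine_rt60(volume_m3, absorption_m2_sabins):
    """Sabine estimate of reverberation time: RT60 ≈ 0.161 * V / A,
    with V in cubic metres and A in metric sabins (m^2 of equivalent absorption)."""
    return 0.161 * volume_m3 / absorption_m2_sabins

# hypothetical 180 m^3 classroom
before = sabine_rt60(180, 20)   # sparse absorption
after = sabine_rt60(180, 66)    # after adding absorptive panels and ceiling tiles
```

Roughly tripling the equivalent absorption area cuts the reverberation time by the same factor, which is why acoustic adaptation is such an effective intervention.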

  9. Negative communication in psychosis: understanding pathways to poorer patient outcomes.

    PubMed

    Finnegan, Deirdre; Onwumere, Juliana; Green, Catherine; Freeman, Daniel; Garety, Philippa; Kuipers, Elizabeth

    2014-11-01

    High expressed emotion (EE) is a robust predictor of elevated rates of relapse and readmission in schizophrenia. However, far less is known about how high EE leads to poorer patient outcomes. This study was designed to examine links between high EE (criticism), affect, and multidimensional aspects of positive symptoms in patients with psychosis. Thirty-eight individuals with nonaffective psychosis were randomly exposed to proxy high-EE or neutral speech samples and completed self-report measures of affect and psychosis symptoms. Patients reported significant increases in anxiety, anger, and distress after exposure to the proxy high-EE speech sample as well as increases in their appraisals of psychosis symptoms: voice controllability, delusional preoccupation, and conviction. These findings offer further evidence of the potential deleterious impact of a negative interpersonal environment on patient symptoms in psychosis.

  10. [Occurrence of child abuse: knowledge and possibility of action of speech-language pathologists].

    PubMed

    Noguchi, Milica Satake; de Assis, Simone Gonçalves; Malaquias, Juaci Vitória

    2006-01-01

This work presents the results of an epidemiological survey about the professional experience of Speech-Language Pathologists and Audiologists of Rio de Janeiro (Brazil) with children and adolescents who are victims of domestic violence. To understand the occurrence of abuse and neglect of children and adolescents treated by speech-language pathologists, characterizing the victims according to: most affected age group, gender, form of violence, aggressor, most frequent speech-language complaint, how the abuse was identified and follow-up. 500 self-administered mail surveys were sent to a random sample of professionals living in Rio de Janeiro. The survey forms were identified only by numbers to assure anonymity. 224 completed surveys were mailed back. 54 respondents indicated exposure to at least one incident of abuse. The majority of victims were children, the main abuser was the mother, and physical violence was the most frequent form of abuse. The main speech disorder was late language development. In most cases, the victim himself told the therapist about the abuse--through verbal expression or other means of expression such as drawings, story telling, dramatizing or playing. As the majority of the victims abandoned speech-language therapy, it was not possible to follow up the cases. Due to the importance of this issue and the limited Brazilian literature about Speech-Language and Hearing Sciences and child abuse, it is paramount to invest in the training of speech-language pathologists. It is the duty of speech-language pathologists to expose this complex problem and to give voice to children who are victims of violence, understanding that behind a speech-language complaint there might be a cry for help.

  11. An Evaluation of the Applicability of the Tripartite Constructs to Social Anxiety in Adolescents

    ERIC Educational Resources Information Center

    Anderson, Emily R.; Veed, Glen J.; Inderbitzen-Nolan, Heidi M.; Hansen, David J.

    2010-01-01

    The current study examined the tripartite model of anxiety and depression in relation to social phobia in a nonclinical sample of adolescents (ages 13-17). Adolescent/parent dyads participated in a semistructured interview and completed self-report measures of the tripartite constructs and social anxiety. Adolescents gave an impromptu speech, and…

  12. Differentiating primary progressive aphasias in a brief sample of connected speech

    PubMed Central

    Evans, Emily; O'Shea, Jessica; Powers, John; Boller, Ashley; Weinberg, Danielle; Haley, Jenna; McMillan, Corey; Irwin, David J.; Rascovsky, Katya; Grossman, Murray

    2013-01-01

    Objective: A brief speech expression protocol that can be administered and scored without special training would aid in the differential diagnosis of the 3 principal forms of primary progressive aphasia (PPA): nonfluent/agrammatic PPA, logopenic variant PPA, and semantic variant PPA. Methods: We used a picture-description task to elicit a short speech sample, and we evaluated impairments in speech-sound production, speech rate, lexical retrieval, and grammaticality. We compared the results with those obtained by a longer, previously validated protocol and further validated performance with multimodal imaging to assess the neuroanatomical basis of the deficits. Results: We found different patterns of impaired grammar in each PPA variant, and additional language production features were impaired in each: nonfluent/agrammatic PPA was characterized by speech-sound errors; logopenic variant PPA by dysfluencies (false starts and hesitations); and semantic variant PPA by poor retrieval of nouns. Strong correlations were found between this brief speech sample and a lengthier narrative speech sample. A composite measure of grammaticality and other measures of speech production were correlated with distinct regions of gray matter atrophy and reduced white matter fractional anisotropy in each PPA variant. Conclusions: These findings provide evidence that large-scale networks are required for fluent, grammatical expression; that these networks can be selectively disrupted in PPA syndromes; and that quantitative analysis of a brief speech sample can reveal the corresponding distinct speech characteristics. PMID:23794681

  13. Lee Silverman Voice Treatment versus standard speech and language therapy versus control in Parkinson's disease: a pilot randomised controlled trial (PD COMM pilot).

    PubMed

    Sackley, Catherine M; Smith, Christina H; Rick, Caroline E; Brady, Marian C; Ives, Natalie; Patel, Smitaa; Woolley, Rebecca; Dowling, Francis; Patel, Ramilla; Roberts, Helen; Jowett, Sue; Wheatley, Keith; Kelly, Debbie; Sands, Gina; Clarke, Carl E

    2018-01-01

Speech-related problems are common in Parkinson's disease (PD), but there is little evidence for the effectiveness of standard speech and language therapy (SLT) or Lee Silverman Voice Treatment (LSVT LOUD®). The PD COMM pilot was a three-arm, assessor-blinded, randomised controlled trial (RCT) of LSVT LOUD®, SLT and no intervention (1:1:1 ratio) to assess the feasibility and to inform the design of a full-scale RCT. Non-demented patients with idiopathic PD and speech problems and no SLT for speech problems in the past 2 years were eligible. LSVT LOUD® is a standardised regime (16 sessions over 4 weeks). SLT comprised individualised content per local practice (typically weekly sessions for 6-8 weeks). Outcomes included recruitment and retention, treatment adherence, and data completeness. Outcome data collected at baseline, 3, 6, and 12 months included patient-reported voice and quality of life measures, resource use, and assessor-rated speech recordings. Eighty-nine patients were randomised, with 90% in the therapy groups and 100% in the control group completing the trial. The response rate for the Voice Handicap Index (VHI) in each arm was ≥ 90% at all time-points. VHI was highly correlated with the other speech-related outcome measures. There was a trend to improvement in VHI with LSVT LOUD® (difference at 3 months compared with control: −12.5 points; 95% CI −26.2 to 1.2) and SLT (difference at 3 months compared with control: −9.8 points; 95% CI −23.2 to 3.7), which needs to be confirmed in an adequately powered trial. Randomisation to a three-arm trial of speech therapy including a no-intervention control is feasible and acceptable. Compliance with both interventions was good. VHI and other patient-reported outcomes were relevant measures and provided data to inform the sample size for a substantive trial. International Standard Randomised Controlled Trial Number Register: ISRCTN75223808, registered 22 March 2012.

  14. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.
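The frame-by-frame deconvolution the patent describes can be sketched in the frequency domain, where deconvolution becomes division of spectra. The signals below are synthetic stand-ins (a random excitation and a short FIR "vocal tract"), not the patent's EM-derived excitation function:

```python
import numpy as np

def transfer_function(output_frame, excitation_frame, eps=1e-8):
    """Estimate a transfer function for one time frame by frequency-domain
    deconvolution: H(f) = Y(f) / E(f)."""
    Y = np.fft.rfft(output_frame)
    E = np.fft.rfft(excitation_frame)
    return Y / (E + eps)          # eps guards divisions near spectral zeros

# synthetic check: pass a known excitation through a known 3-tap filter,
# using circular convolution via FFT so the deconvolution is exact
rng = np.random.default_rng(0)
excitation = rng.standard_normal(256)
h = np.array([1.0, 0.5, 0.25])                    # stand-in "vocal tract"
output = np.fft.irfft(np.fft.rfft(excitation) * np.fft.rfft(h, 256))
H = transfer_function(output, excitation)         # recovers the spectrum of h
```

In practice a real pipeline would window overlapping frames and smooth the estimates, since per-frame spectral division is sensitive to noise.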

  15. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  16. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design.

    PubMed

    Higgins, Meaghan C; Penney, Sarah B; Robertson, Erin K

    2017-10-01

The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically developing children (N = 71) completed a sentence-picture matching task. Children performed the control, simulated pSTM deficit, simulated speech perception deficit, or simulated double deficit condition. On long sentences, the double deficit group had lower scores than the control and speech perception deficit groups, and the pSTM deficit group had lower scores than the control group and marginally lower scores than the speech perception deficit group. The pSTM and speech perception groups performed similarly to groups with real deficits in these areas, who completed the control condition. Overall, scores were lowest on noncanonical long sentences. Results show pSTM has a greater effect than speech perception on sentence comprehension, at least in the tasks employed here.

  17. Speech and Communication Disorders

    MedlinePlus

    ... to being completely unable to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, ... or those caused by cleft lip or palate Speech problems like stuttering Developmental disabilities Learning disorders Autism ...

  18. Speech rehabilitation of maxillectomy patients with hollow bulb obturator.

    PubMed

    Kumar, Pravesh; Jain, Veena; Thakar, Alok

    2012-09-01

To evaluate the effect of hollow bulb obturator prosthesis on articulation and nasalance in maxillectomy patients. A total of 10 patients, who were to undergo maxillectomy, falling under Aramany classes I and II, with normal speech and hearing pattern were selected for the study. They were provided with definitive maxillary obturators after complete healing of the defect. The patients were asked to wear the obturator for six weeks, and speech analysis was done to measure changes in articulation and nasalance at four stages of treatment: preoperative; postoperative (after complete healing, that is, 3-4 months after surgery); 24 hours after obturator delivery; and six weeks after obturator delivery. Articulation was measured objectively for distortion, addition, substitution, and omission by a speech pathologist, and nasalance was measured by Dr. Speech software. Statistical comparison of preoperative levels with those six weeks after rehabilitation showed no significant difference in articulation or nasalance. Comparison of the post-surgery (complete healing) stage with six weeks after rehabilitation showed significant differences in both nasalance and articulation. Providing an obturator thus restores articulation to near presurgical levels, and nasalance also improves.

  19. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  20. Adaptation to spectrally-rotated speech.

    PubMed

    Green, Tim; Rosen, Stuart; Faulkner, Andrew; Paterson, Ruth

    2013-08-01

    Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
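Spectral rotation of low-pass filtered speech is classically done by ring modulation (Blesser's technique); an equivalent, easy-to-inspect sketch mirrors the FFT bins of a band-limited signal around the band's midpoint. All parameters below (16 kHz sampling, 4 kHz band, 500 Hz test tone) are illustrative assumptions, not the study's stimuli:

```python
import numpy as np

def spectrally_rotate(signal, fs, band_hz=4000.0):
    """Invert the spectrum of a band-limited signal around band_hz/2
    by mirroring the FFT bins within [0, band_hz]."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = freqs <= band_hz
    spectrum[band] = spectrum[band][::-1]    # mirror: f -> band_hz - f
    return np.fft.irfft(spectrum, n=len(signal))

# a 500 Hz tone sampled at 16 kHz should emerge near 4000 - 500 = 3500 Hz
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)
rotated = spectrally_rotate(tone, fs)
peak_hz = np.argmax(np.abs(np.fft.rfft(rotated))) * fs / len(rotated)
```

Because the mapping f → 4000 − f flips spectral shape while leaving temporal envelope and periodicity cues largely intact, the rotated speech preserves rhythm and intonation yet is initially unintelligible, as the abstract describes.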

  1. Audiovisual speech perception development at varying levels of perceptual processing

    PubMed Central

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318

  2. Audiovisual speech perception development at varying levels of perceptual processing.


    PubMed

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  3. Evidence for the treatment of co-occurring stuttering and speech sound disorder: A clinical case series.

    PubMed

    Unicomb, Rachael; Hewat, Sally; Spencer, Elizabeth; Harrison, Elisabeth

    2017-06-01

    There is a paucity of evidence to guide treatment for children with co-occurring stuttering and speech sound disorder. Some guidelines suggest treating the two disorders simultaneously using indirect treatment approaches; however, the research supporting these recommendations is over 20 years old. In this clinical case series, we investigate whether these co-occurring disorders could be treated concurrently using direct treatment approaches supported by up-to-date, high-level evidence, and whether this could be done in an efficacious, safe and efficient manner. Five pre-school-aged participants received individual concurrent, direct intervention for both stuttering and speech sound disorder. All participants used the Lidcombe Program, as manualised. Direct treatment for speech sound disorder was individualised based on analysis of each child's sound system. At 12 months post commencement of treatment, all except one participant had completed the Lidcombe Program and scored less than 1.0% syllables stuttered on samples gathered within and beyond the clinic. These four participants completed Stage 1 of the Lidcombe Program in between 14 and 22 clinic visits, consistent with current benchmark data for this programme. At the same assessment point, all five participants exhibited significant increases in percentage of consonants correct and were in alignment with age-expected estimates of this measure. Further, they were treated in an average number of clinic visits that compares favourably with other research on treatment for speech sound disorder. These preliminary results indicate that young children with co-occurring stuttering and speech sound disorder may be treated concurrently using direct treatment approaches. This method of service delivery may have implications for cost and time efficiency and may also address the crucial need for early intervention in both disorders. These positive findings highlight the need for further research in the area and contribute to the limited evidence base.

  4. Swahili speech development: preliminary normative data from typically developing pre-school children in Tanzania.

    PubMed

    Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa

    2015-01-01

    Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first-language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was complete, and most consonants were acquired, by age 3;0. The fricatives /z, s, h/ were acquired later, at age 4;0, and /θ/ and /r/ were the last consonants acquired, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on speech development of Swahili-speaking children. Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and ultimately the development of a contextually relevant assessment of the phonology of Swahili-speaking children. © 2014 Royal College of Speech and Language Therapists.
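The PCC and PVC measures reported above are simple proportions over a transcribed sample. A minimal sketch, assuming aligned target and produced phone lists and a toy ASCII consonant set (real scoring is done over IPA transcriptions of the full sample):

```python
def percent_correct(targets, productions):
    """Percentage of target segments realized correctly.

    `targets` and `productions` are aligned lists of phones; a segment
    counts as correct when the production matches the target exactly.
    """
    if not targets:
        return 0.0
    correct = sum(t == p for t, p in zip(targets, productions))
    return 100.0 * correct / len(targets)

# Toy consonant set for illustration only; real analyses use the
# language's full IPA consonant inventory.
CONSONANTS = set("p b t d k g m n s z h f v w j l r".split())

def pcc_pvc(targets, productions):
    """Split an aligned sample into consonants and vowels; return (PCC, PVC)."""
    cons = [(t, p) for t, p in zip(targets, productions) if t in CONSONANTS]
    vows = [(t, p) for t, p in zip(targets, productions) if t not in CONSONANTS]
    pcc = percent_correct([t for t, _ in cons], [p for _, p in cons])
    pvc = percent_correct([t for t, _ in vows], [p for _, p in vows])
    return pcc, pvc
```

For instance, a child producing [t a p a] for target /s a p a/ scores PCC 50 and PVC 100.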

  5. The Prevalence of Speech Disorders among University Students in Jordan

    ERIC Educational Resources Information Center

    Alaraifi, Jehad Ahmad; Amayreh, Mousa Mohammad; Saleh, Mohammad Yusef

    2014-01-01

    Problem: There are no available studies on the prevalence and distribution of speech disorders among Arabic-speaking undergraduate students in Jordan. Method: A convenience sample of 400 undergraduate students at the University of Jordan was screened for speech disorders. Two spontaneous speech samples and an oral reading of a passage were…

  6. Oral reading fluency analysis in patients with Alzheimer disease and asymptomatic control subjects.

    PubMed

    Martínez-Sánchez, F; Meilán, J J G; García-Sevilla, J; Carro, J; Arana, J M

    2013-01-01

    Many studies highlight that an impaired ability to communicate is one of the key clinical features of Alzheimer disease (AD). To study temporal organisation of speech in an oral reading task in patients with AD and in matched healthy controls using a semi-automatic method, and evaluate that method's ability to discriminate between the 2 groups. A test with an oral reading task was administered to 70 subjects, comprising 35 AD patients and 35 controls. Before speech samples were recorded, participants completed a battery of neuropsychological tests. There were no differences between groups with regard to age, sex, or educational level. All of the study variables showed impairment in the AD group. According to the results, AD patients' oral reading was marked by reduced speech and articulation rates, low effectiveness of phonation time, and increases in the number and proportion of pauses. Signal processing algorithms applied to reading fluency recordings were shown to be capable of differentiating between AD patients and controls with an accuracy of 80% (specificity 74.2%, sensitivity 77.1%) based on speech rate. Analysis of oral reading fluency may be useful as a tool for the objective study and quantification of speech deficits in AD. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
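The accuracy, specificity, and sensitivity figures above follow from a confusion-matrix count once a classification rule is fixed. A sketch using a hypothetical speech-rate threshold (the study's actual signal-processing classifier is not described here); AD is label 1 and is predicted when the rate falls below the threshold:

```python
def sensitivity_specificity(rates, labels, threshold):
    """Threshold classifier: predict AD (1) when speech rate < threshold.

    Returns (sensitivity, specificity, accuracy) computed from the
    resulting confusion matrix.
    """
    tp = fn = tn = fp = 0
    for rate, label in zip(rates, labels):
        predicted = 1 if rate < threshold else 0
        if label == 1 and predicted == 1:
            tp += 1            # AD correctly flagged
        elif label == 1:
            fn += 1            # AD missed
        elif predicted == 0:
            tn += 1            # control correctly passed
        else:
            fp += 1            # control falsely flagged
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(labels)
    return sensitivity, specificity, accuracy
```

With toy rates [2.0, 3.5, 4.0, 4.5] syllables/s, labels [1, 1, 0, 0], and threshold 3.0, this yields sensitivity 0.5, specificity 1.0, accuracy 0.75.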

  7. A dissociation of objective and subjective workload measures in assessing the impact of speech controls in advanced helicopters

    NASA Technical Reports Server (NTRS)

    Vidulich, Michael A.; Bortolussi, Michael R.

    1988-01-01

    Among the new technologies that are expected to aid helicopter designers are speech controls. Proponents suggest that speech controls could reduce the potential for manual control overloads and improve time-sharing performance in environments that have heavy demands for manual control. This was tested in a simulation of an advanced single-pilot, scout/attack helicopter. Objective performance indicated that the speech controls were effective in decreasing the interference of discrete responses during moments of heavy flight control activity. However, subjective ratings indicated that the use of speech controls required extra effort to speak precisely and to attend to feedback. Although the operational reliability of speech controls must be improved, the present results indicate that reliable speech controls could enhance the time-sharing efficiency of helicopter pilots. Furthermore, the results demonstrated the importance of using multiple assessment techniques to completely assess a task. Neither the objective nor the subjective measures alone provided complete information. It was the contrast between the measures that was most informative.

  8. Factors that Enhance English-Speaking Speech-Language Pathologists' Transcription of Cantonese-Speaking Children's Consonants

    ERIC Educational Resources Information Center

    Lockart, Rebekah; McLeod, Sharynne

    2013-01-01

    Purpose: To investigate speech-language pathology students' ability to identify errors and transcribe typical and atypical speech in Cantonese, a nonnative language. Method: Thirty-three English-speaking speech-language pathology students completed 3 tasks in an experimental within-subjects design. Results: Task 1 (baseline) involved transcribing…

  9. Methodological Choices in Rating Speech Samples

    ERIC Educational Resources Information Center

    O'Brien, Mary Grantham

    2016-01-01

    Much pronunciation research critically relies upon listeners' judgments of speech samples, but researchers have rarely examined the impact of methodological choices. In the current study, 30 German native listeners and 42 German L2 learners (L1 English) rated speech samples produced by English-German L2 learners along three continua: accentedness,…

  10. Speech Rehabilitation of Maxillectomy Patients with Hollow Bulb Obturator

    PubMed Central

    Kumar, Pravesh; Jain, Veena; Thakar, Alok

    2012-01-01

    Aim: To evaluate the effect of hollow bulb obturator prosthesis on articulation and nasalance in maxillectomy patients. Materials and Methods: A total of 10 patients, who were to undergo maxillectomy, falling under Aramany classes I and II, with normal speech and hearing pattern were selected for the study. They were provided with definitive maxillary obturators after complete healing of the defect. The patients were asked to wear the obturator for six weeks and speech analysis was done to measure changes in articulation and nasalance at four different stages of treatment, namely, preoperative, postoperative (after complete healing, that is, 3-4 months after surgery), after 24 hours, and after six weeks of providing the obturators. Articulation was measured objectively for distortion, addition, substitution, and omission by a speech pathologist, and nasalance was measured by Dr. Speech software. Results: The statistical comparison of preoperative and six-weeks-post-rehabilitation levels showed no significant difference in articulation or nasalance. Comparison of the post-surgery complete-healing stage with six weeks after rehabilitation showed significant differences in both nasalance and articulation. Conclusion: Providing an obturator restores speech closer to presurgical levels of articulation, with improvement in nasalance as well. PMID:23440022

  11. Speech Characteristics of Patients with Pallido-Ponto-Nigral Degeneration and Their Application to Presymptomatic Detection in At-Risk Relatives

    ERIC Educational Resources Information Center

    Liss, Julie M.; Krein-Jones, Kari; Wszolek, Zbigniew K.; Caviness, John N.

    2006-01-01

    Purpose: This report describes the speech characteristics of individuals with a neurodegenerative syndrome called pallido-ponto-nigral degeneration (PPND) and examines the speech samples of at-risk, but asymptomatic, relatives for possible preclinical detection. Method: Speech samples of 9 members of a PPND kindred were subjected to perceptual…

  12. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors shape listener impressions, for three connected speech tasks presumed to differ in cognitive-linguistic demand, across four carefully defined speaker groups: 1) MS with cognitive deficits (MSCI), 2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), 3) MS without dysarthria or cognitive deficits (MS), and 4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: Forty-eight speakers participated: 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers. The three MS groups and the control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained: subordination index, inter-sentence cohesion adequacy, and lexical diversity. 
Ten listeners judged each speech sample on the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables was explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatically appropriate pauses, and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. 
The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally, correlation analysis revealed moderate relationships between neuropsychological test scores and speech hesitation measures, within the MSCI group. Slower information processing and poorer memory were significantly correlated with more silent pauses and poorer executive function was associated with fewer filled pauses in the Unfamiliar discourse task. Results have both clinical and theoretical implications. Overall, clinicians should demonstrate caution when interpreting global measures of speech timing and perceptual measures in the absence of information about cognitive ability. Results also have implications for a comprehensive model of spoken language incorporating cognitive, linguistic, and motor variables.
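The speech-rate and articulation-rate measures discussed in this entry are conventionally distinguished by whether pause time is counted in the denominator; a minimal sketch of that distinction, in syllables per second:

```python
def speech_and_articulation_rate(n_syllables, total_dur_s, pause_durs_s):
    """Return (speech_rate, articulation_rate) in syllables per second.

    Speech rate divides by total speaking time including pauses;
    articulation rate excludes the summed pause time, so it isolates
    articulatory speed from hesitation behavior.
    """
    pause_time = sum(pause_durs_s)
    speech_rate = n_syllables / total_dur_s
    articulation_rate = n_syllables / (total_dur_s - pause_time)
    return speech_rate, articulation_rate
```

For example, 120 syllables over 60 s with 20 s of pausing gives a speech rate of 2.0 but an articulation rate of 3.0, which is why the two measures can dissociate across groups.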

  13. The Comprehension of Rapid Speech by the Blind: Part III. Final Report.

    ERIC Educational Resources Information Center

    Foulke, Emerson

    Accounts of completed and ongoing research conducted from 1964 to 1968 are presented on the subject of accelerated speech as a substitute for the written word. Included are a review of the research on intelligibility and comprehension of accelerated speech, some methods for controlling the word rate of recorded speech, and a comparison of…

  14. Changes in Second-Language Learners' Oral Skills and Socio-Affective Profiles Following Study Abroad: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Hardison, Debra M.

    2014-01-01

    Research on the effectiveness of short-term study-abroad (SA) programs for improving oral skills has shown mixed results. In this study, 24 L2 German learners (L1 English) provided pre- and post-SA speech samples addressing a hypothetical situation and completed surveys on cross-cultural interest and adaptability; L2 communication affect,…

  15. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    PubMed

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were ten 6-year-old children who stutter. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone-collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video- and telephone-acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.
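Percentage of syllables stuttered is a simple proportion, and the video-versus-telephone comparison rests on an intra-class correlation. A sketch of both, using the two-way random, single-measure ICC(2,1) form (an assumption for illustration; the abstract does not specify which ICC variant was used):

```python
import numpy as np

def percent_syllables_stuttered(total_syllables, stuttered_syllables):
    """%SS: stuttered syllables as a percentage of all syllables spoken."""
    if total_syllables == 0:
        return 0.0
    return 100.0 * stuttered_syllables / total_syllables

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, single measure, for an
    (n subjects x k methods) matrix of scores, e.g. video vs telephone."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    ssr = k * ((r.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((r.mean(axis=0) - grand) ** 2).sum()   # between methods
    sse = ((r - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Two methods that agree perfectly give an ICC of 1.0; small method-to-method discrepancies pull it below 1 while leaving it high when subjects are well separated.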

  16. Depression-related difficulties disengaging from negative faces are associated with sustained attention to negative feedback during social evaluation and predict stress recovery

    PubMed Central

    Romero, Nuria; De Raedt, Rudi

    2017-01-01

    The present study aimed to clarify: 1) the presence of depression-related attention bias related to a social stressor, 2) its association with depression-related attention biases as measured under standard conditions, and 3) their association with impaired stress recovery in depression. A sample of 39 participants reporting a broad range of depression levels completed a standard eye-tracking paradigm in which they had to engage/disengage their gaze with/from emotional faces. Participants then underwent a stress induction (i.e., giving a speech), in which their eye movements to false emotional feedback were measured, and stress reactivity and recovery were assessed. Depression level was associated with longer times to engage/disengage attention with/from negative faces under standard conditions and with sustained attention to negative feedback during the speech. These depression-related biases were associated with each other and mediated the association between depression level and self-reported stress recovery, predicting lower recovery from stress after giving the speech. PMID:28362826

  17. Changes in objective acoustic measurements and subjective voice complaints in call center customer-service advisors during one working day.

    PubMed

    Lehto, Laura; Laaksonen, Laura; Vilkman, Erkki; Alku, Paavo

    2008-03-01

    The aim of this study was to investigate how different acoustic parameters, extracted from both speech pressure waveforms and glottal flows, can be used in measuring vocal loading in modern working environments, and how these parameters reflect possible changes in vocal function during a working day. In addition, correlations between objective acoustic parameters and subjective voice symptoms were addressed. The subjects were 24 female and 8 male customer-service advisors, who mainly use the telephone during their working hours. Speech samples were recorded from continuous speech four times during a working day and voice symptom questionnaires were completed simultaneously. Among the various objective parameters, only F0 showed a statistically significant increase for both genders. No correlations between the changes in objective and subjective parameters appeared. However, the results encourage researchers within the field of occupational voice use to apply versatile measurement techniques in studying occupational voice loading.
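F0, the one parameter that rose significantly for both genders, can be estimated from a voiced frame by autocorrelation peak-picking. A minimal sketch (an illustrative method; the study's actual extraction pipeline, which also used glottal flows, is not specified here):

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=500.0):
    """Estimate F0 (Hz) of a voiced frame via the autocorrelation peak.

    Searches for the strongest autocorrelation lag between fs/fmax and
    fs/fmin samples, i.e. within a plausible pitch-period range.
    """
    frame = frame - frame.mean()                 # remove DC offset
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(fs / fmax)                          # shortest allowed period
    hi = int(fs / fmin)                          # longest allowed period
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```

On a clean 200 Hz sinusoid sampled at 16 kHz this recovers 200 Hz; real speech would be framed, voiced/unvoiced gated, and tracked over the working day.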

  18. Speech-language pathologists' practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders.

    PubMed

    McLeod, Sharynne; Baker, Elise

    2014-01-01

    A survey of 231 Australian speech-language pathologists (SLPs) was undertaken to describe practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders (SSD). The participants typically worked in private practice, education, or community health settings and 67.6% had a waiting list for services. For each child, most of the SLPs spent 10-40 min in pre-assessment activities, 30-60 min undertaking face-to-face assessments, and 30-60 min completing paperwork after assessments. During an assessment SLPs typically conducted a parent interview and single-word speech sampling, collected a connected speech sample, and used informal tests. They also determined children's stimulability and estimated intelligibility. With multilingual children, informal assessment procedures and English-only tests were commonly used and SLPs relied on family members or interpreters to assist. Common analysis techniques included determination of phonological processes, substitutions-omissions-distortions-additions (SODA), and phonetic inventory. Participants placed high priority on selecting target sounds that were stimulable, early developing, and in error across all word positions, and 60.3% felt very confident or confident selecting an appropriate intervention approach. Eight intervention approaches were frequently used: auditory discrimination, minimal pairs, cued articulation, phonological awareness, traditional articulation therapy, auditory bombardment, Nuffield Centre Dyspraxia Programme, and core vocabulary. Children typically received individual therapy with an SLP in a clinic setting. Parents often observed and participated in sessions and SLPs typically included siblings and grandparents in intervention sessions. Parent training and home programs were more frequently used than group therapy. Two-thirds kept up to date by reading journal articles monthly or every 6 months. There were many similarities with previously reported practices for children with SSD in the US, UK, and the Netherlands, with some (but not all) practices aligning with current research evidence.

  19. Speech motor correlates of treatment-related changes in stuttering severity and speech naturalness.

    PubMed

    Tasko, Stephen M; McClean, Michael D; Runyan, Charles M

    2007-01-01

    Participants of stuttering treatment programs provide an opportunity to evaluate persons who stutter as they demonstrate varying levels of fluency. Identifying physiologic correlates of altered fluency levels may lead to insights about mechanisms of speech disfluency. This study examined respiratory, orofacial kinematic and acoustic measures in 35 persons who stutter prior to and as they were completing a 1-month intensive stuttering treatment program. Participants showed a marked reduction in stuttering severity as they completed the treatment program. Coincident with reduced stuttering severity, participants increased the amplitude and duration of speech breaths, reduced the rate of lung volume change during inspiration, reduced the amplitude and speed of lip movements early in the test utterance, increased lip and jaw movement durations, and reduced syllable rate. A multiple regression model that included two respiratory measures and one orofacial kinematic measure accounted for 62% of the variance in changes in stuttering severity. Finally, there was a weak but significant tendency for speech of participants with the largest reductions in stuttering severity to be rated as more unnatural as they completed the treatment program.

  20. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; and speech recording, sampling, and storage, and processing results.
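The relationship between the number of quantizer levels and reconstruction fidelity can be illustrated with a generic uniform mid-rise quantizer (a hypothetical sketch; the NASA-derived design details are not given in this summary):

```python
import numpy as np

def uniform_quantize(signal, n_levels, lo=-1.0, hi=1.0):
    """Uniform mid-rise quantizer over [lo, hi].

    Each sample is mapped to the midpoint of its cell, so the worst-case
    reconstruction error is half the step size (hi - lo) / n_levels;
    doubling n_levels (one extra bit) halves the error.
    """
    step = (hi - lo) / n_levels
    idx = np.clip(np.floor((signal - lo) / step), 0, n_levels - 1)
    return lo + (idx + 0.5) * step
```

With 8 levels over [-1, 1] the step is 0.25, so every reconstructed sample is within 0.125 of the original.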

  1. Inner Speech and Clarity of Self-Concept in Thought Disorder and Auditory-Verbal Hallucinations

    PubMed Central

    de Sousa, Paulo; Sellwood, William; Spray, Amy; Fernyhough, Charles; Bentall, Richard P.

    2016-01-01

    Eighty patients and thirty controls were interviewed using one interview that promoted personal disclosure and another about everyday topics. Speech was scored using the Thought, Language and Communication scale (TLC). All participants completed the Self-Concept Clarity Scale (SCCS) and the Varieties of Inner Speech Questionnaire (VISQ). Patients scored lower than comparisons on the SCCS. Low scores were associated with the disorganized dimension of TD. Patients also scored significantly higher on condensed and other people in inner speech, but not on dialogical or evaluative inner speech. The poverty of speech dimension of TD was associated with less dialogical inner speech, other people in inner speech, and less evaluative inner speech. Hallucinations were significantly associated with more other people in inner speech and evaluative inner speech. Clarity of self-concept and qualities of inner speech are differentially associated with dimensions of TD. The findings also support inner speech models of hallucinations. PMID:27898489

  2. Inner Speech and Clarity of Self-Concept in Thought Disorder and Auditory-Verbal Hallucinations.

    PubMed

    de Sousa, Paulo; Sellwood, William; Spray, Amy; Fernyhough, Charles; Bentall, Richard P

    2016-12-01

    Eighty patients and thirty controls were interviewed using one interview that promoted personal disclosure and another about everyday topics. Speech was scored using the Thought, Language and Communication scale (TLC). All participants completed the Self-Concept Clarity Scale (SCCS) and the Varieties of Inner Speech Questionnaire (VISQ). Patients scored lower than comparisons on the SCCS. Low scores were associated with the disorganized dimension of TD. Patients also scored significantly higher on condensed and other people in inner speech, but not on dialogical or evaluative inner speech. The poverty of speech dimension of TD was associated with less dialogical inner speech, other people in inner speech, and less evaluative inner speech. Hallucinations were significantly associated with more other people in inner speech and evaluative inner speech. Clarity of self-concept and qualities of inner speech are differentially associated with dimensions of TD. The findings also support inner speech models of hallucinations.

  3. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

    Diagnosis of Parkinson's disease (PD) based on speech data has been shown to be effective in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. Previous research by the authors showed that instance selection can improve classification accuracy. However, the relationship between speech samples and features has received no attention until now. Therefore, this paper proposes a new PD diagnosis algorithm that simultaneously selects speech samples and features, based on a relevant feature weighting algorithm and a multiple kernel method, so as to exploit their synergy and thereby improve classification accuracy. Experimental results showed that the proposed algorithm achieved a clear improvement in classification accuracy: it obtained a mean classification accuracy of 82.5%, which was 30.5% higher than that of the relevant comparison algorithm. In addition, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech marker extraction.

  4. Asynchronous sampling of speech with some vocoder experimental results

    NASA Technical Reports Server (NTRS)

    Babcock, M. L.

    1972-01-01

    The method of asynchronously sampling speech is based upon the derivatives of the acoustical speech signal. The following results are apparent from experiments to date: (1) It is possible to represent speech by a string of pulses of uniform amplitude, where the only information contained in the string is the spacing of the pulses in time; (2) the string of pulses may be produced in a simple analog manner; (3) the first derivative of the original speech waveform is the most important for the encoding process; (4) the resulting pulse train can be utilized to control an acoustical signal production system to regenerate the intelligence of the original speech.
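Point (1) above, speech as a string of uniform-amplitude pulses whose only information is their spacing, can be sketched digitally by emitting a pulse at each zero crossing of the first derivative, i.e. at each local extremum of the waveform (a sketch of the idea, not the report's analog implementation):

```python
import numpy as np

def extrema_pulse_times(signal, fs):
    """Encode a waveform as pulse times at local extrema.

    A pulse is emitted wherever the first difference (a discrete proxy
    for the first derivative) changes sign; only the pulse spacing in
    time is retained, matching the asynchronous-sampling scheme.
    """
    d = np.diff(signal)
    sign_change = np.where(np.diff(np.sign(d)) != 0)[0] + 1
    return sign_change / fs
```

A pure 100 Hz tone yields pulses at every half period, i.e. uniform 5 ms spacing; for speech the spacing pattern varies with the waveform shape and carries the encoded intelligence.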

  5. Investigation of habitual pitch during free play activities for preschool-aged children.

    PubMed

    Chen, Yang; Kimelman, Mikael D Z; Micco, Katie

    2009-01-01

    This study compares the habitual pitch measured in two different speech activities (free play and a traditionally used structured speech activity) in normally developing preschool-aged children, to explore to what extent preschoolers vary their vocal pitch across speech environments. Habitual pitch measurements were conducted for 10 normally developing children (2 boys, 8 girls) between the ages of 31 and 71 months during two different activities: (1) free play and (2) structured speech. Speech samples were recorded in both activities using a throat microphone connected to a wireless transmitter. Habitual pitch (in Hz) was measured for all collected speech samples using voice analysis software (Real-Time Pitch). Significantly higher habitual pitch was found during free play than during structured speech activities. In addition, no significant difference in habitual pitch was found across the various structured speech activities. The findings suggest that preschoolers' vocal usage is more effortful during free play than during structured activities. It is recommended that a comprehensive evaluation of a young child's voice be based on speech/voice samples collected from both free play and structured activities.
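    Pitch extraction of this kind can be sketched with a basic autocorrelation estimator over a voiced frame; the search range and frame handling below are assumptions for illustration (the study itself used the Real-Time Pitch software):

    ```python
    import numpy as np

    def habitual_pitch(samples, fs, fmin=100.0, fmax=500.0):
        """Estimate fundamental frequency (Hz) of a voiced frame by
        autocorrelation, searching lags corresponding to 100-500 Hz,
        a plausible range for preschool voices."""
        x = samples - samples.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
        lo, hi = int(fs / fmax), int(fs / fmin)            # lag search window
        lag = lo + np.argmax(ac[lo:hi + 1])
        return fs / lag
    ```

    In practice habitual pitch would be summarized (e.g. as a median F0) over many such frames per recording.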

  6. Revealing the dual streams of speech processing.

    PubMed

    Fridriksson, Julius; Yourganov, Grigori; Bonilha, Leonardo; Basilakos, Alexandra; Den Ouden, Dirk-Bart; Rorden, Christopher

    2016-12-27

    Several dual route models of human speech processing have been proposed suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on what areas subserve each route or the nature of interactions across these routes that enables human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach using principal components analysis of lesion-symptom mapping to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams as well as the observation that there is a bias for phonological production tasks supported by the dorsal stream and lexical-semantic comprehension tasks supported by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.

  7. 76 FR 19776 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-08

    ... Program to Provide Outpatient Physical Therapy and/or Speech Pathology Services, and (CMS-1893) Outpatient Physical Therapy--Speech Pathology Survey Report; Use: CMS-1856 is used as an application to be completed by providers of outpatient physical therapy and/or speech- [[Page 19777

  8. Cognitive-Perceptual Examination of Remediation Approaches to Hypokinetic Dysarthria

    ERIC Educational Resources Information Center

    McAuliffe, Megan J.; Kerr, Sarah E.; Gibson, Elizabeth M. R.; Anderson, Tim; LaShell, Patrick J.

    2014-01-01

    Purpose: To determine how increased vocal loudness and reduced speech rate affect listeners' cognitive-perceptual processing of hypokinetic dysarthric speech associated with Parkinson's disease. Method: Fifty-one healthy listener participants completed a speech perception experiment. Listeners repeated phrases produced by 5 individuals…

  9. A Survey of Practices and Strategies for Marketing Communication Majors.

    ERIC Educational Resources Information Center

    Gray, Philip A.; Wilson, Gerald L.

    Fifty college speech departments responded to a survey intended to discover some of the common practices and strategies for marketing undergraduate speech communication majors. The results indicated that the most frequent name for the departments responding was "Communication" rather than "Speech Communication," completely the opposite of what was…

  10. [Speech fluency developmental profile in Brazilian Portuguese speakers].

    PubMed

    Martins, Vanessa de Oliveira; Andrade, Claudia Regina Furquim de

    2008-01-01

    Speech fluency varies from one individual to the next, whether fluent or stuttering, depending on several factors. Studies investigating the influence of age on fluency patterns exist, but these differences were investigated in isolated age groups; no studies of fluency variation across the life span were found. The aim was to verify the speech fluency developmental profile. Speech samples of 594 fluent participants of both genders, aged between 2:0 and 99:11 years, all speakers of Brazilian Portuguese, were analyzed. Participants were grouped as follows: preschoolers, schoolchildren, early adolescents, late adolescents, adults, and elderly adults. Speech samples were analyzed according to the Speech Fluency Profile variables and compared regarding typology of speech disruptions (typical and less typical), speech rate (words and syllables per minute), and frequency of speech disruptions (percentage of speech discontinuity). Although isolated variations were identified, overall there was no significant difference between the age groups for the speech disruption indexes (typical and less typical speech disruptions and percentage of speech discontinuity). Significant differences were observed between the groups for speech rate. The development of the neurolinguistic system for speech fluency, in terms of speech disruptions, seems to stabilize during the first years of life, showing no alterations across the life span. Speech rate indexes vary across the age groups, indicating patterns of acquisition, development, stabilization, and degeneration.
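    The three Speech Fluency Profile indexes named above reduce to simple ratios; the operational definitions in this sketch, in particular counting discontinuity as disruptions per 100 words, are assumptions rather than the study's exact rules:

    ```python
    def fluency_profile(n_words, n_syllables, n_disruptions, duration_s):
        """Compute speech rate in words and syllables per minute and
        percentage of speech discontinuity (disruptions per 100 words)."""
        wpm = n_words / duration_s * 60
        spm = n_syllables / duration_s * 60
        discontinuity_pct = n_disruptions / n_words * 100
        return wpm, spm, discontinuity_pct
    ```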

  11. The McGurk effect in children with autism and Asperger syndrome.

    PubMed

    Bebko, James M; Schroeder, Jessica H; Weiss, Jonathan A

    2014-02-01

    Children with autism may have difficulties in audiovisual speech perception, which has been linked to speech perception and language development. However, little has been done to examine children with Asperger syndrome as a group on tasks assessing audiovisual speech perception, despite this group's often greater language skills. Samples of children with autism, Asperger syndrome, and Down syndrome, as well as a typically developing sample, were presented with an auditory-only condition, a speech-reading condition, and an audiovisual condition designed to elicit the McGurk effect. Children with autism demonstrated unimodal performance at the same level as the other groups, yet showed a lower rate of the McGurk effect compared with the Asperger, Down and typical samples. These results suggest that children with autism may have unique intermodal speech perception difficulties linked to their representations of speech sounds. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  12. Is talking to an automated teller machine natural and fun?

    PubMed

    Chan, F Y; Khalid, H M

    Usability and affective issues of using automatic speech recognition technology to interact with an automated teller machine (ATM) were investigated in two experiments. The first uncovered the dialogue patterns of ATM users in order to design the user interface for a simulated speech ATM system. Applying the Wizard-of-Oz methodology together with multiple mapping and word-spotting techniques, the speech-driven ATM accommodates bilingual users of Bahasa Melayu and English. The second experiment evaluated the usability of a hybrid speech ATM, comparing it with a simulated manual ATM, to investigate how natural and fun talking to a speech ATM can be for first-time users. Subjects performed withdrawal and balance enquiry tasks. An ANOVA was performed on the usability and affective data. The results showed significant differences between systems in the ability to complete the tasks as well as in transaction errors. Performance was measured by the time taken to complete the task and the number of speech recognition errors that occurred. On the basis of user emotions, the hybrid speech system can be said to have enabled pleasurable interaction. Despite the limitations of speech recognition technology, users appear ready to talk to the ATM when it becomes available for public use.

  13. Language Sampling for Preschoolers With Severe Speech Impairments

    PubMed Central

    Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-01-01

    Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110

  14. Language Sampling for Preschoolers With Severe Speech Impairments.

    PubMed

    Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-11-01

    The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.
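    The two headline measures, MLU in words and percentage of comprehensible words, can be computed from a transcript in a few lines. The "xxx" convention for unintelligible words below is a common transcription practice assumed for illustration, not necessarily the study's rule:

    ```python
    def language_sample_measures(utterances):
        """Compute MLU in words and percentage of comprehensible words
        from a list of transcribed utterances, where unintelligible
        words are transcribed as 'xxx'."""
        words = [w for u in utterances for w in u.split()]
        mlu_words = len(words) / len(utterances)
        comprehensible = sum(1 for w in words if w.lower() != "xxx")
        pct_comprehensible = comprehensible / len(words) * 100
        return mlu_words, pct_comprehensible
    ```

    For example, the three utterances "I want xxx", "go home", "xxx" give an MLU of 2.0 words and 66.7% comprehensible words.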

  15. The Impact of Strong Assimilation on the Perception of Connected Speech

    ERIC Educational Resources Information Center

    Gaskell, M. Gareth; Snoeren, Natalie D.

    2008-01-01

    Models of compensation for phonological variation in spoken word recognition differ in their ability to accommodate complete assimilatory alternations (such as run assimilating fully to rum in the context of a quick run picks you up). Two experiments addressed whether such complete changes can be observed in casual speech, and if so, whether they…

  16. An Attempt to Raise Japanese EFL Learners' Pragmatic Awareness Using Online Discourse Completion Tasks

    ERIC Educational Resources Information Center

    Tanaka, Hiroya; Oki, Nanaho

    2015-01-01

    This practical paper discusses the effect of explicit instruction to raise Japanese EFL learners' pragmatic awareness using online discourse completion tasks. The five-part tasks developed by the authors use American TV drama scenes depicting particular speech acts and include explicit instruction in these speech acts. 46 Japanese EFL college…

  17. Speech Characteristics Associated with Three Genotypes of Ataxia

    ERIC Educational Resources Information Center

    Sidtis, John J.; Ahn, Ji Sook; Gomez, Christopher; Sidtis, Diana

    2011-01-01

    Purpose: Advances in neurobiology are providing new opportunities to investigate the neurological systems underlying motor speech control. This study explores the perceptual characteristics of the speech of three genotypes of spino-cerebellar ataxia (SCA) as manifest in four different speech tasks. Methods: Speech samples from 26 speakers with SCA…

  18. Automated Speech Rate Measurement in Dysarthria

    ERIC Educational Resources Information Center

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  19. Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate

    ERIC Educational Resources Information Center

    Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha

    2012-01-01

    The present study aims at analyzing speech samples of four Bengali speaking children with repaired cleft palates with a view to differentiate between the misarticulations arising out of a deficit in linguistic skills and structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…

  20. Applications of Text Analysis Tools for Spoken Response Grading

    ERIC Educational Resources Information Center

    Crossley, Scott; McNamara, Danielle

    2013-01-01

    This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…

  1. The Long Road to Automation: Neurocognitive Development of Letter-Speech Sound Processing

    ERIC Educational Resources Information Center

    Froyen, Dries J. W.; Bonte, Milene L.; van Atteveldt, Nienke; Blomert, Leo

    2009-01-01

    In transparent alphabetic languages, the expected standard for complete acquisition of letter-speech sound associations is within one year of reading instruction. The neural mechanisms underlying the acquisition of letter-speech sound associations have, however, hardly been investigated. The present article describes an ERP study with beginner and…

  2. Working Papers in Experimental Speech-Language Pathology and Audiology.

    ERIC Educational Resources Information Center

    Jablon, Ann, Ed.; And Others

    Five papers describe clinical research completed by graduate students in speech-language pathology and audiology. The first study examines the linguistic input of three adults (a mother, teacher, and clinician) to a language impaired 8-year-old. The clinician's approach, less directive than that of the other two, facilitated spontaneous speech and…

  3. [Effect of speech estimation on social anxiety].

    PubMed

    Shirotsuki, Kentaro; Sasagawa, Satoko; Nomura, Shinobu

    2009-02-01

    This study investigates the effect of speech estimation on social anxiety to further understanding of this characteristic of Social Anxiety Disorder (SAD). In the first study, we developed the Speech Estimation Scale (SES) to assess negative estimation before giving a speech which has been reported to be the most fearful social situation in SAD. Undergraduate students (n = 306) completed a set of questionnaires, which consisted of the Short Fear of Negative Evaluation Scale (SFNE), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Scale (SPS), and the SES. Exploratory factor analysis showed an adequate one-factor structure with eight items. Further analysis indicated that the SES had good reliability and validity. In the second study, undergraduate students (n = 315) completed the SFNE, SIAS, SPS, SES, and the Self-reported Depression Scale (SDS). The results of path analysis showed that fear of negative evaluation from others (FNE) predicted social anxiety, and speech estimation mediated the relationship between FNE and social anxiety. These results suggest that speech estimation might maintain SAD symptoms, and could be used as a specific target for cognitive intervention in SAD.

  4. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    NASA Astrophysics Data System (ADS)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to noise corruption and malicious tampering. To give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm decomposes the speech signal into components by wavelet packet decomposition and extracts the LPCC, LSP, and ISP features of each component to constitute a speech feature tensor. Speech authentication is performed by generating hash values through feature-matrix quantization using the median value. Experimental results show that, compared with similar algorithms, the proposed algorithm is robust to content-preserving operations and resists common background noise. The algorithm is also computationally efficient, meeting the real-time requirements of speech communication and completing speech authentication quickly.
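    The final median-quantization step can be sketched as thresholding a feature matrix at its median to produce hash bits, with authentication decided by bit error rate; the wavelet packet decomposition and LPCC/LSP/ISP extraction stages are omitted here:

    ```python
    import numpy as np

    def perceptual_hash(feature_matrix):
        """Quantize a speech feature matrix to a binary hash by
        thresholding each value at the matrix median (the 'mid-value
        quantification' step; upstream feature extraction omitted)."""
        m = np.median(feature_matrix)
        return (feature_matrix.ravel() > m).astype(np.uint8)

    def bit_error_rate(h1, h2):
        """Authentication statistic: fraction of differing hash bits.
        Small BER -> same content; large BER -> different/tampered."""
        return np.mean(h1 != h2)
    ```

    Median thresholding makes the hash insensitive to small, content-preserving perturbations of the features while distinct content flips many bits.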

  5. Automated Speech Rate Measurement in Dysarthria.

    PubMed

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-06-01

    In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.

  6. Using on-line altered auditory feedback treating Parkinsonian speech

    NASA Astrophysics Data System (ADS)

    Wang, Emily; Verhagen, Leo; de Vries, Meinou H.

    2005-09-01

    Patients with advanced Parkinson's disease tend to have dysarthric speech that is hesitant, accelerated, and repetitive, and that is often resistant to behavioral speech therapy. In this pilot study, the speech disturbances were treated using on-line altered feedback (AF) provided by SpeechEasy (SE), an in-the-ear device registered with the FDA for use in humans to treat chronic stuttering. Eight PD patients participated in the study. All had moderate to severe speech disturbances. In addition, two patients had moderate recurring stuttering at the onset of PD after long remission since adolescence, two had bilateral STN DBS, and two had bilateral pallidal DBS. An effective combination of delayed auditory feedback and frequency-altered feedback was selected for each subject and provided via SE worn in one ear. All subjects produced speech samples (structured monologue and reading) under three conditions: baseline, with SE but without feedback, and with SE with feedback. The speech samples were randomly presented and rated for speech intelligibility using UPDRS-III item 18 and for speaking rate. The results indicated that SpeechEasy is well tolerated and that AF can improve speech intelligibility in spontaneous speech. Further investigational use of this device for treating speech disorders in PD is warranted [Work partially supported by Janus Dev. Group, Inc.].

  7. Speech and phonology in Swedish-speaking 3-year-olds with unilateral complete cleft lip and palate following different methods for primary palatal surgery.

    PubMed

    Klintö, Kristina; Svensson, Henry; Elander, Anna; Lohmander, Anette

    2014-05-01

    Objective: To describe and compare speech and phonology at age 3 years in children born with unilateral complete cleft lip and palate treated with three different methods for primary palatal surgery. Design: Prospective study. Setting: Primary care university hospitals. Participants: Twenty-eight Swedish-speaking children born with nonsyndromic unilateral complete cleft lip and palate. Interventions: Three methods for primary palatal surgery: two-stage closure with soft palate closure between 3.4 and 6.4 months and hard palate closure at mean age 12.3 months (n = 9) or 36.2 months (n = 9) or one-stage closure at mean age 13.6 months (n = 10). Main Outcome Measures: Based on independent judgments performed by two speech-language pathologists from standardized video recordings: percent correct consonants adjusted for age, percent active cleft speech characteristics, total number of phonological processes, number of different phonological processes, hypernasality, and audible nasal air leakage. The hard palate was unrepaired in nine of the children treated with two-stage closure. Results: The group treated with one-stage closure showed significantly better results than the group with an unoperated hard palate regarding percent active cleft speech characteristics and total number of phonological processes. Conclusions: Early primary palatal surgery in one or two stages did not result in any significant differences in speech production at age 3 years. However, children with an unoperated hard palate had significantly poorer speech and phonology than peers who had been treated with one-stage palatal closure at about 13 months of age.

  8. Soldier experiments and assessments using SPEAR speech control system for UGVs

    NASA Astrophysics Data System (ADS)

    Brown, Jonathan; Blanco, Chris; Czerniak, Jeffrey; Hoffman, Brian; Hoffman, Orin; Juneja, Amit; Ngia, Lester; Pruthi, Tarun; Liu, Dongqing

    2010-04-01

    This paper reports on a Soldier Experiment performed by the Army Research Lab's Human Research Engineering Directorate (HRED) Field Element located at the Maneuver Center of Excellence, Ft. Benning, and a Limited Use Assessment conducted by the Marine Corps Forces Pacific Command Experimentation Center (MEC) at Camp Pendleton evaluating the effectiveness of using speech commands to control an Unmanned Ground Vehicle. SPEAR, developed by Think-A-Move, Ltd., provides speech control of UGVs. SPEAR detects user speech in the ear canal with an earpiece containing an in-ear microphone. The system design provides up to 30 dB of passive noise reduction, enabling it to work well in high-noise environments, where traditional speech systems, using external microphones, fail; it also utilizes a proprietary speech recognition engine. SPEAR has been integrated with iRobot's PackBot 510 with FasTac Kit, and with Multi-Robot Operator Control Unit (MOCU), developed by SPAWAR Systems Center Pacific. These integrated systems allow speech to supplement the hand-controller for multi-modal control of different UGV functions simultaneously. HRED's experiment measured the impact of SPEAR on reducing the cognitive load placed on UGV Operators and the time to complete specific tasks. Army NCOs and Officer School Candidates participated in this experiment, which found that speech control was faster than manual control to complete tasks requiring menu navigation, as well as reducing the cognitive load on UGV Operators. The MEC assessment examined speech commands used for two different missions: Route Clearance and Cordon and Search; participants included Explosive Ordnance Disposal Technicians and Combat Engineers. The majority of the Marines thought it was easier to complete the mission scenarios with SPEAR than with only using manual controls, and that using SPEAR improved their situational awareness. Overall results of these Assessments are reported in the paper, along with possible applications to autonomous mine detection systems.

  9. Is Presurgery and Early Postsurgery Performance Related to Speech and Language Outcomes at 3 Years of Age for Children with Cleft Palate?

    ERIC Educational Resources Information Center

    Chapman, Kathy L.

    2004-01-01

    This study examined the relationship between presurgery speech measures and speech and language performance at 39 months as well as the relationship between early postsurgery speech measures and speech and language performance at 39 months of age. Fifteen children with cleft lip and palate participated in the study. Spontaneous speech samples were…

  10. Measurement of trained speech patterns in stuttering: interjudge and intrajudge agreement of experts by means of modified time-interval analysis.

    PubMed

    Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus

    2010-09-01

    Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.
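    Interval-by-interval agreement of the kind reported (above 80%) is a straightforward proportion over matched judgments; this sketch assumes each judge assigns one of the three labels (fluent, stuttered, trained) per interval:

    ```python
    def interval_agreement(judge_a, judge_b):
        """Percentage agreement between two judges who each labeled the
        same sequence of speech intervals with one category per interval."""
        matches = sum(a == b for a, b in zip(judge_a, judge_b))
        return matches / len(judge_a) * 100
    ```

    Chance-corrected statistics such as Cohen's kappa are often reported alongside raw agreement, since three-way labels can agree by chance.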

  11. Speech and Language Consequences of Unilateral Hearing Loss: A Systematic Review.

    PubMed

    Anne, Samantha; Lieu, Judith E C; Cohen, Michael S

    2017-10-01

    Objective Unilateral hearing loss has been shown to have negative consequences for speech and language development in children. The objective of this study was to systematically review the current literature to quantify the impact of unilateral hearing loss on children, with the use of objective measures of speech and language. Data Sources PubMed, EMBASE, Medline, CINAHL, and Cochrane Library were searched from inception to March 2015. Manual searches of references were also completed. Review Methods All studies that described speech and language outcomes for children with unilateral hearing loss were included. Outcome measures included results from any test of speech and language that evaluated or had age-standardized norms. Due to heterogeneity of the data, quantitative analysis could not be completed. Qualitative analysis was performed on the included studies. Two independent evaluators reviewed each abstract and article. Results A total of 429 studies were identified; 13 met inclusion criteria and were reviewed. Overall, 7 studies showed poorer scores on various speech and language tests, with effects more pronounced for children with severe to profound hearing loss. Four studies did not demonstrate any difference in testing results between patients with unilateral hearing loss and those with normal hearing. Two studies that evaluated effects on speech and language longitudinally showed initial speech problems, with improvement in scores over time. Conclusions There are inconsistent data regarding effects of unilateral hearing loss on speech and language outcomes for children. The majority of recent studies suggest poorer speech and language testing results, especially for patients with severe to profound unilateral hearing loss.

  12. Transitioning from analog to digital audio recording in childhood speech sound disorders.

    PubMed

    Shriberg, Lawrence D; McSweeny, Jane L; Anderson, Bruce E; Campbell, Thomas F; Chial, Michael R; Green, Jordan R; Hauner, Katherina K; Moore, Christopher A; Rusiewicz, Heather L; Wilson, David L

    2005-06-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice.
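    Effect sizes for group comparisons of this kind are commonly computed as Cohen's d with a pooled standard deviation; the abstract does not name the exact variant used, so this is a generic sketch rather than the study's procedure:

    ```python
    import numpy as np

    def cohens_d(analog_scores, digital_scores):
        """Cohen's d with pooled standard deviation for comparing score
        distributions from the analog vs. digital systems."""
        a = np.asarray(analog_scores, dtype=float)
        d = np.asarray(digital_scores, dtype=float)
        na, nd = len(a), len(d)
        pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) +
                             (nd - 1) * d.var(ddof=1)) / (na + nd - 2))
        return (a.mean() - d.mean()) / pooled_sd
    ```

    By the usual rule of thumb, |d| around 0.2 is small and around 0.5 medium, matching the "negligible to medium" range the abstract reports.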

  13. Transitioning from analog to digital audio recording in childhood speech sound disorders

    PubMed Central

    Shriberg, Lawrence D.; McSweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2014-01-01

    Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants’ speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice. PMID:16019779

  14. Modifying interpretation biases: Effects on symptomatology, behavior, and physiological reactivity in social anxiety.

    PubMed

    Nowakowski, Matilda E; Antony, Martin M; Koerner, Naomi

    2015-12-01

    The present study investigated the effects of computerized interpretation training and cognitive restructuring on symptomatology, behavior, and physiological reactivity in an analogue social anxiety sample. Seventy-two participants with elevated social anxiety scores were randomized to one session of computerized interpretation training (n = 24), cognitive restructuring (n = 24), or an active placebo control condition (n = 24). Participants completed self-report questionnaires focused on interpretation biases and social anxiety symptomatology at pre- and posttraining, and a speech task at posttraining during which subjective, behavioral, and physiological measures of anxiety were assessed. Only participants in the interpretation training condition endorsed significantly more positive than negative interpretations of ambiguous social situations at posttraining. There was no evidence of generalizability of interpretation training effects to self-report measures of interpretation biases and symptomatology or the anxiety response during the posttraining speech task. Participants in the cognitive restructuring condition were rated as having higher quality speeches and showing fewer signs of anxiety during the posttraining speech task compared to participants in the interpretation training condition. The present study did not include baseline measures of speech performance or computer-assessed interpretation biases. The results of the present study bring into question the generalizability of computerized interpretation training as well as the effectiveness of a single session of cognitive restructuring in modifying the full anxiety response. Clinical and theoretical implications are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A speech-controlled environmental control system for people with severe dysarthria.

    PubMed

    Hawley, Mark S; Enderby, Pam; Green, Phil; Cunningham, Stuart; Brownsell, Simon; Carmichael, James; Parker, Mark; Hatzis, Athanassios; O'Neill, Peter; Palmer, Rebecca

    2007-06-01

    Automatic speech recognition (ASR) can provide a rapid means of controlling electronic assistive technology. Off-the-shelf ASR systems function poorly for users with severe dysarthria because of the increased variability of their articulations. We have developed a limited vocabulary speaker dependent speech recognition application which has greater tolerance to variability of speech, coupled with a computerised training package which assists dysarthric speakers to improve the consistency of their vocalisations and provides more data for recogniser training. These applications, and their implementation as the interface for a speech-controlled environmental control system (ECS), are described. The results of field trials to evaluate the training program and the speech-controlled ECS are presented. The user-training phase increased the recognition rate from 88.5% to 95.4% (p<0.001). Recognition rates were good for people with even the most severe dysarthria in everyday usage in the home (mean word recognition rate 86.9%). Speech-controlled ECS were less accurate (mean task completion accuracy 78.6% versus 94.8%) but were faster to use than switch-scanning systems, even taking into account the need to repeat unsuccessful operations (mean task completion time 7.7s versus 16.9s, p<0.001). It is concluded that a speech-controlled ECS is a viable alternative to switch-scanning systems for some people with severe dysarthria and would lead, in many cases, to more efficient control of the home.

  16. Patient Fatigue during Aphasia Treatment: A Survey of Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Riley, Ellyn A.

    2017-01-01

    The purpose of this study was to measure speech-language pathologists' (SLPs) perceptions of fatigue in clients with aphasia and identify strategies used to manage client fatigue during speech and language therapy. SLPs completed a short online survey containing a series of questions related to their perceptions of patient fatigue. Of 312…

  17. What Iconic Gesture Fragments Reveal about Gesture-Speech Integration: When Synchrony Is Lost, Memory Can Help

    ERIC Educational Resources Information Center

    Obermeier, Christian; Holle, Henning; Gunter, Thomas C.

    2011-01-01

    The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive…

  18. The Dyslexic Student and the Public Speaking Notecard.

    ERIC Educational Resources Information Center

    Hayward, Pamela A.

    To facilitate the extemporaneous speaking style, the preferred method of speech delivery in public speaking classes, students are advised to take a notecard with key words and phrases on it with them as they deliver the speech. In other words, the speech is to be well rehearsed but not given completely from memory or from a detailed manuscript.…

  19. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    ERIC Educational Resources Information Center

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  20. Speech-Language Pathologists' Assessment Practices for Children with Suspected Speech Sound Disorders: Results of a National Survey

    ERIC Educational Resources Information Center

    Skahan, Sarah M.; Watson, Maggie; Lof, Gregory L.

    2007-01-01

    Purpose: This study examined assessment procedures used by speech-language pathologists (SLPs) when assessing children suspected of having speech sound disorders (SSD). This national survey also determined the information participants obtained from clients' speech samples, evaluation of non-native English speakers, and time spent on assessment.…

  1. Attitudes toward Speech Disorders: Sampling the Views of Cantonese-Speaking Americans.

    ERIC Educational Resources Information Center

    Bebout, Linda; Arthur, Bradford

    1997-01-01

    A study of 60 Chinese Americans and 46 controls found the Chinese Americans were more likely to believe persons with speech disorders could improve speech by "trying hard," to view people using deaf speech and people with cleft palates as perhaps being emotionally disturbed, and to regard deaf speech as a limitation. (Author/CR)

  2. Using DEDICOM for completely unsupervised part-of-speech tagging.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chew, Peter A.; Bader, Brett William; Rozovskaya, Alla

    A standard and widespread approach to part-of-speech tagging is based on Hidden Markov Models (HMMs). An alternative approach, pioneered by Schuetze (1993), induces parts of speech from scratch using singular value decomposition (SVD). We introduce DEDICOM as an alternative to SVD for part-of-speech induction. DEDICOM retains the advantages of SVD in that it is completely unsupervised: no prior knowledge is required to induce either the tagset or the associations of terms with tags. However, unlike SVD, it is also fully compatible with the HMM framework, in that it can be used to estimate emission- and transition-probability matrices which can then be used as the input for an HMM. We apply the DEDICOM method to the CONLL corpus (CONLL 2000) and compare the output of DEDICOM to the part-of-speech tags given in the corpus, and find that the correlation (almost 0.5) is quite high. Using DEDICOM, we also estimate part-of-speech ambiguity for each term, and find that these estimates correlate highly with part-of-speech ambiguity as measured in the original corpus (around 0.88). Finally, we show how the output of DEDICOM can be evaluated and compared against the more familiar output of supervised HMM-based tagging.
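    The SVD baseline that DEDICOM is compared against can be sketched in a few lines: build a word-by-context co-occurrence matrix, reduce it with a truncated SVD, and cluster the reduced rows into induced tags. This is an illustrative Schuetze-style reduction under our own simplifications (the `pos_induce` name, the toy corpus, and the plain k-means step are assumptions, not the paper's DEDICOM method):

```python
import numpy as np

def pos_induce(tokens, k_dims=2, n_tags=2, iters=20, seed=0):
    """Unsupervised POS induction: SVD of a word/context
    co-occurrence matrix, then k-means over the reduced rows."""
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    # Count left- and right-neighbour context words for each word type.
    C = np.zeros((n, 2 * n))
    for i, tok in enumerate(tokens):
        w = idx[tok]
        if i > 0:
            C[w, idx[tokens[i - 1]]] += 1          # left context
        if i < len(tokens) - 1:
            C[w, n + idx[tokens[i + 1]]] += 1      # right context
    # Truncated SVD gives each word a low-dimensional representation.
    U, S, _ = np.linalg.svd(C, full_matrices=False)
    X = U[:, :k_dims] * S[:k_dims]
    # Plain k-means over the reduced rows induces the tagset.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(n, size=n_tags, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for t in range(n_tags):
            if (labels == t).any():
                centers[t] = X[labels == t].mean(axis=0)
    return dict(zip(vocab, labels.tolist()))

# Toy corpus: the two nouns share contexts, so they share an induced tag.
tags = pos_induce("the dog saw the cat the cat saw the dog".split())
```

    On real corpora the context matrix is built over frequent words only and the induced clusters feed an HMM, as the abstract describes for DEDICOM.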

  3. Changes in Speech Production Associated with Alphabet Supplementation

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Lee, Jimin

    2008-01-01

    Purpose: This study examined the effect of alphabet supplementation (AS) on temporal and spectral features of speech production in individuals with cerebral palsy and dysarthria. Method: Twelve speakers with dysarthria contributed speech samples using habitual speech and while using AS. One hundred twenty listeners orthographically transcribed…

  4. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.
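    Noise-vocoding, as used in the study above, discards spectral detail by replacing each frequency band of the speech with noise shaped by that band's amplitude envelope; fewer bands means more degraded speech. A minimal numpy-only sketch (FFT-domain band splitting and a rectify-and-smooth envelope are our simplifications; real vocoders typically use filter banks and Hilbert envelopes):

```python
import numpy as np

def noise_vocode(signal, fs, n_bands=6, fmin=100.0, fmax=4000.0, seed=0):
    """Channel-vocoder sketch: split the input into log-spaced bands,
    extract each band's amplitude envelope, and use the envelopes to
    modulate matching bands of white noise."""
    x = np.asarray(signal, dtype=float)
    rng = np.random.default_rng(seed)
    edges = np.geomspace(fmin, fmax, n_bands + 1)    # log-spaced band edges
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.fft.rfft(x)
    noise_spec = np.fft.rfft(rng.standard_normal(len(x)))
    smooth = int(fs * 0.01)                          # ~10 ms envelope smoother
    kernel = np.ones(smooth) / smooth
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spec, 0), n=len(x))
        env = np.convolve(np.abs(band), kernel, mode="same")  # crude envelope
        nband = np.fft.irfft(np.where(mask, noise_spec, 0), n=len(x))
        out += env * nband
    return out
```

    With `n_bands=2` almost only the gross temporal envelope survives, which is why the study's 2-band condition leaves listeners leaning on visible speech and gesture.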

  5. Hiding Information under Speech

    DTIC Science & Technology

    2005-12-12

    therefore, an image has more room to hide data; and (2) speech steganography has not led to many money-making commercial businesses. For these two... 3.3 Difference between Data Hiding with Images and Data Hiding with Speech... steganography, whose goal is to make the embedded data completely undetectable. In addition, we must dismiss the idea of hiding data by using any

  6. Minimal Pair Distinctions and Intelligibility in Preschool Children with and without Speech Sound Disorders

    ERIC Educational Resources Information Center

    Hodge, Megan M.; Gotzke, Carrie L.

    2011-01-01

    Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…

  7. Speaking legibly: Qualitative perceptions of altered voice among oral tongue cancer survivors

    PubMed Central

    Philiponis, Genevieve; Kagan, Sarah H.

    2015-01-01

    Objective: Treatment for oral tongue cancer poses unique challenges to restoring and maintaining personally acceptable, intelligible speech. Methods: We report how oral tongue cancer survivors describe their speech after treatment in a qualitative descriptive approach using constant comparative technique to complete a focal analysis of interview data from a larger grounded theory study of oral tongue cancer survivorship. Interviews were completed with 16 tongue cancer survivors 3 months to 12 years postdiagnosis with stage I-IV disease and treated with surgery alone, surgery and radiotherapy, or chemo-radiation. All interview data from the main study were analyzed for themes describing perceptions of speech as oral tongue cancer survivors. Results: Actual speech impairments varied among survivors. None experienced severe impairments that inhibited their daily lives. However, all expressed some level of concern about speech. Concerns about altered speech began when survivors heard their treatment plans and continued through to survivorship without being fully resolved. The overarching theme, maintaining a pattern and character of speech acceptable to the survivor, was termed “speaking legibly” using one survivor's vivid in vivo statement. Speaking legibly integrates the sub-themes of “fears of sounding unusual”, “learning to talk again”, “problems and adjustments”, and “social impact”. Conclusions: Clinical and scientific efforts to further understand and address concerns about speech, personal presentation, and identity among those diagnosed with oral tongue cancer are important to improving care processes and patient-centered experience. PMID:27981121

  8. The Effects of Phonological Short-Term Memory and Speech Perception on Spoken Sentence Comprehension in Children: Simulating Deficits in an Experimental Design

    ERIC Educational Resources Information Center

    Higgins, Meaghan C.; Penney, Sarah B.; Robertson, Erin K.

    2017-01-01

    The roles of phonological short-term memory (pSTM) and speech perception in spoken sentence comprehension were examined in an experimental design. Deficits in pSTM and speech perception were simulated through task demands while typically-developing children (N = 71) completed a sentence-picture matching task. Children performed the control,…

  9. Variability and Diagnostic Accuracy of Speech Intelligibility Scores in Children

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Oakes, Ashley; Allison, Kristen

    2015-01-01

    Purpose: We examined variability of speech intelligibility scores and how well intelligibility scores predicted group membership among 5-year-old children with speech motor impairment (SMI) secondary to cerebral palsy and an age-matched group of typically developing (TD) children. Method: Speech samples varying in length from 1-4 words were…

  10. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with CALL software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  11. Speech recognition systems on the Cell Broadband Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Jones, H; Vaidya, S

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine{trademark} (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  12. Audiologic correlates of hearing handicap in the elderly.

    PubMed

    Weinstein, B E; Ventry, I M

    1983-03-01

    This investigation was conducted to determine the relationship between self-assessed hearing handicap and audiometric measures in a large sample of noninstitutionalized elderly individuals. Eighty subjects underwent a complete audiological evaluation and responded to the Hearing Measurement Scale (HMS). Each of the correlations between measures of sensitivity and the HMS score was statistically significant. The speech discrimination scores showed a somewhat lower correlation with the HMS score than did pure-tone measures. The implications of the above findings are discussed.

  13. Predictions interact with missing sensory evidence in semantic processing areas.

    PubMed

    Scharinger, Mathias; Bendixen, Alexandra; Herrmann, Björn; Henry, Molly J; Mildner, Toralf; Obleser, Jonas

    2016-02-01

    Human brain function draws on predictive mechanisms that exploit higher-level context during lower-level perception. These mechanisms are particularly relevant for situations in which sensory information is compromised or incomplete, as for example in natural speech where speech segments may be omitted due to sluggish articulation. Here, we investigate which brain areas support the processing of incomplete words that were predictable from semantic context, compared with incomplete words that were unpredictable. During functional magnetic resonance imaging (fMRI), participants heard sentences that orthogonally varied in predictability (semantically predictable vs. unpredictable) and completeness (complete vs. incomplete, i.e. missing their final consonant cluster). The effects of predictability and completeness interacted in heteromodal semantic processing areas, including left angular gyrus and left precuneus, where activity did not differ between complete and incomplete words when they were predictable. The same regions showed stronger activity for incomplete than for complete words when they were unpredictable. The interaction pattern suggests that for highly predictable words, the speech signal does not need to be complete for neural processing in semantic processing areas. Hum Brain Mapp 37:704-716, 2016. © 2015 Wiley Periodicals, Inc.

  14. Inhibitory Control as a Moderator of Threat-related Interference Biases in Social Anxiety

    PubMed Central

    Gorlin, Eugenia I.; Teachman, Bethany A.

    2014-01-01

    Prior findings are mixed regarding the presence and direction of threat-related interference biases in social anxiety. The current study examined general inhibitory control (IC), measured by the classic color-word Stroop, as a moderator of the relationship between both threat interference biases (indexed by the emotional Stroop) and several social anxiety indicators. High socially anxious undergraduate students (N=159) completed the emotional and color-word Stroop tasks, followed by an anxiety-inducing speech task. Participants completed measures of trait social anxiety, state anxiety before and during the speech, negative task-interfering cognitions during the speech, and overall self-evaluation of speech performance. Speech duration was used to measure behavioral avoidance. In line with hypotheses, IC moderated the relationship between emotional Stroop bias and every anxiety indicator (with the exception of behavioral avoidance), such that greater social-threat interference was associated with higher anxiety among those with weak IC, whereas lesser social-threat interference was associated with higher anxiety among those with strong IC. Implications for the theory and treatment of threat interference biases in socially anxious individuals are discussed. PMID:24967719

  15. Beliefs and behavioural intentions towards pharmacotherapy for stuttering: A survey of adults who stutter.

    PubMed

    McGroarty, Allan; McCartan, Rebecca

    Although considerable efforts have been made to investigate the effectiveness of pharmacological treatments for stuttering, little is known about how the stuttering community perceives these treatments. This study aimed to assess and quantify beliefs regarding pharmacotherapy for adults who stutter and to establish whether behavioural intentions to undertake treatment were related to these beliefs. An adapted version of the Beliefs about Medicine Questionnaire was completed by adults who stutter. Participants also reported perceptions of their stuttering including its overall impact, ratings of previous speech therapy, and behavioural intentions to initiate pharmacotherapy and speech therapy in future. Necessity and concern beliefs were distributed widely across the sample and in a pattern indicating a relatively balanced perception of the benefits and costs of medication prescribed specifically for stuttering. Of the study's measures, the necessity-concerns differential most strongly predicted the behavioural intention to initiate pharmacotherapy. The overall impact of stuttering predicted intentions to seek both pharmacotherapy and speech therapy. Participants reported the likelihood of pursuing pharmacotherapy and speech therapy in equal measure. The theoretical model of medication representations appears to be a useful framework for understanding the beliefs of adults who stutter towards the medical treatment of their disorder. The findings of this study may be of interest to clinicians and researchers working in the field of stuttering treatment and to people who stutter considering pharmacotherapy. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Shot through with voices: Dissociation mediates the relationship between varieties of inner speech and auditory hallucination proneness

    PubMed Central

    Alderson-Day, Ben; McCarthy-Jones, Simon; Bedford, Sarah; Collins, Hannah; Dunne, Holly; Rooke, Chloe; Fernyhough, Charles

    2014-01-01

    Inner speech is a commonly experienced but poorly understood phenomenon. The Varieties of Inner Speech Questionnaire (VISQ; McCarthy-Jones & Fernyhough, 2011) assesses four characteristics of inner speech: dialogicality, evaluative/motivational content, condensation, and the presence of other people. Prior findings have linked anxiety and proneness to auditory hallucinations (AH) to these types of inner speech. This study extends that work by examining how inner speech relates to self-esteem and dissociation, and their combined impact upon AH-proneness. 156 students completed the VISQ and measures of self-esteem, dissociation and AH-proneness. Correlational analyses indicated that evaluative inner speech and other people in inner speech were associated with lower self-esteem and greater frequency of dissociative experiences. Dissociation and VISQ scores, but not self-esteem, predicted AH-proneness. Structural equation modelling supported a mediating role for dissociation between specific components of inner speech (evaluative and other people) and AH-proneness. Implications for the development of “hearing voices” are discussed. PMID:24980910

  17. Korean speech-language pathologists' attitudes toward stuttering according to clinical experiences.

    PubMed

    Lee, Kyungjae

    2014-11-01

    Negative attitudes toward stuttering and people who stutter (PWS) are found in various groups of people in many regions. However the results of previous studies examining the influence of fluency coursework and clinical certification on the attitudes of speech-language pathologists (SLPs) toward PWS are equivocal. Furthermore, there have been few empirical studies on the attitudes of Korean SLPs toward stuttering. To determine whether the attitudes of Korean SLPs and speech-language pathology students toward stuttering would be different according to the status of clinical certification, stuttering coursework completion and clinical practicum in stuttering. Survey data from 37 certified Korean SLPs and 70 undergraduate students majoring in speech-language pathology were analysed. All the participants completed the modified Clinician Attitudes Toward Stuttering (CATS) Inventory. Results showed that the diagnosogenic view was still accepted by many participants. Significant differences were found in seven out of 46 CATS Inventory items according to the certification status. In addition, significant differences were found in three items and one item according to stuttering coursework completion and clinical practicum experience in stuttering, respectively. Clinical and educational experience appears to have mixed influences on SLPs' and students' attitudes toward stuttering. While SLPs and students may demonstrate more appropriate understanding and knowledge in certain areas of stuttering, they may feel difficulty in their clinical experience, possibly resulting in low self-efficacy. © 2014 Royal College of Speech and Language Therapists.

  18. Cleft audit protocol for speech (CAPS-A): a comprehensive training package for speech analysis.

    PubMed

    Sell, D; John, A; Harding-Bell, A; Sweeney, T; Hegarty, F; Freeman, J

    2009-01-01

    The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been paid to this issue. To design, execute, and evaluate a training programme for speech and language therapists on the systematic and reliable use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A), addressing issues of standardized speech samples, data acquisition, recording, playback, and listening guidelines. Thirty-six specialist speech and language therapists undertook the training programme over four days. This consisted of two days' training on the CAPS-A tool followed by a third day, making independent ratings and transcriptions on ten new cases which had been previously recorded during routine audit data collection. This task was repeated on day 4, a minimum of one month later. Ratings were made using the CAPS-A record form with the CAPS-A definition table. An analysis was made of the speech and language therapists' CAPS-A ratings at occasion 1 and occasion 2 and the intra- and inter-rater reliability calculated. Trained therapists showed consistency in individual judgements on specific sections of the tool. Intraclass correlation coefficients were calculated for each section with good agreement on eight of 13 sections. There were only fair levels of agreement on anterior oral cleft speech characteristics, non-cleft errors/immaturities and voice. This was explained, at least in part, by their low prevalence which affects the calculation of the intraclass correlation coefficient statistic. Speech and language therapists benefited from training on the CAPS-A, focusing on specific aspects of speech using definitions of parameters and scalar points, in order to apply the tool systematically and reliably. Ratings are enhanced by ensuring a high degree of attention to the nature of the data, standardizing the speech sample, data acquisition, the listening process together with the use of high-quality recording and playback equipment. In addition, a method is proposed for maintaining listening skills following training as part of an individual's continuing education.
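    The intraclass correlation coefficients used above to gauge rater agreement come from a two-way ANOVA decomposition. A sketch of ICC(2,1) in the Shrout and Fleiss formulation (two-way random effects, absolute agreement, single rater); the abstract does not state which ICC form was used, and `icc_2_1` is our own name:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_targets, k_raters) array of scores."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)                     # per-target means
    col_means = Y.mean(axis=0)                     # per-rater means
    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    ss_err = ((Y - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

    Low-prevalence categories, as the abstract notes, shrink the between-target variance (`ms_rows`) and therefore depress the coefficient even when raters rarely disagree.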

  19. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    PubMed Central

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2010-01-01

    In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS. Key Words: apraxia, dyspraxia, motor speech disorder, speech sound disorder PMID:20972615

  20. Characteristics of speaking style and implications for speech recognition.

    PubMed

    Shinozaki, Takahiro; Ostendorf, Mari; Atlas, Les

    2009-09-01

    Differences in speaking style are associated with more or less spectral variability, as well as different modulation characteristics. The greater variation in some styles (e.g., spontaneous speech and infant-directed speech) poses challenges for recognition but possibly also opportunities for learning more robust models, as evidenced by prior work and motivated by child language acquisition studies. In order to investigate this possibility, this work proposes a new method for characterizing speaking style (the modulation spectrum), examines spontaneous, read, adult-directed, and infant-directed styles in this space, and conducts pilot experiments in style detection and sampling for improved speech recognizer training. Speaking style classification is improved by using the modulation spectrum in combination with standard pitch and energy variation. Speech recognition experiments on a small vocabulary conversational speech recognition task show that sampling methods for training with a small amount of data benefit from the new features.
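    The modulation spectrum proposed above summarizes how a signal's energy envelope fluctuates over time, which differs across read, spontaneous, and infant-directed styles. A crude sketch assuming a frame-RMS envelope followed by an FFT (`modulation_spectrum` and the 20 ms frame length are our illustrative choices, not the paper's exact feature):

```python
import numpy as np

def modulation_spectrum(signal, fs, frame_ms=20.0):
    """Crude modulation spectrum: compute a frame-level RMS energy
    envelope, then the magnitude spectrum of that envelope."""
    frame = max(1, int(fs * frame_ms / 1000.0))
    n_frames = len(signal) // frame
    x = np.asarray(signal[: n_frames * frame], dtype=float)
    env = np.sqrt((x.reshape(n_frames, frame) ** 2).mean(axis=1))
    env = env - env.mean()                      # remove DC before the FFT
    mag = np.abs(np.fft.rfft(env))
    mod_freqs = np.fft.rfftfreq(n_frames, d=frame / fs)
    return mod_freqs, mag

# A carrier amplitude-modulated at 4 Hz should peak near 4 Hz,
# the syllable-rate region where speech modulation energy concentrates.
fs = 8000
t = np.arange(fs * 2) / fs
sig = (1 + 0.9 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
f, mag = modulation_spectrum(sig, fs)
```

    Speaking-style classifiers would then summarize this curve (e.g. energy per modulation-frequency band) alongside pitch and energy variation, as the abstract describes.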

  1. A characterization of verb use in Turkish agrammatic narrative speech.

    PubMed

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  2. Bilingual Language Assessment: Contemporary Versus Recommended Practice in American Schools.

    PubMed

    Arias, Graciela; Friberg, Jennifer

    2017-01-01

    The purpose of this study was to identify current practices of school-based speech-language pathologists (SLPs) in the United States for bilingual language assessment and compare them to American Speech-Language-Hearing Association (ASHA) best practice guidelines and mandates of the Individuals with Disabilities Education Act (IDEA, 2004). The study was modeled to replicate portions of Caesar and Kohler's (2007) study and expanded to include a nationally representative sample. A total of 166 respondents completed an electronic survey. Results indicated that the majority of respondents have performed bilingual language assessments. Furthermore, the most frequently used informal and standardized assessments were identified. SLPs identified supports and barriers to assessment, as well as their perceptions of graduate preparation. The findings of this study demonstrated that although SLPs have become more compliant with ASHA and IDEA guidelines, there is room for improvement in terms of adequate training in bilingual language assessment.


  3. Training Implicit Social Anxiety Associations: An Experimental Intervention

    PubMed Central

    Clerkin, Elise M.; Teachman, Bethany A.

    2010-01-01

    The current study investigates an experimental anxiety reduction intervention among a highly socially anxious sample (N=108; n=36 per Condition; 80 women). Using a conditioning paradigm, our goal was to modify implicit social anxiety associations to directly test the premise from cognitive models that biased cognitive processing may be causally related to anxious responding. Participants were trained to preferentially process non-threatening information through repeated pairings of self-relevant stimuli and faces indicating positive social feedback. As expected, participants in this positive training condition (relative to our two control conditions) displayed less negative implicit associations following training, and were more likely to complete an impromptu speech (though they did not report less anxiety during the speech). These findings offer partial support for cognitive models and indicate that implicit associations are not only correlated with social anxiety, they may be causally related to anxiety reduction as well. PMID:20102788

  4. Standardization of pitch-range settings in voice acoustic analysis.

    PubMed

    Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C

    2009-05-01

    Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
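    As an illustration of preset pitch-range parameters, the sketch below restricts an autocorrelation F0 search to a sex-specific window. The preset values (Praat-style defaults of 75-300 Hz for men and 100-500 Hz for women) and the simple estimator are assumptions for illustration, not the scripts used in the study.

    ```python
    import numpy as np

    # Illustrative preset pitch windows (Hz); the record does not publish the
    # study's exact values, so these common defaults are assumptions.
    PITCH_PRESETS = {"male": (75.0, 300.0), "female": (100.0, 500.0)}

    def estimate_f0(signal, sr, sex):
        """Autocorrelation F0 estimate restricted to a preset sex-specific window."""
        floor, ceil = PITCH_PRESETS[sex]
        ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
        lo, hi = int(sr / ceil), int(sr / floor)   # lag bounds from the pitch window
        lag = lo + int(np.argmax(ac[lo:hi + 1]))   # strongest lag inside the window
        return sr / lag

    sr = 16000
    t = np.arange(int(0.2 * sr)) / sr
    tone = np.sin(2 * np.pi * 200.0 * t)           # 200 Hz synthetic "voice"
    print(round(estimate_f0(tone, sr, "female")))  # 200
    ```

    Batch-applying a function like this to a directory of recordings is what collapses weeks of case-by-case hand analysis into hours, at the cost of trusting the preset windows.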

  5. Training implicit social anxiety associations: an experimental intervention.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A

    2010-04-01

    The current study investigates an experimental anxiety reduction intervention among a highly socially anxious sample (N=108; n=36 per Condition; 80 women). Using a conditioning paradigm, our goal was to modify implicit social anxiety associations to directly test the premise from cognitive models that biased cognitive processing may be causally related to anxious responding. Participants were trained to preferentially process non-threatening information through repeated pairings of self-relevant stimuli and faces indicating positive social feedback. As expected, participants in this positive training condition (relative to our two control conditions) displayed less negative implicit associations following training, and were more likely to complete an impromptu speech (though they did not report less anxiety during the speech). These findings offer partial support for cognitive models and indicate that implicit associations are not only correlated with social anxiety, they may be causally related to anxiety reduction as well. (c) 2010 Elsevier Ltd. All rights reserved.

  6. Cognitive Load in Voice Therapy Carry-Over Exercises.

    PubMed

    Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther

    2017-01-01

    The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.

  7. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has been seldom examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied to select optimal training speech samples iteratively, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using recently deposited public datasets and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the highest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm exhibited higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
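    A minimal sketch of the MENN instance-selection step: iteratively drop training samples that their own nearest neighbours misclassify, leaving a cleaner, more separable training set. The synthetic two-cluster "speech feature" data and the leave-one-out k-NN editing rule are illustrative assumptions, and the subsequent RF/DNNE ensemble stage is omitted for brevity.

    ```python
    import numpy as np

    def knn_predict(X_train, y_train, X, k=3):
        """Plain k-nearest-neighbour majority vote."""
        preds = []
        for x in X:
            d = np.linalg.norm(X_train - x, axis=1)
            idx = np.argsort(d)[:k]
            vals, counts = np.unique(y_train[idx], return_counts=True)
            preds.append(vals[np.argmax(counts)])
        return np.array(preds)

    def multi_edit_nn(X, y, k=3, max_iter=20):
        """Repeatedly remove samples misclassified by their neighbours
        (leave-one-out) until the kept set is stable."""
        keep = np.arange(len(X))
        for _ in range(max_iter):
            good = [i for i in keep
                    if knn_predict(X[keep[keep != i]], y[keep[keep != i]],
                                   X[i:i + 1], k)[0] == y[i]]
            if len(good) == len(keep):
                break
            keep = np.array(good)
        return keep

    rng = np.random.default_rng(0)
    # two Gaussian clusters standing in for PD / control speech features
    X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
    y = np.array([0] * 40 + [1] * 40)
    y[:3] = 1                                   # inject label noise
    keep = multi_edit_nn(X, y)
    print(len(keep) < len(X))                   # noisy samples were edited out
    ```

    An ensemble classifier would then be trained only on `X[keep], y[keep]`, which is where the reported accuracy and stability gains come from.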

  8. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snježana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver-Teacher Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of the variance in speech comprehension is explained by 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and of children with normal language development who exhibit ADHD symptoms.

  9. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2011-01-01

    In a sample of 46 children aged 4-7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants' speech, prosody, and voice were compared with data from 40 typically-developing children, 13…

  10. The Effect of Background Noise on Intelligibility of Dysphonic Speech

    ERIC Educational Resources Information Center

    Ishikawa, Keiko; Boyce, Suzanne; Kelchner, Lisa; Powell, Maria Golla; Schieve, Heidi; de Alarcon, Alessandro; Khosla, Sid

    2017-01-01

    Purpose: The aim of this study is to determine the effect of background noise on the intelligibility of dysphonic speech and to examine the relationship between intelligibility in noise and an acoustic measure of dysphonia--cepstral peak prominence (CPP). Method: A study of speech perception was conducted using speech samples from 6 adult speakers…

  11. Automatic Method of Pause Measurement for Normal and Dysarthric Speech

    ERIC Educational Resources Information Center

    Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise

    2010-01-01

    This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of [approximately] 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…

  12. Autonomic and Emotional Responses of Graduate Student Clinicians in Speech-Language Pathology to Stuttered Speech

    ERIC Educational Resources Information Center

    Guntupalli, Vijaya K.; Nanjundeswaran, Chayadevie; Dayalu, Vikram N.; Kalinowski, Joseph

    2012-01-01

    Background: Fluent speakers and people who stutter manifest alterations in autonomic and emotional responses as they view stuttered relative to fluent speech samples. These reactions are indicative of an aroused autonomic state and are hypothesized to be triggered by the abrupt breakdown in fluency exemplified in stuttered speech. Furthermore,…

  13. The Effectiveness of SpeechEasy during Situations of Daily Living

    ERIC Educational Resources Information Center

    O'Donnell, Jennifer J.; Armson, Joy; Kiefte, Michael

    2008-01-01

    A multiple single-subject design was used to examine the effects of SpeechEasy on stuttering frequency in the laboratory and in longitudinal samples of speech produced in situations of daily living (SDL). Seven adults who stutter participated, all of whom had exhibited at least 30% reduction in stuttering frequency while using SpeechEasy during…

  14. Psychophysics of Complex Auditory and Speech Stimuli

    DTIC Science & Technology

    1993-10-31

    unexpected, and does not seem to have a direct counterpart in the extensive research on pitch perception. Experiment 2 was designed to quantify our...project is to use different procedures to provide converging evidence on the nature of perceptual spaces for speech categories. Completed research ...prior speech research on classification procedures may have led to errors. Thus, the opposite (falling F2 & F3) transitions lead to somewhat ambiguous

  15. The minor third communicates sadness in speech, mirroring its use in music.

    PubMed

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
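    The interval measurement here reduces to a log-frequency ratio: the distance in semitones between two pitches is 12·log2(f2/f1), and a minor third is 3 semitones (ratio about 1.189). A small sketch, where the 262/220.4 Hz pitch pair is hypothetical, not data from the study:

    ```python
    import math

    def interval_semitones(f_low, f_high):
        """Pitch interval in semitones between two F0 values (Hz)."""
        return 12 * math.log2(f_high / f_low)

    # hypothetical salient pitches from a "sad" utterance: a descending minor third
    high, low = 262.0, 220.4        # roughly C4 down to A3
    print(round(interval_semitones(low, high), 1))  # 3.0 semitones, a minor third
    ```

    Applied to the two salient pitches of each speech sample, a value clustering near 3.0 semitones is what would mark the minor third as a cue for sadness.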

  16. F-15s complete Hungarian deployment > U.S. Air Force > Article Display

    Science.gov Websites

  17. Cross-Cultural Attitudes toward Speech Disorders.

    ERIC Educational Resources Information Center

    Bebout, Linda; Arthur, Bradford

    1992-01-01

    University students (n=166) representing English-speaking North American culture and several other cultures completed questionnaires examining attitudes toward four speech disorders (cleft palate, dysfluency, hearing impairment, and misarticulations). Results showed significant group differences in beliefs about the emotional health of persons…

  18. Sound frequency affects speech emotion perception: results from congenital amusia

    PubMed Central

    Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718

  19. HOCUS: The Haskins optically-corrected ultrasound system for measuring speech articulation

    NASA Astrophysics Data System (ADS)

    Whalen, D. H.; Iskarous, Khalil; Tiede, Mark K.; Ostry, David J.

    2004-05-01

    The tongue is the most important supralaryngeal articulator for speech, yet, because it is typically out of view, its movements have been difficult to quantify. Here is described a new combination of techniques involving ultrasound in conjunction with an optoelectronic motion measurement system (Optotrak). Combining these, the movements of the tongue are imaged and simultaneously corrected for motion of the head and of the ultrasound transceiver. Optotrak's infrared-emitting diodes are placed on the transceiver and the speaker's head in order to localize the ultrasound image of the tongue relative to the hard palate. The palate can be imaged with ultrasound by having the ultrasound signal penetrate a water bolus held against the palate by the tongue. This trace is coregistered with the head and potentially with the same talker's sagittal MR image, to provide additional information on the unimaged remainder of the tract. The tongue surface, from the larynx to near the tip, can then be localized in relationship to the hard palate. The result is a fairly complete view of the tongue within the vocal tract at sampling rates appropriate for running speech. A comparison with other vocal tract imaging systems will be presented. [Work supported by NIH Grant DC-02717.]

  20. Development and preliminary evaluation of a new test of ongoing speech comprehension

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective The overall goal of this work is to create new speech perception tests that more closely resemble real world communication and offer an alternative or complement to the commonly used sentence recall test. Design We describe the development of a new ongoing speech comprehension test based on short everyday passages and on-the-go questions. We also describe the results of an experiment conducted to compare the psychometric properties of this test to those of a sentence test. Study Sample Both tests were completed by a group of listeners that included normal hearers as well as hearing-impaired listeners who participated with and without their hearing aids. Results Overall, the psychometric properties of the two tests were similar, and thresholds were significantly correlated. However, there was some evidence of age/cognitive effects in the comprehension test that were not revealed by the sentence test. Conclusions This new comprehension test promises to be useful for the larger goal of creating laboratory tests that combine realistic acoustic environments with realistic communication tasks. Further efforts will be required to assess whether the test can ultimately improve predictions of real-world outcomes. PMID:26158403

  1. Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey

    ERIC Educational Resources Information Center

    Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.

    2016-01-01

    Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…

  2. Communication rehabilitation in sub-Saharan Africa: A workforce profile of speech and language therapists

    PubMed Central

    McAllister, Lindy; Davidson, Bronwyn; Marshall, Julie

    2016-01-01

    Background There is an urgent global need to strengthen rehabilitation services for people with disabilities. In sub-Saharan Africa (SSA), rehabilitation services for people with communication disabilities continue to be underdeveloped. A first step in strengthening services for people with communication disabilities is to understand the composition and conditions of the current workforce. Objectives This research describes a sample of the speech and language therapists (SLTs) working in SSA (excluding South Africa). This study explores the characteristics of this workforce, including their demographics, education, experience and geographical stability. Method A mixed-methods survey was used to collect data from SLTs within Anglophone countries of SSA. Completed surveys were received from 33 respondents working in 44 jobs across nine countries. Analysis included descriptive and non-parametric inferential statistics. This study reports on a subset of descriptive and quantitative data from the wider survey. Results A background profile of SLTs across the region is presented. Results indicated that the workforce of SLTs comprised a mix of local and international SLTs, with university-level education. Local SLTs were educated both within and outside of Africa, with more recent graduates trained in Africa. These data reflect the local emergence of speech and language therapy training in SSA. Conclusion This sample comprised a mix of African and international SLTs, with indications of growing localisation of the workforce. Workforce localisation offers potential advantages of linguistic diversity and stability. Challenges, including workforce support and developing culturally and contextually relevant SLT practices, are discussed. PMID:28730052

  3. Measures of digit span and verbal rehearsal speed in deaf children after more than 10 years of cochlear implantation.

    PubMed

    Pisoni, David B; Kronenberger, William G; Roman, Adrienne S; Geers, Ann E

    2011-02-01

    Conventional assessments of outcomes in deaf children with cochlear implants (CIs) have focused primarily on endpoint or product measures of speech and language. Little attention has been devoted to understanding the basic underlying core neurocognitive factors involved in the development and processing of speech and language. In this study, we examined the development of factors related to the quality of phonological information in immediate verbal memory, including immediate memory capacity and verbal rehearsal speed, in a sample of deaf children after >10 yrs of CI use and assessed the correlations between these two process measures and a set of speech and language outcomes. Of an initial sample of 180 prelingually deaf children with CIs assessed at ages 8 to 9 yrs after 3 to 7 yrs of CI use, 112 returned for testing again in adolescence after 10 more years of CI experience. In addition to completing a battery of conventional speech and language outcome measures, subjects were administered the Wechsler Intelligence Scale for Children-III Digit Span subtest to measure immediate verbal memory capacity. Sentence durations obtained from the McGarr speech intelligibility test were used as a measure of verbal rehearsal speed. Relative to norms for normal-hearing children, Digit Span scores were well below average for children with CIs at both elementary and high school ages. Improvement was observed over the 8-yr period in the mean longest digit span forward score but not in the mean longest digit span backward score. Longest digit span forward scores at ages 8 to 9 yrs were significantly correlated with all speech and language outcomes in adolescence, but backward digit spans correlated significantly only with measures of higher-order language functioning over that time period. 
    While verbal rehearsal speed increased for almost all subjects between elementary grades and high school, it was still slower than the rehearsal speed obtained from a control group of normal-hearing adolescents. Verbal rehearsal speed at ages 8 to 9 yrs was also found to be strongly correlated with speech and language outcomes and Digit Span scores in adolescence. Despite improvement after 8 additional years of CI use, measures of immediate verbal memory capacity and verbal rehearsal speed, which reflect core fundamental information processing skills associated with representational efficiency and information processing capacity, continue to be delayed in children with CIs relative to normal-hearing peers. Furthermore, immediate verbal memory capacity and verbal rehearsal speed at 8 to 9 yrs of age were both found to predict speech and language outcomes in adolescence, demonstrating the important contribution of these processing measures for speech-language development in children with CIs. Understanding the relations between these core underlying processes and speech-language outcomes in children with CIs may help researchers to develop new approaches to intervention and treatment of deaf children who perform poorly with their CIs. Moreover, this knowledge could be used for early identification of deaf children who may be at high risk for poor speech and language outcomes after cochlear implantation as well as for the development of novel targeted interventions that focus selectively on these core elementary information processing variables.

  4. Recognition and Localization of Speech by Adult Cochlear Implant Recipients Wearing a Digital Hearing Aid in the Nonimplanted Ear (Bimodal Hearing)

    PubMed Central

    Potts, Lisa G.; Skinner, Margaret W.; Litovsky, Ruth A.; Strube, Michael J.; Kuk, Francis

    2010-01-01

    Background The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing). Purpose This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization. Research Design A repeated-measures correlational study was completed. Study Sample Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study. Intervention The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range. Data Collection and Analysis Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six–eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique, in that a speech stimulus presented from a variety of roaming azimuths (140 degree loudspeaker array) was used. Results Performance in the bimodal condition was significantly better for speech recognition and localization compared to the cochlear implant–only and hearing aid–only conditions. 
Performance was also different between these conditions when the location (i.e., side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, the speech-recognition and localization tasks were equal regardless of which side of the loudspeaker array presented the word, while performance was significantly poorer for the monaural conditions (hearing aid only and cochlear implant only) when the words were presented on the side with no stimulation. Binaural loudness summation of 1–3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation. Conclusions These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients, with measurable unaided hearing thresholds, be fit with a hearing aid. PMID:19594084

  5. Virtual reality exposure therapy for social anxiety disorder: a randomized controlled trial.

    PubMed

    Anderson, Page L; Price, Matthew; Edwards, Shannan M; Obasaju, Mayowa A; Schmertz, Stefan K; Zimand, Elana; Calamaras, Martha R

    2013-10-01

    This is the first randomized trial comparing virtual reality exposure therapy to in vivo exposure for social anxiety disorder. Participants with a principal diagnosis of social anxiety disorder who identified public speaking as their primary fear (N = 97) were recruited from the community, resulting in an ethnically diverse sample (M age = 39 years) of mostly women (62%). Participants were randomly assigned to and completed 8 sessions of manualized virtual reality exposure therapy, exposure group therapy, or wait list. Standardized self-report measures were collected at pretreatment, posttreatment, and 12-month follow-up, and process measures were collected during treatment. A standardized speech task was delivered at pre- and posttreatment, and diagnostic status was reassessed at 3-month follow-up. Analysis of covariance showed that, relative to wait list, people completing either active treatment significantly improved on all but one measure (length of speech for exposure group therapy and self-reported fear of negative evaluation for virtual reality exposure therapy). At 12-month follow-up, people showed significant improvement from pretreatment on all measures. There were no differences between the active treatments on any process or outcome measure at any time, nor differences on achieving partial or full remission. Virtual reality exposure therapy is effective for treating social fears, and improvement is maintained for 1 year. Virtual reality exposure therapy is equally effective as exposure group therapy; further research with a larger sample is needed, however, to better control and statistically test differences between the treatments.

  6. Therapy service use in children and adolescents with cerebral palsy: An Australian perspective.

    PubMed

    Meehan, Elaine; Harvey, Adrienne; Reid, Susan M; Reddihough, Dinah S; Williams, Katrina; Crompton, Kylie E; Omar, Suhaila; Scheinberg, Adam

    2016-03-01

    The aim of this study was to describe the patterns of therapy service use for a sample of children and adolescents with cerebral palsy over a 1-year period and to identify factors associated with frequency of therapy and parental satisfaction with therapy frequency. Parents of 83 children completed a survey on their child's use of occupational therapy, physiotherapy and speech and language pathology services over the previous year. Participants were randomly selected from a sample stratified by age and Gross Motor Function Classification System (GMFCS) level. During the year prior to survey completion, 83% of children had received occupational therapy, 88% had received physiotherapy and 60% had received speech and language pathology services. Frequency of therapy was higher for younger children (P < 0.01), those classified at GMFCS levels IV-V (P < 0.05) and those attending schools specifically for children with disabilities. Current structures for therapy service delivery for children with cerebral palsy are systems-based, and age-based funding systems and the organisation of services around the education system are preventing the delivery of needs-based therapy. Paediatricians who care for children and young people with cerebral palsy need to pay particular attention to those who may miss out on therapy due to age or school type, and support these families in accessing appropriate therapy. © 2015 The Authors. Journal of Paediatrics and Child Health © 2015 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  7. Long-Term Trajectories of the Development of Speech Sound Production in Pediatric Cochlear Implant Recipients

    PubMed Central

    Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson

    2011-01-01

    Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
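The piecewise growth analysis described in this record can be illustrated with a small hedged sketch: hypothetical accuracy data (invented for illustration, not taken from the study) are fit with a segmented linear model whose knot is fixed at 6 years of device experience, so the pre- and post-knot slopes can be compared.

```python
import numpy as np

# Hypothetical illustration of a piecewise (segmented) linear regression
# with a fixed knot at 6 years of device experience, the kind of model
# used to test whether the improvement slope differs before vs. after
# the knot. The data below are invented for illustration only.
years = np.arange(1, 11).astype(float)        # years post-implantation
accuracy = np.array([40, 52, 61, 68, 74, 78,  # steady early growth...
                     79, 80, 80, 81], float)  # ...then a plateau

knot = 6.0
# Design matrix: intercept, slope, and a hinge term that is zero before
# the knot, so its coefficient is the post-knot change in slope.
X = np.column_stack([
    np.ones_like(years),
    years,
    np.maximum(years - knot, 0.0),
])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
slope_before = coef[1]              # improvement rate, years 1-6
slope_after = coef[1] + coef[2]     # improvement rate after year 6
print(f"slope before knot: {slope_before:.2f} points/yr")
print(f"slope after knot:  {slope_after:.2f} points/yr")
```

With data shaped like the study's description, the pre-knot slope is clearly positive while the post-knot slope is near zero, mirroring the reported plateau.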

  8. The influence of speaking rate on nasality in the speech of hearing-impaired individuals.

    PubMed

    Dwyer, Claire H; Robb, Michael P; O'Beirne, Greg A; Gilbert, Harvey R

    2009-10-01

    The purpose of this study was to determine whether deliberate increases in speaking rate would serve to decrease the amount of nasality in the speech of severely hearing-impaired individuals. The participants were 11 severely to profoundly hearing-impaired students, ranging in age from 12 to 19 years (M = 16 years). Each participant provided a baseline speech sample (R1) followed by 3 training sessions during which participants were trained to increase their speaking rate. Following the training sessions, a second speech sample was obtained (R2). Acoustic and perceptual analyses of the speech samples obtained at R1 and R2 were undertaken. The acoustic analysis focused on changes in first (F(1)) and second (F(2)) formant frequency and formant bandwidths. The perceptual analysis involved listener ratings of the speech samples (at R1 and R2) for perceived nasality. Findings indicated a significant increase in speaking rate at R2. In addition, significantly narrower F(2) bandwidth and lower perceptual rating scores of nasality were obtained at R2 across all participants, suggesting a decrease in nasality as speaking rate increases. The nasality demonstrated by hearing-impaired individuals is amenable to change when speaking rate is increased. The influences of speaking rate changes on the perception and production of nasality in hearing-impaired individuals are discussed.

  9. Investigation of Preservice Teachers' Speech Anxiety with Different Points of View

    ERIC Educational Resources Information Center

    Kana, Fatih

    2015-01-01

    The purpose of this study is to find out the level of speech anxiety of last year students at Education Faculties and the effects of speech anxiety. For this purpose, speech anxiety inventory was delivered to 540 pre-service teachers at 2013-2014 academic year using stratified sampling method. Relational screening model was used in the study. To…

  10. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    ERIC Educational Resources Information Center

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  11. The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon).

    PubMed

    Tchoungui Oyono, Lilly; Pascoe, Michelle; Singh, Shajila

    2018-05-17

    The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon. A total of 460 participants aged 3-5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist. Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%. Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.

  12. The Effects of Cognitive Training on Private Speech and Task Performance during Problem Solving among Learning Disabled and Normally Achieving Children.

    ERIC Educational Resources Information Center

    Harris, Karen R.

    To investigate task performance and the use of private speech and to examine the effects of a cognitive training approach, 30 learning disabled (LD) and 30 non-LD subjects (7 to 8 years old) were given a 17-piece wooden puzzle rigged so that it could not be completed correctly. Six variables were measured: (1) proportion of private speech that was task…

  13. Speech rate and fluency in children with phonological disorder.

    PubMed

    Novaes, Priscila Maronezi; Nicolielo-Carrilho, Ana Paola; Lopes-Herrera, Simone Aparecida

    2015-01-01

    To identify and describe the speech rate and fluency of children with phonological disorder (PD) with and without speech-language therapy. Thirty children, aged 5-8 years old, of both genders, were divided into three groups: experimental group 1 (G1), 10 children with PD in intervention; experimental group 2 (G2), 10 children with PD without intervention; and control group (CG), 10 children with typical development. Speech samples were collected and analyzed according to the parameters of a specific protocol. The children in the CG produced a higher number of words per minute than those in G1, who, in turn, performed better in this aspect than the children in G2. Regarding the number of syllables per minute, the CG showed the best result; in this aspect, the children in G1 again showed better results than those in G2. Comparing the groups' performance on the tests, the children with PD in intervention produced longer speech samples and an adequate speech rate, which may indicate greater auditory monitoring of their own speech as a result of the intervention.

  14. Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?

    ERIC Educational Resources Information Center

    Gregg, Brent A.; Sawyer, Jean

    2015-01-01

    The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…

  15. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472
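Phase entrainment of the kind reported here is commonly quantified with a phase-locking value (PLV) between the speech envelope and a band-limited neural signal. The following is a minimal sketch on synthetic signals (not MEG data); the 1-4 Hz delta band, the sampling rate, and both signals are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Hypothetical sketch: phase-locking value (PLV) between a stand-in
# "speech envelope" and a delta-band-filtered "neural" signal.
fs = 200.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
envelope = np.sin(2 * np.pi * 2.0 * t)       # 2 Hz synthetic envelope
# Synthetic "neural" signal: same rhythm, phase-shifted, plus noise.
neural = np.sin(2 * np.pi * 2.0 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

# Band-pass the neural signal to the delta band (1-4 Hz).
b, a = butter(4, [1.0 / (fs / 2), 4.0 / (fs / 2)], btype="band")
neural_delta = filtfilt(b, a, neural)

# Instantaneous phases via the analytic (Hilbert) signal.
phase_env = np.angle(hilbert(envelope))
phase_neural = np.angle(hilbert(neural_delta))

# PLV: magnitude of the mean phase-difference vector.
plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_neural))))
print(f"PLV: {plv:.2f}")  # near 1 = strong entrainment, near 0 = none
```

Because the constant phase offset cancels in the PLV magnitude, an entrained signal scores near 1 even when it lags the envelope, which is why PLV-style measures suit the phase (rather than amplitude) entrainment described above.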

  16. Speech recognition technology: an outlook for human-to-machine interaction.

    PubMed

    Erdel, T; Crooks, S

    2000-01-01

    Speech recognition, as an enabling technology in healthcare-systems computing, is a topic that has been discussed for quite some time but is just now coming to fruition. Traditionally, speech-recognition software has been constrained by hardware, but improved processors and increased memory capacities are starting to remove some of these limitations. With these barriers removed, companies that create software for the healthcare setting have the opportunity to write more successful applications. Among the criticisms of speech-recognition applications are high rates of error and steep training curves. However, even in the face of such negative perceptions, there remain significant opportunities for speech recognition to allow healthcare providers and, more specifically, physicians, to work more efficiently and ultimately spend more time with their patients and less time completing necessary documentation. This article will identify opportunities for the inclusion of speech-recognition technology in the healthcare setting and examine the major categories of speech-recognition software: continuous speech recognition, command and control, and text-to-speech. We will discuss the advantages and disadvantages of each area, the limitations of the software today, and how future trends might affect them.

  17. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech From Speech Delay: I. Development and Description of the Pause Marker

    PubMed Central

    Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose The goal of this article (PM I) is to describe the rationale for and development of the Pause Marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech from speech delay. Method The authors describe and prioritize 7 criteria with which to evaluate the research and clinical utility of a diagnostic marker for childhood apraxia of speech, including evaluation of the present proposal. An overview is given of the Speech Disorders Classification System, including extensions completed in the same approximately 3-year period in which the PM was developed. Results The finalized Speech Disorders Classification System includes a nosology and cross-classification procedures for childhood and persistent speech disorders and motor speech disorders (Shriberg, Strand, & Mabie, 2017). A PM is developed that provides procedural and scoring information, and citations to papers and technical reports that include audio exemplars of the PM and reference data used to standardize PM scores are provided. Conclusions The PM described here is an acoustic-aided perceptual sign that quantifies one aspect of speech precision in the linguistic domain of phrasing. This diagnostic marker can be used to discriminate early or persistent childhood apraxia of speech from speech delay. PMID:28384779

  18. Large Vocabulary Audio-Visual Speech Recognition

    DTIC Science & Technology

    2002-06-12

    [Garbled OCR fragment from presentation slides, Interactive Systems Labs (www.is.cs.cmu.edu): Meeting Browser; interpreting human communication; speech interaction; focus-of-attention tracking; conclusion: a complete model of human communication is needed, to include all…]

  19. Measuring word complexity in speech screening: single-word sampling to identify phonological delay/disorder in preschool children.

    PubMed

    Anderson, Carolyn; Cohen, Wendy

    2012-01-01

    Children's speech sound development is assessed by comparing speech production with the typical development of speech sounds based on a child's age and developmental profile. One widely used method of sampling is to elicit a single-word sample along with connected speech. Words produced spontaneously rather than imitated may give a more accurate indication of a child's speech development. A published word complexity measure can be used to score later-developing speech sounds and more complex word patterns. There is a need for a screening word list that is quick to administer and reliably differentiates children with typically developing speech from children with patterns of delayed/disordered speech. To identify a short word list based on word complexity that could be spontaneously named by most typically developing children aged 3;00-5;05 years. One hundred and five children aged between 3;00 and 5;05 years from three local authority nursery schools took part in the study. Items from a published speech assessment were modified and extended to include a range of phonemic targets in different word positions in 78 monosyllabic and polysyllabic words. The 78 words were ranked both by phonemic/phonetic complexity as measured by word complexity and by ease of spontaneous production. The ten most complex words (hereafter Triage 10) were named spontaneously by more than 90% of the children. There was no significant difference between the complexity measures for five identified age groups when the data were examined in 6-month groups. A qualitative analysis revealed eight children with profiles of phonological delay or disorder. When these children were considered separately, there was a statistically significant difference (p < 0.005) between the mean word complexity measure of the group compared with the mean for the remaining children in all other age groups. 
The Triage 10 words reliably differentiated children with typically developing speech from those with delayed or disordered speech patterns. The Triage 10 words can be used as a screening tool for triage and general assessment and have the potential to monitor progress during intervention. Further testing is being undertaken to establish reliability with children referred to speech and language therapy services. © 2012 Royal College of Speech and Language Therapists.

  20. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    PubMed

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. 
The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound disorders. Non-speech oral motor exercise use was most frequently reported in the treatment of dysarthria. Non-speech oral motor exercise use when targeting speech sound disorders is not widely endorsed in the literature.

  1. Building an Interdepartmental Major in Speech Communication.

    ERIC Educational Resources Information Center

    Litterst, Judith K.

    This paper describes a popular and innovative major program of study in speech communication at St. Cloud University in Minnesota: the Speech Communication Interdepartmental Major. The paper provides background on the program, discusses overall program requirements, presents sample student options, identifies ingredients for program success,…

  2. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  3. Heterosexuals' attitudes toward hate crimes and hate speech against gays and lesbians: old-fashioned and modern heterosexism.

    PubMed

    Cowan, Gloria; Heiple, Becky; Marquez, Carolyn; Khatchadourian, Désirée; McNevin, Michelle

    2005-01-01

    Modern racism and sexism have been studied to examine the different ways that prejudice can be expressed; yet, little attention has been given to modern heterosexism. This study examined the extent to which modern heterosexism and old-fashioned heterosexism predict acceptance of hate crimes against gays and lesbians and perceptions of hate speech. Male (n = 74) and female (n = 95) heterosexual college students completed a survey consisting of scales that assessed modern and old-fashioned heterosexism, acceptance of violence against gays and lesbians, attitudes toward the harm of hate speech and its offensiveness, and the importance of freedom of speech. Results indicated strong negative relations between both modern and old-fashioned heterosexism and the perceived harm of hate speech. When old-fashioned heterosexism, modern heterosexism, and the importance of freedom of speech were combined to predict hate crime and hate speech attitudes, only old-fashioned heterosexism predicted acceptance of hate crimes. All three predictors contributed to the perception of the harm of hate speech. Gender differences in the role of the importance of freedom of speech in predicting attitudes toward hate crimes and hate speech are noted.

  4. Speech Entrainment Compensates for Broca's Area Damage

    PubMed Central

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443

  5. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers

    PubMed Central

    Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

    The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed; it is an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified as having “quasi-unvoiced” or “quasi-voiced” initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed, in which the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies are higher for syllables with quasi-unvoiced initials than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all syllables is 91.24%. 
PMID:28926572

  6. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers.

    PubMed

    He, Ling; Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

    The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed; it is an important preprocessing step in cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, the syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. Then, the syllables are classified as having "quasi-unvoiced" or "quasi-voiced" initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed, in which the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step, in order to improve the robustness of the segmentation accuracy. The experiments show that the initial/final segmentation accuracies are higher for syllables with quasi-unvoiced initials than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 for all syllables is 91.69%. For the control samples, P30 for all syllables is 91.24%.

  7. Long-Term Follow-Up Study of Young Adults Treated for Unilateral Complete Cleft Lip, Alveolus, and Palate by a Treatment Protocol Including Two-Stage Palatoplasty: Speech Outcomes

    PubMed Central

    Bittermann, Dirk; Janssen, Laura; Bittermann, Gerhard Koendert Pieter; Boonacker, Chantal; Haverkamp, Sarah; de Wilde, Hester; Van Der Heul, Marise; Specken, Tom FJMC; Koole, Ron; Kon, Moshe; Breugem, Corstiaan Cornelis; Mink van der Molen, Aebele Barber

    2017-01-01

    Background No consensus exists on the optimal treatment protocol for orofacial clefts or the optimal timing of cleft palate closure. This study investigated factors influencing speech outcomes after two-stage palate repair in adults with a non-syndromal complete unilateral cleft lip and palate (UCLP). Methods This was a retrospective analysis of adult patients with a UCLP who underwent two-stage palate closure and were treated at our tertiary cleft centre. Patients ≥17 years of age were invited for a final speech assessment. Their medical history was obtained from their medical files, and speech outcomes were assessed by a speech pathologist during the follow-up consultation. Results Forty-eight patients were included in the analysis, with a mean age of 21 years (standard deviation, 3.4 years). Their mean age at the time of hard and soft palate closure was 3 years and 8.0 months, respectively. In 40% of the patients, a pharyngoplasty was performed. On a 5-point intelligibility scale, 84.4% received a score of 1 or 2; meaning that their speech was intelligible. We observed a significant correlation between intelligibility scores and the incidence of articulation errors (P<0.001). In total, 36% showed mild to moderate hypernasality during the speech assessment, and 11%–17% of the patients exhibited increased nasalance scores, assessed through nasometry. Conclusions The present study describes long-term speech outcomes after two-stage palatoplasty with hard palate closure at a mean age of 3 years old. We observed moderate long-term intelligibility scores, a relatively high incidence of persistent hypernasality, and a high pharyngoplasty incidence. PMID:28573094

  8. Development and evaluation of the British English coordinate response measure speech-in-noise test as an occupational hearing assessment tool.

    PubMed

    Semeraro, Hannah D; Rowan, Daniel; van Besouw, Rachel M; Allsopp, Adrian A

    2017-10-01

    The studies described in this article outline the design and development of a British English version of the coordinate response measure (CRM) speech-in-noise (SiN) test. Our interest in the CRM is as a SiN test with high face validity for occupational auditory fitness for duty (AFFD) assessment. Study 1 used the method of constant stimuli to measure and adjust the psychometric functions of each target word, producing a speech corpus with equal intelligibility. After ensuring all the target words had similar intelligibility, for Studies 2 and 3, the CRM was presented in an adaptive procedure in stationary speech-spectrum noise to measure speech reception thresholds and evaluate the test-retest reliability of the CRM SiN test. Studies 1 (n = 20) and 2 (n = 30) were completed by normal-hearing civilians. Study 3 (n = 22) was completed by hearing impaired military personnel. The results display good test-retest reliability (95% confidence interval (CI) < 2.1 dB) and concurrent validity when compared to the triple-digit test (r ≤ 0.65), and the CRM is sensitive to hearing impairment. The British English CRM using stationary speech-spectrum noise is a "ready to use" SiN test, suitable for investigation as an AFFD assessment tool for military personnel.
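The adaptive procedure used in Studies 2 and 3 can be sketched as a simple one-down/one-up SNR track whose reversal points estimate the speech reception threshold. The step size, trial count, and simulated listener below are illustrative assumptions, not the published CRM protocol.

```python
# Minimal sketch of an adaptive SNR staircase for SRT estimation (assumed
# 1-down/1-up rule; the real CRM procedure may differ in rule and step size).

def run_staircase(listener, start_snr=0.0, step=2.0, n_trials=30):
    """Lower the SNR after a correct trial, raise it after an error.
    Returns the SRT estimate as the mean SNR at reversal points."""
    snr, last_correct, reversals = start_snr, None, []
    for _ in range(n_trials):
        correct = listener(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)   # response flipped: record a reversal
        snr += -step if correct else step
        last_correct = correct
    return sum(reversals) / len(reversals) if reversals else snr

# Deterministic simulated listener with a true threshold of -8 dB SNR.
srt = run_staircase(lambda snr: snr >= -8.0)
print(round(srt, 1))  # converges close to the simulated -8 dB threshold
```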

  9. Prosthodontic and speech rehabilitation after partial and complete glossectomy.

    PubMed

    Lauciello, F R; Vergo, T; Schaaf, N G; Zimmerman, R

    1980-02-01

    A multidisciplinary approach to the rehabilitation of four glossectomy patients has been presented. Points emphasized were the anatomic defects, their effect on the three oral functions of mastication, deglutition, and speech, and the various prosthetic modifications found most effective to restore the impaired oral functions.

  10. Accuracy of the NDI Wave Speech Research System

    ERIC Educational Resources Information Center

    Berry, Jeffrey J.

    2011-01-01

    Purpose: This work provides a quantitative assessment of the positional tracking accuracy of the NDI Wave Speech Research System. Method: Three experiments were completed: (a) static rigid-body tracking across different locations in the electromagnetic field volume, (b) dynamic rigid-body tracking across different locations within the…

  11. Communication Effectiveness of Individuals with Amyotrophic Lateral Sclerosis

    ERIC Educational Resources Information Center

    Ball, Laura J.; Beukelman, David R.; Pattee, Gary L.

    2004-01-01

    The purpose of this study was to examine the relationships among speech intelligibility and communication effectiveness as rated by speakers and their listeners. Participants completed procedures to measure (a) speech intelligibility, (b) self-perceptions of communication effectiveness, and (c) listener (spouse or family member) perceptions of…

  12. Phonological Awareness and Speech Comprehensibility: An Exploratory Study

    ERIC Educational Resources Information Center

    Venkatagiri, H. S.; Levis, John M.

    2007-01-01

    This study examined whether differences in phonological awareness were related to differences in speech comprehensibility. Seventeen adults who learned English as a foreign language (EFL) in academic settings completed 14 tests of phonological awareness that measured their explicit knowledge of English phonological structures, and three tests of…

  13. Echolalic and Spontaneous Phrase Speech in Autistic Children.

    ERIC Educational Resources Information Center

    Howlin, Patricia

    1982-01-01

    Investigates the syntactical level of spontaneous and echolalic utterances of 26 autistic boys at different stages of phrase speech development. Speech samples were collected over a 90-minute period in unstructured settings in participants' homes. Imitations were not deliberately elicited, and only unprompted, noncommunicative echoes were…

  14. A new method to sample stuttering in preschool children.

    PubMed

    O'Brian, Sue; Jones, Mark; Pilowsky, Rachel; Onslow, Mark; Packman, Ann; Menzies, Ross

    2010-06-01

    This study reports a new method for sampling the speech of preschool stuttering children outside the clinic environment. Twenty parents engaged their stuttering children in an everyday play activity in the home with a telephone handset nearby. A remotely located researcher telephoned the parent and recorded the play session with a phone-recording jack attached to a digital audio recorder at the remote location. The parent placed an audio recorder near the child for comparison purposes. Children as young as 2 years complied with the remote method of speech sampling. The quality of the remote recordings was superior to that of the in-home recordings. There was no difference in means or reliability of stutter-count measures made from the remote recordings compared with those made in-home. Advantages of the new method include: (1) cost efficiency of real-time measurement of percent syllables stuttered in naturalistic situations, (2) reduction of bias associated with parent-selected timing of home recordings, (3) standardization of speech sampling procedures, (4) improved parent compliance with sampling procedures, (5) clinician or researcher on-line control of the acoustic and linguistic quality of recordings, and (6) elimination of the need to lend equipment to parents for speech sampling.
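The percent-syllables-stuttered measure referenced above is a simple proportion of stuttered syllables in the sample; the counts in this sketch are invented for illustration.

```python
# Sketch of the %SS (percent syllables stuttered) measure made in real time
# from a recording; the counts here are illustrative.

def percent_syllables_stuttered(stuttered, total_syllables):
    """Stuttered syllables as a percentage of all syllables in the sample."""
    return 100.0 * stuttered / total_syllables

# e.g., 12 stuttered syllables in a 300-syllable play session
print(percent_syllables_stuttered(12, 300))  # 4.0
```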

  15. Comprehensive evaluation of a child with an auditory brainstem implant.

    PubMed

    Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P

    2008-02-01

    We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Case study. Tertiary referral center. Pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December, 2005, in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.

  16. From the analysis of verbal data to the analysis of organizations: organizing as a dialogical process.

    PubMed

    Lorino, Philippe

    2014-12-01

    The analysis of conversational turn-taking and its implications for time (the speaker cannot completely anticipate the future effects of her/his speech) and sociality (speech is co-produced by the various speakers rather than by the speaking individual alone) can provide a useful basis for analyzing complex organizing processes and collective action: the actor cannot completely anticipate the future effects of her/his acts, and each act is co-produced by multiple actors. This translation from verbal interaction to broader classes of interaction stresses the performativity of speech, the importance of the situation, the role of semiotic mediations in making temporally and spatially distant "ghosts" present in the dialog, and the dissymmetrical relationship between successive conversational turns due to temporal irreversibility.

  17. Discourse Analysis of the Political Speeches of the Ousted Arab Presidents during the Arab Spring Revolution Using Halliday and Hasan's Framework of Cohesion

    ERIC Educational Resources Information Center

    Al-Majali, Wala'

    2015-01-01

    This study is designed to explore the salient linguistic features of the political speeches of the ousted Arab presidents during the Arab Spring Revolution. The sample of the study is composed of seven political speeches delivered by the ousted Arab presidents during the period from December 2010 to December 2012. Three speeches were delivered by…

  18. The Prompt Book for...Teaching the Art of Speech and Drama To Children: A Resource Guide for Teachers of Children in the Art of Speech and Drama.

    ERIC Educational Resources Information Center

    Dugger, Anita; And Others

    Providing for individual differences in ability, interest, and cultural values among students, this guide contains resources, goals, objectives, sample lesson plans, and activities for teaching speech and drama to elementary school students. The first section of the guide offers advice on the organization of a speech arts curriculum, approaches to…

  19. Speech Intelligibility in Severe Adductor Spasmodic Dysphonia

    ERIC Educational Resources Information Center

    Bender, Brenda K.; Cannito, Michael P.; Murry, Thomas; Woodson, Gayle E.

    2004-01-01

    This study compared speech intelligibility in nondisabled speakers and speakers with adductor spasmodic dysphonia (ADSD) before and after botulinum toxin (Botox) injection. Standard speech samples were obtained from 10 speakers diagnosed with severe ADSD prior to and 1 month following Botox injection, as well as from 10 age- and gender-matched…

  20. Movement of the velum during speech and singing in classically trained singers.

    PubMed

    Austin, S F

    1997-06-01

    The present study addresses two questions: (a) Does the action and/or posture of the velopharyngeal valve allow significant resonance during Western-tradition classical singing? (b) How do the actions of the velopharyngeal valve observed in this style of singing compare with normal speech? A photodetector system was used to observe the area function of the velopharyngeal port during speech and classical-style singing. Identical speech samples were produced by each subject in a normal speaking voice and then in the low, medium, and high singing ranges. Results indicate that in these four singers the velopharyngeal port was closed significantly longer in singing than in speaking samples. The amount of time the velopharyngeal port was open was greatest in speech and diminished as the singer ascended in pitch. In the high voice condition, little or no opening of the velopharyngeal port was measured.

  1. Speech and Language Therapy Under an Automated Stimulus Control System.

    ERIC Educational Resources Information Center

    Garrett, Edgar Ray

    Programed instruction for speech and language therapy, based upon stimulus control programing and presented by a completely automated teaching machine, was evaluated with 32 mentally retarded children, 20 children with language disorders (childhood aphasia), six adult aphasics, and 60 normal elementary school children. Posttesting with the…

  2. Speech-Language Pathologists' Comfort Levels in English Language Learner Service Delivery

    ERIC Educational Resources Information Center

    Kimble, Carlotta

    2013-01-01

    This study examined speech-language pathologists' (SLPs) comfort levels in providing service delivery to English language learners (ELLs) and limited English proficient (LEP) students. Participants included 192 SLPs from the United States and Guam. Participants completed a brief, six-item questionnaire that investigated their perceptions regarding…

  3. Health Literacy and the Role of the Speech-Language Pathologist

    ERIC Educational Resources Information Center

    Hester, Eva Jackson; Stevens-Ratchford, Regena

    2009-01-01

    Purpose: This article reviews concepts of health literacy and discusses the role of speech-language pathologists in improving the health literacy of individuals with and without communication disorders. Method: A literature review was completed of health literacy definitions, concepts, and health literacy assessment and intervention studies with…

  4. Ventilation and Speech Characteristics during Submaximal Aerobic Exercise

    ERIC Educational Resources Information Center

    Baker, Susan E.; Hipp, Jenny; Alessio, Helaine

    2008-01-01

    Purpose: This study examined alterations in ventilation and speech characteristics as well as perceived dyspnea during submaximal aerobic exercise tasks. Method: Twelve healthy participants completed aerobic exercise-only and simultaneous speaking and aerobic exercise tasks at 50% and 75% of their maximum oxygen consumption (VO[subscript 2] max).…

  5. Spanish Native-Speaker Perception of Accentedness in Learner Speech

    ERIC Educational Resources Information Center

    Moranski, Kara

    2012-01-01

    Building upon current research in native-speaker (NS) perception of L2 learner phonology (Zielinski, 2008; Derwing & Munro, 2009), the present investigation analyzed multiple dimensions of NS speech perception in order to achieve a more complete understanding of the specific linguistic elements and attitudinal variables that contribute to…

  6. Listeners' Perceptions of Speech and Language Disorders

    ERIC Educational Resources Information Center

    Allard, Emily R.; Williams, Dale F.

    2008-01-01

    Using semantic differential scales with nine trait pairs, 445 adults rated five audio-taped speech samples, one depicting an individual without a disorder and four portraying communication disorders. Statistical analyses indicated that the no disorder sample was rated higher with respect to the trait of employability than were the articulation,…

  7. Fluency variation in adolescents.

    PubMed

    Furquim de Andrade, Claudia Regina; de Oliveira Martins, Vanessa

    2007-10-01

    The Speech Fluency Profile of fluent adolescent speakers of Brazilian Portuguese was examined with respect to gender and neurolinguistic variation. Speech samples of 130 male and female adolescents aged between 12;0 and 17;11 years were gathered and analysed according to type of speech disruption, speech rate, and frequency of speech disruptions. Statistical analysis did not find significant differences between genders for the variables studied. However, regarding the phases of adolescence (early: 12;0-14;11 years; late: 15;0-17;11 years), statistical differences were observed for all of the variables. As for neurolinguistic maturation, the number of speech disruptions decreased and speech rate increased during the final phase of adolescence, indicating that the maturation of motor and linguistic processes exerts an influence on the fluency profile of speech.

  8. The functional role of the tonsils in speech.

    PubMed

    Finkelstein, Y; Nachmani, A; Ophir, D

    1994-08-01

    To present illustrative cases showing various tonsillar influences on speech and to present a clinical method for patient evaluation establishing concepts of management and a rational therapeutic approach. The cases were selected from a group of approximately 1000 patients referred to the clinic because of suspected palatal diseases. Complete velopharyngeal assessment was made, including otolaryngologic, speech, and hearing examinations, polysomnography, nasendoscopy, multiview videofluoroscopy, and cephalometry. New observations further elucidate the intimate relation between the tonsils and the velopharyngeal valve. The potential influence of the tonsils on the velopharyngeal valve mechanism, in hindering or assisting speech, is described. In selected cases, the decision to perform tonsillectomy depends on its potential effect on speech. The combination of nasendoscopic and multiview videofluoroscopic studies of the mechanical properties of the tonsils during speech is required for patients who present with velopharyngeal insufficiency in whom tonsillar hypertrophy is found. These studies are also required in patients with palatal anomalies who are candidates for tonsillectomy.

  9. Comparing Measures of Voice Quality From Sustained Phonation and Continuous Speech.

    PubMed

    Gerratt, Bruce R; Kreiman, Jody; Garellek, Marc

    2016-10-01

    The question of what type of utterance (a sustained vowel or continuous speech) is best for voice quality analysis has been extensively studied, but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences. In addition to sustained and excerpted vowels, a third set of stimuli was created by shortening sustained vowel productions to match the duration of vowels excerpted from continuous speech. Acoustic measures were made on the stimuli, and listeners judged the severity of vocal quality deviation. Sustained vowels and those extracted from continuous speech contain essentially the same acoustic and perceptual information about vocal quality deviation. Perceived and/or measured differences between continuous speech and sustained vowels derive largely from voice source variability across segmental and prosodic contexts, not from variations in vocal fold vibration in the quasi-steady portion of the vowels. Approaches to voice quality assessment using continuous speech samples average across utterances and may not adequately quantify the variability they are intended to assess.

  10. Impairments of speech fluency in Lewy body spectrum disorder.

    PubMed

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Communication and social interaction anxiety enhance interleukin-1 beta and cortisol reactivity during high-stakes public speaking.

    PubMed

    Auer, Brandon J; Calvi, Jessica L; Jordan, Nicolas M; Schrader, David; Byrd-Craven, Jennifer

    2018-08-01

    Worry or fear related to speaking in front of others, or more broadly, communicating and interacting with others, is common. At elevated levels, however, it may contribute to heightened stress reactivity during acute speaking challenges. The purpose of this study was to examine multi-system physiological stress reactivity in the context of high-stakes public speaking while considering the impact of hypothesized individual difference risk factors. University student participants (n = 95) delivering speeches as a heavily weighted component of their final grade had saliva samples collected immediately prior to speaking, immediately after, and 20 min after speech completion. Saliva samples were assayed for alpha-amylase (sAA), cortisol, and interleukin-1 beta (IL-1β). Self-reported communication anxiety, social interaction anxiety, rejection sensitivity, and sex were assessed as risk factors for heightened stress reactivity. Salivary sAA, cortisol, and IL-1β significantly changed following speech delivery. Multivariate analyses demonstrated that elevated levels of self-reported communication anxiety and social interaction anxiety were independently associated with increased cortisol and IL-1β responses and combined to enhance HPA axis and inflammatory cytokine activity further (i.e., cortisol and IL-1β AUC-I, the area under the curve with respect to increase). Sex and rejection sensitivity were unrelated to physiological stress reactivity. These findings suggest that individuals with elevated communication and interaction fears may be at increased risk of heightened neuroendocrine and inflammatory responses following exposure to acute social stressors. Both types of anxiety may combine to increase physiological reactivity further, with unknown, though likely insalubrious, health consequences over time. Copyright © 2018 Elsevier Ltd. All rights reserved.
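The AUC-I summary used for the three saliva timepoints is conventionally computed with the trapezoid rule relative to the baseline sample. The sketch below follows that standard definition rather than the authors' exact pipeline, and the cortisol values and timepoints are invented.

```python
# Sketch of AUC with respect to ground (AUC-G) and to increase (AUC-I)
# over a few saliva timepoints; values are illustrative only.

def auc_ground(times, values):
    """Trapezoidal area under the raw analyte curve (AUC-G)."""
    return sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def auc_increase(times, values):
    """AUC-G minus the rectangle defined by the baseline sample (AUC-I)."""
    return auc_ground(times, values) - values[0] * (times[-1] - times[0])

# Cortisol (nmol/L) sampled at 0, 5, and 25 min around the speech:
t, c = [0, 5, 25], [10.0, 14.0, 11.0]
print(auc_ground(t, c))    # 310.0
print(auc_increase(t, c))  # 60.0
```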

  12. A Wavelet Model for Vocalic Speech Coarticulation

    DTIC Science & Technology

    1994-10-01

    The coarticulation channel is modeled as a transformation from a control speech state (input) to an effected speech state (output); specifically, a vowel produced in isolation is transformed into an effected (coarticulated) vowel. The channel is evaluated via the wavelet transform of the effected vowel's signal, using the control vowel's signal as the mother wavelet. A practical experiment is conducted to evaluate the coarticulation channel using samples of real speech.

  13. White noise speech illusion and psychosis expression: An experimental investigation of psychosis liability.

    PubMed

    Pries, Lotta-Katrin; Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Wichers, Marieke; Simons, Claudia J P; Rutten, Bart P F; van Os, Jim

    2017-01-01

    An association between white noise speech illusion and psychotic symptoms has been reported in patients and their relatives. This supports the theory that bottom-up and top-down perceptual processes are involved in the mechanisms underlying perceptual abnormalities. However, findings in nonclinical populations have been conflicting. The aim of this study was to examine the association between white noise speech illusion and subclinical expression of psychotic symptoms in a nonclinical sample. Findings were compared to previous results to investigate potential methodology-dependent differences. In a general population adolescent and young adult twin sample (n = 704), the association between white noise speech illusion and subclinical psychotic experiences, assessed using the Structured Interview for Schizotypy-Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE), was analyzed using multilevel logistic regression analyses. Perception of any white noise speech illusion was not associated with either positive or negative schizotypy in the general population twin sample, using either the method of Galdos et al. (2011) (positive: adjusted OR: 0.82, 95% CI: 0.6-1.12, p = 0.217; negative: adjusted OR: 0.75, 95% CI: 0.56-1.02, p = 0.065) or the method of Catalan et al. (2014) (positive: adjusted OR: 1.11, 95% CI: 0.79-1.57, p = 0.557). No association was found between CAPE scores and speech illusion (adjusted OR: 1.25, 95% CI: 0.88-1.79, p = 0.220). For the Catalan et al. (2014) method, but not the Galdos et al. (2011) method, a negative association was apparent between positive schizotypy and speech illusion with positive or negative affective valence (adjusted OR: 0.44, 95% CI: 0.24-0.81, p = 0.008). Contrary to findings in clinical populations, white noise speech illusion may not be associated with psychosis proneness in nonclinical populations.
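Odds ratios of the kind reported above can be illustrated from a simple 2x2 table (illusion perceived yes/no versus high/low schizotypy). The counts below are invented, and the study itself used multilevel logistic regression with covariate adjustment, which this plain Wald-interval estimate omits.

```python
import math

# Sketch: odds ratio with a 95% Wald confidence interval from a 2x2 table.
# a, b = exposed with / without the outcome; c, d = unexposed with / without.

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 30/70 illusion perceivers vs. 40/80 non-perceivers.
or_, lo, hi = odds_ratio_ci(30, 70, 40, 80)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```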

  14. Association of Velopharyngeal Insufficiency With Quality of Life and Patient-Reported Outcomes After Speech Surgery.

    PubMed

    Bhuskute, Aditi; Skirko, Jonathan R; Roth, Christina; Bayoumi, Ahmed; Durbin-Johnson, Blythe; Tollefson, Travis T

    2017-09-01

    Patients with cleft palate and other causes of velopharyngeal insufficiency (VPI) suffer adverse effects on social interactions and communication. Measurement of these patient-reported outcomes is needed to help guide surgical and nonsurgical care. To further validate the VPI Effects on Life Outcomes (VELO) instrument, measure the change in quality of life (QOL) after speech surgery, and test the association of change in speech with change in QOL. Prospective descriptive cohort including children and young adults undergoing speech surgery for VPI in a tertiary academic center. Participants completed the validated VELO instrument before and after surgical treatment. The main outcome measures were preoperative and postoperative VELO scores and the perceptual speech assessment of speech intelligibility. The VELO scores are divided into subscale domains. Changes in VELO scores after surgery were analyzed using linear regression models. VELO scores were analyzed as a function of speech intelligibility, adjusting for age and cleft type. The correlation between speech intelligibility rating and VELO scores was estimated using the polyserial correlation. Twenty-nine patients (13 males and 16 females) were included. Mean (SD) age was 7.9 (4.1) years (range, 4-20 years). Pharyngeal flap was used in 14 (48%) cases, Furlow palatoplasty in 12 (41%), and sphincter pharyngoplasty in 1 (3%). The mean (SD) preoperative speech intelligibility rating was 1.71 (1.08), which decreased postoperatively to 0.79 (0.93) in the 24 patients who completed the protocol (P < .01). The VELO scores improved after surgery (P < .001), as did most subscale scores. Caregiver impact did not change after surgery (P = .36). 
Speech intelligibility was correlated with the preoperative and postoperative total VELO scores (P < .01), with the preoperative subscale domains of situational difficulty (VELO-SiD, P = .005) and perception by others (VELO-PO, P = .05), and with the postoperative subscale domains (VELO-SiD, P = .03; VELO-PO, P = .003). Neither the change in total VELO score nor the change in subscale scores after surgery was correlated with the change in speech intelligibility. Speech surgery improves VPI-specific quality of life. We confirmed validation in a population of untreated patients with VPI and included pharyngeal flap surgery, which had not previously been included in validation studies. The VELO instrument provides patient-specific outcomes, allowing a broader understanding of the social, emotional, and physical effects of VPI.

  15. Speech sound disorder at 4 years: prevalence, comorbidities, and predictors in a community cohort of children.

    PubMed

    Eadie, Patricia; Morgan, Angela; Ukoumunne, Obioha C; Ttofari Eecen, Kyriaki; Wake, Melissa; Reilly, Sheena

    2015-06-01

    The epidemiology of preschool speech sound disorder is poorly understood. Our aims were to determine: the prevalence of idiopathic speech sound disorder; the comorbidity of speech sound disorder with language and pre-literacy difficulties; and the factors contributing to speech outcome at 4 years. One thousand four hundred and ninety-four participants from an Australian longitudinal cohort completed speech, language, and pre-literacy assessments at 4 years. Prevalence of speech sound disorder (SSD) was defined by standard score performance of ≤79 on a speech assessment. Logistic regression examined predictors of SSD within four domains: child and family; parent-reported speech; cognitive-linguistic; and parent-reported motor skills. At 4 years the prevalence of speech disorder in an Australian cohort was 3.4%. Comorbidity with SSD was 40.8% for language disorder and 20.8% for poor pre-literacy skills. Sex, maternal vocabulary, socio-economic status, and family history of speech and language difficulties predicted SSD, as did 2-year speech, language, and motor skills. Together these variables provided good discrimination of SSD (area under the curve=0.78). This is the first epidemiological study to demonstrate prevalence of SSD at 4 years of age that was consistent with previous clinical studies. Early detection of SSD at 4 years should focus on family variables and speech, language, and motor skills measured at 2 years. © 2014 Mac Keith Press.
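The discrimination statistic quoted above (area under the curve = 0.78) is the rank-based ROC AUC over predicted risks from the fitted logistic model. The sketch below shows the standard Mann-Whitney formulation of that statistic; the labels and risk scores are invented.

```python
# Sketch of rank-based ROC AUC: the probability that a randomly chosen
# positive case receives a higher predicted risk than a randomly chosen
# negative case (ties count as half).

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented outcomes (1 = SSD) and model-predicted risks:
y    = [1, 1, 1, 0, 0, 0, 0]
risk = [0.9, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]
print(roc_auc(y, risk))
```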

  16. Electrocorticographic representations of segmental features in continuous speech

    PubMed Central

    Lotte, Fabien; Brumberg, Jonathan S.; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L.; Guan, Cuntai; Schalk, Gerwin

    2015-01-01

    Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated. These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates. PMID:25759647

  17. Speech effort measurement and stuttering: investigating the chorus reading effect.

    PubMed

    Ingham, Roger J; Warner, Allison; Byrd, Anne; Cotton, John

    2006-06-01

    The purpose of this study was to investigate the effect of chorus reading (CR) on speech effort during oral reading by adult stuttering speakers and control participants. The effect of a strategy highlighting speech effort measurement was also investigated. Twelve persistent stuttering (PS) adults and 12 normally fluent control participants completed 1-min base rate readings (BR, non-chorus) and CRs within a BR/CR/BR/CR/BR experimental design. Participants self-rated speech effort using a 9-point scale after each reading trial. Stuttering frequency, speech rate, and speech naturalness measures were also obtained. Instructions highlighting speech effort ratings during BR and CR phases were introduced after the first CR. CR improved speech effort ratings for the PS group, but the control group showed a reverse trend. The two groups' effort ratings did not differ significantly during CR phases, but the PS group's ratings were significantly poorer than the control group's during BR phases. The highlighting strategy did not significantly change effort ratings. The findings show that CR produces not only stutter-free and natural-sounding speech but also reliable reductions in speech effort. However, these reductions do not reach effort levels equivalent to those achieved by normally fluent speakers, thereby qualifying CR's use as a gold standard of achievable normal fluency for PS speakers.

  18. Synthesis of Polysyllabic Sequences of Thai Tones Using a Generative Model of Fundamental Frequency Contours

    NASA Astrophysics Data System (ADS)

    Seresangtakul, Pusadee; Takara, Tomio

    In this paper, the distinctive tones of Thai in running speech are studied. We present rules to synthesize F0 contours of Thai tones in running speech using a generative model of F0 contours. With this method, the pitch contours of Thai polysyllabic words, both disyllabic and trisyllabic, were analyzed, and coarticulation effects between Thai tones in running speech were found. Based on the analysis of the polysyllabic words using this model, rules were derived and applied to synthesize Thai polysyllabic tone sequences. We performed listening tests to evaluate the intelligibility of the tone-generation rules. The average intelligibility scores were 98.8% and 96.6% for disyllabic and trisyllabic words, respectively. From these results, the tone-generation rules were shown to be effective. Furthermore, we constructed connecting rules to synthesize suprasegmental F0 contours using the trisyllabic training rules' parameters: the parameters of the first, third, and second syllables were assigned to the initial, final, and remaining syllables of a sentence, respectively. Even with such a simple rule, the synthesized phrases/sentences were completely identified in listening tests. The mean opinion score (MOS) was 3.50, while the original and analysis/synthesis samples scored 4.82 and 3.59, respectively.
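
    Generative F0 models of this kind are commonly formulated as a command-response superposition in the log-F0 domain (the Fujisaki-style model is the best-known instance). The sketch below is an illustrative implementation under that assumption; the baseline frequency, time constants, and command amplitudes are arbitrary, and a tone language would use both positive and negative accent (tone) commands, as shown.

```python
import numpy as np

def phrase_response(t, alpha=2.0):
    # Critically damped second-order response to an impulse phrase command.
    return np.where(t >= 0, alpha**2 * t * np.exp(-alpha * t), 0.0)

def accent_response(t, beta=20.0, gamma=0.9):
    # Step-command response, ceiling-limited at gamma.
    g = np.where(t >= 0, 1 - (1 + beta * t) * np.exp(-beta * t), 0.0)
    return np.minimum(g, gamma)

def f0_contour(t, fb=120.0, phrases=(), accents=()):
    # ln F0(t) = ln Fb + sum Ap*Gp(t-T0) + sum Aa*[Ga(t-T1) - Ga(t-T2)]
    lnf0 = np.full_like(t, np.log(fb))
    for ap, t0 in phrases:
        lnf0 += ap * phrase_response(t - t0)
    for aa, t1, t2 in accents:
        lnf0 += aa * (accent_response(t - t1) - accent_response(t - t2))
    return np.exp(lnf0)

t = np.linspace(0, 2, 400)
# One phrase command plus a rising and a falling tone command (illustrative).
f0 = f0_contour(t, phrases=[(0.5, 0.0)],
                accents=[(0.3, 0.3, 0.6), (-0.2, 1.0, 1.4)])
```

    Rule-based synthesis of tone sequences then amounts to choosing command amplitudes and timings per syllable according to its tone and context.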

  19. Production Variability and Single Word Intelligibility in Aphasia and Apraxia of Speech

    ERIC Educational Resources Information Center

    Haley, Katarina L.; Martin, Gwenyth

    2011-01-01

    This study was designed to estimate test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and AOS and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were…

  20. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  1. Phonology and Vocal Behavior in Toddlers with Autism Spectrum Disorders

    PubMed Central

    Schoen, Elizabeth; Paul, Rhea; Chawarska, Katyrzyna

    2011-01-01

    The purpose of this study was to examine the phonological and other vocal productions of children with autism spectrum disorder (ASD), aged 18-36 months, and to compare these productions to those of age-matched and language-matched controls. Speech samples were obtained from 30 toddlers with ASD, 11 age-matched toddlers, and 23 language-matched toddlers during either parent-child or clinician-child play sessions. Samples were coded for a variety of speech-like and non-speech vocalizations. Toddlers with ASD produced speech-like vocalizations similar to those of language-matched peers, but produced significantly more atypical non-speech vocalizations than both control groups. Toddlers with ASD thus show speech-like sound production that is linked to their language level, in a manner similar to that seen in typical development; the main area of difference in vocal development in this population is the production of atypical vocalizations. Findings suggest that toddlers with autism spectrum disorders might not tune into the language model of their environment, and failure to attend to the ambient language environment negatively impacts the ability to acquire spoken language. PMID:21308998

  2. Emotional and physiological responses of fluent listeners while watching the speech of adults who stutter.

    PubMed

    Guntupalli, Vijaya K; Everhart, D Erik; Kalinowski, Joseph; Nanjundeswaran, Chayadevie; Saltuklaroglu, Tim

    2007-01-01

    People who stutter produce speech that is characterized by intermittent, involuntary part-word repetitions and prolongations. In addition to these signature acoustic manifestations, those who stutter often display repetitive and fixated behaviours outside the speech-producing mechanism (e.g. in the head, arms, fingers, and nares). Previous research has examined the attitudes and perceptions of those who stutter and people who frequently interact with them (e.g. relatives, parents, employers). Results have shown an unequivocal, powerful and robust negative stereotype despite a lack of defined differences in personality structure between people who stutter and normally fluent individuals. However, physiological investigations of listener responses during moments of stuttering are limited. There is a need for data that simultaneously examine physiological responses (e.g. heart rate and galvanic skin conductance) and subjective behavioural responses to stuttering. The pairing of these objective and subjective data may cast light on the genesis of negative stereotypes associated with stuttering, the development of compensatory mechanisms in those who stutter, and the true impact of stuttering on senders and receivers alike. The aim was to compare the emotional and physiological responses of fluent speakers while listening to and observing fluent and severely stuttered speech samples. Twenty adult participants (mean age = 24.15 years, standard deviation = 3.40) observed speech samples of two fluent speakers and two speakers who stutter reading aloud. Participants' skin conductance and heart rate changes were measured as physiological responses to stuttered or fluent speech samples. Participants' subjective responses on arousal (excited-calm) and valence (happy-unhappy) dimensions were assessed via the Self-Assessment Manikin (SAM) rating scale with an additional questionnaire comprising a set of nine bipolar adjectives. Results showed significantly increased skin conductance and lower mean heart rate during the presentation of stuttered speech relative to fluent speech samples (p<0.05). Listeners also rated themselves as more aroused, unhappy, nervous, uncomfortable, sad, tense, unpleasant, avoidant, embarrassed, and annoyed while viewing stuttered speech relative to fluent speech. These data support the notion that stutter-filled speech can elicit physiological and emotional responses in listeners. Clinicians who treat stuttering should be aware that listeners show involuntary physiological responses to moderate-severe stuttering that probably remain salient over time and contribute to the evolution of negative stereotypes of people who stutter. With this in mind, it is hoped that clinicians can work with people who stutter to develop appropriate coping strategies. The roles of the amygdala and mirror neuron mechanisms in physiological and subjective responses to stuttering are discussed.

  3. Setting up of teeth in the neutral zone and its effect on speech

    PubMed Central

    Al-Magaleh, Wafa’a Radwan; Swelem, Amal Ali; Shohdi, Sahar Saad; Mawsouf, Nadia Mohamed

    2011-01-01

    Rational goals for denture construction are basically directed at the restoration of esthetics and masticatory function and the healthy preservation of the remaining natural tissues. Little concern has been given to the perfection and optimization of the phonetic quality of denture users. However, insertion of prosthodontic restorations may lead to speech defects. Most such defects are mild but, nevertheless, can be a source of concern to the patient. For the dental practitioner, there are few guidelines for designing a prosthetic restoration with maximum phonetic success. One of these guidelines involves the setting up of teeth within the neutral zone. The aim of this study was to evaluate, subjectively and objectively, the effect on speech of setting up teeth in the neutral zone. Three groups were examined: group I (control) included 10 completely dentulous subjects, group II included 10 completely edentulous patients with conventional dentures, and group III included the same 10 edentulous patients with neutral zone dentures. Subjective assessment included patient satisfaction. Objective assessment included duration taken for recitation of Al-Fateha and acoustic analysis. Subjectively, patients were more satisfied with their neutral zone dentures. Objectively, speech produced with the neutral zone dentures was closer to normal than speech with conventional dentures. PMID:23960527

  4. Oxytocin selectively moderates negative cognitive appraisals in high trait anxious males.

    PubMed

    Alvares, Gail A; Chen, Nigel T M; Balleine, Bernard W; Hickie, Ian B; Guastella, Adam J

    2012-12-01

    The mammalian neuropeptide oxytocin has well-characterized effects in facilitating prosocial and affiliative behavior. Additionally, oxytocin decreases physiological and behavioral responses to social stress. In the present study we investigated the effects of oxytocin on cognitive appraisals after a naturalistic social stress task in healthy male students. In a randomized, double-blind, placebo-controlled trial, 48 participants self-administered either an oxytocin or placebo nasal spray and, following a wait period, completed an impromptu speech task. Eye gaze to a pre-recorded video of an audience displayed during the task was simultaneously collected. After the speech, participants completed questionnaires assessing negative cognitive beliefs about speech performance. Whilst there was no overall effect of oxytocin compared to placebo on either eye gaze or questionnaire measures, there were significant positive correlations between trait levels of anxiety and negative self-appraisals following the speech. Exploratory analyses revealed that whilst higher trait anxiety was associated with increasingly poorer perceptions of speech performance in the placebo group, this relationship was not found in participants administered oxytocin. These results provide preliminary evidence to suggest that oxytocin may reduce negative cognitive self-appraisals in high trait anxious males. It adds to a growing body of evidence that oxytocin seems to attenuate negative cognitive responses to stress in anxious individuals. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. The assessment of premorbid intellectual ability following right-hemisphere stroke: reliability of a lexical decision task.

    PubMed

    Gillespie, David C; Bowen, Audrey; Foster, Jonathan K

    2012-01-01

    Comparing current with estimated premorbid performance helps identify acquired cognitive deficits after brain injury. Tests of reading pronunciation, often used to measure premorbid ability, are inappropriate for stroke patients with motor speech problems. The Spot-the-Word Test (STWT), a measure of lexical decision, offers an alternative approach for estimating premorbid capacity in those with speech problems. However, little is known about the STWT's reliability. In the present study, a consecutive sample of right-hemisphere stroke (RHS) patients (n = 56) completed the STWT at 4 and 16 weeks poststroke. A control group, individually matched to the patients for age and initial STWT score, also completed the STWT on two occasions. More than 80% of patients had STWT scores at retest within 2 scaled score points of their initial score, suggesting that the STWT is a reliable measure for most individuals with RHS. However, RHS patients had significantly greater score change than controls. Limits of agreement analysis revealed that approximately 1 in 7 patients obtained abnormally large STWT score improvements at retest. It is concluded that although the STWT is a useful assessment tool for stroke clinicians, this instrument may significantly underestimate premorbid level of ability in approximately 14% of stroke patients.
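
    The limits-of-agreement (Bland-Altman) analysis used above can be sketched as follows: compute test-retest differences, then take the mean difference (bias) plus or minus 1.96 standard deviations as the 95% limits. The scores below are simulated for illustration only, not the study's data.

```python
import numpy as np

def limits_of_agreement(test, retest):
    # Bland-Altman: bias and 95% limits of agreement for paired measures.
    diff = np.asarray(retest, float) - np.asarray(test, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
test = rng.normal(10, 2, size=56)           # hypothetical initial scaled scores
retest = test + rng.normal(0.3, 1.0, 56)    # slight practice effect at retest
bias, (lo, hi) = limits_of_agreement(test, retest)
diff = retest - test
outliers = np.mean(diff < lo) + np.mean(diff > hi)
```

    Individuals whose retest change falls outside (lo, hi) are the "abnormally large" improvements the abstract refers to; by construction roughly 5% of a well-behaved sample lands there.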

  6. Decoding spectrotemporal features of overt and covert speech from the human cortex

    PubMed Central

    Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2014-01-01

    Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticographic (ECoG) recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10−5; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and postcentral gyri provided the most reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy.
These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
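
    The evaluation step described above (dynamic time warping to realign a reconstruction with its reference, then scoring by Pearson correlation) can be sketched in miniature. This is an illustrative 1-D version on synthetic signals, not the study's spectrogram pipeline; the classic dynamic-programming DTW with a backtracked path is assumed.

```python
import numpy as np

def dtw_path(a, b):
    # Dynamic time warping between two 1-D sequences; returns the
    # optimal alignment path as a list of index pairs (i, j).
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def aligned_correlation(reconstruction, reference):
    # Realign with DTW, then score as the Pearson correlation of the
    # aligned samples.
    path = dtw_path(reconstruction, reference)
    x = np.array([reconstruction[i] for i, _ in path])
    y = np.array([reference[j] for _, j in path])
    return np.corrcoef(x, y)[0, 1]

ref = np.sin(np.linspace(0, 6, 80))
recon = np.sin(np.linspace(0, 6, 60) + 0.2) + 0.05  # time-scaled, shifted copy
r = aligned_correlation(recon, ref)
```

    In the study this scoring was applied per spectrotemporal feature, and covert-condition accuracies were compared against resting-baseline reconstructions.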

  7. Systematic studies of modified vocalization: the effect of speech rate on speech production measures during metronome-paced speech in persons who stutter.

    PubMed

    Davidow, Jason H

    2014-01-01

    Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control speech rate between conditions limits our ability to determine if the changes were necessary for fluency. This study examined the effect of speech rate on several speech production variables during one-syllable-per-beat metronomic speech in order to determine changes that may be important for fluency during this fluency-inducing condition. Thirteen persons who stutter (PWS), aged 18-62 years, completed a series of speaking tasks. Several speech production variables were compared between conditions produced at different metronome beat rates, and between a control condition and a metronome-paced speech condition produced at a rate equal to the control condition. Vowel duration, voice onset time, pressure rise time and phonated intervals were significantly impacted by metronome beat rate. Voice onset time and the percentage of short (30-100 ms) phonated intervals significantly decreased from the control condition to the equivalent rate metronome-paced speech condition. A reduction in the percentage of short phonated intervals may be important for fluency during syllable-based metronome-paced speech for PWS. Future studies should continue examining the necessity of this reduction. In addition, speech rate must be controlled in future fluency-inducing condition studies, including neuroimaging investigations, in order for this research to make a substantial contribution to finding the fluency-inducing mechanism of fluency-inducing conditions. © 2013 Royal College of Speech and Language Therapists.
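
    The phonated-interval measure used above is computed from a frame-by-frame voicing decision: each run of consecutive voiced frames is one phonated interval, and the variable of interest is the percentage of intervals in the short (30-100 ms) band. A minimal sketch, assuming a 10 ms frame step and a precomputed boolean voicing track:

```python
import numpy as np

def phonated_intervals(voiced, frame_ms=10):
    # Durations (ms) of consecutive runs of voiced frames.
    durations, run = [], 0
    for v in voiced:
        if v:
            run += 1
        elif run:
            durations.append(run * frame_ms)
            run = 0
    if run:
        durations.append(run * frame_ms)
    return durations

def percent_short(durations, lo=30, hi=100):
    # Share of phonated intervals in the 30-100 ms band.
    if not durations:
        return 0.0
    short = sum(lo <= d <= hi for d in durations)
    return 100.0 * short / len(durations)

# Toy voicing track: alternating short voiced runs, then one long run.
voiced = np.array([0, 1, 1, 1, 0, 0, 1] * 10 + [1] * 20, dtype=bool)
pis = phonated_intervals(voiced)
pct = percent_short(pis)
```

    A drop in `pct` between conditions corresponds to the reduction in short phonated intervals the study identifies as potentially important for fluency.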

  8. High-frequency energy in singing and speech

    NASA Astrophysics Data System (ADS)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
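
    Characterizing high-frequency energy content typically reduces to measuring what fraction of a signal's spectral energy lies above a cutoff (about 5 kHz here). A minimal FFT-based sketch on a synthetic "voice" signal; the 200 Hz and 8 kHz components and their amplitudes are arbitrary illustrations:

```python
import numpy as np

def hf_energy_ratio(signal, fs, cutoff=5000.0):
    # Fraction of spectral energy at or above `cutoff` Hz.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[freqs >= cutoff].sum() / spectrum.sum()

fs = 44100
t = np.arange(fs) / fs  # one second of signal
# Strong 200 Hz "fundamental" plus a weak 8 kHz high-frequency component.
x = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 8000 * t)
ratio = hf_energy_ratio(x, fs)
```

    Real voice spectra are far denser than this two-component toy, but the same ratio (often expressed in dB relative to total energy) is how "largely neglected" energy above 5 kHz can be quantified across phonemes, intensity levels, and production modes.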

  9. From a concept to a word in a syntactically complete sentence: an fMRI study on spontaneous language production in an overt picture description task.

    PubMed

    Grande, Marion; Meffert, Elisabeth; Schoenberger, Eva; Jung, Stefanie; Frauenrath, Tobias; Huber, Walter; Hussmann, Katja; Moormann, Mareike; Heim, Stefan

    2012-07-02

    Spontaneous language has rarely been subjected to neuroimaging studies. This study therefore introduces a newly developed method for the analysis of linguistic phenomena observed in continuous language production during fMRI. Most neuroimaging studies investigating language have so far focussed on single-word or - to a smaller extent - sentence processing, mostly due to methodological considerations. Natural language production, however, is far more than the mere combination of words into larger units. The present study therefore aimed at relating brain activation to linguistic phenomena such as word-finding difficulties or syntactic completeness in a continuous-language fMRI paradigm. A picture description task with special constraints was used to provoke hesitation phenomena and speech errors. The transcribed speech sample was segmented into events of one second, and each event was assigned to one category of a complex schema especially developed for this purpose. The main results were: conceptual planning engages bilateral activation of the precuneus; successful lexical retrieval is accompanied - particularly in comparison to unsolved word-finding difficulties - by activation of the left middle and superior temporal gyri; and syntactic completeness is reflected in activation of the left inferior frontal gyrus (IFG) (area 44). In sum, the method has proven useful for investigating the neural correlates of lexical and syntactic phenomena in an overt picture description task. This opens up new prospects for the analysis of spontaneous language production during fMRI. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Coordination of Oral and Laryngeal Movements in the Perceptually Fluent Speech of Adults Who Stutter

    ERIC Educational Resources Information Center

    Max, Ludo; Gracco, Vincent L.

    2005-01-01

    This work investigated whether stuttering and nonstuttering adults differ in the coordination of oral and laryngeal movements during the production of perceptually fluent speech. This question was addressed by completing correlation analyses that extended previous acoustic studies by others as well as inferential analyses based on the…

  11. Rehabilitation of a patient with complete mandibulectomy and partial glossectomy.

    PubMed

    Meyerson, M D; Johnson, B H; Weitzman, R S

    1980-05-01

    Following a number of radiologic and surgical procedures for the treatment of oral cancer, a patient with severe facial disfigurement and alteration of the vocal tract acquired acceptable speech. Consultation among referring physicians and speech pathologists can aid such a patient by facilitating the rehabilitative process through improvement of communicative skills.

  12. Training and Self-Reported Confidence for Dysphagia Management among Speech-Language Pathologists in the Schools

    ERIC Educational Resources Information Center

    O'Donoghue, Cynthia R.; Dean-Claytor, Ashli

    2008-01-01

    Purpose: The number of children requiring dysphagia management in the schools is increasing. This article reports survey findings relative to speech-language pathologists' (SLPs') training and self-rated confidence to treat children with swallowing and feeding disorders in the schools. Method: Surveys were completed by 222 SLPs representing…

  13. Sensitivity to Speech Rhythm Explains Individual Differences in Reading Ability Independently of Phonological Awareness

    ERIC Educational Resources Information Center

    Holliman, Andrew J.; Wood, Clare; Sheehy, Kieron

    2008-01-01

    This study considered whether sensitivity to speech rhythm can predict concurrent variance in reading attainment after individual differences in age, vocabulary, and phonological awareness have been controlled. Five- to six-year-old English-speaking children completed a battery of phonological processing assessments and reading assessments, along…

  14. Employment Myths and Realities in Speech and Language Pathology.

    ERIC Educational Resources Information Center

    Chan, T. C.; Curran, Theresa; Deskin, Caroline

    This study compared the professional expectations of graduate students in speech and language pathology (SLP) with the employment realities offered by the profession. A total of 89 graduate students enrolled in the SLP program at Valdosta State University (VSU) in Georgia during 1995-97 completed a seven-item questionnaire on employment prospects.…

  15. Autistic Spectrum Disorder: Caseload Characteristics, and Interventions Implemented by Speech-Language Therapists

    ERIC Educational Resources Information Center

    Smith, Deborah L.; Gillon, Gail T.

    2004-01-01

    This study investigated the caseload characteristics and the types of intervention implemented for children with autistic spectrum disorder (ASD). A survey was developed and distributed to 75 speech-language therapists working for Special Education within the New Zealand Ministry of Education. A total of 34 surveys were completed and returned.…

  16. Factors Associated With Negative Attitudes Toward Speaking in Preschool-Age Children Who Do and Do Not Stutter.

    PubMed

    Groner, Stephen; Walden, Tedra; Jones, Robin

    2016-01-01

    This study explored relations between the negativity of children's speech-related attitudes as measured by the Communication Attitude Test for Preschool and Kindergarten Children Who Stutter (KiddyCAT; Vanryckeghem & Brutten, 2007) and (a) age; (b) caregiver reports of stuttering and its social consequences; (c) types of disfluencies; and (d) standardized speech, vocabulary, and language scores. Participants were 46 preschool-age children who stutter (CWS; 12 females, 34 males) and 66 preschool-age children who do not stutter (CWNS; 35 females, 31 males). After a conversation, children completed standardized tests and the KiddyCAT while their caregivers completed scales on observed stuttering behaviors and their consequences. The KiddyCAT scores of both the CWS and the CWNS were significantly negatively correlated with age. Both groups' KiddyCAT scores increased with higher scores on the Speech Fluency Rating Scale of the Test of Childhood Stuttering (Gillam, Logan, & Pearson, 2009). Repetitions were a significant contributor to the CWNS's KiddyCAT scores, but no specific disfluency significantly contributed to the CWS's KiddyCAT scores. Greater articulation errors were associated with higher KiddyCAT scores in the CWNS. No standardized test scores were associated with KiddyCAT scores in the CWS. Attitudes that speech is difficult are not associated with similar aspects of communication for CWS and CWNS. Age significantly contributed to negative speech attitudes for CWS, whereas age, repetitions, and articulation errors contributed to negative speech attitudes for CWNS.

  17. Intimate insight: MDMA changes how people talk about significant others

    PubMed Central

    Baggott, Matthew J.; Kirkpatrick, Matthew G.; Bedi, Gillinder; de Wit, Harriet

    2015-01-01

    Rationale: ±3,4-methylenedioxymethamphetamine (MDMA) is widely believed to increase sociability. The drug alters speech production and fluency, and may influence speech content. Here, we investigated the effect of MDMA on speech content, which may reveal how this drug affects social interactions. Method: 35 healthy volunteers with prior MDMA experience completed this two-session, within-subjects, double-blind study during which they received 1.5 mg/kg oral MDMA and placebo. Participants completed a 5-min standardized talking task during which they discussed a close personal relationship (e.g., a friend or family member) with a research assistant. The conversations were analyzed for selected content categories (e.g., words pertaining to affect, social interaction, and cognition), using both a standard dictionary method (Pennebaker's Linguistic Inquiry and Word Count: LIWC) and a machine learning method using random forest classifiers. Results: Both analytic methods revealed that MDMA altered speech content relative to placebo. Using LIWC scores, the drug increased use of social and sexual words, consistent with reports that MDMA increases willingness to disclose. Using the machine learning algorithm, we found that MDMA increased use of social words and words relating to both positive and negative emotions. Conclusions: These findings are consistent with reports that MDMA acutely alters speech content, specifically increasing emotional and social content during a brief semistructured dyadic interaction. Studying effects of psychoactive drugs on speech content may offer new insights into drug effects on mental states, and on emotional and psychosocial interaction. PMID:25922420
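
    The two analysis approaches described above can be sketched together: dictionary-based scoring counts the proportion of words falling into predefined categories (as LIWC does), and those per-transcript counts can then feed a random forest classifier. This toy sketch assumes scikit-learn; the mini-dictionaries, transcripts, and labels are hypothetical and not the real LIWC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Tiny LIWC-style dictionaries (illustrative only, not the actual LIWC).
CATEGORIES = {
    "social": {"friend", "family", "we", "talk"},
    "posemo": {"love", "happy", "nice"},
    "negemo": {"sad", "hurt", "worry"},
}

def category_counts(text):
    # Proportion of words falling in each category, in a fixed order.
    words = text.lower().split()
    return [sum(w in vocab for w in words) / max(len(words), 1)
            for vocab in CATEGORIES.values()]

# Hypothetical transcripts labeled 1 = drug session, 0 = placebo session.
transcripts = [
    ("i love my friend we talk all the time", 1),
    ("we are happy family and friend", 1),
    ("the meeting ran long and i worry", 0),
    ("work was fine nothing to say", 0),
]
X = np.array([category_counts(t) for t, _ in transcripts])
y = np.array([label for _, label in transcripts])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

    In practice the classifier would be cross-validated on held-out sessions, and its feature importances inspected to see which word categories distinguish drug from placebo speech.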

  18. Intimate insight: MDMA changes how people talk about significant others.

    PubMed

    Baggott, Matthew J; Kirkpatrick, Matthew G; Bedi, Gillinder; de Wit, Harriet

    2015-06-01

    ±3,4-methylenedioxymethamphetamine (MDMA) is widely believed to increase sociability. The drug alters speech production and fluency, and may influence speech content. Here, we investigated the effect of MDMA on speech content, which may reveal how this drug affects social interactions. Thirty-five healthy volunteers with prior MDMA experience completed this two-session, within-subjects, double-blind study during which they received 1.5 mg/kg oral MDMA and placebo. Participants completed a five-minute standardized talking task during which they discussed a close personal relationship (e.g. a friend or family member) with a research assistant. The conversations were analyzed for selected content categories (e.g. words pertaining to affect, social interaction, and cognition), using both a standard dictionary method (Pennebaker's Linguistic Inquiry and Word Count: LIWC) and a machine learning method using random forest classifiers. Both analytic methods revealed that MDMA altered speech content relative to placebo. Using LIWC scores, the drug increased use of social and sexual words, consistent with reports that MDMA increases willingness to disclose. Using the machine learning algorithm, we found that MDMA increased use of social words and words relating to both positive and negative emotions. These findings are consistent with reports that MDMA acutely alters speech content, specifically increasing emotional and social content during a brief semistructured dyadic interaction. Studying effects of psychoactive drugs on speech content may offer new insights into drug effects on mental states, and on emotional and psychosocial interaction. © The Author(s) 2015.

  19. Partial maintenance of auditory-based cognitive training benefits in older adults

    PubMed Central

    Anderson, Samira; White-Schwoch, Travis; Choi, Hee Jae; Kraus, Nina

    2014-01-01

    The potential for short-term training to improve cognitive and sensory function in older adults has captured the public’s interest. Initial results have been promising. For example, eight weeks of auditory-based cognitive training decreases peak latencies and peak variability in neural responses to speech presented in a background of noise and instills gains in speed of processing, speech-in-noise recognition, and short-term memory in older adults. But while previous studies have demonstrated short-term plasticity in older adults, we must consider the long-term maintenance of training gains. To evaluate training maintenance, we invited participants from an earlier training study to return for follow-up testing six months after the completion of training. We found that improvements in response peak timing to speech in noise and speed of processing were maintained, but the participants did not maintain speech-in-noise recognition or memory gains. Future studies should consider factors that are important for training maintenance, including the nature of the training, compliance with the training schedule, and the need for booster sessions after the completion of primary training. PMID:25111032

  20. Reading in Subjects with an Oral Cleft: Speech, Hearing and Neuropsychological Skills

    PubMed Central

    Conrad, Amy L.; McCoy, Thomasin E.; DeVolder, Ian; Richman, Lynn C.; Nopoulos, Peg

    2014-01-01

    Objective Evaluate speech, hearing, and neuropsychological correlates to reading among children, adolescents and young adults with non-syndromic cleft of the lip and/or palate (NSCL/P). Method All testing was completed in one visit at a Midwestern university hospital. Subjects in both the NSCL/P (n = 80) and control group (n = 62) ranged in age from 7 to 26 years (average age = 17.60 and 17.66, respectively). Subjects completed a battery of standardized tests evaluating intelligence, neuropsychological skills, and word reading. Subjects with NSCL/P also underwent speech assessment and past audiology records were evaluated. Results After controlling for age and SES, subjects with cleft performed significantly worse on a test of word reading. For subjects with cleft, word reading deficits were not associated with measures of speech or hearing, but were correlated with impairments in auditory memory. Conclusions These findings show poorer reading among subjects with NSCL/P compared to those without. Further work needs to focus on correlates of reading among subjects with cleft to allow early identification and appropriate intervention/accommodation for those at risk. PMID:24188114

  1. Management of swallowing and communication difficulties in Down syndrome: A survey of speech-language pathologists.

    PubMed

    Meyer, Carly; Theodoros, Deborah; Hickson, Louise

    2017-02-01

    To explore speech pathology services for people with Down syndrome across the lifespan. Speech-language pathologists (SLPs) working in Australia were invited to complete an online survey, which enquired about the speech pathology services they had provided to client(s) with Down syndrome in the past 12 months. The data were analysed using descriptive statistics. A total of 390 SLPs completed the survey; 62% reported seeing a client with Down syndrome in the past 12 months. Most commonly, SLPs provided assessment and individual intervention for communication with varying levels of family involvement. The areas of dysphagia and/or communication addressed by SLPs, or in need of more services, differed according to the age of the person with Down syndrome. SLPs reported a number of reasons why services were restricted. There is a need to re-assess the way that SLPs currently provide services to people with Down syndrome. More research is needed to develop and evaluate treatment approaches that can be used to better address the needs of this population.

  2. Is There a Relationship Between Tic Frequency and Physiological Arousal? Examination in a Sample of Children with Co-Occurring Tic and Anxiety Disorders

    PubMed Central

    Conelea, Christine A.; Ramanujam, Krishnapriya; Walther, Michael R.; Freeman, Jennifer B.; Garcia, Abbe M.

    2014-01-01

    Stress is the contextual variable most commonly implicated in tic exacerbations. However, research examining associations between tics, stressors, and the biological stress response has yielded mixed results. This study examined whether tics occur at a greater frequency during discrete periods of heightened physiological arousal. Children with co-occurring tic and anxiety disorders (n = 8) completed two stress induction tasks (discussion of family conflict, public speech). Observational (tic frequencies) and physiological (heart rate) data were synchronized using The Observer XT, and tic frequencies were compared across periods of high and low heart rate. Tic frequencies across the entire experiment did not increase during periods of higher heart rate. During the speech task, tic frequencies were significantly lower during periods of higher heart rate. Results suggest that tic exacerbations may not be associated with heightened physiological arousal and highlight the need for further tic research using integrated measurement of behavioral and biological processes. PMID:24662238

  3. Is There a Relationship Between Tic Frequency and Physiological Arousal? Examination in a Sample of Children With Co-Occurring Tic and Anxiety Disorders.

    PubMed

    Conelea, Christine A; Ramanujam, Krishnapriya; Walther, Michael R; Freeman, Jennifer B; Garcia, Abbe M

    2014-03-01

    Stress is the contextual variable most commonly implicated in tic exacerbations. However, research examining associations between tics, stressors, and the biological stress response has yielded mixed results. This study examined whether tics occur at a greater frequency during discrete periods of heightened physiological arousal. Children with co-occurring tic and anxiety disorders (n = 8) completed two stress-induction tasks (discussion of family conflict, public speech). Observational (tic frequencies) and physiological (heart rate [HR]) data were synchronized using The Observer XT, and tic frequencies were compared across periods of high and low HR. Tic frequencies across the entire experiment did not increase during periods of higher HR. During the speech task, tic frequencies were significantly lower during periods of higher HR. Results suggest that tic exacerbations may not be associated with heightened physiological arousal and highlight the need for further tic research using integrated measurement of behavioral and biological processes. © The Author(s) 2014.

  4. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22–40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348
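
    The filtering manipulation above operates on modulation rates of the amplitude envelope: a low-pass filter keeps only slow (<4 Hz, roughly syllable-rate) modulations and removes faster ones. A minimal sketch of the principle, using a moving-average low-pass on synthetic envelopes (the filter here is an illustrative stand-in, not the design used in the study):

```python
import math

# Illustrative stand-in for envelope low-pass filtering: a 50 ms moving
# average applied to amplitude envelopes sampled at 1 kHz. A slow (~2 Hz)
# modulation passes nearly unchanged; a faster (~33 Hz) one is strongly
# attenuated. The specific window length is an assumption for the sketch.
FS = 1000      # envelope sample rate (Hz)
WINDOW = 50    # moving-average length in samples (50 ms)

def moving_average(signal, window):
    """Simple FIR low-pass: sliding-window mean (valid region only)."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

def peak(signal):
    return max(abs(x) for x in signal)

slow = [math.sin(2 * math.pi * 2 * n / FS) for n in range(FS)]    # 2 Hz
fast = [math.sin(2 * math.pi * 33 * n / FS) for n in range(FS)]   # 33 Hz

slow_out = peak(moving_average(slow, WINDOW))   # ~0.98: passes
fast_out = peak(moving_average(fast, WINDOW))   # ~0.17: attenuated
```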

  5. Identifying Residual Speech Sound Disorders in Bilingual Children: A Japanese-English Case Study

    PubMed Central

    Preston, Jonathan L.; Seki, Ayumi

    2012-01-01

    Purpose The purposes are to (1) describe the assessment of residual speech sound disorders (SSD) in bilinguals by distinguishing speech patterns associated with second language acquisition from patterns associated with misarticulations, and (2) describe how assessment of domains such as speech motor control and phonological awareness can provide a more complete understanding of SSDs in bilinguals. Method A review of Japanese phonology is provided to offer a context for understanding the transfer of Japanese to English productions. A case study of an 11-year-old is presented, demonstrating parallel speech assessments in English and Japanese. Speech motor and phonological awareness tasks were conducted in both languages. Results Several patterns were observed in the participant’s English that could be plausibly explained by the influence of Japanese phonology. However, errors indicating a residual SSD were observed in both Japanese and English. A speech motor assessment suggested possible speech motor control problems, and phonological awareness was judged to be within the typical range of performance in both languages. Conclusion Understanding the phonological characteristics of L1 can help clinicians recognize speech patterns in L2 associated with transfer. Once these differences are understood, patterns associated with a residual SSD can be identified. Supplementing a relational speech analysis with measures of speech motor control and phonological awareness can provide a more comprehensive understanding of a client’s strengths and needs. PMID:21386046

  6. The speech perception skills of children with and without speech sound disorder.

    PubMed

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes: /k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Expressed parental concern regarding childhood stuttering and the Test of Childhood Stuttering.

    PubMed

    Tumanova, Victoria; Choi, Dahye; Conture, Edward G; Walden, Tedra A

    The purpose of the present study was to determine whether the Test of Childhood Stuttering observational rating scales (TOCS; Gillam et al., 2009) (1) differed between parents who did versus did not express concern (independent from the TOCS) about their child's speech fluency; (2) correlated with children's frequency of stuttering measured during a child-examiner conversation; and (3) correlated with the length and complexity of children's utterances, as indexed by mean length of utterance (MLU). Participants were 183 young children ages 3:0-5:11. Ninety-one had parents who reported concern about their child's stuttering (65 boys, 26 girls) and 92 had parents who reported no such concern (50 boys, 42 girls). Participants' conversational speech during a child-examiner conversation was analyzed for (a) frequency of occurrence of stuttered and non-stuttered disfluencies, and (b) MLU. Besides expressing concern or lack thereof about their child's speech fluency, parents completed the TOCS observational rating scales documenting how often they observe different disfluency types in speech of their children, as well as disfluency-related consequences. There were three main findings. First, parents who expressed concern (independently from the TOCS) about their child's stuttering reported significantly higher scores on the TOCS Speech Fluency and Disfluency-Related Consequences rating scales. Second, children whose parents rated them higher on the TOCS Speech Fluency rating scale produced more stuttered disfluencies during a child-examiner conversation. Third, children with higher scores on the TOCS Disfluency-Related Consequences rating scale had shorter MLU during child-examiner conversation, across age and level of language ability. Findings support the use of the TOCS observational rating scales as one documentable, objective means to determine parental perception of and concern about their child's stuttering. 
Findings also support the notion that parents are reasonably accurate, if not reliable, judges of the quantity and quality (i.e., stuttered vs. non-stuttered) of their child's speech disfluencies. Lastly, findings that some children may decrease their verbal output in attempts to minimize instances of stuttering - as indexed by relatively low MLU and high TOCS Disfluency-Related Consequences scores - provide strong support for sampling young children's speech and language across various situations to obtain the most representative index possible of the child's MLU and associated instances of stuttering. Copyright © 2018 Elsevier Inc. All rights reserved.
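
    Mean length of utterance (MLU), the index of utterance length and complexity used above, is conventionally the mean number of morphemes per utterance in a transcript sample. A word-based approximation is sketched here, since automatic morpheme segmentation is non-trivial; the example utterances are invented:

```python
# Word-based approximation of mean length of utterance (MLU).
# MLU proper is computed over morphemes; counting words instead is a
# common rough proxy, and is all this sketch does.

def mlu_words(utterances):
    """Mean number of words per utterance across a transcript sample."""
    counts = [len(u.split()) for u in utterances if u.strip()]
    return sum(counts) / len(counts) if counts else 0.0

sample = [
    "I want the red one",      # 5 words
    "doggie go",               # 2 words
    "he is running fast",      # 4 words
]
mlu = mlu_words(sample)  # (5 + 2 + 4) / 3 ≈ 3.67
```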

  8. Determining stability in connected speech in primary progressive aphasia and Alzheimer's disease.

    PubMed

    Beales, Ashleigh; Whitworth, Anne; Cartwright, Jade; Panegyres, Peter K; Kane, Robert T

    2018-03-08

    Using connected speech to assess progressive language disorders is confounded by uncertainty around whether connected speech is stable over successive sampling, and therefore representative of an individual's performance, and whether some contexts and/or language behaviours show greater stability than others. A repeated measure, within groups, research design was used to investigate stability of a range of behaviours in the connected speech of six individuals with primary progressive aphasia and three individuals with Alzheimer's disease. Stability was evaluated, at a group and individual level, across three samples, collected over 3 weeks, involving everyday monologue, narrative and picture description, and analysed for lexical content, fluency and communicative informativeness and efficiency. Excellent and significant stability was found on the majority of measures, at a group and individual level, across all genres, with isolated measures (e.g. nouns use, communicative efficiency) showing good, but greater variability, within one of the three genres. Findings provide evidence of stability on measures of lexical content, fluency and communicative informativeness and efficiency. While preliminary evidence suggests that task selection is influential when considering stability of particular connected speech measures, replication over a larger sample is necessary to reproduce findings.

  9. Speech analyzer

    NASA Technical Reports Server (NTRS)

    Lokerson, D. C. (Inventor)

    1977-01-01

    A speech signal is analyzed by applying the signal to formant filters which derive first, second and third signals respectively representing the frequency of the speech waveform in the first, second and third formants. A first pulse train having approximately a pulse rate representing the average frequency of the first formant is derived; second and third pulse trains having pulse rates respectively representing zero crossings of the second and third formants are derived. The first formant pulse train is derived by establishing N signal level bands, where N is an integer at least equal to two. Adjacent ones of the signal bands have common boundaries, each of which is a predetermined percentage of the peak level of a complete cycle of the speech waveform.
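
    The analyzer above derives its second- and third-formant pulse trains from zero crossings, so each train's pulse rate tracks the corresponding formant frequency. A minimal software sketch of that principle, estimating a sinusoid's frequency from its zero-crossing count (the patent describes hardware circuitry; this sampled-signal version is only an analogy):

```python
import math

# Frequency estimation from zero crossings: a sinusoid at f Hz crosses
# zero 2*f times per second, so count sign changes and divide by two.
FS = 8000  # sample rate (Hz)

def zero_crossing_rate(samples, fs):
    """Estimated frequency (Hz) from the number of sign changes."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    duration = len(samples) / fs
    return crossings / (2.0 * duration)

# One second of a 100 Hz tone (small phase offset avoids samples landing
# exactly on zero, which would make the sign test ambiguous).
tone = [math.sin(2 * math.pi * 100 * n / FS + 0.1) for n in range(FS)]
estimate = zero_crossing_rate(tone, FS)  # close to 100 Hz
```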

  10. Post-treatment speech naturalness of comprehensive stuttering program clients and differences in ratings among listener groups.

    PubMed

    Teshima, Shelli; Langevin, Marilyn; Hagler, Paul; Kully, Deborah

    2010-03-01

    The purposes of this study were to investigate naturalness of the post-treatment speech of Comprehensive Stuttering Program (CSP) clients and differences in naturalness ratings by three listener groups. Listeners were 21 student speech-language pathologists, 9 community members, and 15 listeners who stutter. Listeners rated perceptually fluent speech samples of CSP clients obtained immediately post-treatment (Post) and at 5 years follow-up (F5), and speech samples of matched typically fluent (TF) speakers. A 9-point interval rating scale was used. A 3 (listener group) × 2 (time) × 2 (speaker) mixed ANOVA was used to test for differences among mean ratings. The difference between CSP Post and F5 mean ratings was statistically significant. The F5 mean rating was within the range reported for typically fluent speakers. Student speech-language pathologists were found to be less critical than community members and listeners who stutter in rating naturalness; however, there were no significant differences in ratings made by community members and listeners who stutter. Results indicate that the naturalness of post-treatment speech of CSP clients improves in the post-treatment period and that it is possible for clients to achieve levels of naturalness that appear to be acceptable to adults who stutter and that are within the range of naturalness ratings given to typically fluent speakers. Readers will be able to (a) summarize key findings of studies that have investigated naturalness ratings, and (b) interpret the naturalness ratings of Comprehensive Stuttering Program speaker samples and the ratings made by the three listener groups in this study.

  11. Systematic Studies of Modified Vocalization: The Effect of Speech Rate on Speech Production Measures During Metronome-Paced Speech in Persons who Stutter

    PubMed Central

    Davidow, Jason H.

    2013-01-01

    Background Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control speech rate between conditions limits our ability to determine if the changes were necessary for fluency. Aims This study examined the effect of speech rate on several speech production variables during one-syllable-per-beat metronomic speech, in order to determine changes that may be important for fluency during this fluency-inducing condition. Methods and Procedures Thirteen persons who stutter (PWS), aged 18–62 years, completed a series of speaking tasks. Several speech production variables were compared between conditions produced at different metronome beat rates, and between a control condition and a metronome-paced speech condition produced at a rate equal to the control condition. Outcomes & Results Vowel duration, voice onset time, pressure rise time, and phonated intervals were significantly impacted by metronome beat rate. Voice onset time and the percentage of short (30–100 ms) phonated intervals significantly decreased from the control condition to the equivalent rate metronome-paced speech condition. Conclusions & Implications A reduction in the percentage of short phonated intervals may be important for fluency during syllable-based metronome-paced speech for PWS. Future studies should continue examining the necessity of this reduction. In addition, speech rate must be controlled in future fluency-inducing condition studies, including neuroimaging investigations, in order for this research to make a substantial contribution to finding the fluency-inducing mechanism of fluency-inducing conditions. PMID:24372888

  12. Speech fluency profile on different tasks for individuals with Parkinson's disease.

    PubMed

    Juste, Fabiola Staróbole; Andrade, Claudia Regina Furquim de

    2017-07-20

    To characterize the speech fluency profile of patients with Parkinson's disease. Study participants were 40 individuals of both genders aged 40 to 80 years divided into 2 groups: Research Group - RG (20 individuals with diagnosis of Parkinson's disease) and Control Group - CG (20 individuals with no communication or neurological disorders). For all of the participants, three speech samples involving different tasks were collected: monologue, individual reading, and automatic speech. The RG presented a significant larger number of speech disruptions, both stuttering-like and typical dysfluencies, and higher percentage of speech discontinuity in the monologue and individual reading tasks compared with the CG. Both groups presented reduced number of speech disruptions (stuttering-like and typical dysfluencies) in the automatic speech task; the groups presented similar performance in this task. Regarding speech rate, individuals in the RG presented lower number of words and syllables per minute compared with those in the CG in all speech tasks. Participants of the RG presented altered parameters of speech fluency compared with those of the CG; however, this change in fluency cannot be considered a stuttering disorder.
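
    Two of the fluency measures reported above reduce to simple ratios: speech rate as words (or syllables) per minute, and percentage of speech discontinuity as disrupted words over total words. A minimal sketch with invented counts (the exact operational definitions in the study may differ):

```python
# Sketch of two common fluency measures. The counts below are
# hypothetical, purely for illustration.

def words_per_minute(word_count, duration_seconds):
    """Speech rate: words produced per minute of speaking time."""
    return word_count * 60.0 / duration_seconds

def discontinuity_percent(disrupted_words, total_words):
    """Share of words affected by any disruption
    (stuttering-like or typical disfluency)."""
    return 100.0 * disrupted_words / total_words if total_words else 0.0

rate = words_per_minute(word_count=240, duration_seconds=120)      # 120.0 wpm
disc = discontinuity_percent(disrupted_words=18, total_words=240)  # 7.5 %
```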

  13. Two Different Communication Genres and Implications for Vocabulary Development and Learning to Read

    ERIC Educational Resources Information Center

    Massaro, Dominic W.

    2015-01-01

    This study examined potential differences in vocabulary found in picture books and adults' speech to children and to other adults. Using a small sample of various sources of speech and print, Hayes observed that print had a more extensive vocabulary than speech. The current analyses of two different spoken language databases and an assembled…

  14. An Experimental Investigation of the Effect of Altered Auditory Feedback on the Conversational Speech of Adults Who Stutter

    ERIC Educational Resources Information Center

    Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark

    2010-01-01

    Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…

  15. School-Based Speech-Language Pathologists' Use of iPads

    ERIC Educational Resources Information Center

    Romane, Garvin Philippe

    2017-01-01

    This study explored school-based speech-language pathologists' (SLPs') use of iPads and apps for speech and language instruction, specifically for articulation, language, and vocabulary goals. A mostly quantitative-based survey was administered to approximately 2,800 SLPs in a K-12 setting; the final sample consisted of 189 licensed SLPs. Overall,…

  16. The Measurement of the Oral and Nasal Sound Pressure Levels of Speech

    ERIC Educational Resources Information Center

    Clarke, Wayne M.

    1975-01-01

    A nasal separator was used to measure the oral and nasal components in the speech of a normal adult Australian population. Results indicated no difference in oral and nasal sound pressure levels for read versus spontaneous speech samples; however, females tended to have a higher nasal component than did males. (Author/TL)

  17. Effects of Culture and Gender in Comprehension of Speech Acts of Indirect Request

    ERIC Educational Resources Information Center

    Shams, Rabe'a; Afghari, Akbar

    2011-01-01

    This study investigates the comprehension of indirect request speech act used by Iranian people in daily communication. The study is an attempt to find out whether different cultural backgrounds and the gender of the speakers affect the comprehension of the indirect request of speech act. The sample includes thirty males and females in Gachsaran…

  18. Phonological Memory, Attention Control, and Musical Ability: Effects of Individual Differences on Rater Judgments of Second Language Speech

    ERIC Educational Resources Information Center

    Isaacs, Talia; Trofimovich, Pavel

    2011-01-01

    This study examines how listener judgments of second language speech relate to individual differences in listeners' phonological memory, attention control, and musical ability. Sixty native English listeners (30 music majors, 30 nonmusic majors) rated 40 nonnative speech samples for accentedness, comprehensibility, and fluency. The listeners were…

  19. The Influence of Social Class and Race on Language Test Performance and Spontaneous Speech of Preschool Children.

    ERIC Educational Resources Information Center

    Johnson, Dale L.

    This investigation compares child language obtained with standardized tests and samples of spontaneous speech obtained in natural settings. It was hypothesized that differences would exist between social class and racial groups on the unfamiliar standard tests, but such differences would not be evident on spontaneous speech measures. Also, higher…

  20. Relationships among psychoacoustic judgments, speech understanding ability and self-perceived handicap in tinnitus subjects.

    PubMed

    Newman, C W; Wharton, J A; Shivapuja, B G; Jacobson, G P

    1994-01-01

    Tinnitus is often a disturbing symptom which affects 6-20% of the population. Relationships among tinnitus pitch and loudness judgments, audiometric speech understanding measures and self-perceived handicap were evaluated in a sample of subjects with tinnitus and hearing loss (THL). Data obtained from the THL sample on the audiometric speech measures were compared to the performance of an age-matched hearing loss only (HL) group. Both groups had normal hearing through 1 kHz with a sloping configuration of ≤20 dB/octave between 2-12 kHz. The THL subjects performed more poorly on the low predictability items of the Speech Perception in Noise Test, suggesting that tinnitus may interfere with the perception of speech signals having reduced linguistic redundancy. The THL subjects rated their tinnitus as annoying at relatively low sensation levels using the pitch-match frequency as the reference tone. Further, significant relationships were found between loudness judgment measures and self-rated annoyance. No predictable relationships were observed between the audiometric speech measures and perceived handicap using the Tinnitus Handicap Questionnaire. These findings support the use of self-report measures in tinnitus patients in that audiometric speech tests alone may be insufficient in describing an individual's reaction to his/her communication breakdowns.

  1. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    ERIC Educational Resources Information Center

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  2. Attitudes toward speech disorders: sampling the views of Cantonese-speaking Americans.

    PubMed

    Bebout, L; Arthur, B

    1997-01-01

    Speech-language pathologists who serve clients from cultural backgrounds that are not familiar to them may encounter culturally influenced attitudinal differences. A questionnaire with statements about 4 speech disorders (dysfluency, cleft palate, speech of the deaf, and misarticulations) was given to a focus group of Chinese Americans and a comparison group of non-Chinese Americans. The focus group was much more likely to believe that persons with speech disorders could improve their own speech by "trying hard," was somewhat more likely to say that people who use deaf speech and people with cleft palates might be "emotionally disturbed," and generally more likely to view deaf speech as a limitation. The comparison group was more pessimistic about stuttering children's acceptance by their peers than was the focus group. The two subject groups agreed about other items, such as the likelihood that older children with articulation problems are "less intelligent" than their peers.

  3. Longitudinal decline in speech production in Parkinson's disease spectrum disorders.

    PubMed

    Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray

    2017-08-01

    We examined narrative speech production longitudinally in non-demented (n = 15) and mildly demented (n = 8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean ± SD interval = 38 ± 24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n = 11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Together They Stand: Interpreting Not-At-Issue Content.

    PubMed

    Frazier, Lyn; Dillon, Brian; Clifton, Charles

    2018-06-01

    Potts unified the account of appositives, parentheticals, expressives, and honorifics as 'Not-At-Issue' (NAI) content, treating them as a natural class semantically in behaving like root (unembedded) structures, typically expressing speaker commitments, and being interpreted independently of At-Issue content. We propose that NAI content expresses a complete speech act distinct from the speech act of the containing utterance. The speech act hypothesis leads us to expect the semantic properties Potts established. We present experimental confirmation of two intuitive observations made by Potts: first that speech act adverbs should be acceptable as NAI content, supporting the speech act hypothesis; and second, that when two speech acts are expressed as successive sentences, the comprehender assumes they are related by some discourse coherence relation, whereas an NAI speech act need not bear a restrictive discourse coherence relation to its containing utterance, though overall sentences containing relevant content are rated more acceptable than those that do not. The speech act hypothesis accounts for these effects, and further accounts for why judgments of syntactic complexity or evaluations of whether or not a statement is true interact with the at-issue status of the material being judged or evaluated.

  5. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Participants were ninety-six children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  6. Working memory training to improve speech perception in noise across languages

    PubMed Central

    Ingvalson, Erin M.; Dhar, Sumitrajit; Wong, Patrick C. M.; Liu, Hanjun

    2015-01-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners. PMID:26093435

  7. Working memory training to improve speech perception in noise across languages.

    PubMed

    Ingvalson, Erin M; Dhar, Sumitrajit; Wong, Patrick C M; Liu, Hanjun

    2015-06-01

    Working memory capacity has been linked to performance on many higher cognitive tasks, including the ability to perceive speech in noise. Current efforts to train working memory have demonstrated that working memory performance can be improved, suggesting that working memory training may lead to improved speech perception in noise. A further advantage of working memory training to improve speech perception in noise is that working memory training materials are often simple, such as letters or digits, making them easily translatable across languages. The current effort tested the hypothesis that working memory training would be associated with improved speech perception in noise and that materials would easily translate across languages. Native Mandarin Chinese and native English speakers completed ten days of reversed digit span training. Reading span and speech perception in noise both significantly improved following training, whereas untrained controls showed no gains. These data suggest that working memory training may be used to improve listeners' speech perception in noise and that the materials may be quickly adapted to a wide variety of listeners.
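The training regimen in these two reports is a reversed digit span task: the listener hears a digit sequence and repeats it in reverse order, with sequence length typically adapted to performance. The abstracts do not describe the authors' software, so the following is only a minimal illustrative sketch of such a trial generator, with a simple staircase rule that is an assumption rather than the authors' protocol:

```python
import random

def make_trial(span, rng):
    """One reversed digit span trial: return the stimulus digits
    and the expected response (the digits in reverse order)."""
    digits = [rng.randint(0, 9) for _ in range(span)]
    return digits, list(reversed(digits))

def next_span(span, correct, lo=2, hi=9):
    """Simple adaptive rule (hypothetical, not the authors' procedure):
    lengthen the sequence after a correct response, shorten it after an
    error, keeping the span within [lo, hi]."""
    return min(span + 1, hi) if correct else max(span - 1, lo)

rng = random.Random(42)
digits, target = make_trial(4, rng)
# A correct reversal of `digits` would advance the next trial to span 5.
```

Because the stimuli are bare digits, translating such a task between Mandarin and English amounts to swapping the audio recordings, which is the cross-language advantage the abstract highlights.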

  8. Hearing and Speech Sciences in Educational Environment Mapping in Brazil: education, work and professional experience.

    PubMed

    Celeste, Letícia Corrêa; Zanoni, Graziela; Queiroga, Bianca; Alves, Luciana Mendonça

    2017-03-09

To map the profile of Brazilian speech therapists who report working in educational speech therapy, with regard to training, practice, and professional experience. This retrospective study was based on secondary analysis of a Federal Council of Hearing and Speech Sciences database of questionnaires from professionals reporting work in educational settings. A total of 312 questionnaires were completed, 93.3% of them by women aged 30-39 years. Most speech therapists had continued their studies after graduation, opting mostly for specialization. Almost 50% of respondents had worked in the specialty for less than six years, most often in public service (especially municipal) and in the private sector. The typical speech therapist active in the educational area in Brazil is female, pursues further study after graduation (mostly specialization in Audiology or Orofacial Motricity), has up to 10 years of experience, and works mainly in public (municipal) and private schools. Educational speech therapy practice concentrates in elementary and primary schools, with varied workloads.

  9. Children's Speech Perception in Noise: Evidence for Dissociation From Language and Working Memory.

    PubMed

    Magimairaj, Beula M; Nagaraj, Naveen K; Benafield, Natalie J

    2018-05-17

We examined the association between speech perception in noise (SPIN), language abilities, and working memory (WM) capacity in school-age children. Existing studies supporting the Ease of Language Understanding (ELU) model suggest that WM capacity plays a significant role in adverse listening situations. Eighty-three children between the ages of 7 and 11 years participated. The sample represented a continuum of individual differences in attention, memory, and language abilities. All children had normal-range hearing and normal-range nonverbal IQ. Children completed the Bamford-Kowal-Bench Speech-in-Noise Test (BKB-SIN; Etymotic Research, 2005), a selective auditory attention task, and multiple measures of language and WM. Partial correlations (controlling for age) showed significant positive associations among attention, memory, and language measures. However, BKB-SIN did not correlate significantly with any of the other measures. Principal component analysis revealed a distinct WM factor and a distinct language factor. BKB-SIN loaded robustly as a distinct 3rd factor with minimal secondary loading from sentence recall and short-term memory. Nonverbal IQ loaded as a 4th factor. Results did not support an association between SPIN and WM capacity in children. However, in this study, a single SPIN measure was used. Future studies using multiple SPIN measures are warranted. Evidence from the current study supports the use of the BKB-SIN as a clinical measure of speech perception ability because it was not influenced by variation in children's language and memory abilities. More large-scale studies in school-age children are needed to replicate the proposed role played by WM in adverse listening situations.
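The age-controlled partial correlations reported in this study can be computed by residualizing each measure on the covariate and correlating the residuals. A minimal sketch in plain NumPy (variable names are illustrative, not the study's data):

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z:
    regress each variable on z (with an intercept) and
    correlate the two residual series."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

This residual-correlation form is algebraically equivalent to the textbook first-order partial correlation formula, (r_xy - r_xz r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2)).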

  10. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
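Setting the QCP temporal weighting aside, the forward-backward idea itself can be sketched as a joint least-squares problem: every sample is predicted both from its past and from its future neighbors, roughly doubling the number of error terms available per frame. A minimal unweighted sketch (not the authors' implementation, which additionally applies the quasi-closed-phase weighting):

```python
import numpy as np

def fb_lpc(x, order):
    """Unweighted forward-backward linear prediction.

    Stacks two least-squares systems over one frame: each sample
    predicted from its 'order' past samples (forward) and from its
    'order' future samples (backward), solved for a single shared
    coefficient vector.
    """
    n = len(x)
    # Forward equations: x[t] ~ sum_k a[k] * x[t-1-k], for t = order..n-1
    A_f = np.column_stack([x[order - 1 - k : n - 1 - k] for k in range(order)])
    b_f = x[order:]
    # Backward equations: x[t] ~ sum_k a[k] * x[t+1+k], for t = 0..n-1-order
    A_b = np.column_stack([x[1 + k : n - order + k + 1] for k in range(order)])
    b_b = x[: n - order]
    A = np.vstack([A_f, A_b])
    b = np.concatenate([b_f, b_b])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

Formant candidates would then be read off from the angles of the complex roots of the resulting prediction polynomial; the hard-limiting weighting function discussed in the abstract would enter as per-equation weights on the stacked system.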

  11. The impact of workplace factors on evidence-based speech-language pathology practice for children with autism spectrum disorders.

    PubMed

    Cheung, Gladys; Trembath, David; Arciuli, Joanne; Togher, Leanne

    2013-08-01

    Although researchers have examined barriers to implementing evidence-based practice (EBP) at the level of the individual, little is known about the effects workplaces have on speech-language pathologists' implementation of EBP. The aim of this study was to examine the impact of workplace factors on the use of EBP amongst speech-language pathologists who work with children with Autism Spectrum Disorder (ASD). This study sought to (a) explore views about EBP amongst speech-language pathologists who work with children with ASD, (b) identify workplace factors which, in the participants' opinions, acted as barriers or enablers to their provision of evidence-based speech-language pathology services, and (c) examine whether or not speech-language pathologists' responses to workplace factors differed based on the type of workplace or their years of experience. A total of 105 speech-language pathologists from across Australia completed an anonymous online questionnaire. The results indicate that, although the majority of speech-language pathologists agreed that EBP is necessary, they experienced barriers to their implementation of EBP including workplace culture and support, lack of time, cost of EBP, and the availability and accessibility of EBP resources. The barriers reported by speech-language pathologists were similar, regardless of their workplace (private practice vs organization) and years of experience.

  12. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds

    PubMed Central

    Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.

    2017-01-01

Background: Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis: We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods: VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results: Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion: This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520

  13. Understanding Why Speech-Language Pathologists Rarely Pursue a PhD in Communication Sciences and Disorders

    ERIC Educational Resources Information Center

    Myotte, Theodore; Hutchins, Tiffany L.; Cannizzaro, Michael S.; Belin, Gayle

    2011-01-01

Master's-level speech-language pathologists in communication sciences and disorders (n = 122) completed a survey soliciting their reasons for not pursuing doctoral study. Factor analysis revealed a four-factor solution including one factor reflecting a lack of interest in doctoral study (Factor 2) and one reflecting practical financial concerns (Factor…

  14. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  15. Voice Disorder Management Competencies: A Survey of School-Based Speech-Language Pathologists in Nebraska

    ERIC Educational Resources Information Center

    Teten, Amy F.; DeVeney, Shari L.; Friehe, Mary J.

    2016-01-01

    Purpose: The purpose of this survey was to determine the self-perceived competence levels in voice disorders of practicing school-based speech-language pathologists (SLPs) and identify correlated variables. Method: Participants were 153 master's level, school-based SLPs with a Nebraska teaching certificate and/or licensure who completed a survey,…

  16. Beliefs regarding the Impact of Accent within Speech-Language Pathology Practice Areas

    ERIC Educational Resources Information Center

    Levy, Erika S.; Crowley, Catherine J.

    2012-01-01

    With the demographic shifts in the United States, it is increasingly the case that speech-language pathologists (SLPs) come from different language backgrounds from those of their clients and have nonnative accents in their languages of service. An anonymous web-based survey was completed by students and clinic directors in SLP training programs…

  17. Self-Assessed Hearing Handicap in Older Adults with Poorer-than-Predicted Speech Recognition in Noise

    ERIC Educational Resources Information Center

    Eckert, Mark A.; Matthews, Lois J.; Dubno, Judy R.

    2017-01-01

    Purpose: Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. Method: We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982)…

  18. Impact of Noise and Working Memory on Speech Processing in Adults with and without ADHD

    ERIC Educational Resources Information Center

    Michalek, Anne M. P.

    2012-01-01

    Auditory processing of speech is influenced by internal (i.e., attention, working memory) and external factors (i.e., background noise, visual information). This study examined the interplay among these factors in individuals with and without ADHD. All participants completed a listening in noise task, two working memory capacity tasks, and two…

  19. Comparison of Transducers and Intraoral Placement Options for Measuring Lingua-Palatal Contact Pressure during Speech

    ERIC Educational Resources Information Center

    Searl, Jeffrey P.

    2003-01-01

    Two studies were completed that focused on instrumentation and procedural issues associated with measurement of lingua-palatal contact pressure (LPCP) during speech. In the first experiment, physical features and response characteristics of 2 miniature pressure transducers (Entran EPI-BO and Precision Measurement 60S) were evaluated to identify a…

  20. Phonation Interval Modification and Speech Performance Quality during Fluency-Inducing Conditions by Adults Who Stutter

    ERIC Educational Resources Information Center

    Ingham, Roger J.; Bothe, Anne K.; Wang, Yuedong; Purkhiser, Krystal; New, Anneliese

    2012-01-01

    Purpose: To relate changes in four variables previously defined as characteristic of normally fluent speech to changes in phonatory behavior during oral reading by persons who stutter (PWS) and normally fluent controls under multiple fluency-inducing (FI) conditions. Method: Twelve PWS and 12 controls each completed 4 ABA experiments. During A…

  1. Developing a bilingual "persian cued speech" website for parents and professionals of children with hearing impairment.

    PubMed

    Movallali, Guita; Sajedi, Firoozeh

    2014-03-01

The use of the internet as a source of information gathering, self-help and support is becoming increasingly recognized. Parents and professionals of children with hearing impairment have been shown to seek information about different communication approaches online. Cued Speech is a very new approach for Persian-speaking pupils. Our aim was to develop a useful website giving related information about Persian Cued Speech to parents and professionals of children with hearing impairment. All Cued Speech websites from different countries that fell within the first ten pages of the Google and Yahoo search engines were assessed. Main subjects and links were studied. All related information was gathered from the websites, textbooks, articles, etc. Using a framework that combined several criteria for health-information websites, we developed the Persian Cued Speech website for three distinct audiences (parents, professionals and children). An accurate, complete, accessible and readable resource about Persian Cued Speech for parents and professionals is now available.

  2. Speech versus non-speech as irrelevant sound: controlling acoustic variation.

    PubMed

    Little, Jason S; Martin, Frances Heritage; Thomson, Richard H S

    2010-09-01

    Functional differences between speech and non-speech within the irrelevant sound effect were investigated using repeated and changing formats of irrelevant sounds in the form of intelligible words and unintelligible signal correlated noise (SCN) versions of the words. Event-related potentials were recorded from 25 females aged between 18 and 25 while they completed a serial order recall task in the presence of irrelevant sound or silence. As expected and in line with the changing-state hypothesis both words and SCN produced robust changing-state effects. However, words produced a greater changing-state effect than SCN indicating that the spectral detail inherent within speech accounts for the greater irrelevant sound effect and changing-state effect typically observed with speech. ERP data in the form of N1 amplitude was modulated within some irrelevant sound conditions suggesting that attentional aspects are involved in the elicitation of the irrelevant sound effect. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  3. The Atlanta Motor Speech Disorders Corpus: Motivation, Development, and Utility.

    PubMed

    Laures-Gore, Jacqueline; Russell, Scott; Patel, Rupal; Frankel, Michael

    2016-01-01

    This paper describes the design and collection of a comprehensive spoken language dataset from speakers with motor speech disorders in Atlanta, Ga., USA. This collaborative project aimed to gather a spoken database consisting of nonmainstream American English speakers residing in the Southeastern US in order to provide a more diverse perspective of motor speech disorders. Ninety-nine adults with an acquired neurogenic disorder resulting in a motor speech disorder were recruited. Stimuli include isolated vowels, single words, sentences with contrastive focus, sentences with emotional content and prosody, sentences with acoustic and perceptual sensitivity to motor speech disorders, as well as 'The Caterpillar' and 'The Grandfather' passages. Utility of this data in understanding the potential interplay of dialect and dysarthria was demonstrated with a subset of the speech samples existing in the database. The Atlanta Motor Speech Disorders Corpus will enrich our understanding of motor speech disorders through the examination of speech from a diverse group of speakers. © 2016 S. Karger AG, Basel.

  4. Measuring Speech Comprehensibility in Students with Down Syndrome

    PubMed Central

    Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

Purpose: There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method: Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility constituted the ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted the orthography-based measure of speech comprehensibility. Results: Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion: Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989
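The two measures compared in this study can be expressed compactly: per-student Likert ratings averaged across raters and segments, an orthography-based score equal to the proportion of utterance attempts fully glossed, and criterion-related validity as the Pearson correlation between the two. A toy numeric sketch (all values fabricated for illustration, not the study's data, with continuous stand-ins for Likert ratings):

```python
import numpy as np

# Toy setup: 10 students, 4 raters, four 5-min segments each.
rng = np.random.default_rng(1)
latent = rng.uniform(1, 7, size=10)                      # latent comprehensibility
ratings = latent[:, None, None] + rng.normal(0, 1, size=(10, 4, 4))

# Ratings-based measure: mean rating across raters and segments.
ratings_score = ratings.mean(axis=(1, 2))

# Orthography-based measure: proportion of 50 utterance attempts
# fully glossed (simulated as a noisy function of the latent trait).
attempts = np.full(10, 50.0)
glossed = np.clip(latent / 7 * attempts + rng.normal(0, 3, size=10), 0, 50)
ortho_score = glossed / attempts

# Criterion-related validity: Pearson correlation between the measures.
r = np.corrcoef(ratings_score, ortho_score)[0, 1]
```

Averaging over 4 raters x 4 segments shrinks rater noise by a factor of four, which is why the study's generalizability coefficient reaches an acceptable level with this design.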

  5. Speech and pause characteristics in multiple sclerosis: A preliminary study of speakers with high and low neuropsychological test performance

    PubMed Central

Feenaughty, Lynda; Tjaden, Kris; Benedict, Ralph H.B.; Weinstock-Guttman, Bianca

    2017-01-01

    This preliminary study investigated how cognitive-linguistic status in multiple sclerosis (MS) is reflected in two speech tasks (i.e. oral reading, narrative) that differ in cognitive-linguistic demand. Twenty individuals with MS were selected to comprise High and Low performance groups based on clinical tests of executive function and information processing speed and efficiency. Ten healthy controls were included for comparison. Speech samples were audio-recorded and measures of global speech timing were obtained. Results indicated predicted differences in global speech timing (i.e. speech rate and pause characteristics) for speech tasks differing in cognitive-linguistic demand, but the magnitude of these task-related differences was similar for all speaker groups. Findings suggest that assumptions concerning the cognitive-linguistic demands of reading aloud as compared to spontaneous speech may need to be re-considered for individuals with cognitive impairment. Qualitative trends suggest that additional studies investigating the association between cognitive-linguistic and speech motor variables in MS are warranted. PMID:23294227

  6. Cued Speech for Enhancing Speech Perception and First Language Development of Children With Cochlear Implants

    PubMed Central

    Leybaert, Jacqueline; LaSasso, Carol J.

    2010-01-01

    Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after the implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants. PMID:20724357

  7. Monkey vocal tracts are speech-ready.

    PubMed

    Fitch, W Tecumseh; de Boer, Bart; Mathur, Neil; Ghazanfar, Asif A

    2016-12-01

    For four decades, the inability of nonhuman primates to produce human speech sounds has been claimed to stem from limitations in their vocal tract anatomy, a conclusion based on plaster casts made from the vocal tract of a monkey cadaver. We used x-ray videos to quantify vocal tract dynamics in living macaques during vocalization, facial displays, and feeding. We demonstrate that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language, showing that previous techniques based on postmortem samples drastically underestimated primate vocal capabilities. Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it.

  8. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults

    PubMed Central

    Rosemann, Stephanie; Gießing, Carsten; Özyurt, Jale; Carroll, Rebecca; Puschmann, Sebastian; Thiel, Christiane M.

    2017-01-01

Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now, no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial least squares analysis revealed six variables that significantly predicted vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test and task switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome. PMID:28638329

  9. Speech entrainment compensates for Broca's area damage.

    PubMed

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-08-01

Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to SE. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during SE versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of SE to improve speech production and may help select patients for SE treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. "The caterpillar": a novel reading passage for assessment of motor speech disorders.

    PubMed

    Patel, Rupal; Connaghan, Kathryn; Franco, Diana; Edsall, Erika; Forgit, Dory; Olsen, Laura; Ramage, Lianna; Tyler, Emily; Russell, Scott

    2013-02-01

    A review of the salient characteristics of motor speech disorders and common assessment protocols revealed the need for a novel reading passage tailored specifically to differentiate between and among the dysarthrias (DYSs) and apraxia of speech (AOS). "The Caterpillar" passage was designed to provide a contemporary, easily read, contextual speech sample with specific tasks (e.g., prosodic contrasts, words of increasing length and complexity) targeted to inform the assessment of motor speech disorders. Twenty-two adults, 15 with DYS or AOS and 7 healthy controls (HC), were recorded reading "The Caterpillar" passage to demonstrate its utility in examining motor speech performance. Analysis of performance across a subset of segmental and prosodic variables illustrated that "The Caterpillar" passage showed promise for extracting individual profiles of impairment that could augment current assessment protocols and inform treatment planning in motor speech disorders.

  11. Perceptual Measures of Speech from Individuals with Parkinson's Disease and Multiple Sclerosis: Intelligibility and beyond

    ERIC Educational Resources Information Center

    Sussman, Joan E.; Tjaden, Kris

    2012-01-01

    Purpose: The primary purpose of this study was to compare percent correct word and sentence intelligibility scores for individuals with multiple sclerosis (MS) and Parkinson's disease (PD) with scaled estimates of speech severity obtained for a reading passage. Method: Speech samples for 78 talkers were judged, including 30 speakers with MS, 16…

  12. Do Native Speakers of North American and Singapore English Differentially Perceive Comprehensibility in Second Language Speech?

    ERIC Educational Resources Information Center

    Saito, Kazuya; Shintani, Natsuko

    2016-01-01

    The current study examined the extent to which native speakers of North American and Singapore English differentially perceive the comprehensibility (ease of understanding) of second language (L2) speech. Spontaneous speech samples elicited from 50 Japanese learners of English with various proficiency levels were first rated by 10 Canadian and 10…

  13. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

    The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing possible by researchers and clinicians working on a daily basis with families and…

  14. Music and Speech Perception in Children Using Sung Speech

    PubMed Central

    Nie, Yingjiu; Galvin, John J.; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners. PMID:29609496

  15. Music and Speech Perception in Children Using Sung Speech.

    PubMed

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  16. TOEFL iBT Speaking Test Scores as Indicators of Oral Communicative Language Proficiency

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela

    2012-01-01

    Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…

  17. An Analysis of the Use and Structure of Logic in Japanese Argument.

    ERIC Educational Resources Information Center

    Hazen, Michael David

    A study was conducted to determine if the Japanese use logic and argument in different ways than do Westerners. The study analyzed sample rebuttal speeches (in English) of 14 Japanese debaters using the Toulmin model of argument. In addition, it made comparisons with a sample of speeches made by 5 American high school debaters. Audiotapes of the…

  18. Risk and Protective Factors Associated with Speech and Language Impairment in a Nationally Representative Sample of 4- to 5-Year-Old Children

    ERIC Educational Resources Information Center

    Harrison, Linda J.; McLeod, Sharynne

    2010-01-01

    Purpose: To determine risk and protective factors for speech and language impairment in early childhood. Method: Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community…

  19. Culturally diverse attitudes and beliefs of students majoring in speech-language pathology.

    PubMed

    Franca, Maria Claudia; Smith, Linda McCabe; Nichols, Jane Luanne; Balan, Dianna Santos

    Academic education in speech-language pathology should prepare students to provide professional services that mirror current knowledge, skills, and scope of practice in a pluralistic society. This study examines the impact of speech-language pathology (SLP) students' prior multicultural experiences and previous formal education on attitudes and beliefs toward language diversity. A survey investigating SLP students' attitudes toward language diversity was administered. After the research study and instructions for completing the consent form and questionnaire were presented by a research assistant, an announcement was given by a graduate student who speaks English as a second language with an accent. The participants then completed a questionnaire containing questions related to attitudes about the presentation of the announcement in particular and toward language diversity in general. Responses suggested a relationship among self-reported cultural bias, the ability to concentrate on accented speech, and the extent of interaction with individuals from culturally and linguistically diverse (CLD) backgrounds. Additional outcomes revealed that cultural bias may be predicted by factors related to the amount of CLD exposure. Results of this study indicated critical areas that need to be considered when developing curricula in speech-language pathology programs. The results will be useful in determining procedures applicable to larger investigations and encourage future research on attitudes and beliefs toward aspects of cultural diversity.

  20. Speech coding at 4800 bps for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Gersho, Allen; Chan, Wai-Yip; Davidson, Grant; Chen, Juin-Hwey; Yong, Mei

    1988-01-01

    A speech compression project has recently been completed to develop a speech coding algorithm suitable for operation in a mobile satellite environment, aimed at providing telephone-quality natural speech at 4.8 kbps. The work has resulted in two alternative techniques which achieve reasonably good communications quality at 4.8 kbps while tolerating vehicle noise and rather severe channel impairments. The algorithms are embodied in a compact, self-contained prototype consisting of two AT&T 32-bit floating-point DSP32 digital signal processors (DSPs). A Motorola 68HC11 microcomputer chip serves as the board controller and interface handler. On a wirewrapped card, the prototype occupies a circuit footprint of only 200 sq cm and consumes about 9 W of power.

  1. Stuttering on function words in bilingual children who stutter: A preliminary study.

    PubMed

    Gkalitsiou, Zoi; Byrd, Courtney T; Bedore, Lisa M; Taliancich-Klinger, Casey L

    2017-01-01

    Evidence suggests young monolingual children who stutter (CWS) are more disfluent on function than content words, particularly when produced in the initial utterance position. The purpose of the present preliminary study was to investigate whether young bilingual CWS present with this same pattern. The narrative and conversational samples of four bilingual Spanish- and English-speaking CWS were analysed. All four bilingual participants produced significantly more stuttering on function words compared to content words, irrespective of their position in the utterance, in their Spanish narrative and conversational speech samples. Three of the four participants also demonstrated more stuttering on function compared to content words in their narrative speech samples in English, but only one participant produced more stuttering on function than content words in her English conversational sample. These preliminary findings are discussed relative to linguistic planning and language proficiency and their potential contribution to stuttered speech.

  2. Smartphone, tablet computer and e-reader use by people with vision impairment.

    PubMed

    Crossland, Michael D; Silva, Rui S; Macedo, Antonio F

    2014-09-01

    Consumer electronic devices such as smartphones, tablet computers, and e-book readers have become far more widely used in recent years. Many of these devices contain accessibility features such as large print and speech. Anecdotal experience suggests people with vision impairment frequently make use of these systems. Here we survey people with self-identified vision impairment to determine their use of this equipment. An internet-based survey was advertised to people with vision impairment by word of mouth, social media, and online. Respondents were asked for demographic information, what devices they owned, what they used these devices for, and what accessibility features they used. One hundred and thirty-two complete responses were received. Twenty-six percent of the sample reported that they had no vision, and the remainder reported they had low vision. One hundred and seven people (81%) reported using a smartphone. Those with no vision were as likely to use a smartphone or tablet as those with low vision. Speech was found useful by 59% of smartphone users. Fifty-one percent of smartphone owners used the camera and screen as a magnifier. Forty-eight percent of the sample used a tablet computer, and 17% used an e-book reader. The most frequently cited reasons for not using these devices included cost and lack of interest. Smartphones, tablet computers, and e-book readers can be used by people with vision impairment. Speech is used by people with low vision as well as those with no vision. Many of our (self-selected) group used their smartphone camera and screen as a magnifier, and others used the camera flash as a spotlight. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  3. White noise speech illusion and psychosis expression: An experimental investigation of psychosis liability

    PubMed Central

    Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Wichers, Marieke; Simons, Claudia J. P.; Rutten, Bart P. F.; van Os, Jim

    2017-01-01

    Background An association between white noise speech illusion and psychotic symptoms has been reported in patients and their relatives. This supports the theory that bottom-up and top-down perceptual processes are involved in the mechanisms underlying perceptual abnormalities. However, findings in nonclinical populations have been conflicting. Objectives The aim of this study was to examine the association between white noise speech illusion and subclinical expression of psychotic symptoms in a nonclinical sample. Findings were compared to previous results to investigate potential methodology dependent differences. Methods In a general population adolescent and young adult twin sample (n = 704), the association between white noise speech illusion and subclinical psychotic experiences, using the Structured Interview for Schizotypy—Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE), was analyzed using multilevel logistic regression analyses. Results Perception of any white noise speech illusion was not associated with either positive or negative schizotypy in the general population twin sample, using the method by Galdos et al. (2011) (positive: ORadjusted: 0.82, 95% CI: 0.6–1.12, p = 0.217; negative: ORadjusted: 0.75, 95% CI: 0.56–1.02, p = 0.065) and the method by Catalan et al. (2014) (positive: ORadjusted: 1.11, 95% CI: 0.79–1.57, p = 0.557). No association was found between CAPE scores and speech illusion (ORadjusted: 1.25, 95% CI: 0.88–1.79, p = 0.220). For the Catalan et al. (2014) but not the Galdos et al. (2011) method, a negative association was apparent between positive schizotypy and speech illusion with positive or negative affective valence (ORadjusted: 0.44, 95% CI: 0.24–0.81, p = 0.008). Conclusion Contrary to findings in clinical populations, white noise speech illusion may not be associated with psychosis proneness in nonclinical populations. PMID:28832672

  4. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddler’s speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
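    The weighting idea behind the WSSA can be illustrated with a toy scoring function: each transcribed sound receives a penalty that grows with how far the production strays from the target, and the total is normalized to a 0-100 accuracy score. The error categories and numeric weights below are hypothetical stand-ins; the published measure derives its own weights from levels of phonetic accuracy:

    ```python
    # Hypothetical weights: heavier penalties for errors further from the target
    # sound (the actual WSSA weighting scheme differs from these values).
    WEIGHTS = {"correct": 0.0, "distortion": 1.0, "substitution": 2.0, "omission": 3.0}
    MAX_WEIGHT = max(WEIGHTS.values())

    def weighted_accuracy(error_labels):
        """Map a per-sound list of transcription judgments to a 0-100 score."""
        if not error_labels:
            raise ValueError("no transcribed sounds")
        penalty = sum(WEIGHTS[label] for label in error_labels)
        return 100.0 * (1.0 - penalty / (MAX_WEIGHT * len(error_labels)))

    sample = ["correct", "correct", "distortion", "substitution", "correct", "omission"]
    score = weighted_accuracy(sample)  # 6 of a possible 18 penalty points -> ~66.7
    ```

    A binary percent-consonants-correct measure would score the distortion, substitution, and omission identically; the differential weights are what let the score track graded changes in phonetic accuracy over time.
    
    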

  5. What's on Your Mind? How Private Speech Mediates Cognition during Initial Non-Primary Language Learning

    ERIC Educational Resources Information Center

    Stafford, Catherine A.

    2013-01-01

    Vygotskian sociocultural theory of mind holds that language mediates thought. According to the theory, speech does not merely put completed thought into words; rather, it is a tool to refine thought as it evolves in real time. This study investigated from a sociocultural theory of mind perspective how nine beginning learners of Latin used private…

  6. From Storage to Manipulation: How the Neural Correlates of Verbal Working Memory Reflect Varying Demands on Inner Speech

    ERIC Educational Resources Information Center

    Marvel, Cherie L.; Desmond, John E.

    2012-01-01

    The ability to store and manipulate online information may be enhanced by an inner speech mechanism that draws upon motor brain regions. Neural correlates of this mechanism were examined using event-related functional magnetic resonance imaging (fMRI). Sixteen participants completed two conditions of a verbal working memory task. In both…

  7. Self-Esteem in Children with Speech and Language Impairment: An Exploratory Study of Transition from Language Units to Mainstream School

    ERIC Educational Resources Information Center

    Rannard, Anne; Glenn, Sheila

    2009-01-01

    Little is known about the self-perceptions of children moving from language units to mainstream school. This longitudinal exploratory study examined the effects of transition on perceptions of competence and acceptance in one group of children with speech and language impairment. Seven children and their teachers completed the Pictorial Scale of…

  8. Constructing a Scale to Assess L2 Written Speech Act Performance: WDCT and E-Mail Tasks

    ERIC Educational Resources Information Center

    Chen, Yuan-shan; Liu, Jianda

    2016-01-01

    This study reports the development of a scale to evaluate the speech act performance by intermediate-level Chinese learners of English. A qualitative analysis of the American raters' comments was conducted on learner scripts in response to a total of 16 apology and request written discourse completion task (WDCT) situations. The results showed…

  9. Connected word recognition using a cascaded neuro-computational model

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  10. Speech after Radial Forearm Free Flap Reconstruction of the Tongue: A Longitudinal Acoustic Study of Vowel and Diphthong Sounds

    ERIC Educational Resources Information Center

    Laaksonen, Juha-Pertti; Rieger, Jana; Happonen, Risto-Pekka; Harris, Jeffrey; Seikaly, Hadi

    2010-01-01

    The purpose of this study was to use acoustic analyses to describe speech outcomes over the course of 1 year after radial forearm free flap (RFFF) reconstruction of the tongue. Eighteen Canadian English-speaking females and males with reconstruction for oral cancer had speech samples recorded (pre-operative, and 1 month, 6 months, and 1 year…

  11. Speech Sound Disorders in Preschool Children: Correspondence between Clinical Diagnosis and Teacher and Parent Report

    ERIC Educational Resources Information Center

    Harrison, Linda J.; McLeod, Sharynne; McAllister, Lindy; McCormack, Jane

    2017-01-01

    This study sought to assess the level of correspondence between parent and teacher report of concern about young children's speech and specialist assessment of speech sound disorders (SSD). A sample of 157 children aged 4-5 years was recruited in preschools and long day care centres in Victoria and New South Wales (NSW). SSD was assessed…

  12. Regular/Irregular is Not the Whole Story: The Role of Frequency and Generalization in the Acquisition of German Past Participle Inflection

    ERIC Educational Resources Information Center

    Szagun, Gisela

    2011-01-01

    The acquisition of German participle inflection was investigated using spontaneous speech samples from six children between 1;4 and 3;8 and ten children between 1;4 and 2;10, recorded longitudinally at regular intervals. Child-directed speech was also analyzed. In adult and child speech weak participles were significantly more frequent than…

  13. Loss of regional accent after damage to the speech production network.

    PubMed

    Berthier, Marcelo L; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael

    2015-01-01

    Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca's aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires's accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriate long pauses, and monotonous intonation. Phonemic production remained similar to those of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic normal prosodic features of Spanish prosody were preserved, features intrinsic to melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody. Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features.

  14. Loss of regional accent after damage to the speech production network

    PubMed Central

    Berthier, Marcelo L.; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael

    2015-01-01

    Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca’s aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires’s accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriate long pauses, and monotonous intonation. Phonemic production remained similar to those of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic normal prosodic features of Spanish prosody were preserved, features intrinsic to melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody. Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features. PMID:26594161

  15. Immediate effects of AAF devices on the characteristics of stuttering: a clinical analysis.

    PubMed

    Unger, Julia P; Glück, Christian W; Cholewa, Jürgen

    2012-06-01

    The present study investigated the immediate effects of altered auditory feedback (AAF) and one Inactive Condition (AAF parameters set to 0) on clinical attributes of stuttering during scripted and spontaneous speech. Two commercially available, portable AAF devices were used to create the combined delayed auditory feedback (DAF) and frequency-altered feedback (FAF) effects. Thirty adults who stutter, aged 18-68 years (M=36.5; SD=15.2), participated in this investigation. Each subject produced four sets of 5-min oral readings, three sets of 5-min monologs, as well as 10-min dialogs. These speech samples were analyzed to detect changes in descriptive features of stuttering (frequency, duration, speech/articulatory rate, core behaviors) across the various speech samples and within two SSI-4 (Riley, 2009) based severity ratings. A statistically significant difference was found in the frequency of stuttered syllables (%SS) during both Active Device conditions (p < .001) for all speech samples. The most sizable reductions in %SS occurred within scripted speech. In the analysis of stuttering type, it was found that blocks were reduced significantly (Device A: p=.017; Device B: p=.049). To evaluate the impact on severe and mild stuttering, participants were grouped into two SSI-4 based categories: mild and moderate-severe. During the Inactive Condition, those participants within the moderate-severe group (p=.024) showed a statistically significant reduction in overall disfluencies. This result indicates that active AAF parameters alone may not be the sole cause of a fluency enhancement when using a technical speech aid. The reader will learn and be able to describe: (1) currently available scientific evidence on the use of altered auditory feedback (AAF) during scripted and spontaneous speech, (2) which characteristics of stuttering are impacted by an AAF device (frequency, duration, core behaviors, speech and articulatory rate, stuttering severity), (3) the effects of an Inactive Condition on people who stutter (PWS) falling into two severity groups, and (4) how the examined participants perceived the use of AAF devices. Copyright © 2012 Elsevier Inc. All rights reserved.
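    The frequency measure reported above, percent stuttered syllables (%SS), is a simple ratio of stuttered to total syllables. A sketch with hypothetical syllable counts (not the study's data) shows how per-condition scores and reductions would be computed:

    ```python
    def percent_ss(stuttered, total):
        """Percent of syllables stuttered (%SS) in a speech sample."""
        if total <= 0:
            raise ValueError("sample must contain syllables")
        return 100.0 * stuttered / total

    # Hypothetical counts for one speaker across four feedback conditions.
    counts = {
        "no_device": (42, 600),
        "inactive": (35, 610),
        "device_a_active": (18, 590),
        "device_b_active": (21, 605),
    }
    scores = {cond: percent_ss(s, n) for cond, (s, n) in counts.items()}
    reduction_a = scores["no_device"] - scores["device_a_active"]  # drop in %SS
    ```

    Grouping speakers by an SSI-4-style severity cut-off and comparing such per-condition %SS scores is the kind of analysis the study reports.
    
    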

  16. Do not throw out the baby with the bath water: choosing an effective baseline for a functional localizer of speech processing.

    PubMed

    Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal

    2013-05-01

    Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in left inferior frontal gyrus decays significantly faster than the response to speech, suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.

  17. Temporal modulations in speech and music.

    PubMed

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
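    A minimal sketch of how a temporal modulation spectrum can be computed: rectify the waveform to obtain an intensity envelope, low-pass it, and take the spectrum of the envelope. This is a simplified stand-in for the paper's analysis pipeline; the moving-average low-pass, the cutoff, and the synthetic test signal are assumptions for illustration only:

    ```python
    import numpy as np

    def modulation_spectrum(signal, fs, env_cutoff=64.0):
        """Spectrum of the sound-intensity envelope (simplified sketch).

        The envelope is the rectified signal smoothed by a moving average whose
        length roughly corresponds to a low-pass at `env_cutoff` Hz.
        """
        rectified = np.abs(signal)
        win = max(1, int(fs / env_cutoff))       # crude low-pass via moving average
        envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
        envelope = envelope - envelope.mean()    # drop the DC component
        spectrum = np.abs(np.fft.rfft(envelope))
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
        return freqs, spectrum

    # A 4 Hz amplitude modulation on a 440 Hz carrier should peak near 4 Hz.
    fs = 8000
    t = np.arange(0, 2.0, 1.0 / fs)
    carrier = np.sin(2 * np.pi * 440 * t)
    am = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
    freqs, spec = modulation_spectrum(am, fs)
    band = (freqs > 0.25) & (freqs < 32)         # the 0.25-32 Hz range discussed above
    peak_hz = freqs[band][np.argmax(spec[band])]
    ```

    Applied to speech, the band peak would be expected near 5 Hz, and for music near 2 Hz, per the findings summarized above.
    
    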

  18. The Role of Clinical Experience in Speech-Language Pathologists' Perception of Subphonemic Detail in Children's Speech

    PubMed Central

    Munson, Benjamin; Johnson, Julie M.; Edwards, Jan

    2013-01-01

    Purpose This study examined whether experienced speech-language pathologists differ from inexperienced people in their perception of phonetic detail in children's speech. Method Convenience samples comprising 21 experienced speech-language pathologists and 21 inexperienced listeners participated in a series of tasks in which they made visual-analog scale (VAS) ratings of children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-/ɡ/ in word-initial position. Listeners rated the perceptual distance between individual productions and ideal productions. Results The experienced listeners' ratings differed from inexperienced listeners' ratings in four ways: they had higher intra-rater reliability, they showed less bias toward a more frequent sound, their ratings were more closely related to the acoustic characteristics of the children's speech, and their responses were related to a different set of predictor variables. Conclusions Results suggest that experience working as a speech-language pathologist leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed. PMID:22230182

  19. Developmental profile of speech-language and communicative functions in an individual with the Preserved Speech Variant of Rett syndrome

    PubMed Central

    Marschik, Peter B.; Vollmann, Ralf; Bartl-Pokorny, Katrin D.; Green, Vanessa A.; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2018-01-01

    Objective We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant (PSV) of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. Methods For this study we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples, and picture stories to elicit narrative competences. Results Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Conclusion Future research should take into account a potentially considerable discordance between formal and functional language use and interpret communicative acts more cautiously. PMID:23870013

  20. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    PubMed

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT-typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into account a potentially considerable discordance between formal and functional language use and interpret communicative acts more cautiously.

  1. Hearing impaired speech in noisy classrooms

    NASA Astrophysics Data System (ADS)

    Shahin, Kimary; McKellin, William H.; Jamieson, Janet; Hodgson, Murray; Pichora-Fuller, M. Kathleen

    2005-04-01

    Noisy classrooms have been shown to induce among students patterns of interaction similar to those used by hearing impaired people [W. H. McKellin et al., GURT (2003)]. In this research, the speech of children in a noisy classroom setting was investigated to determine if noisy classrooms have an effect on students' speech. Audio recordings were made of the speech of students during group work in their regular classrooms (grades 1-7), and of the speech of the same students in a sound booth. Noise level readings in the classrooms were also recorded. Each student's noisy and quiet environment speech samples were acoustically analyzed for prosodic and segmental properties (f0, pitch range, pitch variation, phoneme duration, vowel formants), and compared. The analysis showed that the students' speech in the noisy classrooms had characteristics of the speech of hearing-impaired persons [e.g., R. O'Halpin, Clin. Ling. and Phon. 15, 529-550 (2001)]. Some educational implications of our findings were identified. [Work supported by the Peter Wall Institute for Advanced Studies, University of British Columbia.]

  2. Creating speech-synchronized animation.

    PubMed

    King, Scott A; Parent, Richard E

    2005-01-01

    We present a facial model designed primarily to support animated speech. Our facial model takes facial geometry as input and transforms it into a parametric deformable model. The facial model uses a muscle-based parameterization, allowing for easier integration between speech synchrony and facial expressions. Our facial model has a highly deformable lip model that is grafted onto the input facial geometry to provide the necessary geometric complexity needed for creating lip shapes and high-quality renderings. Our facial model also includes a highly deformable tongue model that can represent the shapes the tongue undergoes during speech. We add teeth, gums, and upper palate geometry to complete the inner mouth. To decrease the processing time, we hierarchically deform the facial surface. We also present a method to animate the facial model over time to create animated speech using a model of coarticulation that blends visemes together using dominance functions. We treat visemes as a dynamic shaping of the vocal tract by describing visemes as curves instead of keyframes. We show the utility of the techniques described in this paper by implementing them in a text-to-audiovisual-speech system that creates animation of speech from unrestricted text. The facial and coarticulation models must first be interactively initialized. The system then automatically creates accurate real-time animated speech from the input text. It is capable of cheaply producing tremendous amounts of animated speech with very low resource requirements.

  3. Articulation of sounds in Serbian language in patients who learned esophageal speech successfully.

    PubMed

    Vekić, Maja; Veselinović, Mila; Mumović, Gordana; Mitrović, Slobodan M

    2014-01-01

    Articulation of pronounced sounds during the training and subsequent use of esophageal speech is very important because it contributes significantly to the intelligibility and aesthetics of spoken words and sentences, as well as of speech and language itself. The aim of this research was to determine the quality of articulation of the sounds of Serbian language by groups of sounds in patients who had learned esophageal speech successfully, as well as the effect of age and tooth loss on the quality of articulation. This retrospective-prospective study included 16 patients who had undergone total laryngectomy. Having completed the rehabilitation of speech, these patients used esophageal voice and speech. The quality of articulation was tested by the "Global test of articulation." Esophageal speech was rated with grades 5, 4 and 3 in 62.5% of patients, 31.3% of patients, and one patient, respectively. Serbian was the native language of all the patients. The study included 30 sounds of Serbian language in 16 subjects (480 sounds in total). Only two patients (12.5%) articulated all sounds properly, whereas 87.5% of them had incorrect articulation. The articulation of affricates and fricatives, especially the sound /h/ from the group of fricatives, was found to be the worst in the patients who had successfully mastered esophageal speech. The age and tooth loss of patients who have mastered esophageal speech do not affect the articulation of sounds in Serbian language.

  4. Phonation interval modification and speech performance quality during fluency-inducing conditions by adults who stutter

    PubMed Central

    Ingham, Roger J.; Bothe, Anne K.; Wang, Yuedong; Purkhiser, Krystal; New, Anneliese

    2012-01-01

    Purpose To relate changes in four variables previously defined as characteristic of normally fluent speech to changes in phonatory behavior during oral reading by persons who stutter (PWS) and normally fluent controls under multiple fluency-inducing (FI) conditions. Method Twelve PWS and 12 controls each completed 4 ABA experiments. During A phases, participants read normally. B phases were 4 different FI conditions: auditory masking, chorus reading, whispering, and rhythmic stimulation. Dependent variables were the durations of accelerometer-recorded phonated intervals; self-judged speech effort; and observer-judged stuttering frequency, speech rate, and speech naturalness. The method enabled a systematic replication of Ingham et al. (2009). Results All FI conditions resulted in decreased stuttering and decreases in the number of short phonated intervals, as compared with baseline conditions, but the only FI condition that satisfied all four characteristics of normally fluent speech was chorus reading. Increases in longer phonated intervals were associated with decreased stuttering but also with poorer naturalness and/or increased speech effort. Previous findings concerning the effects of FI conditions on speech naturalness and effort were replicated. Conclusions Measuring all relevant characteristics of normally fluent speech, in the context of treatments that aim to reduce the occurrence of short-duration PIs, may aid the search for an explanation of the nature of stuttering and may also maximize treatment outcomes for adults who stutter. PMID:22365886

  5. Evaluation of selected speech parameters after prosthesis supply in patients with maxillary or mandibular defects.

    PubMed

    Müller, Rainer; Höhlein, Andreas; Wolf, Annette; Markwardt, Jutta; Schulz, Matthias C; Range, Ursula; Reitemeier, Bernd

    2013-01-01

    Ablative surgery of oropharyngeal tumors frequently leads to defects in the speech organs, resulting in impairment of speech up to the point of unintelligibility. The aim of the present study was the assessment of selected parameters of speech with and without resection prostheses. The speech sounds of 22 patients suffering from maxillary and mandibular defects were recorded using a digital audio tape (DAT) recorder with and without resection prostheses. Evaluation of the resonance and the production of the sounds /s/, /sch/, and /ch/ was performed by 2 experienced speech therapists. Additionally, the patients completed a non-standardized questionnaire containing a linguistic self-assessment. After prosthesis supply, the number of patients with rhinophonia aperta decreased from 7 to 2 while the number of patients with intelligible speech increased from 2 to 20. Correct production of the sounds /s/, /sch/, and /ch/ increased from 2 to 13 patients. A significant improvement of the evaluated parameters could be observed only in patients with maxillary defects. The linguistic self-assessment showed a higher satisfaction in patients with maxillary defects. In patients with maxillary defects due to ablative tumor surgery, an increase in speech performance and intelligibility is possible by supplying resection prostheses. © 2013 S. Karger GmbH, Freiburg.

  6. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes, allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.

  7. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    PubMed Central

    Anumanchipalli, Gopala K.; Dichter, Benjamin; Chaisanguanthum, Kris S.; Johnson, Keith; Chang, Edward F.

    2016-01-01

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial—especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics. PMID:27019106
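    The non-negative matrix factorization step described above can be illustrated with scikit-learn's NMF. The frame-by-feature matrix below is synthetic stand-in data (not the study's articulator measurements), built to be exactly rank 3 so the recovered basis shapes reconstruct it closely:

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical data: 200 vocal-tract "frames" x 40 nonnegative shape features
    rng = np.random.default_rng(0)
    W_true = rng.random((200, 3))   # hidden per-frame combination weights
    H_true = rng.random((3, 40))    # hidden basis shapes
    X = W_true @ H_true             # observed frames are mixtures of the bases

    # Factor X ~ W @ H with a small number of basis shapes
    model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)      # per-frame weights (usable as classifier features)
    H = model.components_           # basis vocal-tract shapes

    # Relative reconstruction error should be small for this rank-3 data
    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    ```

    In the study's setting, the rows of H would correspond to recurring vocal-tract configurations, and the low-dimensional weights W would feed the vowel classifier.
    
    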

  8. The Cleft Care UK study. Part 4: perceptual speech outcomes.

    PubMed

    Sell, D; Mildinhall, S; Albery, L; Wills, A K; Sandy, J R; Ness, A R

    2015-11-01

    To describe the perceptual speech outcomes from the Cleft Care UK (CCUK) study and compare them to the 1998 Clinical Standards Advisory Group (CSAG) audit. A cross-sectional study of 248 children born with complete unilateral cleft lip and palate, between 1 April 2005 and 31 March 2007 who underwent speech assessment. Centre-based specialist speech and language therapists (SLT) took speech audio-video recordings according to nationally agreed guidelines. Two independent listeners undertook the perceptual analysis using the CAPS-A Audit tool. Intra- and inter-rater reliability were tested. For each speech parameter of intelligibility/distinctiveness, hypernasality, palatal/palatalization, backed to velar/uvular, glottal, weak and nasalized consonants, and nasal realizations, there was strong evidence that speech outcomes were better in the CCUK children compared to CSAG children. The parameters which did not show improvement were nasal emission, nasal turbulence, hyponasality and lateral/lateralization. These results suggest that centralization of cleft care into high volume centres has resulted in improvements in UK speech outcomes in five-year-olds with unilateral cleft lip and palate. This may be associated with the development of a specialized workforce. Nevertheless, there still remains a group of children with significant difficulties at school entry. © The Authors. Orthodontics & Craniofacial Research Published by John Wiley & Sons Ltd.

  9. High-Resolution, Non-Invasive Imaging of Upper Vocal Tract Articulators Compatible with Human Brain Recordings

    DOE PAGES

    Bouchard, Kristofer E.; Conant, David F.; Anumanchipalli, Gopala K.; ...

    2016-03-28

    A complete neurobiological understanding of speech motor control requires determination of the relationship between simultaneously recorded neural activity and the kinematics of the lips, jaw, tongue, and larynx. Many speech articulators are internal to the vocal tract, and therefore simultaneously tracking the kinematics of all articulators is nontrivial, especially in the context of human electrophysiology recordings. Here, we describe a noninvasive, multi-modal imaging system to monitor vocal tract kinematics, demonstrate this system in six speakers during production of nine American English vowels, and provide new analysis of such data. Classification and regression analysis revealed considerable variability in the articulator-to-acoustic relationship across speakers. Non-negative matrix factorization extracted basis sets capturing vocal tract shapes, allowing for higher vowel classification accuracy than traditional methods. Statistical speech synthesis generated speech from vocal tract measurements, and we demonstrate perceptual identification. We demonstrate the capacity to predict lip kinematics from ventral sensorimotor cortical activity. These results demonstrate a multi-modal system to non-invasively monitor articulator kinematics during speech production, describe novel analytic methods for relating kinematic data to speech acoustics, and provide the first decoding of speech kinematics from electrocorticography. These advances will be critical for understanding the cortical basis of speech production and the creation of vocal prosthetics.

  10. Lexical influences on competing speech perception in younger, middle-aged, and older adults

    PubMed Central

    Helfer, Karen S.; Jesse, Alexandra

    2015-01-01

    The influence of lexical characteristics of words in to-be-attended and to-be-ignored speech streams was examined in a competing speech task. Older, middle-aged, and younger adults heard pairs of low-cloze probability sentences in which the frequency or neighborhood density of words was manipulated in either the target speech stream or the masking speech stream. All participants also completed a battery of cognitive measures. As expected, for all groups, target words that occur frequently or that are from sparse lexical neighborhoods were easier to recognize than words that are infrequent or from dense neighborhoods. Compared to other groups, these neighborhood density effects were largest for older adults; the frequency effect was largest for middle-aged adults. Lexical characteristics of words in the to-be-ignored speech stream also affected recognition of to-be-attended words, but only when overall performance was relatively good (that is, when younger participants listened to the speech streams at a more advantageous signal-to-noise ratio). For these listeners, to-be-ignored masker words from sparse neighborhoods interfered with recognition of target speech more than masker words from dense neighborhoods. Amount of hearing loss and cognitive abilities relating to attentional control modulated overall performance as well as the strength of lexical influences. PMID:26233036

  11. Missouri Assessment Program, Spring 2002: Social Studies, Grade 8. Released Items [and] Scoring Guide.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    This booklet contains sample items from the Missouri social studies test for eighth graders. The first sample is based on a speech delivered by Elizabeth Cady Stanton in the mid-1880s, which proposed a new approach to raising girls. Students are directed to use their own knowledge and the speech excerpt to do three activities. The second sample…

  12. Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age

    PubMed Central

    Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik

    2015-01-01

    Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259

  13. Communication Deficits and the Motor System: Exploring Patterns of Associations in Autism Spectrum Disorder (ASD).

    PubMed

    Mody, M; Shui, A M; Nowinski, L A; Golas, S B; Ferrone, C; O'Rourke, J A; McDougle, C J

    2017-01-01

    Many children with autism spectrum disorder (ASD) have notable difficulties in motor, speech and language domains. The connection between motor skills (oral-motor, manual-motor) and speech and language deficits reported in other developmental disorders raises important questions about a potential relationship between motor skills and speech-language deficits in ASD. To this end, we examined data from children with ASD (n = 1781), 2-17 years of age, enrolled in the Autism Speaks-Autism Treatment Network (AS-ATN) registry who completed a multidisciplinary evaluation that included diagnostic, physical, cognitive and behavioral assessments as part of a routine standard of care protocol. After adjusting for age, non-verbal IQ, Attention Deficit Hyperactivity Disorder (ADHD) medication use, and muscle tone, separate multiple linear regression analyses revealed significant positive associations of fine motor skills (FM) with both expressive language (EL) and receptive language (RL) skills in an impaired FM subgroup; in contrast, the impaired gross motor (GM) subgroup showed no association with EL but a significant negative association with RL. Similar analyses between motor skills and interpersonal relationships across the sample found both GM skills and FM skills to be associated with social interactions. These results suggest potential differences in the contributions of fine versus gross motor skills to autistic profiles and may provide another lens with which to view communication differences across the autism spectrum for use in treatment interventions.

  14. Affective Properties of Mothers' Speech to Infants With Hearing Impairment and Cochlear Implants

    PubMed Central

    Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. Results Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting compared with such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. Conclusions Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants. PMID:25679195
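    Low-pass filtering that preserves intonation (the F0 contour) while removing most segmental detail can be sketched as below. The 400 Hz cutoff and Butterworth design are assumptions for illustration, not the study's exact filter:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def lowpass_prosody(x, fs, cutoff=400.0, order=4):
        """Low-pass filter speech so pitch-range content survives
        while higher-frequency phonetic detail is attenuated."""
        sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
        # Zero-phase filtering avoids shifting the intonation contour in time
        return sosfiltfilt(sos, x)

    # Toy check: a 150 Hz "voice pitch" tone passes; a 3 kHz tone is removed
    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    low = np.sin(2 * np.pi * 150 * t)
    high = np.sin(2 * np.pi * 3000 * t)
    y = lowpass_prosody(low + high, fs)

    def rms(v):
        return float(np.sqrt(np.mean(v ** 2)))

    low_kept = rms(y) / rms(low)  # close to 1: the pitch-band component is retained
    ```

    The filtered toy signal keeps essentially all of the 150 Hz component and almost none of the 3 kHz component, mirroring the "intonation but not speech information" manipulation the abstract describes.
    
    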

  15. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  16. [Comparative studies of the quality of the esophageal voice following laryngectomy: the insufflation test and reverse speech audiometry].

    PubMed

    Böhme, G; Clasen, B

    1989-09-01

    We carried out a transnasal insufflation test according to Blom and Singer on 27 laryngectomy patients, as well as a speech communication test using reverse speech audiometry, i.e., the post-laryngectomy telephone test according to Zenner and Pfrang. The combined evaluation of both tests provided basic information on the quality of the esophageal voice and the functionality of the speech organs. Both tests can be carried out quickly and easily and allow a differentiated statement on the suitability of an esophageal voice, electronic speech aids, and voice prostheses. Three groups could be identified from our results: 1. The insufflation test and the reverse speech test yielded concordant good or very good results; the esophageal voice was well understood. 2. Complete failure in both the insufflation and telephone tests calls for further examination to exclude spasm, stricture, diverticula and scarred membrane stenosis, as well as tumor relapse in the region of the pharyngo-esophageal segment. 3. In cases of normal insufflation but considerably reduced speech communication in the telephone test, organic causes in the area of the nozzle, cranial nerve deficits, and socially determined causes must be looked for.

  17. Perception and analysis of Spanish accents in English speech

    NASA Astrophysics Data System (ADS)

    Chism, Cori; Lass, Norman

    2002-05-01

    The purpose of the present study was to determine what relates most closely to the degree of perceived foreign accent in the English speech of native Spanish speakers: intonation, vowel length, stress, voice onset time (VOT), or segmental accuracy. Nineteen native English-speaking listeners rated speech samples from 7 native English speakers and 15 native Spanish speakers for comprehensibility and degree of foreign accent. The speech samples were analyzed spectrographically and perceptually to obtain numerical values for each variable. Correlation coefficients were computed to determine the relationship between these values and the average foreign accent scores. Results showed that the average foreign accent scores were statistically significantly correlated with three variables: the length of stressed vowels (r = -0.48, p = 0.05), voice onset time (r = -0.62, p = 0.01), and segmental accuracy (r = 0.92, p = 0.001). Implications of these findings and suggestions for future research are discussed.
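    Correlation coefficients of the kind reported above can be computed with scipy. The per-speaker values below are invented purely for illustration; only the analysis pattern (Pearson r between an acoustic measure and mean accent ratings) follows the abstract:

    ```python
    from scipy.stats import pearsonr

    # Hypothetical mean foreign-accent rating per speaker (higher = stronger accent)
    accent_scores = [1.2, 2.5, 3.1, 4.0, 4.8, 5.5, 6.1]
    # Hypothetical segmental error counts for the same speakers
    segmental_errors = [2, 5, 7, 9, 12, 14, 16]

    # Pearson correlation and its two-tailed p-value
    r, p = pearsonr(accent_scores, segmental_errors)
    ```

    With these made-up values, r is strongly positive, mirroring the direction of the study's segmental-accuracy finding (r = 0.92): more segmental errors go with stronger perceived accent.
    
    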

  18. Auditory and cognitive factors underlying individual differences in aided speech-understanding among older adults

    PubMed Central

    Humes, Larry E.; Kidd, Gary R.; Lentz, Jennifer J.

    2013-01-01

    This study was designed to address individual differences in aided speech understanding among a relatively large group of older adults. The group of older adults consisted of 98 adults (50 female and 48 male) ranging in age from 60 to 86 (mean = 69.2). Hearing loss was typical for this age group and about 90% had not worn hearing aids. All subjects completed a battery of tests, including cognitive (6 measures), psychophysical (17 measures), and speech-understanding (9 measures), as well as the Speech, Spatial, and Qualities of Hearing (SSQ) self-report scale. Most of the speech-understanding measures made use of competing speech and the non-speech psychophysical measures were designed to tap phenomena thought to be relevant for the perception of speech in competing speech (e.g., stream segregation, modulation-detection interference). All measures of speech understanding were administered with spectral shaping applied to the speech stimuli to fully restore audibility through at least 4000 Hz. The measures used were demonstrated to be reliable in older adults and, when compared to a reference group of 28 young normal-hearing adults, age-group differences were observed on many of the measures. Principal-components factor analysis was applied successfully to reduce the number of independent and dependent (speech understanding) measures for a multiple-regression analysis. Doing so yielded one global cognitive-processing factor and five non-speech psychoacoustic factors (hearing loss, dichotic signal detection, multi-burst masking, stream segregation, and modulation detection) as potential predictors. To this set of six potential predictor variables were added subject age, Environmental Sound Identification (ESI), and performance on the text-recognition-threshold (TRT) task (a visual analog of interrupted speech recognition). These variables were used to successfully predict one global aided speech-understanding factor, accounting for about 60% of the variance. 
PMID:24098273
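    The two-stage analysis described in this record (principal-components reduction of a large test battery, followed by multiple regression of a speech-understanding outcome on the retained factors) can be sketched as follows. This is a minimal NumPy illustration on synthetic stand-in data; the listener and measure counts follow the abstract, but every value, loading, and coefficient below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 98, 23, 6  # listeners, raw measures, retained factors

# Hypothetical stand-in data: the raw measures load on a few latent
# abilities, so the principal components have real structure to find.
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.5 * rng.normal(size=(n, p))
y = latent @ np.array([0.9, 0.6, 0.4, 0.0, 0.0, 0.0]) \
    + 0.7 * rng.normal(size=n)  # aided speech-understanding score

# Step 1: principal-components reduction of the measure battery.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:k].T  # one score per listener per factor

# Step 2: multiple regression of the outcome on the factor scores.
A = np.column_stack([np.ones(n), factors])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.var(y - A @ beta) / np.var(y)
print(f"variance accounted for: R^2 = {r2:.2f}")
```

    The point of the reduction step is that the regression then uses a handful of uncorrelated factor scores rather than 23 collinear raw measures, which is what makes the reported variance-accounted-for figure interpretable.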

  19. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus)

    PubMed Central

    Flaherty, Mary; Dent, Micheal L.; Sawusch, James R.

    2017-01-01

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable initial stop consonants and if this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with “d” or “t” and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, if a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal. PMID:28562597


  1. Speech vs. singing: infants choose happier sounds

    PubMed Central

    Corbeil, Marieve; Trehub, Sandra E.; Peretz, Isabelle

    2013-01-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4–13 months of age were exposed to happy-sounding infant-directed speech vs. hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken vs. sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song vs. a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age. PMID:23805119

  2. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    PubMed

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. The speech naturalness of people who stutter speaking under delayed auditory feedback as perceived by different groups of listeners.

    PubMed

    Van Borsel, John; Eeckhout, Hannelore

    2008-09-01

    This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF) with particular attention for possible listener differences. Three panels of judges consisting of 14 stuttering individuals, 14 speech language pathologists, and 14 naive listeners rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgements than speech language pathologists and stuttering listeners, and speech language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural speech was judged to sound. The three listener groups also showed similar trends with regard to naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

  4. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This claim was investigated. The procedures supplied for the assessment of readers and non-readers were also examined to see whether they yield equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to the reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200 syllables long are thus the minimum appropriate for obtaining stable Riley severity scores, and the procedural variants provide similar severity scores.

  5. A Window into the Intoxicated Mind? Speech as an Index of Psychoactive Drug Effects

    PubMed Central

    Bedi, Gillinder; Cecchi, Guillermo A; Slezak, Diego F; Carrillo, Facundo; Sigman, Mariano; de Wit, Harriet

    2014-01-01

    Abused drugs can profoundly alter mental states in ways that may motivate drug use. These effects are usually assessed with self-report, an approach that is vulnerable to biases. Analyzing speech during intoxication may present a more direct, objective measure, offering a unique ‘window' into the mind. Here, we employed computational analyses of speech semantic and topological structure after ±3,4-methylenedioxymethamphetamine (MDMA; ‘ecstasy') and methamphetamine in 13 ecstasy users. In 4 sessions, participants completed a 10-min speech task after MDMA (0.75 and 1.5 mg/kg), methamphetamine (20 mg), or placebo. Latent Semantic Analyses identified the semantic proximity between speech content and concepts relevant to drug effects. Graph-based analyses identified topological speech characteristics. Group-level drug effects on semantic distances and topology were assessed. Machine-learning analyses (with leave-one-out cross-validation) assessed whether speech characteristics could predict drug condition in the individual subject. Speech after MDMA (1.5 mg/kg) had greater semantic proximity than placebo to the concepts friend, support, intimacy, and rapport. Speech on MDMA (0.75 mg/kg) had greater proximity to empathy than placebo. Conversely, speech on methamphetamine was further from compassion than placebo. Classifiers discriminated between MDMA (1.5 mg/kg) and placebo with 88% accuracy, and MDMA (1.5 mg/kg) and methamphetamine with 84% accuracy. For the two MDMA doses, the classifier performed at chance. These data suggest that automated semantic speech analyses can capture subtle alterations in mental state, accurately discriminating between drugs. The findings also illustrate the potential for automated speech-based approaches to characterize clinically relevant alterations to mental state, including those occurring in psychiatric illness. PMID:24694926
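    The two analyses at the core of this record, semantic proximity of speech to target concepts and leave-one-out classification of drug condition, can be sketched with toy vectors. This is a simplified stand-in: the vectors below are random surrogates for real Latent Semantic Analysis embeddings, and the classifier is a plain nearest-centroid rule rather than the machine-learning model the authors used.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(1)
dim = 50
concept = rng.normal(size=dim)  # stand-in LSA vector for e.g. "friend"

# Hypothetical speech-sample vectors for 13 subjects per condition.
mdma = 0.6 * concept + rng.normal(scale=0.5, size=(13, dim))
placebo = rng.normal(size=(13, dim))

# Semantic proximity: mean cosine similarity to the concept vector.
prox_mdma = np.mean([cosine(v, concept) for v in mdma])
prox_plac = np.mean([cosine(v, concept) for v in placebo])

# Leave-one-out classification with a nearest-centroid rule.
X = np.vstack([mdma, placebo])
y = np.array([1] * 13 + [0] * 13)
hits = 0
for i in range(len(y)):
    keep = np.arange(len(y)) != i           # hold out sample i
    c1 = X[keep & (y == 1)].mean(axis=0)    # drug centroid
    c0 = X[keep & (y == 0)].mean(axis=0)    # placebo centroid
    pred = 1 if cosine(X[i], c1) > cosine(X[i], c0) else 0
    hits += int(pred == y[i])
print(prox_mdma, prox_plac, hits / len(y))
```

    Leave-one-out cross-validation matters here because with only 13 subjects per condition, any classifier evaluated on its own training data would give a badly inflated accuracy estimate.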

  6. Early Postimplant Speech Perception and Language Skills Predict Long-Term Language and Neurocognitive Outcomes Following Pediatric Cochlear Implantation

    PubMed Central

    Kronenberger, William G.; Castellanos, Irina; Pisoni, David B.

    2017-01-01

    Purpose We sought to determine whether speech perception and language skills measured early after cochlear implantation in children who are deaf, and early postimplant growth in speech perception and language skills, predict long-term speech perception, language, and neurocognitive outcomes. Method Thirty-six long-term users of cochlear implants, implanted at an average age of 3.4 years, completed measures of speech perception, language, and executive functioning an average of 14.4 years postimplantation. Speech perception and language skills measured in the 1st and 2nd years postimplantation and open-set word recognition measured in the 3rd and 4th years postimplantation were obtained from a research database in order to assess predictive relations with long-term outcomes. Results Speech perception and language skills at 6 and 18 months postimplantation were correlated with long-term outcomes for language, verbal working memory, and parent-reported executive functioning. Open-set word recognition was correlated with early speech perception and language skills and long-term speech perception and language outcomes. Hierarchical regressions showed that early speech perception and language skills at 6 months postimplantation and growth in these skills from 6 to 18 months both accounted for substantial variance in long-term outcomes for language and verbal working memory that was not explained by conventional demographic and hearing factors. Conclusion Speech perception and language skills measured very early postimplantation, and early postimplant growth in speech perception and language, may be clinically relevant markers of long-term language and neurocognitive outcomes in users of cochlear implants. Supplemental materials https://doi.org/10.23641/asha.5216200 PMID:28724130

  8. Only Behavioral But Not Self-Report Measures of Speech Perception Correlate with Cognitive Abilities.

    PubMed

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A

    2016-01-01

    Good speech perception and communication skills in everyday life are crucial for participation and well-being, and are therefore an overarching aim of auditory rehabilitation. Both behavioral and self-report measures can be used to assess these skills. However, correlations between behavioral and self-report speech perception measures are often low. One possible explanation is that there is a mismatch between the specific situations used in the assessment of these skills in each method, and a more careful matching across situations might improve consistency of results. The role that cognition plays in specific speech situations may also be important for understanding communication, as speech perception tests vary in their cognitive demands. In this study, the role of executive function, working memory (WM) and attention in behavioral and self-report measures of speech perception was investigated. Thirty existing hearing aid users with mild-to-moderate hearing loss aged between 50 and 74 years completed a behavioral test battery with speech perception tests ranging from phoneme discrimination in modulated noise (easy) to words in multi-talker babble (medium) and keyword perception in a carrier sentence against a distractor voice (difficult). In addition, a self-report measure of aided communication, residual disability from the Glasgow Hearing Aid Benefit Profile, was obtained. Correlations between speech perception tests and self-report measures were higher when specific speech situations across both were matched. Cognition correlated with behavioral speech perception test results but not with self-report. Only the most difficult speech perception test, keyword perception in a carrier sentence with a competing distractor voice, engaged executive functions in addition to WM. In conclusion, any relationship between behavioral and self-report speech perception is not mediated by a shared correlation with cognition.

  10. Text as a Supplement to Speech in Young and Older Adults

    PubMed Central

    Krull, Vidya; Humes, Larry E.

    2015-01-01

    Objective The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information would result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) would be greater in young adults than in older adults; and 3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance on the auditory-only and visual-text-only conditions. Finally, the relationships between the perceptual measures and the other independent measures were examined using principal-component factor analyses, followed by regression analyses.
Results Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131

  11. An Analysis of the Variations from Standard English Pronunciation in the Phonetic Performance of Two Groups of Nonstandard-English-Speaking Children. Final Report.

    ERIC Educational Resources Information Center

    Williams, Frederick, Ed.; And Others

    In this second of two studies conducted with portions of the National Speech and Hearing Survey data, the investigators analyzed the phonetic variants from standard American English in the speech of two groups of nonstandard-English-speaking children. The study used samples of free speech and performance on the Goldman-Fristoe Test of Articulation…

  12. Risk and protective factors associated with speech and language impairment in a nationally representative sample of 4- to 5-year-old children.

    PubMed

    Harrison, Linda J; McLeod, Sharynne

    2010-04-01

    To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as being predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern and (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.
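    The screening approach in this record, fitting each candidate predictor in a bivariate logistic regression and then retesting the survivors concurrently in a multivariate model, can be sketched as below. This is a hedged illustration on simulated data: the three predictors, their effect sizes, and the sample size are all invented for the example, and scikit-learn's `LogisticRegression` stands in for whatever statistical package the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500  # hypothetical subsample of children

# Hypothetical binary predictors: male sex and ongoing hearing problems
# (risk factors), and a sociable temperament (protective factor).
X = rng.integers(0, 2, size=(n, 3)).astype(float)
logit = -1.0 + 0.9 * X[:, 0] + 1.2 * X[:, 1] - 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Bivariate screen: each predictor fitted on its own...
for j, name in enumerate(["male", "hearing problems", "sociable"]):
    m = LogisticRegression().fit(X[:, [j]], y)
    print(f"{name}: bivariate OR = {np.exp(m.coef_[0, 0]):.2f}")

# ...then the multivariate model with all predictors entered at once.
full = LogisticRegression().fit(X, y)
odds_ratios = np.exp(full.coef_[0])
print("multivariate ORs:", np.round(odds_ratios, 2))
```

    The contrast between the two stages is the substantive point: a predictor can look significant on its own yet drop out once correlated predictors are entered concurrently, which is why only 19 of the 29 bivariately confirmed factors survived the multivariate analyses.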

  13. Differential effects of speech situations on mothers' and fathers' infant-directed and dog-directed speech: An acoustic analysis.

    PubMed

    Gergely, Anna; Faragó, Tamás; Galambos, Ágoston; Topál, József

    2017-10-23

    There is growing evidence that dog-directed and infant-directed speech have similar acoustic characteristics, such as a high overall pitch, a wide pitch range, and attention-getting devices. However, it is still unclear whether dog- and infant-directed speech have gender- or context-dependent acoustic features. In the present study, we collected comparable infant-, dog-, and adult-directed speech samples (IDS, DDS, and ADS) in four different speech situations (Storytelling, Task solving, Teaching, and Fixed sentences situations); we obtained the samples from parents whose infants were younger than 30 months of age and who also had a pet dog at home. We found that ADS was different from IDS and DDS, independently of the speakers' gender and the given situation. We also found a higher overall pitch in DDS than in IDS during the free speech situations. Our results show that both parents hyperarticulate their vowels when talking to children but not when addressing dogs: this result is consistent with the goal of hyperspeech in language tutoring. Mothers, however, exaggerate their vowels for their infants under 18 months more than fathers do. Our findings suggest that IDS and DDS have context-dependent features and support the notion that people adapt their prosodic features to the acoustic preferences and emotional needs of their audience.

  14. Speech Spectrum's Correlation with Speakers' Eysenck Personality Traits

    PubMed Central

    Hu, Chao; Wang, Qiandong; Short, Lindsey A.; Fu, Genyue

    2012-01-01

    The current study explored the correlation between speakers' Eysenck personality traits and speech spectrum parameters. Forty-six subjects completed the Eysenck Personality Questionnaire. They were instructed to verbally answer the questions shown on a computer screen and their responses were recorded by the computer. Spectrum parameters of /sh/ and /i/ were analyzed by Praat voice software. Formant frequencies of the consonant /sh/ in lying responses were significantly lower than that in truthful responses, whereas no difference existed on the vowel /i/ speech spectrum. The second formant bandwidth of the consonant /sh/ speech spectrum was significantly correlated with the personality traits of Psychoticism, Extraversion, and Neuroticism, and the correlation differed between truthful and lying responses, whereas the first formant frequency of the vowel /i/ speech spectrum was negatively correlated with Neuroticism in both response types. The results suggest that personality characteristics may be conveyed through the human voice, although the extent to which these effects are due to physiological differences in the organs associated with speech or to a general Pygmalion effect is yet unknown. PMID:22439014

  15. Examination of Learner and Situation Level Variables: Choice of Speech Act and Request Strategy by Spanish L2 Learners

    ERIC Educational Resources Information Center

    Kuriscak, Lisa

    2015-01-01

    This study focuses on variation within a group of learners of Spanish (N = 253) who produced requests and complaints via a written discourse completion task. It examines the effects of learner and situational variables on production--the effect of proficiency and addressee-gender on speech-act choice and the effect of perception of imposition on…

  16. An acoustical assessment of pitch-matching accuracy in relation to speech frequency, speech frequency range, age and gender in preschool children

    NASA Astrophysics Data System (ADS)

    Trollinger, Valerie L.

    This study investigated the relationship of acoustical measurements of singing accuracy to speech fundamental frequency, speech fundamental frequency range, age, and gender in preschool-aged children. Seventy subjects from Southeastern Pennsylvania; the San Francisco Bay Area, California; and Terre Haute, Indiana, participated in the study. Speech frequency was measured by having the subjects participate in spontaneous and guided speech activities with the researcher, with 18 diverse samples extracted from each subject's recording and acoustically analyzed for fundamental frequency in Hz with the CSpeech computer program. The fundamental frequencies were averaged together to derive a mean speech frequency score for each subject. Speech range was calculated by subtracting the lowest fundamental frequency produced from the highest fundamental frequency produced, yielding a range in Hz. Singing accuracy was measured by having the subjects each echo-sing six randomized patterns using the pitches Middle C, D, E, F♯, G and A (440), using the solfege syllables of Do and Re, which were recorded by a 5-year-old female model. For each subject, 18 samples of singing were recorded. All samples were analyzed by CSpeech for fundamental frequency. For each subject, deviation scores in Hz were derived by calculating the difference between what the model sang in Hz and what the subject sang in response. Individual scores for each child consisted of an overall mean total deviation frequency, mean frequency deviations for each pattern, and a mean frequency deviation for each pitch.
Pearson correlations, MANOVA and ANOVA, multiple regression, and discriminant analysis revealed the following findings: (1) moderate but significant (p < .001) relationships emerged between mean speech frequency and the ability to sing the pitches E, F♯, G and A in the study; (2) mean speech frequency also emerged as the strongest predictor of subjects' ability to sing the notes E and F♯; (3) mean speech frequency correlated moderately and significantly (p < .001) with sharpness and flatness of singing response accuracy in Hz; (4) speech range was the strongest predictor of singing accuracy for the pitches G and A in the study (p < .001); (5) gender emerged as a significant, but not the strongest, predictor of the ability to sing the pitches in the study above C and D; (6) gender did not correlate with mean speech frequency or speech range; (7) age in months emerged as a low but significant predictor of the ability to sing the lower notes (C and D) in the study; (8) age correlated significantly but weakly and negatively (r = -.23, p < .05, two-tailed) with mean speech frequency; and (9) age did not emerge as a significant predictor of overall singing accuracy. Ancillary findings indicated that there were significant differences in singing accuracy based on geographic location by gender, and that siblings and fraternal twins in the study generally performed similarly. In addition, test/retest reliability for the CSpeech acoustical analyses was .99, with one exception at .94. Based on these results, suggestions were made for future research on how the use of the voice in speech may affect singing development, overall use of the voice in singing, and pitch-matching accuracy.
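    The deviation scoring described in this record (the difference, in Hz, between what the model sang and what the child sang, summarized per pitch and overall) can be sketched directly. The note frequencies below are standard equal-temperament values, but the sung responses are invented for the example.

```python
import numpy as np

# Model pitches (equal temperament, Hz): the six target notes named
# in the abstract.
model = {"C4": 261.63, "D4": 293.66, "E4": 329.63,
         "F#4": 369.99, "G4": 392.00, "A4": 440.00}

# Hypothetical sung responses for one child: three takes per pitch.
rng = np.random.default_rng(2)
sung = {note: hz + rng.normal(scale=8.0, size=3)
        for note, hz in model.items()}

# Per-pitch mean deviation (model Hz minus response Hz) and an overall
# mean absolute deviation, mirroring the scoring described above.
per_pitch = {note: float(np.mean(hz - sung[note]))
             for note, hz in model.items()}
overall = float(np.mean([abs(d) for d in per_pitch.values()]))
print(per_pitch)
print(f"overall mean absolute deviation: {overall:.1f} Hz")
```

    Keeping the signed per-pitch deviations (rather than only the absolute overall score) is what lets the analysis distinguish sharp from flat responses, which finding (3) above depends on.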

  17. Longitudinal follow-up to evaluate speech disorders in early-treated patients with infantile-onset Pompe disease.

    PubMed

    Zeng, Yin-Ting; Hwu, Wuh-Liang; Torng, Pao-Chuan; Lee, Ni-Chung; Shieh, Jeng-Yi; Lu, Lu; Chien, Yin-Hsiu

    2017-05-01

    Patients with infantile-onset Pompe disease (IOPD) can be treated by recombinant human acid alpha glucosidase (rhGAA) replacement beginning at birth, with excellent survival rates, but they still commonly present with speech disorders. This study investigated the progress of speech disorders in these early-treated patients and ascertained the relationship with treatments. Speech disorders, including hypernasal resonance, articulation disorders, and speech intelligibility, were scored by speech-language pathologists using auditory perception in seven early-treated patients over a period of 6 years. Statistical analysis of the first and last evaluations of the patients was performed with the Wilcoxon signed-rank test. A total of 29 speech samples were analyzed. All the patients suffered from hypernasality, articulation disorder, and impairment in speech intelligibility at the age of 3 years. The conditions were stable, and 2 patients developed normal or near-normal speech during follow-up. Speech therapy and a high dose of rhGAA appeared to improve articulation in 6 of the 7 patients (86%, p = 0.028) by decreasing the omission of consonants, which consequently increased speech intelligibility (p = 0.041). Severity of hypernasality was greatly reduced in only 2 patients (29%, p = 0.131). Speech disorders were common even in early and successfully treated patients with IOPD; however, aggressive speech therapy and high-dose rhGAA could improve their speech disorders. Copyright © 2016 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  18. Driver compliance to take-over requests with different auditory outputs in conditional automation.

    PubMed

    Forster, Yannick; Naujoks, Frederik; Neukum, Alexandra; Huestegge, Lynn

    2017-12-01

    Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its limit of operation, designers of the system need to ensure a safe and sufficiently timely transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs in the form of (1) a generic warning tone and (2) additional semantic speech output on driver behavior for the announcement of an upcoming take-over request (TOR). We expect the information carried by means of speech output to lead to faster reactions and better subjective evaluations by the drivers compared to generic auditory output. To test this assumption, N = 17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while they were working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures, relevant for the take-over process, were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use and perceived visual workload just after experiencing the take-over. They gave comparative ratings on usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of the NDRT) were shorter for 'Speech+generic' than for 'Generic', while reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings were in favor of the system with additional speech output. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Promoting consistent use of the communication function classification system (CFCS).

    PubMed

    Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley

    2016-01-01

    We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP. The intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists.
The intervention effectively increased clinicians' understanding of the methods for using the CFCS, ability to make correct classifications, and intention to use the tool in the standardized way in the future.

  20. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
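The idea of differentially weighting speech sound errors can be illustrated with a toy scorer. The weights and categories below are hypothetical, chosen only to show the mechanics; the published WSSA defines its own weighting of phonetic accuracy levels:

```python
# Hypothetical error weights for illustration only -- not the published WSSA scheme.
ERROR_WEIGHTS = {"correct": 0.0, "distortion": 1.0, "substitution": 2.0, "omission": 3.0}

def weighted_accuracy(judgments):
    """Return a 0-100 score: 100 = all correct, 0 = all maximally penalized.

    `judgments` is a list of per-segment category labels from a transcription.
    """
    penalty = sum(ERROR_WEIGHTS[j] for j in judgments) / len(judgments)
    return 100.0 * (1.0 - penalty / max(ERROR_WEIGHTS.values()))
```

Graded weighting of this kind is what lets the measure register partial progress (e.g., an omission becoming a distortion) that a binary correct/incorrect count would miss.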

  1. Measurement of speech levels in the presence of time varying background noise

    NASA Technical Reports Server (NTRS)

    Pearsons, K. S.; Horonjeff, R.

    1982-01-01

    Short-term speech level measurements which could be used to note changes in vocal effort in a time varying noise environment were studied. Knowing the changes in speech level would in turn allow prediction of intelligibility in the presence of aircraft flyover noise. Tests indicated that it is possible to use two second samples of speech to estimate long term root mean square speech levels. Other tests were also performed in which people read out loud during aircraft flyover noise. Results of these tests indicate that people do indeed raise their voice during flyovers at a rate of about 3-1/2 dB for each 10 dB increase in background level. This finding is in agreement with other tests of speech levels in the presence of steady state background noise.
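The reported trade-off, roughly 3.5 dB of added vocal effort per 10 dB of background noise, amounts to a simple linear prediction. A minimal sketch (function name and baseline levels are illustrative, not from the report):

```python
def predicted_speech_level_db(base_speech_db, base_noise_db, noise_db, slope=0.35):
    """Linear vocal-effort model: speech level rises `slope` dB per dB of
    background noise above the baseline (0.35 ~= 3.5 dB per 10 dB)."""
    return base_speech_db + slope * (noise_db - base_noise_db)
```

For example, a 10 dB flyover-induced rise in background level would predict a speaker raising their voice by about 3.5 dB over their quiet-condition level.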

  2. Individual differences in children’s private speech: The role of imaginary companions

    PubMed Central

    Davis, Paige E.; Meins, Elizabeth; Fernyhough, Charles

    2013-01-01

    Relations between children’s imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N = 148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech. PMID:23978382

  3. Functional MRI evidence for fine motor praxis dysfunction in children with persistent speech disorders

    PubMed Central

    Redle, Erin; Vannest, Jennifer; Maloney, Thomas; Tsevat, Rebecca K.; Eikenberry, Sarah; Lewis, Barbara; Shriberg, Lawrence D.; Tkach, Jean; Holland, Scott K.

    2014-01-01

    Children with persistent speech disorders (PSD) often present with overt or subtle motor deficits; whether speech disorders and motor deficits could arise from a shared neurological base is currently unknown. Functional MRI (fMRI) was used to examine the brain networks supporting fine motor praxis in children with PSD and without clinically identified fine motor deficits. Methods: This case-control study included 12 children with PSD (mean age 7.42 years, 4 female) and 12 controls (mean age 7.44 years, 4 female). Children completed behavioral evaluations using standardized motor assessments and parent-reported functional measures. During fMRI scanning, participants completed a cued finger tapping task contrasted with passive listening. A general linear model approach identified brain regions associated with finger tapping in each group and regions that differed between groups. The relationship between regional fMRI activation and fine motor skill was assessed using a regression analysis. Results: Children with PSD had significantly poorer results for rapid speech production and fine motor praxis skills, but did not differ on classroom functional skills. Functional MRI results showed that children with PSD had significantly more activation in the cerebellum during finger tapping. Positive correlations between performance on a fine motor praxis test and activation in multiple cortical regions were noted for children with PSD but not for controls. Conclusions: Over-activation in the cerebellum during a motor task may reflect a subtle abnormality in the non-speech motor neural circuitry in children with PSD. PMID:25481413

  4. Evaluation of palatal plate thickness of maxillary prosthesis on phonation- a comparative clinical study.

    PubMed

    Zakkula, Srujana; B, Sreedevi; Anne, Gopinadh; Manne, Prakash; Bindu O, Swetha Hima; Atla, Jyothi; Deepthi, Sneha; Chaitanya A, Krishna

    2014-04-01

    Prosthodontic treatment involves clinical procedures which influence speech performance directly or indirectly. Prosthetic rehabilitation of missing teeth with partial or complete maxillary removable dentures influences individual voice characteristics such as phonation and resonance. The aim was to evaluate the effect of acrylic palatal plate thickness (1 mm to 3 mm) of a maxillary prosthesis on phonation. Twelve subjects aged 20-25 years, with a full complement of teeth and no speech problems, were randomly selected. Speech evaluation was done under four experimental conditions: without any experimental acrylic palatal plate (control), and with experimental acrylic palatal plates of thickness 1 mm, 2 mm and 3 mm, respectively. The speech material for the phonation test consisted of the vowel sounds /a/, /i/, and /o/. Speech analysis to assess phonation was done using digital acoustic analysis (PRAAT software). The obtained results were statistically analyzed by one-way ANOVA and Tukey's multiple post-hoc test for comparison of the four experimental conditions with respect to the different vowel sounds. Mean harmonics-to-noise ratio (HNR) values obtained for all the experimental conditions did not show a significant difference (p>0.05). In conclusion, an increase in the thickness of the acrylic palatal plate of a maxillary prosthesis from 1 mm to 3 mm in complete or partial maxillary removable dentures had no significant effect on phonation of the vowel sounds /a/, /i/ and /o/.

  5. Exploring "psychic transparency" during pregnancy: a mixed-methods approach.

    PubMed

    Oriol, Cécile; Tordjman, Sylvie; Dayan, Jacques; Poulain, Patrice; Rosenblum, Ouriel; Falissard, Bruno; Dindoyal, Asha; Naudet, Florian

    2016-08-12

    Psychic transparency is described as a psychic crisis occurring during pregnancy. The objective was to test whether it was clinically detectable. Seven primiparous and seven nulliparous subjects were recorded during 5 min of spontaneous speech about their dreams. Twenty-five raters from five groups (psychoanalysts, psychiatrists, general practitioners, pregnant women and medical students) listened to the audiotapes. They were asked to rate the probability of each woman being pregnant or not. Their ability to discriminate the primiparous women was tested. The probability of being identified correctly or not was calculated for each woman. A qualitative analysis of the speech samples was performed. No group of raters was able to correctly classify pregnant and non-pregnant women. However, the raters' choices were not completely random. The wish to be pregnant or to have a baby could be linked to a primiparous classification, whereas job priorities could be linked to a nulliparous classification. It was not possible to detect psychic transparency in this study. The wish for a child might be easier to identify. In addition, the raters' choices seemed to be connected to social representations of motherhood.

  6. Children of Few Words: Relations Among Selective Mutism, Behavioral Inhibition, and (Social) Anxiety Symptoms in 3- to 6-Year-Olds.

    PubMed

    Muris, Peter; Hendriks, Eline; Bot, Suili

    2016-02-01

    Children with selective mutism (SM) fail to speak in specific public situations (e.g., school), despite speaking normally in other situations (e.g., at home). The current study explored the phenomenon of SM in a sample of 57 non-clinical children aged 3-6 years. Children performed two speech tasks to assess their absolute amount of spoken words, while their parents completed questionnaires for measuring children's levels of SM, social anxiety and non-social anxiety symptoms as well as the temperament characteristic of behavioral inhibition. The results indicated that high levels of parent-reported SM were primarily associated with high levels of social anxiety symptoms. The number of spoken words was negatively related to behavioral inhibition: children with a more inhibited temperament used fewer words during the speech tasks. Future research is necessary to test whether the temperament characteristic of behavioral inhibition prompts children to speak less in novel social situations, and whether it is mainly social anxiety that turns this taciturnity into the psychopathology of SM.

  7. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples--presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
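As background to the HMM description above, the core likelihood computation in an HMM recognizer is the forward algorithm. A toy two-state, two-symbol sketch (parameters are illustrative, not a speech model):

```python
def forward(obs, pi, A, B):
    """P(obs) under a discrete HMM: initial probs pi, transitions A, emissions B."""
    n = len(pi)
    # alpha[s] = P(observations so far, current state = s)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][o]
                 for s in range(n)]
    return sum(alpha)

pi = [0.6, 0.4]                     # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]        # A[r][s] = P(next state s | state r)
B = [[0.9, 0.1], [0.2, 0.8]]        # B[s][o] = P(symbol o | state s)
```

In real recognizers the observations are acoustic feature vectors rather than discrete symbols, but the data-driven parameter estimation the abstract credits for HMMs' success operates on exactly this likelihood.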

  8. Automatic intelligibility classification of sentence-level pathological speech

    PubMed Central

    Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.

    2014-01-01

    Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544

  9. How our own speech rate influences our perception of others.

    PubMed

    Bosker, Hans Rutger

    2017-08-01

    In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through 6 experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing prerecorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Open source OCR framework using mobile devices

    NASA Astrophysics Data System (ADS)

    Zhou, Steven Zhiying; Gilani, Syed Omer; Winkler, Stefan

    2008-02-01

    Mobile phones have evolved from passive one-to-one communication devices to powerful handheld computing devices. Today most new mobile phones are capable of capturing images, recording video, browsing the internet, and much more. Exciting new social applications are emerging on the mobile landscape, like business card readers, sign detectors, and translators. These applications help people quickly gather information in digital format and interpret it without the need to carry laptops or tablet PCs. However, with all these advancements, we find very little open source software available for mobile phones. For instance, there are currently many open source OCR engines for the desktop platform but, to our knowledge, none are available on the mobile platform. Keeping this in perspective, we propose a complete text detection and recognition system with speech synthesis ability, using existing desktop technology. In this work we developed a complete OCR framework with subsystems from the open source desktop community. This includes a popular open source OCR engine named Tesseract for text detection & recognition and the Flite speech synthesis module, for adding text-to-speech ability.

  11. Questions You May Want to Ask Your Child's Speech-Language Pathologist = Preguntas que usted le podria hacer al patologo del habla y el lenguaje de su hijo

    ERIC Educational Resources Information Center

    Centers for Disease Control and Prevention, 2007

    2007-01-01

    This accordion style pamphlet, dual sided with English and Spanish text, suggests questions for parents to ask their Speech-Language Pathologist and speech and language therapy services for their children. Sample questions include: How will I participate in my child's therapy sessions? How do you decide how much time my child will spend on speech…

  12. Speech pathologists' experiences with stroke clinical practice guidelines and the barriers and facilitators influencing their use: a national descriptive study.

    PubMed

    Hadely, Kathleen A; Power, Emma; O'Halloran, Robyn

    2014-03-06

    Communication and swallowing disorders are a common consequence of stroke. Clinical practice guidelines (CPGs) have been created to assist health professionals to put research evidence into clinical practice and can improve stroke care outcomes. However, CPGs are often not successfully implemented in clinical practice and research is needed to explore the factors that influence speech pathologists' implementation of stroke CPGs. This study aimed to describe speech pathologists' experiences and current use of guidelines, and to identify what factors influence speech pathologists' implementation of stroke CPGs. Speech pathologists working in stroke rehabilitation who had used a stroke CPG were invited to complete a 39-item online survey. Content analysis and descriptive and inferential statistics were used to analyse the data. 320 participants from all states and territories of Australia were surveyed. Almost all speech pathologists had used a stroke CPG and had found the guideline "somewhat useful" or "very useful". Factors that speech pathologists perceived influenced CPG implementation included the: (a) guideline itself, (b) work environment, (c) aspects related to the speech pathologist themselves, (d) patient characteristics, and (e) types of implementation strategies provided. There are many different factors that can influence speech pathologists' implementation of CPGs. The factors that influenced the implementation of CPGs can be understood in terms of knowledge creation and implementation frameworks. Speech pathologists should continue to adapt the stroke CPG to their local work environment and evaluate their use. To enhance guideline implementation, they may benefit from a combination of educational meetings and resources, outreach visits, support from senior colleagues, and audit and feedback strategies.

  13. Cohesive and coherent connected speech deficits in mild stroke.

    PubMed

    Barker, Megan S; Young, Breanne; Robinson, Gail A

    2017-05-01

    Spoken language production theories and lesion studies highlight several important prelinguistic conceptual preparation processes involved in the production of cohesive and coherent connected speech. Cohesion and coherence broadly connect sentences with preceding ideas and the overall topic. Broader cognitive mechanisms may mediate these processes. This study aims to investigate (1) whether stroke patients without aphasia exhibit impairments in cohesion and coherence in connected speech, and (2) the role of attention and executive functions in the production of connected speech. Eighteen stroke patients (8 right hemisphere stroke [RHS]; 6 left [LHS]) and 21 healthy controls completed two self-generated narrative tasks to elicit connected speech. A multi-level analysis of within and between-sentence processing ability was conducted. Cohesion and coherence impairments were found in the stroke group, particularly RHS patients, relative to controls. In the whole stroke group, better performance on the Hayling Test of executive function, which taps verbal initiation/suppression, was related to fewer propositional repetitions and global coherence errors. Better performance on attention tasks was related to fewer propositional repetitions, and decreased global coherence errors. In the RHS group, aspects of cohesive and coherent speech were associated with better performance on attention tasks. Better Hayling Test scores were related to more cohesive and coherent speech in RHS patients, and more coherent speech in LHS patients. Thus, we documented connected speech deficits in a heterogeneous stroke group without prominent aphasia. Our results suggest that broader cognitive processes may play a role in producing connected speech at the early conceptual preparation stage. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Methodology for speech assessment in the Scandcleft project--an international randomized clinical trial on palatal surgery: experiences from a pilot study.

    PubMed

    Lohmander, A; Willadsen, E; Persson, C; Henningsson, G; Bowden, M; Hutters, B

    2009-07-01

    To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Description of methodology and blinded test for speech assessment. Speech samples and instructions for data collection and analysis for comparisons of speech outcomes across the five included languages were developed and tested. Randomly selected video recordings of ten 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains with nasal consonants. Five experienced speech and language pathologists participated as observers. Narrow phonetic transcription of test consonants was translated into cleft speech characteristics, with ordinal scale rating of resonance and perceived velopharyngeal closure (VPC). A velopharyngeal composite score (VPC-sum) was extrapolated from raw data. Intra-agreement comparisons were performed. The range of intra-agreement for consonant analysis was 53% to 89%, for hypernasality on high vowels in single words it was 20% to 80%, and the agreement between the VPC-sum and the overall rating of VPC was 78%. Pooling data of speakers of different languages in the same trial and comparing speech outcome across trials seems possible if the assessment of speech concerns consonants and is confined to speech units that are phonetically similar across languages. Agreed conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable, whereas the method for hypernasality evaluation requires further testing.

  15. A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.

    1986-01-01

    Complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.
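A zero-memory quantizer of the kind mentioned above maps each input sample to the nearest level of a fixed grid, independently of past samples. A minimal uniform-quantizer sketch (step size and range are illustrative, not from the documented software):

```python
def quantize(x, step=0.25, lo=-1.0, hi=1.0):
    """Zero-memory uniform quantizer: clamp x to [lo, hi], then snap it to the
    nearest reconstruction level on a grid of spacing `step` anchored at lo."""
    x = min(max(x, lo), hi)
    return lo + round((x - lo) / step) * step
```

Block transform quantization, by contrast, first transforms a block of samples (e.g., with a DCT) and then applies scalar quantizers of this kind to the transform coefficients.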

  16. [Detection of Weak Speech Signals from Strong Noise Background Based on Adaptive Stochastic Resonance].

    PubMed

    Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun

    2016-04-01

    Traditional speech detection methods regard the noise as a jamming signal to be filtered out, but under a strong noise background these methods lose part of the original speech signal while eliminating noise. Stochastic resonance can use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method for extracting weak speech signals is proposed. This method, combined with twice sampling, realizes the detection of weak speech signals under strong noise. The parameters of the system, a and b, are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, and the weak speech signal is then optimally detected. Experimental simulation showed that under a strong noise background, the output signal-to-noise ratio increased from an initial value of -7 dB to about 0.86 dB, a signal-to-noise ratio gain of 7.86 dB. This method clearly raises the signal-to-noise ratio of the output speech signals, offering a new approach to detecting weak speech signals in strong noise environments.
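Stochastic-resonance detectors of this kind are typically built on the bistable system dx/dt = a*x - b*x^3 + s(t) + n(t), whose parameters a and b the paper tunes adaptively. A minimal Euler-Maruyama integration step (parameter values and function name are illustrative assumptions, not the paper's tuned system):

```python
import random

def bistable_step(x, drive, a=1.0, b=1.0, dt=1e-3, noise_std=0.0):
    """One Euler-Maruyama step of dx/dt = a*x - b*x**3 + drive, with optional
    Gaussian noise of standard deviation noise_std (scaled by sqrt(dt))."""
    det = (a * x - b * x ** 3 + drive) * dt
    return x + det + noise_std * random.gauss(0.0, dt ** 0.5)
```

With noise_std > 0, a weak periodic drive can push the state between the stable wells at x = ±sqrt(a/b); it is this noise-assisted well-hopping that transfers noise energy into the weak signal's band.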

  17. Psychometric Functions for Shortened Administrations of a Speech Recognition Approach Using Tri-Word Presentations and Phonemic Scoring

    ERIC Educational Resources Information Center

    Gelfand, Stanley A.; Gelfand, Jessica T.

    2012-01-01

    Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12…

  18. Parental and Spousal Self-Efficacy of Young Adults Who Are Deaf or Hard of Hearing: Relationship to Speech Intelligibility

    ERIC Educational Resources Information Center

    Adi-Bensaid, Limor; Michael, Rinat; Most, Tova; Gali-Cinamon, Rachel

    2012-01-01

    This study examined the parental and spousal self-efficacy (SE) of adults who are deaf and who are hard of hearing (d/hh) in relation to their speech intelligibility. Forty individuals with hearing loss completed self-report measures: Spousal SE in a relationship with a spouse who was hearing/deaf, parental SE to a child who was hearing/deaf, and…

  19. Webcam delivery of the Camperdown Program for adolescents who stutter: a phase II trial.

    PubMed

    Carey, Brenda; O'Brian, Sue; Lowe, Robyn; Onslow, Mark

    2014-10-01

    This Phase II clinical trial examined stuttering adolescents' responsiveness to the Webcam-delivered Camperdown Program. Sixteen adolescents were treated by Webcam with no clinic attendance. Primary outcome was percentage of syllables stuttered (%SS). Secondary outcomes were number of sessions, weeks and hours to maintenance, self-reported stuttering severity, speech satisfaction, speech naturalness, self-reported anxiety, self-reported situation avoidance, self-reported impact of stuttering, and satisfaction with Webcam treatment delivery. Data were collected before treatment and up to 12 months after entry into maintenance. Fourteen participants completed the treatment. Group mean stuttering frequency was 6.1 %SS (range, 0.7-14.7) pretreatment and 2.8 %SS (range, 0-12.2) 12 months after entry into maintenance, with half the participants stuttering at 1.2 %SS or lower at this time. Treatment was completed in a mean of 25 sessions (15.5 hr). Self-reported stuttering severity ratings, self-reported stuttering impact, and speech satisfaction scores supported %SS outcomes. Minimal anxiety was evident either pre- or post-treatment. Individual responsiveness to the treatment varied, with half the participants showing little reduction in avoidance of speech situations. The Webcam service delivery model was appealing to participants, although it was efficacious and efficient for only half. Suggestions for future stuttering treatment development for adolescents are discussed.
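
The primary outcome, percentage of syllables stuttered (%SS), is simply the number of stuttered syllables per hundred syllables spoken. A minimal helper illustrating the metric (the syllable counts below are hypothetical):

```python
def percent_ss(stuttered_syllables, total_syllables):
    """Percentage of syllables stuttered (%SS)."""
    if total_syllables <= 0:
        raise ValueError("total_syllables must be positive")
    return 100.0 * stuttered_syllables / total_syllables

# hypothetical counts: 61 stuttered syllables in a 1000-syllable sample
# corresponds to the reported pre-treatment group mean of 6.1 %SS
pre_treatment = percent_ss(61, 1000)
```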

  20. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  1. Oral motor deficits in speech-impaired children with autism

    PubMed Central

    Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha

    2013-01-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480

  2. Recovering With Acquired Apraxia of Speech: The First 2 Years.

    PubMed

    Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam

    2016-12-01

    This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.

  3. Acoustic Sources of Accent in Second Language Japanese Speech.

    PubMed

    Idemaru, Kaori; Wei, Peipei; Gubbins, Lucy

    2018-05-01

    This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled after six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degrees of foreign accent. The analyses predicting accent ratings based on the acoustic measurements indicated that one of the prosodic features in particular, tone (defined as high and low patterns of pitch accent and intonation in this study), plays an important role in robustly predicting accent rating in L2 Japanese across the two first language (L1) backgrounds. These results were consistent with the prediction based on phonological and phonetic comparisons between Japanese and English, as well as Japanese and Mandarin Chinese. The results also revealed L1-specific predictors of perceived accent in Japanese. The findings of this study contribute to the growing literature that examines sources of perceived foreign accent.

  4. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    PubMed

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic Speech Sound Disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic and neuroradiological investigation aimed at gathering information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male-to-female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make a substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Ultrasound as visual feedback in speech habilitation: exploring consultative use in rural British Columbia, Canada.

    PubMed

    Bernhardt, May B; Bacsfalvi, Penelope; Adler-Bock, Marcy; Shimizu, Reiko; Cheney, Audrey; Giesbrecht, Nathan; O'Connell, Maureen; Sirianni, Jason; Radanov, Bosko

    2008-02-01

    Ultrasound has shown promise as a visual feedback tool in speech therapy. Rural clients, however, often have minimal access to new technologies. The purpose of the current study was to evaluate consultative treatment using ultrasound in rural communities. Two speech-language pathologists (SLPs) trained in ultrasound use provided consultation with ultrasound in rural British Columbia to 13 school-aged children with residual speech impairments. Local SLPs provided treatment without ultrasound before and after the consultation. Speech samples were transcribed phonetically by independent trained listeners. Eleven children showed greater gains in production of the principal target /[image omitted]/ after the ultrasound consultation. Four of the seven participants who received more consultation time with ultrasound showed greatest improvement. Individual client factors also affected outcomes. The current study was a quasi-experimental clinic-based study. Larger, controlled experimental studies are needed to provide ultimate evaluation of the consultative use of ultrasound in speech therapy.

  6. Prevalence and Phenotype of Childhood Apraxia of Speech In Youth with Galactosemia

    PubMed Central

    Shriberg, Lawrence D.; Potter, Nancy L.; Strand, Edythe A.

    2010-01-01

    Purpose We address the hypothesis that the severe and persistent speech disorder reported in persons with galactosemia meets contemporary diagnostic criteria for Childhood Apraxia of Speech (CAS). A positive finding for CAS in this rare metabolic disorder has the potential to impact treatment of persons with galactosemia and inform explanatory perspectives on CAS in neurological, neurodevelopmental, and idiopathic contexts. Method Thirty-three youth with galactosemia and significant prior or persistent speech sound disorder were assessed in their homes in 17 states. Participants completed a protocol yielding information on their cognitive, structural, sensorimotor, language, speech, prosody, and voice status and function. Results Eight of the 33 participants (24%) met contemporary diagnostic criteria for CAS. Two participants, one of whom was among the 8 with CAS, met criteria for ataxic or hyperkinetic dysarthria. Group-wise findings for the remaining 24 participants are consistent with a classification category termed Motor Speech Disorder-Not Otherwise Specified (MSD-NOS; Shriberg, Fourakis, et al., in press-a). Conclusion We estimate the prevalence of CAS in galactosemia at 18 per hundred, 180 times the estimated risk for idiopathic CAS. Findings support the need to study risk factors for the high occurrence of motor speech disorders in galactosemia, despite early compliant dietary management. PMID:20966389

  7. The Effect of Noise on Relationships Between Speech Intelligibility and Self-Reported Communication Measures in Tracheoesophageal Speakers.

    PubMed

    Eadie, Tanya L; Otero, Devon Sawin; Bolt, Susan; Kapsner-Smith, Mara; Sullivan, Jessica R

    2016-08-01

    The purpose of this study was to examine how sentence intelligibility relates to self-reported communication in tracheoesophageal speakers when speech intelligibility is measured in quiet and noise. Twenty-four tracheoesophageal speakers who were at least 1 year postlaryngectomy provided audio recordings of 5 sentences from the Sentence Intelligibility Test. Speakers also completed self-reported measures of communication-the Voice Handicap Index-10 and the Communicative Participation Item Bank short form. Speech recordings were presented to 2 groups of inexperienced listeners who heard sentences in quiet or noise. Listeners transcribed the sentences to yield speech intelligibility scores. Very weak relationships were found between intelligibility in quiet and measures of voice handicap and communicative participation. Slightly stronger, but still weak and nonsignificant, relationships were observed between measures of intelligibility in noise and both self-reported measures. However, 12 speakers who were more than 65% intelligible in noise showed strong and statistically significant relationships with both self-reported measures (R2 = .76-.79). Speech intelligibility in quiet is a weak predictor of self-reported communication measures in tracheoesophageal speakers. Speech intelligibility in noise may be a better metric of self-reported communicative function for speakers who demonstrate higher speech intelligibility in noise.
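
The intelligibility scores above come from listeners transcribing recorded sentences, with the transcripts scored for words correct. A minimal sketch of word-level scoring (a simplified scheme, not the Sentence Intelligibility Test's exact scoring rules):

```python
from collections import Counter

def percent_words_correct(target, transcript):
    """Share of target words found in the listener's transcript (order-free,
    duplicates counted) -- a simplified word-level intelligibility score."""
    hits = Counter(target.lower().split()) & Counter(transcript.lower().split())
    return 100.0 * sum(hits.values()) / len(target.split())

# hypothetical target sentence and listener transcript
score = percent_words_correct("the boat sailed into the harbor",
                              "the boat sailed in the harbor")
```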

  8. Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise.

    PubMed

    Eckert, Mark A; Matthews, Lois J; Dubno, Judy R

    2017-01-01

    Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function.

  9. Self-Assessed Hearing Handicap in Older Adults With Poorer-Than-Predicted Speech Recognition in Noise

    PubMed Central

    Matthews, Lois J.; Dubno, Judy R.

    2017-01-01

    Purpose Even older adults with relatively mild hearing loss report hearing handicap, suggesting that hearing handicap is not completely explained by reduced speech audibility. Method We examined the extent to which self-assessed ratings of hearing handicap using the Hearing Handicap Inventory for the Elderly (HHIE; Ventry & Weinstein, 1982) were significantly associated with measures of speech recognition in noise that controlled for differences in speech audibility. Results One hundred sixty-two middle-aged and older adults had HHIE total scores that were significantly associated with audibility-adjusted measures of speech recognition for low-context but not high-context sentences. These findings were driven by HHIE items involving negative feelings related to communication difficulties that also captured variance in subjective ratings of effort and frustration that predicted speech recognition. The average pure-tone threshold accounted for some of the variance in the association between the HHIE and audibility-adjusted speech recognition, suggesting an effect of central and peripheral auditory system decline related to elevated thresholds. Conclusion The accumulation of difficult listening experiences appears to produce a self-assessment of hearing handicap resulting from (a) reduced audibility of stimuli, (b) declines in the central and peripheral auditory system function, and (c) additional individual variation in central nervous system function. PMID:28060993

  10. Field experiments using SPEAR: a speech control system for UGVs

    NASA Astrophysics Data System (ADS)

    Chhatpar, Siddharth R.; Blanco, Chris; Czerniak, Jeffrey; Hoffman, Orin; Juneja, Amit; Pruthi, Tarun; Liu, Dongqing; Karlsen, Robert; Brown, Jonathan

    2009-05-01

    This paper reports on a Field Experiment carried out by the Human Research and Engineering Directorate at Ft. Benning to evaluate the efficacy of using speech to control an Unmanned Ground Vehicle (UGV) concurrently with a hand-controller. The SPEAR system, developed by Think-A-Move, provides speech control of UGVs. The system picks up user speech in the ear canal with an in-ear microphone. This property allows it to work efficiently in high-noise environments, where traditional speech systems, employing external microphones, fail. It has been integrated with an iRobot PackBot 510 with EOD kit. The integrated system allows the hand-controller to be supplemented with speech for concurrent control. At Ft. Benning, the integrated system was tested by soldiers from the Officer Candidate School. The experiment had a dual focus: (1) quantitative measurement of the time taken to complete each station and the cognitive load on users; and (2) qualitative evaluation of ease of use and ergonomics through soldier feedback. Also of significant benefit to Think-A-Move was soldier feedback on the speech-command vocabulary employed: which spoken commands are intuitive, and how commands should be executed, e.g., limited-motion vs. unlimited-motion commands. Overall results from the experiment are reported in the paper.

  11. A Nonword Repetition Task for Speakers with Misarticulations: The Syllable Repetition Task (SRT)

    PubMed Central

    Shriberg, Lawrence D.; Lohmeier, Heather L.; Campbell, Thomas F.; Dollaghan, Christine A.; Green, Jordan R.; Moore, Christopher A.

    2010-01-01

    Purpose Conceptual and methodological confounds occur when non(sense) repetition tasks are administered to speakers who do not have the target speech sounds in their phonetic inventories or who habitually misarticulate targeted speech sounds. We describe a nonword repetition task, the Syllable Repetition Task (SRT), that eliminates this confound, and report findings from three validity studies. Method Ninety-five preschool children with Speech Delay and 63 with Typical Speech completed an assessment battery that included the Nonword Repetition Task (NRT: Dollaghan & Campbell, 1998) and the SRT. SRT stimuli include only four of the earliest occurring consonants and one early occurring vowel. Results Study 1 findings indicated that the SRT eliminated the speech confound in nonword testing with speakers who misarticulate. Study 2 findings indicated that the accuracy of the SRT in identifying expressive language impairment was comparable to findings for the NRT. Study 3 findings illustrated the SRT's potential to interrogate speech processing constraints underlying poor nonword repetition accuracy. Results supported both memorial and auditory-perceptual encoding constraints underlying nonword repetition errors in children with speech-language impairment. Conclusion The SRT appears to be a psychometrically stable and substantively informative nonword repetition task for emerging genetic and other research with speakers who misarticulate. PMID:19635944

  12. Teachers' perceptions of students with speech sound disorders: a quantitative and qualitative analysis.

    PubMed

    Overby, Megan; Carrell, Thomas; Bernthal, John

    2007-10-01

    This study examined 2nd-grade teachers' perceptions of the academic, social, and behavioral competence of students with speech sound disorders (SSDs). Forty-eight 2nd-grade teachers listened to 2 groups of sentences differing by intelligibility and pitch but spoken by a single 2nd grader. For each sentence group, teachers rated the speaker's academic, social, and behavioral competence using an adapted version of the Teacher Rating Scale of the Self-Perception Profile for Children (S. Harter, 1985) and completed 3 open-ended questions. The matched-guise design controlled for confounding speaker and stimuli variables that were inherent in prior studies. Statistically significant differences in teachers' expectations of children's academic, social, and behavioral performances were found between moderately intelligible and normal intelligibility speech. Teachers associated moderately intelligible low-pitched speech with more behavior problems than moderately intelligible high-pitched speech or either pitch with normal intelligibility. One third of the teachers reported that they could not accurately predict a child's school performance based on the child's speech skills, one third of the teachers causally related school difficulty to SSD, and one third of the teachers made no comment. Intelligibility and speaker pitch appear to be speech variables that influence teachers' perceptions of children's school performance.

  13. High-frame-rate full-vocal-tract 3D dynamic speech imaging.

    PubMed

    Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P

    2017-04-01

    To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
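
The partial separability (low-rank) model treats the space-time image series as a low-rank matrix, so missing samples can be recovered by alternating a rank projection with a data-consistency step. A toy numerical sketch of that principle only (plain matrix completion; no total-variation term and no actual MRI encoding):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nt, r = 60, 80, 3
# ground truth: a rank-r "space x time" series, as in the partial
# separability model (spatial factors times temporal factors)
truth = rng.standard_normal((nx, r)) @ rng.standard_normal((r, nt))

mask = rng.random((nx, nt)) < 0.35      # keep ~35% of the entries
observed = truth * mask

# alternate a rank-r projection with a data-consistency step
est = observed.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(est, full_matrices=False)
    est = (U[:, :r] * s[:r]) @ Vt[:r]   # truncated SVD: low-rank constraint
    est[mask] = truth[mask]             # re-impose the measured entries

rel_err = np.linalg.norm(est - truth) / np.linalg.norm(truth)
```

With enough samples relative to the rank, the unmeasured entries are recovered to small relative error, which is the intuition behind reconstructing high-frame-rate dynamics from sparse (k,t)-space data.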

  14. Long-Term Speech and Language Outcomes in Prelingually Deaf Children, Adolescents and Young Adults Who Received Cochlear Implants in Childhood

    PubMed Central

    Ruffin, Chad V.; Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.

    2013-01-01

    This study investigated long-term speech and language outcomes in 51 prelingually deaf children, adolescents, and young adults who received cochlear implants (CIs) prior to 7 years of age and used their implants for at least 7 years. Average speech perception scores were similar to those found in prior research with other samples of experienced CI users. Mean language test scores were lower than norm-referenced scores from nationally representative normal-hearing, typically-developing samples, although a majority of the CI users scored within one standard deviation of the normative mean or higher on the Peabody Picture Vocabulary Test, Fourth Edition (63%) and Clinical Evaluation of Language Fundamentals, Fourth Edition (69%). Speech perception scores were negatively associated with a meningitic etiology of hearing loss, older age at implantation, poorer pre-implant unaided pure tone average thresholds, lower family income, and the use of Total Communication. Users of CIs for 15 years or more were more likely to have these characteristics and were more likely to score lower on measures of speech perception compared to users of CIs for 14 years or less. The aggregation of these risk factors in the > 15 years of CI use subgroup accounts for their lower speech perception scores and may stem from more conservative CI candidacy criteria in use at the beginning of pediatric cochlear implantation. PMID:23988907

  15. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    NASA Astrophysics Data System (ADS)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account in their procedures for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle to the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method.
Lastly, as an example of the significance of speech intelligibility evaluation in the context of an applicable listening environment, as indicated in this research, it was found that the jury test participants required on average an approximate 3 dB increase in sound pressure level of speech material while driving and listening compared to when just listening, for an equivalent speech intelligibility performance and the same listening task.
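
As a rough illustration of the band-audibility idea behind the SII, the index can be approximated as an importance-weighted sum of per-band audibilities, with each band's audibility clipped to [0, 1] from its signal-to-noise ratio. This is a simplified form of the ANSI S3.5 procedure, and the band levels and weights below are hypothetical:

```python
def sii_simplified(speech_db, noise_db, importance):
    """Importance-weighted band audibility: audibility = clip((SNR+15)/30, 0, 1)
    per band, summed with band-importance weights (simplified ANSI S3.5 form;
    level-distortion and hearing-threshold terms omitted)."""
    assert abs(sum(importance) - 1.0) < 1e-6, "importance weights must sum to 1"
    total = 0.0
    for s, n, w in zip(speech_db, noise_db, importance):
        audibility = min(max(((s - n) + 15.0) / 30.0, 0.0), 1.0)
        total += w * audibility
    return total

# hypothetical octave-band levels (dB) and band-importance weights
speech_levels = [55, 60, 62, 58, 50]
noise_levels = [50, 52, 60, 65, 60]
band_weights = [0.15, 0.25, 0.30, 0.20, 0.10]
sii = sii_simplified(speech_levels, noise_levels, band_weights)
```

The resulting index lies between 0 (no usable speech information) and 1 (all speech cues audible), which is how SII scores are interpreted in listening studies like the one above.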

  16. Speech Analyses of Four Children with Repaired Cleft Palates.

    ERIC Educational Resources Information Center

    Powers, Gene R.; And Others

    1990-01-01

    Spontaneous speech samples were collected from four three-year-olds with surgically repaired cleft palates. Analyses showed that subjects were similar to one another with respect to their phonetic inventories but differed considerably in the frequency and types of phonological processes used. (Author/JDD)

  17. Speech acoustic markers of early stage and prodromal Huntington's disease: a marker of disease onset?

    PubMed

    Vogel, Adam P; Shirbin, Christopher; Churchyard, Andrew J; Stout, Julie C

    2012-12-01

    Speech disturbances (e.g., altered prosody) have been described in symptomatic Huntington's disease (HD) individuals; however, the extent to which speech changes in gene-positive pre-manifest (PreHD) individuals is largely unknown. The speech of individuals carrying the mutant HTT gene is a behavioural/motor/cognitive marker with some potential as an objective indicator of early HD onset and disease progression. Speech samples were acquired from 30 individuals carrying the mutant HTT gene (13 PreHD, 17 early stage HD) and 15 matched controls. Participants read a passage, produced a monologue and said the days of the week. Data were analysed acoustically for measures of timing, frequency and intensity. There was a clear effect of group across most acoustic measures, such that speech performance differed in line with disease progression. Comparisons across groups revealed significant differences between the control and early stage HD groups on measures of timing (e.g., speech rate). Participants carrying the mutant HTT gene presented with slower rates of speech, took longer to say words and produced longer silences between and within words compared with healthy controls. Importantly, speech rate showed a significant correlation with burden of disease scores. The speech of the early stage HD group differed significantly from that of controls. The speech of PreHD individuals, although not differing significantly, tended to lie between the performance of controls and early stage HD. This suggests that changes in speech production appear to be developing prior to diagnosis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. How much is a word? Predicting ease of articulation planning from apraxic speech error patterns.

    PubMed

    Ziegler, Wolfram; Aichert, Ingrid

    2015-08-01

    According to intuitive concepts, 'ease of articulation' is influenced by factors such as word length or the presence of consonant clusters in an utterance. Imaging studies of speech motor control use these factors to systematically tax the speech motor system. Evidence from apraxia of speech, a disorder thought to result from impaired speech motor planning after lesions to speech motor centers in the left hemisphere, supports the relevance of these and other factors in disordered speech planning and the genesis of apraxic speech errors. Yet there is no unified account of the structural properties that render a word easy or difficult to pronounce. The aim was to model the motor planning demands of word articulation with a nonlinear regression model trained to predict the likelihood of accurate word production in apraxia of speech. We used a tree-structure model in which vocal tract gestures are embedded in hierarchically nested prosodic domains to derive a recursive set of terms for computing the likelihood of accurate word production. The model was trained with accuracy data from a set of 136 words averaged over 66 samples from apraxic speakers. In a second step, the model coefficients were used to predict a test dataset of accuracy values for 96 new words, averaged over 120 samples produced by a different group of apraxic speakers. Accurate modeling of the first dataset was achieved in the training study (adjusted R² = .71). In the cross-validation, the test dataset was also predicted with high accuracy (adjusted R² = .67). The model shape, as reflected by the coefficient estimates, was consistent with current phonetic theories and with clinical evidence. In accordance with phonetic and psycholinguistic work, a strong influence of word stress on articulation errors was found. The proposed model provides a unified and transparent account of the motor planning requirements of word articulation. Copyright © 2015 Elsevier Ltd. All rights reserved.
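
The adjusted R² statistic used to report the training and cross-validation fits penalizes plain R² for the number of model parameters. A minimal sketch with made-up predictions (not the study's data):

```python
def adjusted_r2(y, y_hat, n_params):
    """Adjusted R² for n observations and p model parameters:
    1 - (1 - R²) * (n - 1) / (n - p - 1)."""
    n = len(y)
    mean_y = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

# Perfect predictions give adjusted R² of exactly 1.0
print(adjusted_r2([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0], n_params=1))  # → 1.0
```

With imperfect predictions the penalty term pulls the value below plain R², which is why a cross-validated adjusted R² of .67 is a meaningful check on the .71 training fit.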

  19. Tone classification of syllable-segmented Thai speech based on multilayer perceptron

    NASA Astrophysics Data System (ADS)

    Satravaha, Nuttavudh; Klinkhachorn, Powsiri; Lass, Norman

    2002-05-01

    Thai is a monosyllabic tonal language that uses tone to convey lexical information about the meaning of a syllable. Thus, to completely recognize a spoken Thai syllable, a speech recognition system not only has to recognize the base syllable but must also correctly identify the tone. Hence, tone classification is an essential part of a Thai speech recognition system. Thai has five distinctive tones ("mid," "low," "falling," "high," and "rising") and each tone is represented by a single fundamental frequency (F0) pattern. However, several factors, including tonal coarticulation, stress, intonation, and speaker variability, affect the F0 pattern of a syllable in continuous Thai speech. In this study, an efficient method for tone classification of syllable-segmented Thai speech, which incorporates the effects of tonal coarticulation, stress, and intonation, was developed, along with a method for automatic syllable segmentation. Acoustic parameters were used as the main discriminating parameters. The F0 contour of a segmented syllable was normalized using a z-score transformation before being presented to a tone classifier. The proposed system was evaluated on 920 test utterances spoken by 8 speakers, achieving a recognition rate of 91.36%.
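
The z-score normalization applied to the F0 contour is simple to reproduce; a sketch assuming the contour is a list of F0 values in Hz (the numbers below are illustrative, not data from the study):

```python
import statistics

def znorm(f0_contour):
    """Z-score transform an F0 contour: subtract the contour mean and
    divide by the population standard deviation, so tone shapes can be
    compared across speakers with different pitch ranges."""
    mu = statistics.fmean(f0_contour)
    sigma = statistics.pstdev(f0_contour)
    return [(f - mu) / sigma for f in f0_contour]

# A rising contour keeps its shape but is recentred at 0 with unit variance
z = znorm([180.0, 190.0, 200.0, 210.0, 220.0])
```

After normalization, a low-pitched male voice and a high-pitched female voice producing the same tone yield comparable contours, which is what lets a single classifier serve multiple speakers.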

  20. Effect of a Bluetooth-implemented hearing aid on speech recognition performance: subjective and objective measurement.

    PubMed

    Kim, Min-Beom; Chung, Won-Ho; Choi, Jeesun; Hong, Sung Hwa; Cho, Yang-Sun; Park, Gyuseok; Lee, Sangmin

    2014-06-01

    The objective was to evaluate speech perception improvement with Bluetooth-implemented hearing aids in hearing-impaired adults. Thirty subjects with bilateral symmetric moderate sensorineural hearing loss participated in this study. A Bluetooth-implemented hearing aid was fitted unilaterally in all study subjects. Objective speech recognition scores and subjective satisfaction were measured with a Bluetooth-implemented hearing aid replacing the acoustic connection from either a cellular phone or a loudspeaker system. In each system, participants were assigned to 4 conditions: wireless speech signal transmission into the hearing aid (wireless mode) in a quiet or noisy environment, and conventional speech signal transmission using the external microphone of the hearing aid (conventional mode) in a quiet or noisy environment. Participants also completed questionnaires to assess subjective satisfaction. In both the cellular phone and loudspeaker situations, participants showed improvements in sentence and word recognition scores with the wireless mode compared to the conventional mode, in both quiet and noise conditions (P < .001). Participants also reported subjective improvements, including better sound quality, less noise interference, and better accuracy and naturalness, when using the wireless mode (P < .001). Bluetooth-implemented hearing aids helped to improve subjective and objective speech recognition performance in quiet and noisy environments during the use of electronic audio devices.

  1. Cognitive Load Reduces Perceived Linguistic Convergence Between Dyads.

    PubMed

    Abel, Jennifer; Babel, Molly

    2017-09-01

    Speech convergence is the tendency of talkers to become more similar to someone they are listening or talking to, whether that person is a conversational partner or merely a voice heard repeating words. To elucidate the mechanisms underlying convergence, this study examined the effect of task difficulty on speech convergence within dyads collaborating on a task. Dyad members had to build identical LEGO® constructions without being able to see each other's construction, with each member holding half of the instructions required to complete the construction. Three levels of task difficulty were created, with five dyads at each level (30 participants total). Task difficulty was also measured using completion time and error rate. Listeners who heard pairs of utterances from each dyad judged convergence to be occurring in the Easy condition and, to a lesser extent, in the Medium condition, but not in the Hard condition. Amplitude envelope acoustic similarity analyses of the same utterance pairs showed that convergence occurred in dyads with shorter completion times and lower error rates. Together, these results suggest that while speech convergence is a highly variable behavior, it may occur more in contexts of low cognitive load. The relevance of these results for current automatic and socially-driven models of convergence is discussed.
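
Amplitude-envelope similarity of the general kind described can be sketched by rectifying each signal, smoothing, and correlating the two envelopes. This is an illustration under stated assumptions (moving-average smoothing, equal-length signals at the same sample rate), not the authors' exact procedure:

```python
import numpy as np

def amplitude_envelope(signal, win=400):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def envelope_similarity(sig_a, sig_b, win=400):
    """Pearson correlation between two utterances' amplitude envelopes."""
    ea = amplitude_envelope(sig_a, win)
    eb = amplitude_envelope(sig_b, win)
    return float(np.corrcoef(ea, eb)[0, 1])

# An amplitude-scaled copy has the same envelope shape, and Pearson
# correlation is scale-invariant, so similarity is ~1.0.
sr = 16000
t = np.arange(sr) / sr
a = np.sin(2 * np.pi * 120 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
b = 0.7 * a
```

Comparing envelopes rather than raw waveforms captures shared rhythm and intensity contours while ignoring fine spectral detail, which is why envelope measures are popular for convergence analyses.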

  2. A randomized controlled trial comparing two techniques for unilateral cleft lip and palate: Growth and speech outcomes during mixed dentition.

    PubMed

    Ganesh, Praveen; Murthy, Jyotsna; Ulaghanathan, Navitha; Savitha, V H

    2015-07-01

    To study the growth and speech outcomes in children who were operated on for unilateral cleft lip and palate (UCLP) by a single surgeon using two different treatment protocols. A total of 200 consecutive patients with nonsyndromic UCLP were randomly allocated to two different treatment protocols. Of the 200 patients, 179 completed the protocol. However, only 85 patients presented for follow-up during the mixed dentition period (7-10 years of age). The treatment protocols were as follows: Protocol 1 consisted of the vomer flap (VF), whereby patients underwent primary lip and nose repair and a vomer flap for single-layer closure of the hard palate, followed by soft palate repair 6 months later; Protocol 2 consisted of the two-flap technique (TF), whereby the cleft palate (CP) was repaired with the two-flap technique after primary lip and nose repair. GOSLON Yardstick scores for dental arch relation, and speech outcomes based on universal reporting parameters, were recorded. A total of 40 patients in the VF group and 45 in the TF group completed the treatment protocols. The GOSLON scores showed marginally better outcomes in the VF group compared to the TF group. Statistically significant differences were found in only two speech parameters, with better outcomes in the TF group. Our results showed marginally better growth outcomes in the VF group compared to the TF group. However, the speech outcomes were better in the TF group. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  3. Functional MRI evidence for fine motor praxis dysfunction in children with persistent speech disorders.

    PubMed

    Redle, Erin; Vannest, Jennifer; Maloney, Thomas; Tsevat, Rebecca K; Eikenberry, Sarah; Lewis, Barbara; Shriberg, Lawrence D; Tkach, Jean; Holland, Scott K

    2015-02-09

    Children with persistent speech disorders (PSD) often present with overt or subtle motor deficits; whether speech disorders and motor deficits arise from a shared neurological base is currently unknown. Functional MRI (fMRI) was used to examine the brain networks supporting fine motor praxis in children with PSD who had no clinically identified fine motor deficits. This case-control study included 12 children with PSD (mean age 7.42 years, four female) and 12 controls (mean age 7.44 years, four female). Children completed behavioral evaluations using standardized motor assessments and parent-reported functional measures. During fMRI scanning, participants completed a cued finger-tapping task contrasted with passive listening. A general linear model approach identified brain regions associated with finger tapping in each group and regions that differed between groups. The relationship between regional fMRI activation and fine motor skill was assessed using a regression analysis. Children with PSD had significantly poorer results for rapid speech production and fine motor praxis skills, but did not differ on classroom functional skills. Functional MRI results showed that children with PSD had significantly more activation in the cerebellum during finger tapping. Positive correlations between performance on a fine motor praxis test and activation in multiple cortical regions were noted for children with PSD but not for controls. Over-activation in the cerebellum during a motor task may reflect a subtle abnormality in the non-speech motor neural circuitry in children with PSD. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Evaluation of Palatal Plate Thickness of Maxillary Prosthesis on Phonation- A Comparative Clinical Study

    PubMed Central

    B, Sreedevi; Anne, Gopinadh; Manne, Prakash; Bindu O, Swetha Hima; Atla, Jyothi; Deepthi, Sneha; Chaitanya A, Krishna

    2014-01-01

    Background: Prosthodontic treatment involves clinical procedures that influence speech performance directly or indirectly. Prosthetic rehabilitation of missing teeth with partial or complete maxillary removable dentures influences individual voice characteristics such as phonation and resonance. Aim: To evaluate the effect of acrylic palatal plate thickness (1 mm to 3 mm) of a maxillary prosthesis on phonation. Materials and Methods: Twelve subjects aged 20-25 years who had a full complement of teeth and no speech problems were selected randomly. Speech evaluation was done under four experimental conditions: without any experimental acrylic palatal plate (control), and with experimental acrylic palatal plates of 1 mm, 2 mm and 3 mm thickness, respectively. The speech material for the phonation test consisted of the vowel sounds /a/, /i/, and /o/. Speech analysis to assess phonation was done using digital acoustic analysis (PRAAT software). The results were statistically analyzed by one-way ANOVA and Tukey's post-hoc tests for comparison of the four experimental conditions with respect to the different vowel sounds. Results: Mean harmonics-to-noise ratio (HNR) values obtained for all the experimental conditions did not differ significantly (p>0.05). Increasing the thickness of the acrylic palatal plate of a maxillary prosthesis from 1 mm to 3 mm in complete or partial maxillary removable dentures had no significant effect on the phonation of the vowel sounds /a/, /i/ and /o/. Conclusion: Increasing the thickness of the palatal plate from 1 mm to 3 mm showed no significant effect on phonation. PMID:24959508
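
The one-way ANOVA used to compare the four plate conditions reduces to a ratio of between-group to within-group variance; a minimal pure-Python sketch with made-up values (not the study's HNR data):

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative check with two small groups
f = one_way_anova_f([1.0, 2.0], [3.0, 4.0])
```

A small F (relative to the F distribution's critical value for the given degrees of freedom) corresponds to the non-significant p>0.05 result reported; post-hoc tests such as Tukey's are then only meaningful when the overall F is significant.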

  5. Attentional Control Buffers the Effect of Public Speaking Anxiety on Performance.

    PubMed

    Jones, Christopher R; Fazio, Russell H; Vasey, Michael W

    2012-09-01

    We explored dispositional differences in the ability to self-regulate attentional processes in the domain of public speaking. Participants first completed measures of speech anxiety and attentional control. In a second session, participants prepared and performed a short speech. Fear of public speaking negatively impacted performance only for those low in attentional control. Thus, attentional control appears to act as a buffer that facilitates successful self-regulation despite performance anxiety.

  6. Attentional Control Buffers the Effect of Public Speaking Anxiety on Performance

    PubMed Central

    Jones, Christopher R.; Fazio, Russell H.; Vasey, Michael W.

    2011-01-01

    We explored dispositional differences in the ability to self-regulate attentional processes in the domain of public speaking. Participants first completed measures of speech anxiety and attentional control. In a second session, participants prepared and performed a short speech. Fear of public speaking negatively impacted performance only for those low in attentional control. Thus, attentional control appears to act as a buffer that facilitates successful self-regulation despite performance anxiety. PMID:22924093

  7. Djinnati syndrome: symptoms and prevalence in rural population of Baluchistan (southeast of Iran).

    PubMed

    Bakhshani, Nour Mohammad; Hosseinbore, Nasrin; Kianpoor, Mohsen

    2013-12-01

    The present study describes "Djinnati," a culture-bound syndrome, and examines its prevalence and demographic attributes such as age, gender and education level in the rural population of Baluchistan in southeast Iran. In this cross-sectional study, the participants (n=4129) were recruited from people living in rural areas of Baluchistan (southeast Iran) by multistage sampling. The data were collected through interviews with local healers, health care personnel, family health records, and interviews with patients suspected of having the disorder and their relatives. We administered the Dissociative Experiences Scale. The prevalence of Djinnati syndrome was about 0.5% in the studied population and 1.03% in women. All patients who experienced episodic symptoms of Djinnati were female. The most commonly reported symptoms were altered consciousness and memory, muteness, laughing, crying, incomprehensible speech and hallucination, which were attributed to a foreign entity called a "Djinn." In addition, loss of speech or a change in speech rhythm and tone of voice was observed in a subgroup. In one case, speaking in a different language during the attack was reported. There was partial, and rarely complete, amnesia during the attack. Attacks usually lasted from 30 minutes to 2 hours. It is suggested that future studies explore the prevalence of Djinnati syndrome in women and examine predisposing, precipitating, and maintaining factors. It is further suggested that a comprehensive pathology model integrate data on the socio-cultural context in order to prevent and treat this syndrome. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Neural Recruitment for the Production of Native and Novel Speech Sounds

    PubMed Central

    Moser, Dana; Fridriksson, Julius; Bonilha, Leonardo; Healy, Eric W.; Baylis, Gordon; Baker, Julie; Rorden, Chris

    2010-01-01

    Two primary areas of damage have been implicated in apraxia of speech (AOS), depending on time post-stroke: (1) the left inferior frontal gyrus (IFG) in acute patients, and (2) the left anterior insula (aIns) in chronic patients. While AOS is widely characterized as a disorder of motor speech planning, little is known about the specific contributions of each of these regions to speech. The purpose of this study was to investigate cortical activation during speech production with a specific focus on the aIns and the IFG in normal adults. While undergoing sparse fMRI, 30 normal adults completed a 30-minute speech-repetition task consisting of three-syllable nonwords that contained either (a) English (native) syllables or (b) non-English (novel) syllables. When the novel syllable productions were compared to the native syllable productions, greater neural activation was observed in the aIns and IFG, particularly during the first 10 minutes of the task when novelty was greatest. Although activation in the aIns remained high throughout the task for novel productions, greater activation was clearly demonstrated when the initial 10 minutes were compared to the final 10 minutes of the task. These results suggest increased activity within an extensive neural network, including the aIns and IFG, when the motor speech system is taxed, such as during the production of novel speech. We speculate that the amount of left aIns recruitment during speech production may be related to the internal construction of the motor speech unit, such that greater novelty (or lower automaticity) would place greater demands on it. The role of the IFG as a storehouse and integrative processor for previously acquired routines is also discussed. PMID:19385020

  9. Song and speech: examining the link between singing talent and speech imitation ability.

    PubMed

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  10. Speech pathologists’ experiences with stroke clinical practice guidelines and the barriers and facilitators influencing their use: a national descriptive study

    PubMed Central

    2014-01-01

    Background Communication and swallowing disorders are a common consequence of stroke. Clinical practice guidelines (CPGs) have been created to assist health professionals to put research evidence into clinical practice and can improve stroke care outcomes. However, CPGs are often not successfully implemented in clinical practice and research is needed to explore the factors that influence speech pathologists’ implementation of stroke CPGs. This study aimed to describe speech pathologists’ experiences and current use of guidelines, and to identify what factors influence speech pathologists’ implementation of stroke CPGs. Methods Speech pathologists working in stroke rehabilitation who had used a stroke CPG were invited to complete a 39-item online survey. Content analysis and descriptive and inferential statistics were used to analyse the data. Results 320 participants from all states and territories of Australia were surveyed. Almost all speech pathologists had used a stroke CPG and had found the guideline “somewhat useful” or “very useful”. Factors that speech pathologists perceived influenced CPG implementation included the: (a) guideline itself, (b) work environment, (c) aspects related to the speech pathologist themselves, (d) patient characteristics, and (e) types of implementation strategies provided. Conclusions There are many different factors that can influence speech pathologists’ implementation of CPGs. The factors that influenced the implementation of CPGs can be understood in terms of knowledge creation and implementation frameworks. Speech pathologists should continue to adapt the stroke CPG to their local work environment and evaluate their use. To enhance guideline implementation, they may benefit from a combination of educational meetings and resources, outreach visits, support from senior colleagues, and audit and feedback strategies. PMID:24602148

  11. Voice Acoustical Measurement of the Severity of Major Depression

    ERIC Educational Resources Information Center

    Cannizzaro, Michael; Harel, Brian; Reilly, Nicole; Chappell, Phillip; Snyder, Peter J.

    2004-01-01

    A number of empirical studies have documented the relationship between quantifiable and objective acoustical measures of voice and speech, and clinical subjective ratings of severity of Major Depression. To further explore this relationship, speech samples were extracted from videotape recordings of structured interviews made during the…

  12. Perceptions of University Instructors When Listening to International Student Speech

    ERIC Educational Resources Information Center

    Sheppard, Beth; Elliott, Nancy; Baese-Berk, Melissa

    2017-01-01

    Intensive English Program (IEP) Instructors and content faculty both listen to international students at the university. For these two groups of instructors, this study compared perceptions of international student speech by collecting comprehensibility ratings and transcription samples for intelligibility scores. No significant differences were…

  13. Real-time continuous visual biofeedback in the treatment of speech breathing disorders following childhood traumatic brain injury: report of one case.

    PubMed

    Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C

    1999-01-01

    The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks was devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.

  14. Relationship between quality of life instruments and phonatory function in tracheoesophageal speech with voice prosthesis.

    PubMed

    Miyoshi, Masayuki; Fukuhara, Takahiro; Kataoka, Hideyuki; Hagino, Hiroshi

    2016-04-01

    The use of tracheoesophageal speech with voice prosthesis (T-E speech) after total laryngectomy has increased recently as a method of vocalization following laryngeal cancer. Previous research has not investigated the relationship between quality of life (QOL) and phonatory function in those using T-E speech. This study aimed to demonstrate the relationship between phonatory function and both comprehensive health-related QOL and QOL related to speech in people using T-E speech. The subjects of the study were 20 male patients using T-E speech after total laryngectomy. At a visit to our clinic, the subjects underwent a phonatory function test and completed three questionnaires: the MOS 8-Item Short-Form Health Survey (SF-8), the Voice Handicap Index-10 (VHI-10), and the Voice-Related Quality of Life (V-RQOL) Measure. A significant correlation was observed between the physical component summary (PCS), a summary score of SF-8, and VHI-10. Additionally, a significant correlation was observed between the SF-8 mental component summary (MCS) and both VHI-10 and VRQOL. Significant correlations were also observed between voice intensity in the phonatory function test and both VHI-10 and V-RQOL. Finally, voice intensity was significantly correlated with the SF-8 PCS. QOL questionnaires and phonatory function tests showed that, in people using T-E speech after total laryngectomy, voice intensity was correlated with comprehensive QOL, including physical and mental health. This finding suggests that voice intensity can be used as a performance index for speech rehabilitation.

  15. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis

    PubMed Central

    Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments (“How would you rate your ability to understand speech?”; “How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?”). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both for general and in noisy environment) were correlated with hearing level and with audiologically-assessed speech comprehension ability. 
In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209

  16. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis.

    PubMed

    Vielsmeier, Veronika; Kreuzer, Peter M; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R O; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

    Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments ("How would you rate your ability to understand speech?"; "How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?"). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both for general and in noisy environment) were correlated with hearing level and with audiologically-assessed speech comprehension ability. 
In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role.

  17. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects.
The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale that assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile that assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores in the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.
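The effect sizes reported above (0.76 to 1.7) are standardized mean differences. As a rough illustration of that statistic (not the study's own analysis code; the score arrays are hypothetical), Cohen's d with a pooled standard deviation can be computed from two samples of scores:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d between two score samples, using the pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    # Sample variances (n - 1 denominator), pooled across the two groups
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_b - mean_a) / pooled_sd
```

On this convention, d values near 0.8 are conventionally labeled "large," which is why the reported 0.76-1.7 range reads as a substantial training effect.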

  18. Should singing activities be included in speech and voice therapy for prepubertal children?

    PubMed

    Rinta, Tiija; Welch, Graham F

    2008-01-01

    Customarily, speaking and singing have tended to be regarded as two completely separate sets of behaviors in clinical and educational settings. The treatment of speech and voice disorders has focused on the client's speaking ability, as this is perceived to be the main vocal behavior of concern. However, according to a broader voice-science perspective, given that the same vocal structure is used for speaking and singing, it may be possible to include singing in speech and voice therapy. In this article, a theoretical framework is proposed that indicates possible benefits from the inclusion of singing in such therapeutic settings. Based on a literature review, it is demonstrated theoretically why singing activities can potentially be exploited in the treatment of prepubertal children suffering from speech and voice disorders. Based on this theoretical framework, implications for further empirical research and practice are suggested.

  19. Association between enterovirus infection and speech and language impairments: A nationwide population-based study.

    PubMed

    Hung, Tai-Hsin; Chen, Vincent Chin-Hung; Yang, Yao-Hsu; Tsai, Ching-Shu; Lu, Mong-Liang; McIntyre, Roger S; Lee, Yena; Huang, Kuo-You

    2018-06-01

    Delay and impairment in speech and language are common developmental problems in younger populations. Hitherto, there has been minimal study of the association between common childhood infections (e.g., enterovirus [EV]) and speech and language. The impetus for evaluating this association is provided by evidence linking inflammation to neurodevelopmental disorders. Herein we sought to determine whether an association exists between EV infection and subsequent diagnoses of speech and language impairments in a nationwide population-based sample in Taiwan. Our study acquired data from the Taiwan National Health Insurance Research Database. The sample comprised individuals under 18 years of age with newly diagnosed EV infection during the period from January 1998 to December 2011. A total of 39,669 eligible cases were compared to matched controls and assessed during the study period for incident cases of speech and language impairments. Cox regression analyses were applied, adjusting for sex, age, and other physical and mental problems. In the fully adjusted Cox regression model for hazard ratios, EV infection was positively associated with speech and language impairments (HR = 1.14, 95% CI: 1.06-1.22) after adjusting for age, sex, and other confounds. Compared to the control group, the hazard ratio for speech and language impairments was 1.12 (95% CI: 1.03-1.21) amongst the group with EV infection without hospitalization, and 1.26 (95% CI: 1.10-1.45) amongst the group with EV infection with hospitalization. EV infection is temporally associated with incident speech and language impairments. Our findings provide a rationale for educating families that EV infection may be associated with subsequent speech and language problems in susceptible individuals and that monitoring for such a presentation is warranted. WHAT THIS PAPER ADDS: Speech and language impairments associated with central nervous system infections have been reported in the literature.
EVs are medically important human pathogens associated with select neuropsychiatric diseases. Notwithstanding, relatively few reports have examined the effects of EV infection on speech and language problems. Our study used a nationwide longitudinal dataset and found that children with EV infection have a greater risk for speech and language impairments compared with controls. Infected children with other comorbidities or risk factors may be more likely to develop speech problems. Clinicians should be vigilant for the onset of language developmental abnormalities in preschool children with EV infection. Copyright © 2018 Elsevier Ltd. All rights reserved.
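The hazard ratios quoted above are exponentiated Cox regression coefficients, and their confidence intervals are symmetric on the log scale. A minimal sketch of that arithmetic (illustrative only, not the study's code; the coefficient and standard error are backed out from the reported HR = 1.14, 95% CI 1.06-1.22):

```python
import math

def hazard_ratio_ci(beta, se, z=1.96):
    """Hazard ratio and 95% CI from a Cox coefficient and its standard error."""
    hr = math.exp(beta)
    return hr, math.exp(beta - z * se), math.exp(beta + z * se)

# Coefficient and SE implied by the reported estimate (HR = 1.14, CI 1.06-1.22):
beta = math.log(1.14)
se = (math.log(1.22) - math.log(1.06)) / (2 * 1.96)  # CI width on the log scale
hr, lo, hi = hazard_ratio_ci(beta, se)
```

A useful sanity check on any reported Cox result: the point estimate should sit at the geometric mean of the CI bounds, which holds here (sqrt(1.06 x 1.22) is approximately 1.14).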

  20. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    NASA Astrophysics Data System (ADS)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  1. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-05-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  2. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension

    PubMed Central

    Özyürek, Asli; Jensen, Ole

    2018-01-01

    Abstract During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued‐recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand‐area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low‐ and high‐frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low‐ and high‐frequency oscillations in predicting the integration of auditory and visual information at a semantic level. PMID:29380945

  3. SPEECH EVALUATION WITH AND WITHOUT PALATAL OBTURATOR IN PATIENTS SUBMITTED TO MAXILLECTOMY

    PubMed Central

    de Carvalho-Teles, Viviane; Pegoraro-Krook, Maria Inês; Lauris, José Roberto Pereira

    2006-01-01

    Most patients who have undergone resection of the maxillae due to benign or malignant tumors in the palatomaxillary region present with speech and swallowing disorders. Coupling of the oral and nasal cavities increases nasal resonance, resulting in hypernasality and unintelligible speech. Prosthodontic rehabilitation of maxillary resections with effective separation of the oral and nasal cavities can improve speech and esthetics, and assist the psychosocial adjustment of the patient as well. The objective of this study was to evaluate the efficacy of the palatal obturator prosthesis on speech intelligibility and resonance of 23 patients with ages ranging from 18 to 83 years (mean = 49.5 years), who had undergone inframedial-structural maxillectomy. The patients were requested to count from 1 to 20, to repeat 21 words, and to speak spontaneously for 15 seconds, once with and again without the prosthesis, for tape-recording purposes. Resonance and speech intelligibility were judged by 5 speech-language pathologists from the tape-recorded samples. The results showed that the majority of patients (82.6%) significantly improved their speech intelligibility, and 16 patients (69.9%) exhibited a significant reduction in hypernasality with the obturator in place. The results of this study indicated that the maxillary obturator prosthesis was effective in improving speech intelligibility and resonance in patients who had undergone maxillectomy. PMID:19089242

  4. Speech Intelligibility in Persian Hearing Impaired Children with Cochlear Implants and Hearing Aids.

    PubMed

    Rezaei, Mohammad; Emadi, Maryam; Zamani, Peyman; Farahani, Farhad; Lotfi, Gohar

    2017-04-01

    The aim of the present study is to evaluate and compare speech intelligibility in hearing impaired children with cochlear implants (CI), hearing aid (HA) users, and children with normal hearing (NH). The sample consisted of 45 Persian-speaking children aged 3 to 5 years old in Hamadan. They were divided into three groups of 15: children with NH, children with CI, and children using HAs. Participants were evaluated with a test of speech intelligibility level. Results of ANOVA on the speech intelligibility test showed that NH children performed significantly better than hearing impaired children with CI and HA. Post-hoc analysis using the Scheffe test indicated that the mean speech intelligibility score of normal children was higher than that of the HA and CI groups, but the difference between the mean speech intelligibility of children with hearing loss using cochlear implants and those using HAs was not significant. It is clear that even with remarkable advances in HA technology, many hearing impaired children continue to find speech production a challenging problem. Given that speech intelligibility is a key element in proper communication and social interaction, educational and rehabilitation programs are essential to improve the speech intelligibility of children with hearing loss.
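The three-group comparison above rests on a one-way ANOVA. As a sketch of the F statistic that test computes (illustrative only; the study's actual scores and software are not reproduced, and the group arrays below are hypothetical):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k independent groups:
    between-group mean square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F relative to the F distribution with (k - 1, n - k) degrees of freedom is what licenses the post-hoc Scheffe comparisons the abstract mentions.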

  5. Recording high quality speech during tagged cine-MRI studies using a fiber optic microphone.

    PubMed

    NessAiver, Moriel S; Stone, Maureen; Parthasarathy, Vijay; Kahana, Yuvi; Paritsky, Alexander; Paritsky, Alex

    2006-01-01

    To investigate the feasibility of obtaining high quality speech recordings during cine imaging of tongue movement using a fiber optic microphone. A Complementary Spatial Modulation of Magnetization (C-SPAMM) tagged cine sequence triggered by an electrocardiogram (ECG) simulator was used to image a volunteer while speaking the syllable pairs /a/-/u/, /i/-/u/, and the words "golly" and "Tamil" in sync with the imaging sequence. A noise-canceling, optical microphone was fastened approximately 1-2 inches above the mouth of the volunteer. The microphone was attached via optical fiber to a laptop computer, where the speech was sampled at 44.1 kHz. A reference recording of gradient activity with no speech was subtracted from target recordings. Good quality speech was discernible above the background gradient sound using the fiber optic microphone without reference subtraction. The audio waveform of gradient activity was extremely stable and reproducible. Subtraction of the reference gradient recording further reduced gradient noise by roughly 21 dB, resulting in exceptionally high quality speech waveforms. It is possible to obtain high quality speech recordings using an optical microphone even during exceptionally loud cine imaging sequences. This opens up the possibility of more elaborate MRI studies of speech including spectral analysis of the speech signal in all types of MRI.
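The roughly 21 dB improvement from subtracting the reference gradient recording is a ratio of residual noise amplitudes expressed on a log scale. A minimal sketch of that computation, assuming hypothetical sample arrays rather than the study's recordings:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_reduction_db(residual_before, residual_after):
    """Gradient-noise reduction, in dB, achieved by reference subtraction."""
    return 20 * math.log10(rms(residual_before) / rms(residual_after))

# A tenfold drop in residual amplitude corresponds to a 20 dB reduction,
# comparable in scale to the ~21 dB reported in the record above.
reduction = noise_reduction_db([0.5, -0.5, 0.5, -0.5], [0.05, -0.05, 0.05, -0.05])
```

The subtraction itself only works because, as the abstract notes, the gradient waveform is stable and reproducible across repetitions; otherwise the reference would not align with the noise in the target recording.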

  6. Everyday listeners' impressions of speech produced by individuals with adductor spasmodic dysphonia.

    PubMed

    Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M

    2015-01-01

    Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. The Relationship between Binaural Benefit and Difference in Unilateral Speech Recognition Performance for Bilateral Cochlear Implant Users

    PubMed Central

    Yoon, Yang-soo; Li, Yongxin; Kang, Hou-Yong; Fu, Qian-Jie

    2011-01-01

    Objective: The full benefit of bilateral cochlear implants may depend on the unilateral performance with each device, the speech materials, the processing ability of the user, and/or the listening environment. In this study, bilateral and unilateral speech performances were evaluated in terms of recognition of phonemes and sentences presented in quiet or in noise. Design: Speech recognition was measured for unilateral left, unilateral right, and bilateral listening conditions; speech and noise were presented at 0° azimuth. The "binaural benefit" was defined as the difference between bilateral performance and unilateral performance with the better ear. Study Sample: Nine adults with bilateral cochlear implants participated. Results: On average, results showed a greater binaural benefit in noise than in quiet for all speech tests. More importantly, the binaural benefit was greater when unilateral performance was similar across ears. As the difference in unilateral performance between ears increased, the binaural advantage decreased; this functional relationship was observed across the different speech materials and noise levels even though there was substantial intra- and inter-subject variability. Conclusions: The results indicate that subjects who show symmetry in speech recognition performance between implanted ears in general show a large binaural benefit. PMID:21696329
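The "binaural benefit" defined in this record is simple arithmetic on per-condition scores. A sketch with hypothetical percent-correct values (the study's own scores are not reproduced here):

```python
def binaural_benefit(bilateral, left, right):
    """Binaural benefit as defined in the record: bilateral score minus
    the unilateral score of the better ear (e.g., percent correct)."""
    return bilateral - max(left, right)
```

On this definition, the study's key observation is that the benefit shrinks as abs(left - right) grows: a listener scoring 60% and 70% unilaterally tends to gain more from the second implant than one scoring 40% and 70%.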

  8. The brain dynamics of rapid perceptual adaptation to adverse listening conditions.

    PubMed

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2013-06-26

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.

  9. Development and functional significance of private speech among attention-deficit hyperactivity disordered and normal boys.

    PubMed

    Berk, L E; Potts, M K

    1991-06-01

    We compared the development of spontaneous private speech and its relationship to self-controlled behavior in a sample of 6- to 12-year-olds with attention-deficit hyperactivity disorder (ADHD) and matched normal controls. Thirty-eight boys were observed in their classrooms while engaged in math seatwork. Results revealed that ADHD children were delayed in private speech development in that they engaged in more externalized, self-guiding speech and less inaudible, internalized speech than normal youngsters. Several findings suggest that the developmental lag was a consequence of a highly unmanageable attentional system that prevents ADHD children's private speech from gaining efficient mastery over behavior. First, self-guiding speech was associated with greater attentional focus only among the least distractible ADHD boys. Second, the most mature, internalized speech forms were correlated with self-stimulating behavior for ADHD subjects but not for controls. Third, observations of ADHD children both on and off stimulant medication indicated that reducing their symptoms substantially increased the maturity of private speech and its association with motor quiescence and attention to task. Results suggest that the Vygotskian hypothesis of a unidirectional path of influence from private speech to self-controlled behavior should be expanded into a bidirectional model. These findings may also shed light on why treatment programs that train children with attentional deficits in speech-to-self have shown limited efficacy.

  10. Methods of analysis speech rate: a pilot study.

    PubMed

    Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea

    2016-01-01

    To describe the performance of fluent adults in different measures of speech rate. The study included 24 fluent adults, of both genders, speakers of Brazilian Portuguese, who were born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), the speech rate in phonemes per second and the articulation rate with and without disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was 5%. There were significant differences between measures of speech rate involving syllables. The multiple comparisons showed that all three measures differed from one another. There was no effect of age on the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rates with and without disfluencies can be a complementary approach in the evaluation of speech rate.
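The rate measures named above can be sketched as straightforward ratios of unit counts to speaking time. This is an illustrative reconstruction, not the protocol's published formulas; the counts and durations below are hypothetical, and the articulation rate is approximated here by excluding time spent on disfluencies:

```python
def speech_rates(n_words, n_syllables, n_phonemes, total_s, disfluent_s=0.0):
    """Speech-rate measures of the kind described in the record:
    words/min and syllables/min over total speaking time, phonemes/sec,
    and an articulation rate over time excluding disfluencies."""
    minutes = total_s / 60.0
    return {
        "words_per_min": n_words / minutes,
        "syllables_per_min": n_syllables / minutes,
        "phonemes_per_sec": n_phonemes / total_s,
        "articulation_syll_per_sec": n_syllables / (total_s - disfluent_s),
    }

# E.g., 120 words / 200 syllables / 600 phonemes in 60 s, 10 s of it disfluent
rates = speech_rates(120, 200, 600, 60.0, disfluent_s=10.0)
```

Separating the articulation rate from the overall speech rate is what lets disfluency time be analyzed independently of how fast the fluent stretches are articulated.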

  11. Connected speech as a marker of disease progression in autopsy-proven Alzheimer's disease.

    PubMed

    Ahmed, Samrah; Haigh, Anne-Marie F; de Jager, Celeste A; Garrard, Peter

    2013-12-01

    Although an insidious history of episodic memory difficulty is a typical presenting symptom of Alzheimer's disease, detailed neuropsychological profiling frequently demonstrates deficits in other cognitive domains, including language. Previous studies from our group have shown that language changes may be reflected in connected speech production in the earliest stages of typical Alzheimer's disease. The aim of the present study was to identify features of connected speech that could be used to examine longitudinal profiles of impairment in Alzheimer's disease. Samples of connected speech were obtained from 15 former participants in a longitudinal cohort study of ageing and dementia, in whom Alzheimer's disease was diagnosed during life and confirmed at post-mortem. All patients met clinical and neuropsychological criteria for mild cognitive impairment between 6 and 18 months before converting to a status of probable Alzheimer's disease. In a subset of these patients neuropsychological data were available, both at the point of conversion to Alzheimer's disease, and after disease severity had progressed from the mild to moderate stage. Connected speech samples from these patients were examined at later disease stages. Spoken language samples were obtained using the Cookie Theft picture description task. Samples were analysed using measures of syntactic complexity, lexical content, speech production, fluency and semantic content. Individual case analysis revealed that subtle changes in language were evident during the prodromal stages of Alzheimer's disease, with two-thirds of patients with mild cognitive impairment showing significant but heterogeneous changes in connected speech. However, impairments at the mild cognitive impairment stage did not necessarily entail deficits at mild or moderate stages of disease, suggesting non-language influences on some aspects of performance. 
Subsequent examination of these measures revealed significant linear trends over the three stages of disease in syntactic complexity, semantic and lexical content. The findings suggest, first, that there is a progressive disruption in language integrity, detectable from the prodromal stage in a subset of patients with Alzheimer's disease, and secondly that measures of semantic and lexical content and syntactic complexity best capture the global progression of linguistic impairment through the successive clinical stages of disease. The identification of disease-specific language impairment in prodromal Alzheimer's disease could enhance clinicians' ability to distinguish probable Alzheimer's disease from changes attributable to ageing, while longitudinal assessment could provide a simple approach to disease monitoring in therapeutic trials.

  12. GRIN2A: an aptly named gene for speech dysfunction.

    PubMed

    Turner, Samantha J; Mayes, Angela K; Verhoeven, Andrea; Mandelstam, Simone A; Morgan, Angela T; Scheffer, Ingrid E

    2015-02-10

    To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. © 2015 American Academy of Neurology.

  13. Martin Luther King, Jr. Teacher's Resource Manual.

    ERIC Educational Resources Information Center

    Connecticut State Dept. of Education, Hartford.

    This Connecticut teachers' manual on Martin Luther King, Jr. includes: (1) teacher background information; (2) five excerpts from King's speeches; (3) four themes for lesson plans; and (4) sample lesson plans. The teacher's background information provides biographical sketches of King and his precursors. The five speeches reproduced here are…

  14. Native Reactions to Non-Native Speech: A Review of Empirical Research.

    ERIC Educational Resources Information Center

    Eisenstein, Miriam

    1983-01-01

    Recent research on native speakers' reactions to nonnative speech that views listeners, speakers, and language from a variety of perspectives using both objective and subjective research paradigms is reviewed. Studies of error gravity, relative intelligibility of language samples, the role of accent, speakers' characteristics, and context in which…

  15. PACs: A Framework for Determining Appropriate Service Delivery Options.

    ERIC Educational Resources Information Center

    Blosser, Jean L.; Kratcoski, Annette

    1997-01-01

    Offers speech-language clinicians a framework for team decision making and service delivery by encouraging speech-language pathologists and their colleagues to consider the unique combination of providers, activities, and contexts (PACs) necessary to meet the specific needs of each individual with a communication disorder. Sample cases involving…

  16. Phonological Acquisition of Korean Consonants in Conversational Speech Produced by Young Korean Children

    ERIC Educational Resources Information Center

    Kim, Minjung; Kim, Soo-Jin; Stoel-Gammon, Carol

    2017-01-01

    This study investigates the phonological acquisition of Korean consonants using conversational speech samples collected from sixty monolingual typically developing Korean children aged two, three, and four years. Phonemic acquisition was examined for syllable-initial and syllable-final consonants. Results showed that Korean children acquired stops…

  17. Phonological Development of Monolingual Haitian Creole-Speaking Preschool Children

    ERIC Educational Resources Information Center

    Archer, Justine; Champion, Tempii; Tyrone, Martha E.; Walters, Sylvia

    2018-01-01

    This study provides preliminary data on the phonological development of Haitian Creole-Speaking children. The purpose of this study is to determine phonological acquisition in the speech of normally developing monolingual Haitian Creole-Speaking preschoolers, ages 2 to 4. Speech samples were collected cross-sectionally from 12 Haitian children…

  18. Using assistive robots to promote inclusive education.

    PubMed

    Encarnação, P; Leite, T; Nunes, C; Nunes da Ponte, M; Adams, K; Cook, A; Caiado, A; Pereira, J; Piedade, G; Ribeiro, M

    2017-05-01

    This paper describes the development and testing of physical and virtual integrated augmentative manipulation and communication assistive technologies (IAMCATs) that enable children with motor and speech impairments to manipulate educational items by controlling a robot with a gripper while communicating through a speech-generating device. Nine children with disabilities, nine regular education teachers and nine special education teachers participated in the study. Teachers adapted academic activities so that they could also be performed by the children with disabilities using the IAMCAT. An inductive content analysis of the teachers' interviews before and after the intervention was performed. Teachers considered the IAMCAT to be a useful resource that can be integrated into regular class dynamics while respecting their curricular planning. It had a positive impact on the children with disabilities and on the educational community. However, teachers pointed out difficulties in managing the class, even with another adult present, due to the extra time required by children with disabilities to complete the activities. The developed assistive technologies enable children with disabilities to participate in academic activities, but full inclusion would require another adult in class and strategies to deal with the additional time required by the children to complete the activities. Implications for Rehabilitation: Integrated augmentative manipulation and communication assistive technologies are useful resources for promoting the participation of children with motor and speech impairments in classroom activities. Virtual tools, running on a computer screen, may be easier to use, but further research is needed to evaluate their effectiveness compared with physical tools. Full participation of children with motor and speech impairments in academic activities using these technologies requires another adult in class and adequate strategies to manage the extra time the child with disabilities may require to complete the activities.

  19. Relations Between Self-reported Executive Functioning and Speech Perception Skills in Adult Cochlear Implant Users.

    PubMed

    Moberly, Aaron C; Patel, Tirth R; Castellanos, Irina

    2018-02-01

    We hypothesized that, as a result of their hearing loss, adults with cochlear implants (CIs) would self-report poorer executive functioning (EF) skills than normal-hearing (NH) peers, and that these EF skills would be associated with performance on speech recognition tasks. EF refers to a group of higher-order neurocognitive skills responsible for behavioral and emotional regulation during goal-directed activity, and EF has been found to be poorer in children with CIs than in their NH age-matched peers. Moreover, there is increasing evidence that neurocognitive skills, including some EF skills, contribute to the ability to recognize speech through a CI. Thirty postlingually deafened adults with CIs and 42 age-matched NH adults were enrolled. Participants and their spouses or significant others (informants) completed a well-validated self-report or informant-report of EF, the Behavior Rating Inventory of Executive Function - Adult (BRIEF-A). CI users' speech recognition skills were assessed in quiet using several measures of sentence recognition. NH peers were tested on recognition of noise-vocoded versions of the same speech stimuli. CI users self-reported difficulty on EF tasks of shifting and task monitoring. In CI users, measures of speech recognition correlated with several self-reported EF skills. The present findings provide further evidence that neurocognitive factors, including specific EF skills, may decline in association with hearing loss, and that some of these EF skills contribute to speech processing under degraded listening conditions.

  20. Everyday listening questionnaire: correlation between subjective hearing and objective performance.

    PubMed

    Brendel, Martina; Frohne-Buechner, Carolin; Lesinski-Schiedat, Anke; Lenarz, Thomas; Buechner, Andreas

    2014-01-01

    Clinical experience has demonstrated that speech understanding by cochlear implant (CI) recipients has improved over recent years with the development of new technology. The Everyday Listening Questionnaire 2 (ELQ 2) was designed to collect information regarding the challenges faced by CI recipients in everyday listening. The aim of this study was to compare self-assessment of CI users using ELQ 2 with objective speech recognition measures and to compare results between users of older and newer coding strategies. During their regular clinical review appointments a group of representative adult CI recipients implanted with the Advanced Bionics implant system were asked to complete the questionnaire. The first 100 patients who agreed to participate in this survey were recruited independent of processor generation and speech coding strategy. Correlations between subjectively scored hearing performance in everyday listening situations and objectively measured speech perception abilities were examined relative to the speech coding strategies used. When subjects were grouped by strategy there were significant differences between users of older 'standard' strategies and users of the newer, currently available strategies (HiRes and HiRes 120), especially in the categories of telephone use and music perception. Significant correlations were found between certain subjective ratings and the objective speech perception data in noise. There is a good correlation between subjective and objective data. Users of more recent speech coding strategies tend to have fewer problems in difficult hearing situations.

  1. Multitasking During Degraded Speech Recognition in School-Age Children

    PubMed Central

    Ward, Kristina M.; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890

  2. The effect of unilateral electrostimulation of the subthalamic nucleus on respiratory/phonatory subsystems of speech production in Parkinson's disease--a preliminary report.

    PubMed

    Wang, Emily; Verhagen Metman, Leo; Bakay, Roy; Arzbaecher, Jean; Bernard, Bryan

    2003-01-01

    This paper reports findings on the respiratory/phonatory subsystems from an ongoing study investigating the effect of unilateral electrostimulation of the subthalamic nucleus (STN) on different speech subsystems in people with Parkinson's disease (PD). Speech recordings were made in the medication-off state at baseline and at three months post surgery, with stimulation on and with stimulation off, in six right-handed PD patients. Subjects completed several speech tasks. Acoustic analyses of the maximally sustained vowel phonation are reported. The results were compared with scores on the motor section of the Unified Parkinson's Disease Rating Scale (UPDRS-III) obtained under the same conditions. Results showed that stimulation improved UPDRS-III scores in all six subjects. While mild improvement was observed for all subjects in the stimulation-on condition, the three subjects who received left-STN stimulation showed a significant decline in vocal intensity and vowel duration from baseline, indicating that speech function was highly susceptible to microlesions caused by the surgical procedure itself when the surgical site was in the dominant hemisphere.

  3. Multitasking During Degraded Speech Recognition in School-Age Children.

    PubMed

    Grieco-Calub, Tina M; Ward, Kristina M; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children's multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children's accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children's dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children's proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.

  4. CNTNAP2 Is Significantly Associated With Speech Sound Disorder in the Chinese Han Population.

    PubMed

    Zhao, Yun-Jing; Wang, Yue-Ping; Yang, Wen-Zhu; Sun, Hong-Wei; Ma, Hong-Wei; Zhao, Ya-Ru

    2015-11-01

    Speech sound disorder is the most common communication disorder. Some investigations support the possibility that the CNTNAP2 gene might be involved in the pathogenesis of speech-related diseases. To investigate single-nucleotide polymorphisms in the CNTNAP2 gene, 300 unrelated patients with speech sound disorder and 200 normal controls were included in the study. Five single-nucleotide polymorphisms were amplified and directly sequenced. Significant differences were found in the genotype (P = .0003) and allele (P = .0056) frequencies of rs2538976 between patients and controls. The excess frequency of the A allele in the patient group remained significant after Bonferroni correction (P = .0280). A significant association was identified for the haplotype rs2710102T-rs17236239A-rs2538976A-rs2710117A (P = 4.10e-006). A neighboring single-nucleotide polymorphism, rs10608123, was found to be in complete linkage disequilibrium with rs2538976, with genotypes corresponding exactly to each other. The authors propose that these CNTNAP2 variants increase susceptibility to speech sound disorder. The single-nucleotide polymorphisms rs10608123 and rs2538976 may merge into one single-nucleotide polymorphism. © The Author(s) 2015.

  5. Intelligibility in Context Scale: Normative and Validation Data for English-Speaking Preschoolers.

    PubMed

    McLeod, Sharynne; Crowe, Kathryn; Shahaeian, Ameneh

    2015-07-01

    The purpose of this study was to provide normative and validation data on the Intelligibility in Context Scale (ICS; McLeod, Harrison, & McCormack, 2012c) for English-speaking children. The ICS is a 7-item, parent-report measure of children's speech intelligibility with a range of communicative partners. Data were collected from the parents of 803 Australian English-speaking children ranging in age from 4;0 (years;months) to 5;5 (37.0% were multilingual). The mean ICS score was 4.4 (SD = 0.7) out of a possible total score of 5. Children's speech was reported to be most intelligible to their parents, followed by their immediate family, friends, and teachers; children's speech was least intelligible to strangers. The ICS had high internal consistency (α = .94). Significant differences in scores were identified on the basis of sex and age, but not on the basis of socioeconomic status or the number of languages spoken. There were significant differences in scores between children whose parents had concerns about their child's speech (M = 3.9) and those who did not (M = 4.6); at the optimal cutoff, the ICS had a sensitivity of .82 and a specificity of .58. Test-retest reliability and criterion validity were established for 184 children with a speech sound disorder. There were significant low correlations between the ICS mean score and the percentage of phonemes correct (r = .30), percentage of consonants correct (r = .24), and percentage of vowels correct (r = .30) on the Diagnostic Evaluation of Articulation and Phonology (Dodd, Hua, Crosbie, Holm, & Ozanne, 2002). Thirty-one parents completed the ICS in relation to English and another language spoken by their child with a speech sound disorder; the significant correlations between the scores suggest that the ICS may be robust across languages. This article provides normative ICS data for English-speaking children and additional validation of the psychometric properties of the ICS. The robustness of the ICS is further supported by the finding that mean ICS scores were not affected by socioeconomic status, the number of languages spoken, or whether the ICS was completed in relation to English or another language. The ICS is recommended as a screening measure of children's speech intelligibility.

  6. Efficacy of Multiple-Talker Phonetic Identification Training in Postlingually Deafened Cochlear Implant Listeners.

    PubMed

    Miller, Sharon E; Zhang, Yang; Nelson, Peggy B

    2016-02-01

    This study implemented a pretest-intervention-posttest design to examine whether multiple-talker identification training enhanced phonetic perception of the /ba/-/da/ and /wa/-/ja/ contrasts in adult listeners who were deafened postlingually and have cochlear implants (CIs). Nine CI recipients completed 8 hours of identification training using a custom-designed training package. Perception of speech produced by familiar talkers (talkers used during training) and unfamiliar talkers (talkers not used during training) was measured before and after training. Five additional untrained CI recipients completed identical pre- and posttests over the same time course as the trainees to control for procedural learning effects. Perception of the speech contrasts produced by the familiar talkers significantly improved for the trained CI listeners, and effects of perceptual learning transferred to unfamiliar talkers. Such training-induced significant changes were not observed in the control group. The data provide initial evidence of the efficacy of the multiple-talker identification training paradigm for CI users who were deafened postlingually. This pattern of results is consistent with enhanced phonemic categorization of the trained speech sounds.

  7. Beyond stuttering: Speech disfluencies in normally fluent French-speaking children at age 4.

    PubMed

    Leclercq, Anne-Lise; Suaire, Pauline; Moyse, Astrid

    2018-01-01

    The aim of this study was to establish normative data on the speech disfluencies of normally fluent French-speaking children at age 4, an age at which stuttering has begun in 95% of children who stutter (Yairi & Ambrose, 2013). Fifty monolingual French-speaking children who do not stutter participated in the study. Analyses of a conversational speech sample comprising 250-550 words revealed an average of 10% total disfluencies, 2% stuttering-like disfluencies and around 8% non-stuttered disfluencies. Possible explanations for these high speech disfluency frequencies are discussed, including explanations linked to French in particular. The results shed light on the importance of normative data specific to each language.

  8. An exploratory trial of the effectiveness of an enhanced consultative approach to delivering speech and language intervention in schools.

    PubMed

    Mecrow, Carol; Beckwith, Jennie; Klee, Thomas

    2010-01-01

    Increased demand for access to specialist services for supporting children with speech, language and communication needs prompted a local service review of how best to allocate limited resources. This study arose from a wish to evaluate the effectiveness of an enhanced consultative approach to delivering speech and language intervention in local schools. The purpose was to evaluate an intensive speech and language intervention for children in mainstream schools delivered by specialist teaching assistants. A within-subjects, quasi-experimental exploratory trial was conducted, with each child serving as his or her own control with respect to the primary outcome measure. Thirty-five children between the ages of 4;2 and 6;10 (years;months) received speech and/or language intervention for an average of four 1-hour sessions per week over 10 weeks. The primary outcome measure consisted of change between pre- and post-intervention scores on probe tasks of treated and untreated behaviours, summed across the group of children, together with maintenance probes of treated behaviours. Secondary outcome measures included standardized tests (Clinical Evaluation of Language Fundamentals - Preschool(UK) (CELF-P(UK)); Diagnostic Evaluation of Articulation and Phonology (DEAP)) and questionnaires completed by parents/carers and school staff before and after the intervention period. The primary outcome measure showed improvement over the intervention period, with target behaviours showing a significantly larger increase than control behaviours. The gains made on the target behaviours as a result of intervention were sustained when reassessed 3-12 months later. These findings were replicated on a second set of targets and controls. Significant gains were also observed on CELF-Preschool(UK) receptive and expressive language standard scores from pre- to post-intervention. However, DEAP standard scores of speech ability did not increase over the intervention period, although improvements in raw scores were observed. Questionnaires completed before and after intervention showed some significant differences relating to how much the child's speech and language difficulties affected him or her at home and at school. This exploratory study demonstrates the benefit of an intensive therapy delivered by specialist teaching assistants for remediating speech and language difficulties experienced by young children in mainstream schools. The service delivery model was perceived by professionals as inclusive and effective practice, and the study provides empirical support for using both direct and indirect intervention in the school setting.

  9. Restoration of facial symmetry in a patient with bell palsy using a modified maxillary complete denture: a case report.

    PubMed

    Bagchi, Gautam; Nath, Dilip Kumar

    2012-01-01

    Permanent facial paralysis can be devastating for a patient. Modern society's emphasis on appearance and physical beauty contributes to this problem and often leads to isolation of patients embarrassed by their appearance. Lagophthalmos with ocular exposure, loss of oral competence with resultant drooling, alar collapse with nasal airway obstruction, and difficulties with mastication and speech production are all potential consequences of facial paralysis. Affected patients are confronted with both a cosmetic defect and the functional deficits associated with loss of facial nerve function. In this case history report, a modified maxillary complete denture permitted a patient with Bell palsy to carry on daily activities with minimal facial distortion, pain, speech difficulty, and associated emotional trauma.

  10. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in the identification of emotions from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. The study also examined whether auditory stimuli alone can convey the emotional content of speech without visual stimuli, and whether visual stimuli alone can convey it without auditory stimuli. The aim was to gain a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by a shared native language between speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were recognized better from visual than from auditory stimuli by both genders. Visual information about speech may not be tied to the language itself; instead, it may rest on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.

  11. Factors affecting articulation skills in children with velocardiofacial syndrome and children with cleft palate or velopharyngeal dysfunction: A preliminary report

    PubMed Central

    Baylis, Adriane L.; Munson, Benjamin; Moller, Karlind T.

    2010-01-01

    Objective: To examine the influence of speech perception, cognition, and implicit phonological learning on the articulation skills of children with velocardiofacial syndrome (VCFS) and children with cleft palate or velopharyngeal dysfunction (VPD). Design: Cross-sectional group experimental design. Participants: 8 children with VCFS and 5 children with non-syndromic cleft palate or VPD. Methods and Measures: All children participated in a phonetic inventory task, a speech perception task, an implicit priming nonword repetition task, a conversational sample, a nonverbal intelligence test, and a hearing screening. Speech tasks were scored for the percentage of phonemes correctly produced. Group differences and relations among measures were examined using nonparametric statistics. Results: Children in the VCFS group demonstrated significantly poorer articulation skills and lower standard scores of nonverbal intelligence compared with the children with cleft palate or VPD. There were no significant group differences in speech perception skills. For the implicit priming task, both groups of children were more accurate in producing primed nonwords than unprimed nonwords. Nonverbal intelligence and severity of velopharyngeal inadequacy for speech were correlated with articulation skills. Conclusions: In this study, children with VCFS had poorer articulation skills than children with cleft palate or VPD. The articulation difficulties seen in the children with VCFS did not appear to be associated with speech perception skills or with the ability to learn new phonological representations. Future research should continue to examine relationships between articulation, cognition, and velopharyngeal dysfunction in a larger sample of children with cleft palate and VCFS. PMID:18333642

  12. Psychoacoustic cues to emotion in speech prosody and music.

    PubMed

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music that closely match the responses of human subjects. We show that a significant part of listeners' second-by-second reported emotions in response to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
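
    Two of the seven descriptors named above, spectral centroid and spectral flux, can be computed from short-time magnitude spectra. The NumPy sketch below is illustrative only; the window length, hop size, and the particular flux definition are assumptions for demonstration, not the authors' implementation:

```python
import numpy as np

def spectral_features(signal, sr, win=1024, hop=512):
    """Per-frame spectral centroid (Hz) and spectral flux from short-time spectra."""
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1.0 / sr)
    centroids, fluxes = [], []
    prev = None
    for start in range(0, len(signal) - win + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + win] * window))
        # Centroid: magnitude-weighted mean frequency (perceptual "brightness")
        centroids.append(float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12)))
        if prev is not None:
            # Flux: summed positive change between consecutive magnitude spectra
            fluxes.append(float(np.sum(np.maximum(mag - prev, 0.0))))
        prev = mag
    return np.array(centroids), np.array(fluxes)

# A steady 1 kHz tone: centroid stays near 1000 Hz, flux stays near zero
sr = 16000
t = np.arange(sr) / sr
cent, flux = spectral_features(np.sin(2 * np.pi * 1000 * t), sr)
```

    For a steady pure tone the centroid sits at the tone's frequency and the flux stays near zero; rising flux marks spectral change such as note or phone onsets, which is what makes these two features useful for tracking expressive variation over time.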

  13. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical patterns, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their own speech, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are recalled in order to define requirements for speech visualization: the spectral representation must match the time, frequency and amplitude resolution of hearing, and continuous variations in the acoustic parameters of the speech signal must be depicted by continuous variations of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; an optimized analysis method was achieved with a fast Fourier transformation and a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
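
    The sonogram generation described, windowed short-time Fourier spectra rendered over time, can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumed parameters (window, hop, dB mapping), not the original system's code:

```python
import numpy as np

def sonogram(signal, sr, win=512, hop=256):
    """Short-time Fourier magnitude spectra in dB: one row per analysis frame."""
    window = np.hanning(win)
    rows = []
    for start in range(0, len(signal) - win + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + win] * window))
        rows.append(20.0 * np.log10(mag + 1e-12))  # dB scale for display mapping
    return np.array(rows)  # shape: (frames, win // 2 + 1)

# A 440 Hz tone concentrates energy near frequency bin round(440 * win / sr)
sr = 8000
t = np.arange(sr) / sr
S = sonogram(np.sin(2 * np.pi * 440 * t), sr)
peak_hz = int(np.argmax(S[0])) * sr / 512
```

    In the system described above, each dB value would then be mapped through the color table and drawn as one column of the rolling real-time display.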

  14. Smartphone Application for the Analysis of Prosodic Features in Running Speech with a Focus on Bipolar Disorders: System Performance Evaluation and Case Study.

    PubMed

    Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola

    2015-11-06

    Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented.
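
    The abstract does not say which F0 algorithm the app implements; as a purely hypothetical sketch of the kind of lightweight, on-device estimation it describes, F0 for a single voiced frame can be taken from the autocorrelation peak within a plausible pitch range:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=400.0):
    """Autocorrelation-based F0 estimate (Hz) for one voiced speech frame."""
    frame = frame - frame.mean()
    # One-sided autocorrelation: lag 0 .. len(frame) - 1
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sr / fmax)  # shortest plausible pitch period, in samples
    hi = int(sr / fmin)  # longest plausible pitch period, in samples
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# A 40 ms frame of a 200 Hz tone has a pitch period of 80 samples at 16 kHz
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
f0 = estimate_f0(np.sin(2 * np.pi * 200 * t), sr)  # ~200 Hz
```

    The per-segment mean F0 reported in the study would then correspond to averaging such frame estimates over each detected voiced segment before uploading the raw features.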

  15. Smartphone Application for the Analysis of Prosodic Features in Running Speech with a Focus on Bipolar Disorders: System Performance Evaluation and Case Study

    PubMed Central

    Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola

    2015-01-01

    Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented. PMID:26561811

  16. Ayacucho Reader.

    ERIC Educational Resources Information Center

    Parker, Gary J.

    Intended for use after completion of the spoken Ayacucho course, this volume contains 19 prose selections and 11 pages of verse and lyrics. The stories were selected to illustrate the complete range of Quechua oral tradition in the Ayacucho speech area. Included are stories of supernatural beings, tales of human folly, animal stories, humorous…

  17. Reverse Discourse Completion Task as an Assessment Tool for Intercultural Competence

    ERIC Educational Resources Information Center

    Kanik, Mehmet

    2013-01-01

    This paper proposes a prototypic assessment tool for intercultural communicative competence. Because traditional discourse completion tasks (DCTs) focus on illocutionary competence rather than sociolinguistic competence, a modified version of a DCT was created to target sociolinguistic competence. The modified DCT employs speech acts as prompts…

  18. Word production inconsistency of Singaporean-English-speaking adolescents with Down Syndrome.

    PubMed

    Wong, Betty; Brebner, Chris; McCormack, Paul; Butcher, Andy

    2015-01-01

    The nature of speech disorders in individuals with Down Syndrome (DS) remains controversial despite various explanations put forth in the literature to account for the observed speech profiles. A high level of word production inconsistency in children with DS has led researchers to query whether the inconsistency continues into adolescence, and whether it stems from inconsistent phonological disorder (IPD) or childhood apraxia of speech (CAS). Of the studies that have been published, most suggest that the speech profile of individuals with DS is delayed, while a few recent studies suggest a combination of delayed and disordered patterns. However, no studies have explored the nature of word production inconsistency in this population, or the relationships between word production inconsistency, receptive vocabulary and severity of speech disorder. The aims of this pilot study were to investigate the extent of word production inconsistency in adolescents with DS and to examine the correlations between word production inconsistency, receptive vocabulary, severity of speech disorder and oromotor skills. The participants were 32 adolescent native speakers of Singaporean English: 16 with DS and 16 typically developing (TD). The participants completed a battery of standardized speech and language assessments, including the Diagnostic Evaluation of Articulation and Phonology (DEAP). Results from each test were correlated to determine relationships, and qualitative analyses were carried out on all the data collected. In this study, seven out of 16 participants with DS scored above 40% on word production inconsistency, a diagnostic criterion for IPD. In addition, all participants with DS performed poorly on the oromotor assessment of the DEAP. The overall speech profile observed did not exactly correspond to the cluster of symptoms observed in children with IPD or CAS. Word production inconsistency is a noticeable feature in the speech of individuals with DS, and the speech profiles of individuals with DS include atypical and unusual errors alongside developmental errors. Significant correlations were found between the measures investigated, suggesting that speech disorder in DS is multifactorial. The results from this study will help to improve differential diagnosis of speech disorders and individualized treatment plans in the population with DS. © 2015 Royal College of Speech and Language Therapists.

  19. Speech sound articulation abilities of preschool-age children who stutter.

    PubMed

    Clark, Chagit E; Conture, Edward G; Walden, Tedra A; Lambert, Warren E

    2013-12-01

    The purpose of this study was to assess the association between speech sound articulation and childhood stuttering in a relatively large sample of preschool-age children who do and do not stutter, using the Goldman-Fristoe Test of Articulation-2 (GFTA-2; Goldman & Fristoe, 2000). Participants included 277 preschool-age children who do (CWS; n=128, 101 males) and do not stutter (CWNS; n=149, 76 males). Generalized estimating equations (GEE) were performed to assess between-group (CWS versus CWNS) differences on the GFTA-2. Additionally, within-group correlations were performed to explore the relation between CWS' speech sound articulation abilities and their stuttering frequency and severity, as well as their sound prolongation index (SPI; Schwartz & Conture, 1988). No significant differences were found between the articulation scores of preschool-age CWS and CWNS. However, there was a small gender effect for the 5-year-old age group, with girls generally exhibiting better articulation scores than boys. Additional findings indicated no relation between CWS' speech sound articulation abilities and their stuttering frequency, severity, or SPI. Findings suggest no apparent association between speech sound articulation-as measured by one standardized assessment (GFTA-2)-and childhood stuttering for this sample of preschool-age children (N=277). After reading this article, the reader will be able to: (1) discuss salient issues in the articulation literature relative to children who stutter; (2) compare/contrast the present study's methodologies and main findings to those of previous studies that investigated the association between childhood stuttering and speech sound articulation; (3) identify future research needs relative to the association between childhood stuttering and speech sound development; (4) replicate the present study's methodology to expand this body of knowledge. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Dense home-based recordings reveal typical and atypical development of tense/aspect in a child with delayed language development.

    PubMed

    Chin, Iris; Goodwin, Matthew S; Vosoughi, Soroush; Roy, Deb; Naigles, Letitia R

    2018-01-01

    Studies investigating the development of tense/aspect in children with developmental disorders have focused on production frequency and/or relied on short spontaneous speech samples. How children with developmental disorders use future forms/constructions is also unknown. The current study expands this literature by examining the frequency, consistency, and productivity of past, present, and future usage, using the Speechome Recorder, which enables collection of dense, longitudinal audio-video recordings of children's speech. Samples were collected longitudinally from a child who was previously diagnosed with autism spectrum disorder but at the time of the study exhibited only language delay [Audrey], and from a typically developing child [Cleo]. While Audrey was comparable to Cleo in the frequency and productivity of tense/aspect use, she was atypical in her consistency and in her production of an unattested future form. Examining additional measures of densely collected speech samples may reveal subtle atypicalities that are missed when relying on only a few typical measures of acquisition.

  1. Song and speech: examining the link between singing talent and speech imitation ability

    PubMed Central

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438
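The "64%" and "66%" figures in this abstract are multiple-regression R² values; as a reminder of how such a proportion of explained variance is computed, here is a minimal ordinary-least-squares sketch with synthetic data (the predictor names are illustrative stand-ins, not the study's measures):

```python
import numpy as np

# Synthetic stand-ins for the study's predictors (names illustrative):
# working memory, educational background, singing performance.
rng = np.random.default_rng(42)
n = 41                                   # same sample size as the study
X = rng.normal(size=(n, 3))
y = X @ np.array([0.5, 0.3, 0.6]) + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column; R^2 is the
# proportion of outcome variance the regression explains.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
print(round(float(r2), 2))
```

With the signal-to-noise ratio chosen here, R² lands in the same general range as the study's reported values, but the data are random and the figure will differ run to run if the seed changes.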

  2. Speech Disorders in Neurofibromatosis Type 1: A Sample Survey

    ERIC Educational Resources Information Center

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Background: Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10 000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. Aims: This study serves as a pilot to identify key…

  3. Disfluency Markers in L1 Attrition

    ERIC Educational Resources Information Center

    Schmid, Monika S.; Fagersten, Kristy Beers

    2010-01-01

    Based on an analysis of the speech of long-term emigres of German and Dutch origin, the present investigation discusses to what extent hesitation patterns in language attrition may be the result of the creation of an interlanguage system, on the one hand, or of language-internal attrition patterns on the other. We compare speech samples elicited…

  4. Perceptual Speech and Paralinguistic Skills of Adolescents with Williams Syndrome

    ERIC Educational Resources Information Center

    Hargrove, Patricia M.; Pittelko, Stephen; Fillingane, Evan; Rustman, Emily; Lund, Bonnie

    2013-01-01

    The purpose of this research was to compare selected speech and paralinguistic skills of speakers with Williams syndrome (WS) and typically developing peers and to demonstrate the feasibility of providing preexisting databases to students to facilitate graduate research. In a series of three studies, conversational samples of 12 adolescents with…

  5. Consonant Inventories in the Spontaneous Speech of Young Children: A Bootstrapping Procedure

    ERIC Educational Resources Information Center

    Van Severen, Lieve; Van Den Berg, Renate; Molemans, Inge; Gillis, Steven

    2012-01-01

    Consonant inventories are commonly drawn to assess the phonological acquisition of toddlers. However, the spontaneous speech data that are analysed often vary substantially in size and composition. Consequently, comparisons between children and across studies are fundamentally hampered. This study aims to examine the effect of sample size on the…

  6. Social Biases toward Children with Speech and Language Impairments: A Correlative Causal Model of Language Limitations.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; And Others

    1993-01-01

    In a study of adults' attitudes toward children with limited linguistic competence, four groups of judges listened to audiotaped samples of preschool children's speech and responded to questionnaire items addressing child attributes (e.g., intelligence, social maturity). Systemic biases were revealed toward children with limited communication…

  7. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  8. Effects of Length, Complexity, and Grammatical Correctness on Stuttering in Spanish-Speaking Preschool Children

    ERIC Educational Resources Information Center

    Watson, Jennifer B.; Byrd, Courtney T.; Carlo, Edna J.

    2011-01-01

    Purpose: To explore the effects of utterance length, syntactic complexity, and grammatical correctness on stuttering in the spontaneous speech of young, monolingual Spanish-speaking children. Method: Spontaneous speech samples of 11 monolingual Spanish-speaking children who stuttered, ages 35 to 70 months, were examined. Mean number of syllables,…

  9. Accurate visible speech synthesis based on concatenating variable length motion capture data.

    PubMed

    Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara

    2006-01-01

    We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.

  10. Impact of tongue reduction on overall speech intelligibility, articulation and oromyofunctional behavior in 4 children with Beckwith-Wiedemann syndrome.

    PubMed

    Van Lierde, K; Galiwango, G; Hodges, A; Bettens, K; Luyten, A; Vermeersch, H

    2012-01-01

    The purpose of this study was to determine the impact of partial glossectomy (using the keyhole technique) on speech intelligibility, articulation, resonance and oromyofunctional behavior. A partial glossectomy was performed in 4 children with Beckwith-Wiedemann syndrome between the ages of 0.5 and 3.1 years. An ENT assessment, a phonetic inventory, a phonemic and phonological analysis and a consensus perceptual evaluation of speech intelligibility, resonance and oromyofunctional behavior were performed. It was not possible in this study to separate the effects of the surgery from the typical developmental progress of speech sound mastery. Improved speech intelligibility, a more complete phonetic inventory, an increase in phonological skills, normal resonance and increased motor-oriented oral behavior were found in the postsurgical condition. Phonetic distortions, lip incompetence and an interdental tongue position were still present in the postsurgical condition. Speech therapy should be focused on correct phonetic placement and a motor-oriented approach to increase lip competence, and on functional tongue exercises and tongue lifting during the production of alveolars. Detailed analyses in a larger number of subjects with and without Beckwith-Wiedemann syndrome may help further illustrate the long-term impact of partial glossectomy. Copyright © 2011 S. Karger AG, Basel.

  11. Acceptable noise level (ANL) with Danish and non-semantic speech materials in adult hearing-aid users.

    PubMed

    Olsen, Steen Østergaard; Lantz, Johannes; Nielsen, Lars Holme; Brännström, K Jonas

    2012-09-01

    The acceptable noise level (ANL) test is used to quantify the amount of background noise subjects accept when listening to speech. This study investigates Danish hearing-aid users' ANL performance using Danish and non-semantic speech signals, the repeatability of ANL, and the association between ANL and outcome on the International Outcome Inventory for Hearing Aids (IOI-HA). ANL was measured in three conditions in both ears at two test sessions, and subjects completed the IOI-HA and the ANL questionnaire. Sixty-three Danish hearing-aid users participated; according to the ANL questionnaire, 57 were full-time users and 6 were part-time users or non-users of hearing aids. ANLs were similar to results obtained with American English speech material. The coefficient of repeatability (CR) was 6.5-8.8 dB. IOI-HA scores were not associated with ANL. Danish and non-semantic ANL versions yield results similar to the American English version. The magnitude of the CR indicates that ANL with Danish and non-semantic speech materials is not suitable for predicting individual patterns of future hearing-aid use or for evaluating individual benefit from hearing-aid features. The ANL with Danish and non-semantic speech materials is not related to IOI-HA outcome.
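In the ANL procedure, the score is conventionally the listener's most comfortable speech level minus the highest background noise level they will accept (this definition is standard for the ANL test, though not restated in the abstract); a trivial sketch of the arithmetic with made-up dB values:

```python
def acceptable_noise_level(mcl_db: float, bnl_db: float) -> float:
    """ANL = most comfortable listening level (MCL) minus the highest
    acceptable background noise level (BNL), both in dB; a lower ANL
    means the listener tolerates relatively more noise."""
    return mcl_db - bnl_db

# A listener comfortable with speech at 55 dB who accepts noise up to 48 dB:
print(acceptable_noise_level(55.0, 48.0))  # → 7.0
```

The test-retest "coefficient of repeatability" of 6.5-8.8 dB reported above is of the same order as typical individual ANLs, which is why the authors judge the measure unsuitable for individual prediction.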

  12. Threat Interference Biases Predict Socially Anxious Behavior: The Role of Inhibitory Control and Minute of Stressor.

    PubMed

    Gorlin, Eugenia I; Teachman, Bethany A

    2015-07-01

    The current study brings together two typically distinct lines of research. First, social anxiety is inconsistently associated with behavioral deficits in social performance, and the factors accounting for these deficits remain poorly understood. Second, research on selective processing of threat cues, termed cognitive biases, suggests these biases typically predict negative outcomes, but may sometimes be adaptive, depending on the context. Integrating these research areas, the current study examined whether conscious and/or unconscious threat interference biases (indexed by the unmasked and masked emotional Stroop) can explain unique variance, beyond self-reported anxiety measures, in behavioral avoidance and observer-rated anxious behavior during a public speaking task. Minute of speech and general inhibitory control (indexed by the color-word Stroop) were examined as within-subject and between-subject moderators, respectively. Highly socially anxious participants (N=135) completed the emotional and color-word Stroop blocks prior to completing a 4-minute videotaped speech task, which was later coded for anxious behaviors (e.g., speech dysfluency). Mixed-effects regression analyses revealed that general inhibitory control moderated the relationship between both conscious and unconscious threat interference bias and anxious behavior (though not avoidance), such that lower threat interference predicted higher levels of anxious behavior, but only among those with relatively weaker (versus stronger) inhibitory control. Minute of speech further moderated this relationship for unconscious (but not conscious) social-threat interference, such that lower social-threat interference predicted a steeper increase in anxious behaviors over the course of the speech (but only among those with weaker inhibitory control). Thus, both trait and state differences in inhibitory control resources may influence the behavioral impact of threat biases in social anxiety. 
Copyright © 2015. Published by Elsevier Ltd.

  13. Inferring speaker attributes in adductor spasmodic dysphonia: ratings from unfamiliar listeners.

    PubMed

    Isetti, Derek; Xuereb, Linnea; Eadie, Tanya L

    2014-05-01

    To determine whether unfamiliar listeners' perceptions of speakers with adductor spasmodic dysphonia (ADSD) differ from their perceptions of control speakers on the parameters of relative age, confidence, tearfulness, and vocal effort, and whether those perceptions are related to speaker-rated vocal effort or voice-specific quality of life. Twenty speakers with ADSD (including 6 speakers with ADSD plus tremor) and 20 age- and sex-matched controls provided speech recordings, completed a voice-specific quality-of-life instrument (Voice Handicap Index; Jacobson et al., 1997), and rated their own vocal effort. Twenty listeners evaluated speech samples for relative age, confidence, tearfulness, and vocal effort using rating scales. Listeners judged speakers with ADSD as sounding significantly older, less confident, more tearful, and more effortful than control speakers (p < .01). Increased vocal effort was strongly associated with decreased speaker confidence (rs = .88-.89) and sounding more tearful (rs = .83-.85). Self-rated speaker effort was moderately related (rs = .45-.52) to listener impressions. Listeners' perceptions of confidence and tearfulness were also moderately associated with higher Voice Handicap Index scores (rs = .65-.70). Unfamiliar listeners judge speakers with ADSD more negatively than control speakers, with judgments extending beyond typical clinical measures. The results have implications for counseling and for understanding the psychosocial effects of ADSD.
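The listener-speaker associations above are Spearman rank correlations (rs); a minimal tie-free implementation with made-up ratings (the rating values are invented for illustration) might look like:

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman's rho is the Pearson correlation of the ranks.
    # Double argsort yields 0-based ranks for tie-free data.
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical ratings: higher vocal effort, lower perceived confidence
effort =     [1, 3, 2, 5, 4, 7, 6]
confidence = [7, 6, 5, 4, 3, 2, 1]
print(round(spearman_rho(effort, confidence), 2))  # → -0.89
```

Because rho works on ranks, it captures the monotone "more effort, less confidence" relation regardless of the rating scale's units, which is why it suits ordinal listener ratings like these.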

  14. GRIN2A

    PubMed Central

    Turner, Samantha J.; Mayes, Angela K.; Verhoeven, Andrea; Mandelstam, Simone A.; Morgan, Angela T.

    2015-01-01

    Objective: To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. Methods: We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Results: Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Conclusions: Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. PMID:25596506

  15. Schizophrenia affects speech-induced functional connectivity of the superior temporal gyrus under cocktail-party listening conditions.

    PubMed

    Li, Juanhua; Wu, Chao; Zheng, Yingjun; Li, Ruikeng; Li, Xuanzi; She, Shenglin; Wu, Haibo; Peng, Hongjun; Ning, Yuping; Li, Liang

    2017-09-17

    The superior temporal gyrus (STG) is involved in speech recognition against informational masking under cocktail-party-listening conditions. Compared to healthy listeners, people with schizophrenia perform worse in speech recognition under informational speech-on-speech masking conditions. It is not clear whether the schizophrenia-related vulnerability to informational masking is associated with certain changes in functional connectivity (FC) of the STG with some critical brain regions. Using a sparse-sampling fMRI design, this study investigated the differences between people with schizophrenia and healthy controls in FC of the STG for target-speech listening against informational speech-on-speech masking, when a listening condition with either perceived spatial separation (PSS, with a spatial release of informational masking) or perceived spatial co-location (PSC, without the spatial release) between target speech and masking speech was introduced. The results showed that in healthy participants, but not participants with schizophrenia, the contrast of either the PSS or PSC condition against the masker-only condition induced an enhancement of FC of the STG with the left superior parietal lobule (SPL) and the right precuneus. Compared to healthy participants, participants with schizophrenia showed declined FC of the STG with the bilateral precuneus, right SPL, and right supplementary motor area. Thus, FC of the STG with the parietal areas is normally involved in speech listening against informational masking under either the PSS or PSC conditions, and declined FC between the STG and the parietal areas in people with schizophrenia may be associated with the increased vulnerability to informational masking. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  16. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    PubMed

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus in cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days-of-the-week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula, and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits, and a further two-thirds were rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.

  17. Don’t speak too fast! Processing of fast rate speech in children with specific language impairment

    PubMed Central

    Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique

    2018-01-01

    Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old) with mainly expressive phonological disorders and with preserved comprehension and 16 age-matched TD children performed a judgment task on sentences produced 1) at normal rate, 2) at fast rate or 3) time-compressed. Sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally or artificially accelerated speech. The two groups do not significantly differ in normal rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rate. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of oscillatory mechanisms underlying speech perception. PMID:29373610
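The sensitivity index (d′) reported above is the standard signal-detection measure; a minimal stdlib sketch, with made-up hit and false-alarm rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A child flagging 90% of incongruent sentence endings with 20% false alarms:
print(round(d_prime(0.90, 0.20), 2))  # → 2.12
```

d′ separates true sensitivity from response bias: a child who simply says "incongruent" more often raises both rates, leaving d′ largely unchanged, which is why it suits a judgment task like this one.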

  18. Speech in 10-Year-Olds Born With Cleft Lip and Palate: What Do Peers Say?

    PubMed

    Nyberg, Jill; Havstam, Christina

    2016-09-01

    The aim of this study was to explore, in their own words, how 10-year-olds describe speech and communicative participation in children born with unilateral cleft lip and palate, whether they perceive signs of velopharyngeal insufficiency (VPI) and articulation errors of different degrees, and if so, which terminology they use. Methods/Participants: Nineteen 10-year-olds participated in three focus group interviews where they listened to 10 to 12 speech samples with different types of cleft speech characteristics assessed by speech and language pathologists (SLPs) and described what they heard. The interviews were transcribed and analyzed with qualitative content analysis. The analysis resulted in three interlinked categories encompassing different aspects of speech, personality, and social implications: descriptions of speech, thoughts on causes and consequences, and emotional reactions and associations. Each category contains four subcategories exemplified with quotes from the children's statements. More pronounced signs of VPI were perceived but referred to in terms relevant to 10-year-olds. Articulatory difficulties, even minor ones, were noted. Peers reflected on the risk of teasing and bullying and on how children with impaired speech might experience their situation. The SLPs and peers did not agree on minor signs of VPI, but they were unanimous in their analysis of clinically normal and more severely impaired speech. Based on what peers say, articulatory impairments may be more important to treat than minor signs of VPI.

  19. Beginning to Talk Like an Adult: Increases in Speech-like Utterances in Young Cochlear Implant Recipients and Toddlers with Normal Hearing

    PubMed Central

    Ertmer, David J.; Jung, Jongmin; Kloiber, Diana True

    2013-01-01

    Background Speech-like utterances containing rapidly combined consonants and vowels eventually dominate the prelinguistic and early word productions of toddlers who are developing typically (TD). It seems reasonable to expect a similar phenomenon in young cochlear implant (CI) recipients. This study sought to determine the number of months of robust hearing experience needed to achieve a majority of speech-like utterances in both of these groups. Methods Speech samples were recorded at 3-month intervals during the first 2 years of CI experience, and between 6 and 24 months of age in TD children. Speech-like utterances were operationally defined as those belonging to the Basic Canonical Syllables (BCS) or Advanced Forms (AF) levels of the Consolidated Stark Assessment of Early Vocal Development-Revised. Results On average, the CI group achieved a majority of speech-like utterances after 12 months, and the TD group after 18 months of robust hearing experience. The CI group produced greater percentages of speech-like utterances at each interval until 24 months, when both groups approximated 80%. Conclusion Auditory deprivation did not limit progress in vocal development, as young CI recipients showed more-rapid-than-typical speech development during the first 2 years of device use. Implications for the Infraphonological model of speech development are considered. PMID:23813203

  20. Speech-language pathology students' self-reports on voice training: easier to understand or to do?

    PubMed

    Lindhe, Christina; Hartelius, Lena

    2009-01-01

    The aim of the study was to describe the subjective ratings of the course 'Training of the student's own voice and speech' from a student-centred perspective. A questionnaire was completed after each of the six individual sessions. Six speech and language pathology (SLP) students rated how they perceived the practical exercises in terms of doing and understanding. The results showed that five of the six participants rated the exercises as significantly easier to understand than to do. The exercises were also rated as easier to do over time. Results are interpreted within a theoretical framework of approaches to learning. The findings support the importance of both the physical and reflective aspects of the voice training process.

  1. Speech and Voice Response to a Levodopa Challenge in Late-Stage Parkinson's Disease.

    PubMed

    Fabbri, Margherita; Guimarães, Isabel; Cardoso, Rita; Coelho, Miguel; Guedes, Leonor Correia; Rosa, Mario M; Godinho, Catarina; Abreu, Daisy; Gonçalves, Nilza; Antonini, Angelo; Ferreira, Joaquim J

    2017-01-01

    Parkinson's disease (PD) patients are affected by hypokinetic dysarthria, characterized by hypophonia and dysprosody, which worsens with disease progression. Levodopa's (l-dopa) effect on quality of speech is inconclusive; no data are currently available for late-stage PD (LSPD). To assess the modifications of speech and voice in LSPD following an acute l-dopa challenge, LSPD patients [Schwab and England score <50/Hoehn and Yahr stage >3 (MED ON)] performed several vocal tasks before and after an acute l-dopa challenge. The following were assessed: respiratory support for speech; voice quality, stability, and variability; speech rate; and motor performance (MDS-UPDRS-III). All voice samples were recorded and analyzed, using Praat 5.1 software, by a speech and language therapist blinded to patients' therapeutic condition. 24/27 (14 men) LSPD patients succeeded in performing the voice tasks. Median age and disease duration of patients were 79 [IQR: 71.5-81.7] and 14.5 [IQR: 11-15.7] years, respectively. In MED OFF, respiratory breath support and pitch break time of LSPD patients were worse than normative values for non-parkinsonian speakers. A correlation was found between disease duration and voice quality (R = 0.51; p = 0.013) and speech rate (R = -0.55; p = 0.008). l-Dopa significantly improved the MDS-UPDRS-III score (by 20%), with no effect on speech as assessed by clinical rating scales and automated analysis. Speech is severely affected in LSPD. Although l-dopa had some effect on motor performance, including axial signs, speech and voice did not improve. The applicability and efficacy of non-pharmacological treatment for speech impairment should be considered for speech disorder management in PD.

  2. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    PubMed Central

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. 
Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842

  3. Direct speech quotations promote low relative-clause attachment in silent reading of English.

    PubMed

    Yao, Bo; Scheepers, Christoph

    2018-07-01

    The implicit prosody hypothesis (Fodor, 1998, 2002) proposes that silent reading coincides with a default, implicit form of prosody to facilitate sentence processing. Recent research demonstrated that a more vivid form of implicit prosody is mentally simulated during silent reading of direct speech quotations (e.g., Mary said, "This dress is beautiful"), with neural and behavioural consequences (e.g., Yao, Belin, & Scheepers, 2011; Yao & Scheepers, 2011). Here, we explored the relation between 'default' and 'simulated' implicit prosody in the context of relative-clause (RC) attachment in English. Apart from confirming a general low RC-attachment preference in both production (Experiment 1) and comprehension (Experiments 2 and 3), we found that during written sentence completion (Experiment 1) or when reading silently (Experiment 2), the low RC-attachment preference was reliably enhanced when the critical sentences were embedded in direct speech quotations as compared to indirect speech or narrative sentences. However, when reading aloud (Experiment 3), direct speech did not enhance the general low RC-attachment preference. The results from Experiments 1 and 2 suggest a quantitative boost to implicit prosody (via auditory perceptual simulation) during silent production/comprehension of direct speech. By contrast, when reading aloud (Experiment 3), prosody becomes equally salient across conditions due to its explicit nature; indirect speech and narrative sentences thus become as susceptible to prosody-induced syntactic biases as direct speech. The present findings suggest a shared cognitive basis between default implicit prosody and simulated implicit prosody, providing a new platform for studying the effects of implicit prosody on sentence processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Patient-reported speech in noise difficulties and hyperacusis symptoms and correlation with test results.

    PubMed

    Spyridakou, Chrysa; Luxon, Linda M; Bamiou, Doris E

    2012-07-01

    To compare self-reported symptoms of difficulty hearing speech in noise and hyperacusis in adults with auditory processing disorders (APDs) and normal controls; and to compare self-reported symptoms to objective test results (speech in babble test, transient evoked otoacoustic emission [TEOAE] suppression test using contralateral noise). A prospective case-control pilot study. Twenty-two participants were recruited: 10 patients with reported hearing difficulty, normal audiometry, and a clinical diagnosis of APD; and 12 normal age-matched controls with no reported hearing difficulty. All participants completed the validated Amsterdam Inventory for Auditory Disability questionnaire, a hyperacusis questionnaire, a speech in babble test, and a TEOAE suppression test using contralateral noise. Patients had significantly worse scores than controls in all domains of the Amsterdam Inventory questionnaire (with the exception of sound detection) and the hyperacusis questionnaire (P < .005). Patients also had worse TEOAE suppression test results in both ears than controls; however, this result was not significant after Bonferroni correction. Strong correlations were observed between self-reported symptoms of difficulty hearing speech in noise and speech in babble test results in the right ear (ρ = 0.624, P = .002), and between self-reported symptoms of hyperacusis and TEOAE suppression test results in the right ear (ρ = -0.597, P = .003). There was no significant correlation between the two tests. In summary, a strong correlation was observed between right-ear speech in babble results and patient-reported intelligibility of speech in noise, and between right-ear TEOAE suppression by contralateral noise and the hyperacusis questionnaire. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  5. Does quality of life depend on speech recognition performance for adult cochlear implant users?

    PubMed

    Capretta, Natalie R; Moberly, Aaron C

    2016-03-01

    Current postoperative clinical outcome measures for adults receiving cochlear implants (CIs) consist of testing speech recognition, primarily under quiet conditions. However, it is strongly suspected that results on these measures may not adequately reflect patients' quality of life (QOL) using their implants. This study aimed to evaluate whether QOL for CI users depends on speech recognition performance. Twenty-three postlingually deafened adults with CIs were assessed. Participants were tested for speech recognition (Central Institute for the Deaf word and AzBio sentence recognition in quiet) and completed three QOL measures-the Nijmegen Cochlear Implant Questionnaire; either the Hearing Handicap Inventory for Adults or the Hearing Handicap Inventory for the Elderly; and the Speech, Spatial and Qualities of Hearing Scale questionnaires-to assess a variety of QOL factors. Correlations were sought between speech recognition and QOL scores. Demographics, audiologic history, language, and cognitive skills were also examined as potential predictors of QOL. Only a few QOL scores significantly correlated with postoperative sentence or word recognition in quiet, and correlations were primarily isolated to speech-related subscales on QOL measures. Poorer pre- and postoperative unaided hearing predicted better QOL. Socioeconomic status, duration of deafness, age at implantation, duration of CI use, reading ability, vocabulary size, and cognitive status did not consistently predict QOL scores. For adult, postlingually deafened CI users, clinical speech recognition measures in quiet do not correlate broadly with QOL. Results suggest the need for additional outcome measures of the benefits and limitations of cochlear implantation. Level of evidence: 4. Laryngoscope, 126:699-706, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  6. A warning to the Brazilian Speech-Language Pathology and Audiology community about the importance of scientific and clinical activities in primary progressive aphasia.

    PubMed

    Beber, Bárbara Costa; Brandão, Lenisa; Chaves, Márcia Lorena Fagundes

    2015-01-01

    This article aims to alert the Brazilian Speech-Language Pathology and Audiology scientific community to the importance and necessity of scientific and clinical activities regarding Primary Progressive Aphasia. The warning is based on a systematic literature review of the scientific production on Primary Progressive Aphasia, from which nine Brazilian articles were selected. There is an obvious lack of studies on the subject: all the retrieved articles were published in medical journals, most were based on small samples, and only two described the effectiveness of speech-language therapy in patients with Primary Progressive Aphasia. Perspectives for the future of the area and the characteristics of speech-language therapy for Primary Progressive Aphasia are discussed. In conclusion, the need for greater action by Speech-Language Pathology and Audiology on Primary Progressive Aphasia is evident.

  7. The effect of guessing on the speech reception thresholds of children.

    PubMed

    Moodley, A

    1990-01-01

    Speech audiometry is an essential part of the assessment of hearing-impaired children and is now widely used throughout the United Kingdom. Although instructions are universally agreed to be an important aspect of the administration of any form of audiometric testing, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used in the study. It is argued that the type of instruction normally used in Speech Reception Threshold testing may not provide sufficient control for guessing; the implications of this are examined using data obtained in the study.

  8. Rhythmic patterning in Malaysian and Singapore English.

    PubMed

    Tan, Rachel Siew Kuang; Low, Ee-Ling

    2014-06-01

    Previous work on the rhythm of Malaysian English has been based on impressionistic observations. This paper uses acoustic analysis to measure the rhythmic patterns of Malaysian English. Recordings of the read speech and spontaneous speech of 10 Malaysian English speakers were analyzed and compared with recordings of an equivalent sample of Singaporean English speakers. Analysis was done using two rhythmic indexes, the PVI and VarcoV. It was found that although the rhythm of the Singaporean speakers' read speech was syllable-based, as described in previous studies, the rhythm of the Malaysian speakers was even more syllable-based. Analysis of syllables in specific utterances showed that Malaysian speakers did not reduce vowels as much as Singaporean speakers did in comparable syllables. Results for the spontaneous speech confirmed the findings for the read speech: the same rhythmic patterning was found in contexts that normally trigger vowel reduction.
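
The two rhythm indexes named above have standard published formulas: the normalized PVI (Low, Grabe & Nolan, 2000) averages the duration difference between successive vocalic intervals, normalized by each pair's mean, and VarcoV (Dellwo, 2006) is the coefficient of variation of vocalic interval durations scaled by 100. A minimal sketch follows; the paper's own measurement pipeline is not given here, and the interval durations would in practice come from phonetically annotated recordings:

```python
import numpy as np

def npvi(durations):
    """Normalized Pairwise Variability Index over successive vocalic
    interval durations (Low, Grabe & Nolan, 2000)."""
    d = np.asarray(durations, dtype=float)
    pairwise = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairwise.mean()

def varco_v(durations):
    """VarcoV: standard deviation of vocalic interval durations divided
    by their mean, scaled by 100 (Dellwo, 2006)."""
    d = np.asarray(durations, dtype=float)
    return 100.0 * d.std() / d.mean()
```

Both indexes are 0 for perfectly even (fully syllable-timed) durations and grow as successive intervals alternate between long and short, which is why stress-timed varieties score higher than syllable-timed ones.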

  9. Automated analysis of connected speech reveals early biomarkers of Parkinson's disease in patients with rapid eye movement sleep behaviour disorder.

    PubMed

    Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan

    2017-02-02

    For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based upon rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation, and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients, and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, who are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.

  10. Telephone-quality pathological speech classification using empirical mode decomposition.

    PubMed

    Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S

    2011-01-01

    This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone-quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, modified to simulate telephone-quality speech under different levels of noise, a linear classifier applied to the resulting feature vector achieves high classification accuracy, demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for a signal-to-noise ratio of 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.
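
The abstract names the pipeline (EMD into intrinsic mode functions, then feature extraction and a linear classifier) without implementation detail. Below is a deliberately simplified sketch of the EMD sifting step only, not the authors' code: it assumes a fixed number of sifting iterations per mode and anchors the spline envelopes at the signal endpoints, both common simplifications of the full stopping criteria:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def _envelope(x, idx):
    # Cubic-spline envelope through the given extrema; the endpoints are
    # anchored to the signal itself to avoid wild extrapolation.
    t = np.unique(np.concatenate(([0], idx, [len(x) - 1])))
    return CubicSpline(t, x[t])(np.arange(len(x)))

def emd(x, max_imfs=4, sift_iters=8):
    """Tiny EMD sketch: sift out each intrinsic mode function (IMF) by
    repeatedly subtracting the mean of the upper and lower envelopes,
    then remove the IMF from the running residue."""
    x = np.asarray(x, dtype=float)
    residue, imfs = x.copy(), []
    for _ in range(max_imfs):
        h = residue.copy()
        for _ in range(sift_iters):
            maxima = argrelextrema(h, np.greater)[0]
            minima = argrelextrema(h, np.less)[0]
            if len(maxima) < 2 or len(minima) < 2:
                break  # too few extrema: h is (close to) a monotonic trend
            mean_env = (_envelope(h, maxima) + _envelope(h, minima)) / 2.0
            h = h - mean_env
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```

By construction the IMFs plus the final residue reconstruct the input signal, which is the property that lets per-mode temporal and spectral features be interpreted as a decomposition rather than a lossy transform.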

  11. Speech Understanding with a New Implant Technology: A Comparative Study with a New Nonskin Penetrating Baha System

    PubMed Central

    Caversaccio, Marco

    2014-01-01

    Objective. To compare hearing and speech understanding between a new, nonskin-penetrating Baha system (Baha Attract) and the current Baha system, which uses a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz and increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated for by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314

  12. Letting go of yesterday: Effect of distraction on post-event processing and anticipatory anxiety in a socially anxious sample.

    PubMed

    Blackie, Rebecca A; Kocovski, Nancy L

    2016-01-01

    According to cognitive models, post-event processing (PEP) is a key factor in the maintenance of social anxiety. Given that decreasing PEP can be challenging for socially anxious individuals, it is important to identify potentially useful strategies. Although distraction may help to decrease PEP, the findings have been equivocal. The primary purpose of this study was to examine whether a brief distraction period immediately following a speech would lead to less PEP the next day. The secondary aim was to examine the effect of distraction following an initial speech on anticipatory anxiety for a second speech, via reductions in PEP. Participants (N = 77 undergraduates with elevated social anxiety; 67.53% female) delivered a speech and were randomly assigned to a distraction, rumination, or control condition. The following day, participants reported levels of PEP in relation to the first speech, as well as anxiety regarding a second, upcoming speech. As expected, those in the distraction condition reported less PEP than those in the rumination and control conditions. Additionally, distraction following the first speech was indirectly related to anticipatory anxiety for the second speech, via PEP. Distraction may represent a potentially useful strategy for reducing PEP and other maladaptive processes that may maintain social anxiety.

  13. Vocal Age Disguise: The Role of Fundamental Frequency and Speech Rate and Its Perceived Effects.

    PubMed

    Skoog Waller, Sara; Eriksson, Mårten

    2016-01-01

    The relationship between vocal characteristics and perceived age is of interest in various contexts, as is the possibility to affect age perception through vocal manipulation. A few examples of such situations are when age is staged by actors, when ear witnesses make age assessments based on vocal cues only, or when offenders (e.g., online groomers) disguise their voice to appear younger or older. This paper investigates how speakers spontaneously manipulate two age-related vocal characteristics (f0 and speech rate) in an attempt to sound younger versus older than their true age, and whether the manipulations correspond to actual age-related changes in f0 and speech rate (Study 1). Further aims of the paper are to determine how successful vocal age disguise is by asking listeners to estimate the age of the generated speech samples (Study 2) and to examine whether or not listeners use f0 and speech rate as cues to perceived age. In Study 1, participants from three age groups (20-25, 40-45, and 60-65 years) agreed to read a short text under three voice conditions. There were 12 speakers in each age group (six women and six men). They used their natural voice in one condition, attempted to sound 20 years younger in another, and attempted to sound 20 years older in a third condition. In Study 2, 60 participants (listeners) listened to speech samples from the three voice conditions in Study 1 and estimated the speakers' age. Each listener was exposed to all three voice conditions. The results from Study 1 indicated that the speakers increased fundamental frequency (f0) and speech rate when attempting to sound younger and decreased f0 and speech rate when attempting to sound older. Study 2 showed that the voice manipulations had an effect in the sought-after direction, although the achieved mean effect was only 3 years, which is far less than the intended effect of 20 years. Moreover, listeners used speech rate, but not f0, as a cue to speaker age. It was concluded that age disguise by voice can be achieved by naïve speakers, even though the perceived effect was smaller than intended.
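
The study measured f0, one of its two manipulated characteristics, from recorded speech. As an illustration only (the authors' tooling is not described here), a basic autocorrelation pitch estimator is the standard textbook way to extract f0 from a voiced frame: the autocorrelation of a periodic frame peaks at lags equal to the pitch period, so the lag of the strongest peak in the plausible pitch range gives the frequency.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a voiced frame by
    picking the autocorrelation peak in the [fmin, fmax] pitch range."""
    frame = np.asarray(frame, dtype=float)
    frame = frame - frame.mean()  # remove DC offset
    # One-sided autocorrelation: ac[k] correlates the frame with itself at lag k.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # lag range for the pitch period
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag
```

Speech rate, the study's other cue, needs no signal processing at this level: it is typically reported as syllables (or words) per second over the utterance duration.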

  14. Phase II trial of a syllable-timed speech treatment for school-age children who stutter.

    PubMed

    Andrews, Cheryl; O'Brian, Sue; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2016-06-01

    A recent clinical trial (Andrews et al., 2012) showed Syllable Timed Speech (STS) to be a potentially useful treatment agent for the reduction of stuttering for school-age children. The present trial investigated a modified version of this program that incorporated parent verbal contingencies. Participants were 22 stuttering children aged 6-11 years. Treatment involved training the children and their parents to use STS in conversation. Parents were also taught to use verbal contingencies in response to their child's stuttered and stutter-free speech and to praise their child's use of STS. Outcome assessments were conducted pre-treatment, at the completion of Stage 1 of the program and 6 months and 12 months after Stage 1 completion. Outcomes are reported for the 19 children who completed Stage 1 of the program. The group mean percent stuttering reduction was 77% from pre-treatment to 12 months post-treatment, and 82% with the two least responsive participants removed. There was considerable variation in response to the treatment. Eleven of the children showed reduced avoidance of speaking situations and 18 were more satisfied with their fluency post-treatment. However, there was some suggestion that stuttering control was not sufficient to fully eliminate situation avoidance for the children. The results of this trial are sufficiently encouraging to warrant further clinical trials of the method. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Phonological processes in the speech of school-age children with hearing loss: Comparisons with children with normal hearing.

    PubMed

    Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline

    2018-04-27

    In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years; months) with mild to profound hearing loss using hearing aids (HAs) and cochlear implants (CIs), in comparison to their peers. A second aim was to compare phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a similar trend of age of elimination to CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in pre-school CWHL and the use of evidence-based speech therapy in order to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing their speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.
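
The lax/strict criteria quoted in the abstract can be read as per-child occurrence thresholds: a process counts as present under the lax criterion if at least two children each produced it at least twice, and under the strict criterion at five occurrences per child. That reading is an assumption (the abstract does not fully disambiguate), and `classify_processes` and its input shape below are illustrative names rather than anything from the CASALA program:

```python
from collections import defaultdict

def classify_processes(samples, min_children=2, lax_count=2, strict_count=5):
    """Tally which phonological processes meet the lax and strict
    counting criteria. `samples` maps child id -> {process: count}."""
    meeting = defaultdict(lambda: {"lax": 0, "strict": 0})
    for counts in samples.values():
        for process, n in counts.items():
            if n >= lax_count:
                meeting[process]["lax"] += 1
            if n >= strict_count:
                meeting[process]["strict"] += 1
    lax = {p for p, c in meeting.items() if c["lax"] >= min_children}
    strict = {p for p, c in meeting.items() if c["strict"] >= min_children}
    return lax, strict
```

Because the strict threshold is higher, the strict set is always a subset of the lax set, which matches the abstract's use of the two criteria as progressively conservative evidence that a process persists.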

  16. Parent-child interaction in motor speech therapy.

    PubMed

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool age children with speech sound disorders were provided either high- (2×/week/10 weeks) or low-intensity (1×/week/10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item and then followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, where a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive to monitor the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation Parent-centered therapy is considered a cost effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. 
For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during intervention. In developmental motor speech interventions, high-intensity treatment (2×/week/10 weeks) facilitates greater changes in the parent-child interactions than low intensity treatment (1×/week/10 weeks). On one hand, parents may need to attend more than five sessions with the clinician to learn how to observe and address their child's speech difficulties. On the other hand, children with speech sound disorders may need more than 10 sessions to adapt to structured play settings even when activities and therapy materials are age-appropriate.

  17. Cochlear Implantation in Older Adults

    PubMed Central

    Lin, Frank R.; Chien, Wade W.; Li, Lingsheng; Niparko, John K.; Francis, Howard W.

    2012-01-01

    Cochlear implants allow individuals with severe-to-profound hearing loss access to sound and spoken language. The number of older adults in the United States who are potential candidates for cochlear implantation is approximately 150,000 and will continue to increase with the aging of the population. Should cochlear implantation (CI) be routinely recommended for these older adults, and do these individuals benefit from CI? We reviewed our 12 year experience with cochlear implantation in adults ≥60 years (n = 445) at Johns Hopkins to investigate the impact of CI on speech understanding and to identify factors associated with speech performance. Complete data on speech outcomes at baseline and 1 year post-CI were available for 83 individuals. Our results demonstrate that cochlear implantation in adults ≥60 years consistently improved speech understanding scores with a mean increase of 60. 0% (S. D. 24. 1) on HINT sentences in quiet . The magnitude of the gain in speech scores was negatively associated with age at implantation such that for every increasing year of age at CI the gain in speech scores was 1. 3 percentage points less (95% CI: 0. 6 – 1. 9) after adjusting for age at hearing loss onset. Conversely, individuals with higher pre-CI speech scores (HINT scores between 40–60%) had significantly greater post-CI speech scores by a mean of 10. 0 percentage points (95% CI: 0. 4 – 19. 6) than those with lower pre-CI speech scores (HINT <40%) after adjusting for age at CI and age at hearing loss onset. These results suggest that older adult CI candidates who are younger at implantation and with higher preoperative speech scores obtain the highest speech understanding scores after cochlear implantation with possible implications for current Medicare policy. Finally, we provide an extended discussion of the epidemiology and impact of hearing loss in older adults. 
Future research of CI in older adults should expand beyond simple speech outcomes to take into account the broad cognitive, social, and physical functioning outcomes that are likely detrimentally impacted by hearing loss and may be mitigated by cochlear implantation. PMID:22932787
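
    The adjusted association reported above (a 1.3-percentage-point smaller gain per additional year of age at CI, controlling for age at hearing-loss onset) can be sketched as an ordinary-least-squares regression. Everything below is synthetic: the variable names and the generating coefficient are assumptions chosen to mirror the abstract, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 83                              # size of the complete-data cohort in the abstract
age_at_ci = rng.uniform(60, 90, n)  # age at implantation, years (synthetic)
age_onset = rng.uniform(20, 60, n)  # age at hearing-loss onset, years (synthetic)

# Synthetic outcome: the gain in speech score falls 1.3 points per year of
# age at CI, matching the reported coefficient (illustration only).
gain = 150 - 1.3 * age_at_ci + 0.1 * age_onset + rng.normal(0, 5, n)

# Ordinary least squares with an intercept, adjusting for age at onset
X = np.column_stack([np.ones(n), age_at_ci, age_onset])
beta, *_ = np.linalg.lstsq(X, gain, rcond=None)
print(f"estimated change in gain per year of age at CI: {beta[1]:.2f} points")
```

    With adequate sample size, the fitted coefficient on age at CI recovers the generating value of about -1.3 points per year.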

  18. Robust relationship between reading span and speech recognition in noise

    PubMed Central

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Objective: Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity measured using a reading span task are related to the ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design: The relationship between speech recognition and working memory capacity was examined for two different working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two working memory tests, one speech-in-noise test, and a reading comprehension test. Study sample: The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results: Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two implementations of reading span. Conclusions: Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360

  19. Speech-language pathology findings in patients with mouth breathing: multidisciplinary diagnosis according to etiology.

    PubMed

    Junqueira, Patrícia; Marchesan, Irene Queiroz; de Oliveira, Luciana Regina; Ciccone, Emílio; Haddad, Leonardo; Rizzo, Maria Cândida

    2010-11-01

    The purpose of this study was to identify and compare the results of the findings from speech-language pathology evaluations for orofacial function, including tongue and lip rest postures, tonus, articulation and speech, voice and language, chewing, and deglutition, in children who had a history of mouth breathing. The diagnoses for mouth breathing included allergic rhinitis, adenoidal hypertrophy, allergic rhinitis with adenoidal hypertrophy, and/or functional mouth breathing. This study was conducted with 414 subjects of both genders, from 2 to 16 years old. A team consisting of 3 speech-language pathologists, 1 pediatrician, 1 allergist, and 1 otolaryngologist evaluated the patients. Multidisciplinary clinical examinations were carried out (complete blood count, X-rays, nasofibroscopy, audiometry). The two most commonly found etiologies were allergic rhinitis, followed by functional mouth breathing. Of the 414 patients in the study, 346 received a speech-language pathology evaluation. The most prevalent finding in this group of 346 subjects was the presence of orofacial myofunctional disorders. The orofacial myofunctional disorders most frequently identified in these subjects, who also presented with mouth breathing, included habitual open-lips rest posture, low and forward tongue rest posture, and lack of adequate muscle tone. There were also no statistically significant relationships identified between etiology and speech-language diagnosis. Therefore, the specific type of etiology of mouth breathing does not appear to contribute to the presence, type, or number of speech-language findings that may result from mouth breathing behavior.

  20. A Comparison of the Need for Speech Therapy After 2 Palatal Repair Techniques.

    PubMed

    Yen, Debra W; Nguyen, Dennis C; Skolnick, Gary B; Naidoo, Sybill D; Patel, Kamlesh B; Grames, Lynn Marty; Woo, Albert S

    2017-03-01

    Reconstruction of the levator musculature during cleft palate repair has been suggested to be important in long-term speech outcomes. In this study, we compare the need for postoperative speech therapy between 2 intravelar veloplasty techniques. Chart review was performed for patients with nonsyndromic cleft palate who underwent either primary Kriens or overlapping intravelar veloplasty before 18 months of age. All subjects completed a follow-up visit at approximately 3 years of age. Data obtained included documentation of ongoing or recommended speech therapy at age 3 years and reasons for speech therapy, which were categorized as cleft-related and non-cleft-related by a speech-language pathologist. One surgeon performed all Kriens procedures (n = 81), and the senior author performed all overlapping procedures (n = 25). Mean age at surgery (Kriens = 13.5 ± 1.4 months; overlapping = 13.1 ± 1.5 months; P = 0.188) and age at 3-year follow-up (Kriens = 3.0 ± 0.5 years; overlapping = 2.8 ± 0.5 years; P = 0.148) were equivalent in both groups. Cleft severity by Veau classification (P = 0.626), prepalatoplasty pure tone averages (P = 0.237), pure tone averages at 3-year follow-up (P = 0.636), and incidence of prematurity (P = 0.190) were also similar between the 2 groups. At 3 years of age, significantly fewer overlapping intravelar veloplasty patients required cleft-related speech therapy (Kriens = 47%; overlapping = 20%; P = 0.015). The proportions of patients requiring non-cleft-related speech therapy were equivalent (P = 0.906). At 3 years of age, patients who received overlapping intravelar veloplasty were significantly less likely to need cleft-related speech therapy compared with patients who received Kriens intravelar veloplasty. Cleft severity, hearing loss, and prematurity at birth did not appear to explain the difference found in the need for speech therapy.
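
    The headline comparison above (47% vs. 20% needing cleft-related speech therapy) can be sanity-checked with a two-proportion z-test. The exact numerators (38 of 81, 5 of 25) are back-calculated assumptions from the reported percentages, and the study may have used a different test (e.g., chi-square or Fisher's exact), so this is only a sketch of how such a P value arises.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled variance, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Counts consistent with the reported rates (back-calculated, hypothetical):
# Kriens: 47% of 81 -> 38 patients; overlapping: 20% of 25 -> 5 patients
z, p = two_proportion_z(38, 81, 5, 25)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

    The resulting P value falls below 0.05, in line with the significance reported in the abstract.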

  1. Children's Attitudes Toward Peers With Unintelligible Speech Associated With Cleft Lip and/or Palate.

    PubMed

    Lee, Alice; Gibbon, Fiona E; Spivey, Kimberley

    2017-05-01

      The objective of this study was to investigate whether reduced speech intelligibility in children with cleft palate affects social and personal attribute judgments made by typically developing children of different ages. The study (1) measured the correlation between intelligibility scores of speech samples from children with cleft palate and social and personal attribute judgments made by typically developing children based on these samples and (2) compared the attitude judgments made by children of different ages. Participants were 90 typically developing children, 30 in each of three age groups (7 to 8 years, 9 to 10 years, and 11 to 12 years). Speech intelligibility scores and typically developing children's attitudes were measured using eight social and personal attributes on a three-point rating scale. There was a significant correlation between the speech intelligibility scores and attitude judgments for a number of traits: "sick-healthy" as rated by the children aged 7 to 8 years, "no friends-friends" by the children aged 9 to 10 years, and "ugly-good looking" and "no friends-friends" by the children aged 11 to 12 years. Children aged 7 to 8 years gave significantly lower ratings for "mean-kind" but higher ratings for "shy-outgoing" when compared with the other two groups. Typically developing children tended to make negative social and personal attribute judgments about children with cleft palate based solely on the intelligibility of their speech. Society, educators, and health professionals should work together to ensure that children with cleft palate are not stigmatized by their peers.

  2. The enhancement of beneficial effects following audio feedback by cognitive preparation in the treatment of social anxiety: a single-session experiment.

    PubMed

    Nilsson, Jan-Erik; Lundh, Lars-Gunnar; Faghihi, Shahriar; Roth-Andersson, Gun

    2011-12-01

    According to cognitive models, negatively biased processing of the publicly observable self is an important aspect of social phobia; if this is true, effective methods for producing corrective feedback concerning the public self should be strived for. Video feedback has been proven effective, but since one's voice represents another aspect of the self, audio feedback should produce equivalent results. This is the first study to assess the enhancement of audio feedback by cognitive preparation in a single-session randomized controlled experiment. Forty socially anxious participants were asked to give a speech, then to listen to and evaluate a taped recording of their performance. Half of the sample was given cognitive preparation prior to the audio feedback and the remainder received audio feedback only. Cognitive preparation involved asking participants to (1) predict in detail what they would hear on the audiotape, (2) form an image of themselves giving the speech, and (3) listen to the audio recording as though they were listening to a stranger. To assess generalization effects, all participants were asked to give a second speech. Audio feedback with cognitive preparation was shown to produce less negative ratings after the first speech, and effects generalized to the evaluation of the second speech. More positive speech evaluations were associated with corresponding reductions in state anxiety. Social anxiety as indexed by the Implicit Association Test was reduced in participants given cognitive preparation. Limitations include the small sample size and the analogue design. Audio feedback with cognitive preparation may be utilized as a treatment intervention for social phobia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. The effect of auditory verbal imagery on signal detection in hallucination-prone individuals

    PubMed Central

    Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles

    2016-01-01

    Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations, because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants were instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI-usage would cause participants to perform in a biased manner, therefore falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI, but retrospectively reported how often they engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did participants with lower proneness who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
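
    In signal detection analyses like the one described above, the response-bias measure is typically the criterion c, where a lower c indicates greater willingness to report that a voice was present. A minimal sketch of how d' and c are computed from trial counts; the counts here are hypothetical, not the study's data.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response bias (criterion c) from trial counts.

    A lower (more negative) c reflects a more liberal bias, i.e. a greater
    willingness to report that a voice was present in the noise.
    """
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates away from 0 and 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical counts: the AVI condition adds false alarms, lowering c
d_std, c_std = sdt_measures(hits=30, misses=10, false_alarms=5, correct_rejections=35)
d_avi, c_avi = sdt_measures(hits=32, misses=8, false_alarms=15, correct_rejections=25)
print(f"standard task: d' = {d_std:.2f}, c = {c_std:.2f}")
print(f"AVI task:      d' = {d_avi:.2f}, c = {c_avi:.2f}")
```

    With these illustrative counts, the AVI condition yields a lower c than the standard task, the pattern reported for hallucination-prone participants.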

  4. Increased activation of the hippocampus during a Chinese character subvocalization task in adults with cleft lip and palate palatoplasty and speech therapy.

    PubMed

    Zhang, Wenjing; Li, Chunlin; Chen, Long; Xing, Xiyue; Li, Xiangyang; Yang, Zhi; Zhang, Haiyan; Chen, Renji

    2017-08-16

    This study aimed to explore brain activation in patients with cleft lip and palate (CLP) using a Chinese character subvocalization task, in which the stimuli were selected from a clinical articulation evaluation test. CLP is a congenital disability. Individuals with CLP usually have an articulation disorder caused by abnormal lip and palate structure. Previous studies showed that primary somatosensory and motor areas differed significantly in activation in patients with CLP. However, whether brain activation is restored to a normal level after palatoplasty and speech rehabilitation is not clear. Two groups, adults after palatoplasty with speech training and age-matched and sex-matched controls, participated in this study. Brain activation during the Chinese character subvocalization task and behavioral data were recorded using functional MRI. Patients with CLP responded to the target significantly more slowly than the controls, whereas no significant difference in accuracy was found between the groups. Brain activation showed similar patterns between groups. Broca's area, Wernicke's area, motor areas, somatosensory areas, and insula in both hemispheres, and the dorsolateral prefrontal cortex and the ventrolateral prefrontal cortex in the right hemisphere, were activated in both groups with no statistically significant difference. Furthermore, the two-sample t-test showed that the hippocampus in the left hemisphere was activated significantly more in patients with CLP than in the controls. The results suggested that the hippocampus might be involved in the language-related neural circuit in patients with CLP and play a role in pronunciation retrieval, helping patients with CLP complete pronunciation effectively.

  5. English use among older bilingual immigrants in linguistically concentrated neighborhoods: Social proficiency and internal speech as intracultural variation

    PubMed Central

    Schrauf, Robert W.

    2013-01-01

    This research focuses on patterns of English proficiency and use of English among older immigrants living in linguistically concentrated, ethnic neighborhoods. A sample (n = 60) of older Puerto Ricans who moved from the island to the mainland in their twenties was divided into English proficiency groups (fluent, high intermediate, low intermediate) via the Adult Language Assessment Scales. Participants then provided self-ratings of their English proficiency (understanding, speaking, reading, and writing), their use of English in social domains (language spoken with own family, in-laws, spouse, children, neighbors, and workmates), and their use of English in private psychological domains (language of talking to oneself, counting, writing notes to oneself, thinking, dreaming, praying, and expressing feelings). Finally, all participants completed the Puerto Rican Bicultural Scale. Results show a cohort of immigrant elders whose first language is protected by their ethnic neighborhoods but whose domestic and private lives are increasingly permeated by English. In particular, children emerge as powerful forces of language socialization in English for their parents. Further, there are important individual differences by level of proficiency, with the lowest-proficiency group being less acculturated, lower in socioeconomic status, and even more linguistically isolated than groups with higher proficiency. In essence, level of second language proficiency is a potent source of intracultural variation. Methodologically, the paper makes the important point that self-rated patterns of language use are consistent with scores on formal measures of proficiency. The paper also provides empirical verification of the logic of dividing language use into external, social speech and internal, psychological speech. PMID:19184621

  6. A Multivariate Analytic Approach to the Differential Diagnosis of Apraxia of Speech

    ERIC Educational Resources Information Center

    Basilakos, Alexandra; Yourganov, Grigori; den Ouden, Dirk-Bart; Fogerty, Daniel; Rorden, Chris; Feenaughty, Lynda; Fridriksson, Julius

    2017-01-01

    Purpose: Apraxia of speech (AOS) is a consequence of stroke that frequently co-occurs with aphasia. Its study is limited by difficulties with its perceptual evaluation and dissociation from co-occurring impairments. This study examined the classification accuracy of several acoustic measures for the differential diagnosis of AOS in a sample of…

  7. Vulnerability to Bullying in Children with a History of Specific Speech and Language Difficulties

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie E.; Mackie, Clare

    2008-01-01

    This study examined the susceptibility to problems with peer relationships and being bullied in a UK sample of 12-year-old children with a history of specific speech and language difficulties. Data were derived from the children's self-reports and the reports of parents and teachers using measures of victimization, emotional and behavioral…

  8. Yaounde French Speech Corpus

    DTIC Science & Technology

    2017-03-01

    the Center for Technology Enhanced Language Learning (CTELL), a research cell in the Department of Foreign Languages, United States Military Academy...models for automatic speech recognition (ASR), and to thereby investigate the utility of ASR in pedagogical technology. The corpus is a sample of...lexical resources, language technology

  9. Expressive Language during Conversational Speech in Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Roberts, Joanne E.; Hennon, Elizabeth A.; Price, Johanna R.; Dear, Elizabeth; Anderson, Kathleen; Vandergrift, Nathan A.

    2007-01-01

    We compared the expressive syntax and vocabulary skills of 35 boys with fragile X syndrome and 27 younger typically developing boys who were at similar nonverbal mental levels. During a conversational speech sample, the boys with fragile X syndrome used shorter, less complex utterances and produced fewer different words than did the typically…

  10. Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

    ERIC Educational Resources Information Center

    Whitfield, Jason A.; Dromey, Christopher; Palmer, Panika

    2018-01-01

    Purpose: The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces. Method: Young adult speakers produced 3…

  11. Use of Spectral/Cepstral Analyses for Differentiating Normal from Hypofunctional Voices in Sustained Vowel and Continuous Speech Contexts

    ERIC Educational Resources Information Center

    Watts, Christopher R.; Awan, Shaheen N.

    2011-01-01

    Purpose: In this study, the authors evaluated the diagnostic value of spectral/cepstral measures to differentiate dysphonic from nondysphonic voices using sustained vowels and continuous speech samples. Methodology: Thirty-two age- and gender-matched individuals (16 participants with dysphonia and 16 controls) were recorded reading a standard…

  12. Bivariate Genetic Analyses of Stuttering and Nonfluency in a Large Sample of 5-Year-Old Twins

    ERIC Educational Resources Information Center

    van Beijsterveldt, Catharina Eugenie Maria; Felsenfeld, Susan; Boomsma, Dorret Irene

    2010-01-01

    Purpose: Behavioral genetic studies of speech fluency have focused on participants who present with clinical stuttering. Knowledge about genetic influences on the development and regulation of normal speech fluency is limited. The primary aims of this study were to identify the heritability of stuttering and high nonfluency and to assess the…

  13. Lexical Profiles of Comprehensible Second Language Speech: The Role of Appropriateness, Fluency, Variation, Sophistication, Abstractness, and Sense Relations

    ERIC Educational Resources Information Center

    Saito, Kazuya; Webb, Stuart; Trofimovich, Pavel; Isaacs, Talia

    2016-01-01

    This study examined contributions of lexical factors to native-speaking raters' assessments of comprehensibility (ease of understanding) of second language (L2) speech. Extemporaneous oral narratives elicited from 40 French speakers of L2 English were transcribed and evaluated for comprehensibility by 10 raters. Subsequently, the samples were…

  14. Effects of Visual Information on Intelligibility of Open and Closed Class Words in Predictable Sentences Produced by Speakers with Dysarthria

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Dardis, Caitlin M.; Mccourt, Kelly A.

    2007-01-01

    This study examined the independent and interactive effects of visual information and linguistic class of words on intelligibility of dysarthric speech. Seven speakers with dysarthria participated in the study, along with 224 listeners who transcribed speech samples in audiovisual (AV) or audio-only (AO) listening conditions. Orthographic…

  15. Recent Advances in the Genetics of Vocal Learning

    PubMed Central

    Condro, Michael C.; White, Stephanie A.

    2015-01-01

    Language is a complex communicative behavior unique to humans, and its genetic basis is poorly understood. Genes associated with human speech and language disorders provide some insights, originating with the FOXP2 transcription factor, a mutation in which is the source of an inherited form of developmental verbal dyspraxia. Subsequently, targets of FOXP2 regulation have been associated with speech and language disorders, along with other genes. Here, we review these recent findings that implicate genetic factors in human speech. Due to the exclusivity of language to humans, no single animal model is sufficient to study the complete behavioral effects of these genes. Fortunately, some animals possess subcomponents of language. One such subcomponent is vocal learning, which though rare in the animal kingdom, is shared with songbirds. We therefore discuss how songbird studies have contributed to the current understanding of genetic factors that impact human speech, and support the continued use of this animal model for such studies in the future. PMID:26052371

  16. A Dual-Mode Human Computer Interface Combining Speech and Tongue Motion for People with Severe Disabilities

    PubMed Central

    Huo, Xueliang; Park, Hangue; Kim, Jeonghee; Ghovanloo, Maysam

    2015-01-01

    We are presenting a new wireless and wearable human-computer interface called the dual-mode Tongue Drive System (dTDS), which is designed to allow people with severe disabilities to use computers more effectively, with increased speed, flexibility, usability, and independence, through their tongue motion and speech. The dTDS detects users' tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact and ergonomic wireless headset. It also captures the users' voice wirelessly using a small microphone embedded in the same headset. Preliminary evaluation results based on 14 able-bodied subjects and three individuals with high-level spinal cord injuries (C3–C5) indicated that the dTDS headset, combined with commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal form based on tongue motion or speech alone, particularly in completing tasks that require both pointing and text entry. PMID:23475380

  17. Self-Reflection and the Inner Voice: Activation of the Left Inferior Frontal Gyrus During Perceptual and Conceptual Self-Referential Thinking

    PubMed Central

    Morin, Alain; Hamper, Breanne

    2012-01-01

    Inner speech involvement in self-reflection was examined by reviewing 130 studies assessing brain activation during self-referential processing in key self-domains: agency, self-recognition, emotions, personality traits, autobiographical memory, and miscellaneous (e.g., prospection, judgments). The left inferior frontal gyrus (LIFG) has been shown to be reliably recruited during inner speech production. The percentage of studies reporting LIFG activity for each self-dimension was calculated. Fifty-five percent of all studies reviewed indicated LIFG (and presumably inner speech) activity during self-reflection tasks; on average, LIFG activation is observed 16% of the time during completion of non-self tasks (e.g., attention, perception). The highest LIFG activation rate was observed during retrieval of autobiographical information. The LIFG was significantly more recruited during conceptual tasks (e.g., prospection, traits) than during perceptual tasks (agency and self-recognition). This constitutes additional evidence supporting the idea that inner speech participates in self-related thinking. PMID:23049653

  19. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    PubMed

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text containing all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were as follows: the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin, and it should also be meaningful. A short text was composed. It had 155 Chinese words and included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P < 0.001 and r = 0.844, P < 0.001, respectively). The constituent ratios of the tones presented in this short text were not statistically similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.
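
    The similarity method described above, correlating the constituent ratios of phonemes in the sample text with those of Mandarin at large, reduces to a Pearson correlation between two frequency vectors. A minimal sketch with hypothetical ratios (the published frequency tables are not reproduced here):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical relative frequencies of a few initials in the sample text
# versus in Mandarin overall (illustrative values, not the published figures)
text_ratios = [0.12, 0.08, 0.05, 0.10, 0.04, 0.07]
mandarin_ratios = [0.11, 0.09, 0.06, 0.09, 0.05, 0.08]

r = pearson(text_ratios, mandarin_ratios)
print(f"Pearson r = {r:.3f}")  # a high r means the text mirrors Mandarin's phoneme mix
```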

  20. Connected speech as a marker of disease progression in autopsy-proven Alzheimer’s disease

    PubMed Central

    Ahmed, Samrah; Haigh, Anne-Marie F.; de Jager, Celeste A.

    2013-01-01

    Although an insidious history of episodic memory difficulty is a typical presenting symptom of Alzheimer’s disease, detailed neuropsychological profiling frequently demonstrates deficits in other cognitive domains, including language. Previous studies from our group have shown that language changes may be reflected in connected speech production in the earliest stages of typical Alzheimer’s disease. The aim of the present study was to identify features of connected speech that could be used to examine longitudinal profiles of impairment in Alzheimer’s disease. Samples of connected speech were obtained from 15 former participants in a longitudinal cohort study of ageing and dementia, in whom Alzheimer’s disease was diagnosed during life and confirmed at post-mortem. All patients met clinical and neuropsychological criteria for mild cognitive impairment between 6 and 18 months before converting to a status of probable Alzheimer’s disease. In a subset of these patients neuropsychological data were available, both at the point of conversion to Alzheimer’s disease, and after disease severity had progressed from the mild to moderate stage. Connected speech samples from these patients were examined at later disease stages. Spoken language samples were obtained using the Cookie Theft picture description task. Samples were analysed using measures of syntactic complexity, lexical content, speech production, fluency and semantic content. Individual case analysis revealed that subtle changes in language were evident during the prodromal stages of Alzheimer’s disease, with two-thirds of patients with mild cognitive impairment showing significant but heterogeneous changes in connected speech. However, impairments at the mild cognitive impairment stage did not necessarily entail deficits at mild or moderate stages of disease, suggesting non-language influences on some aspects of performance. 
Subsequent examination of these measures revealed significant linear trends over the three stages of disease in syntactic complexity, semantic and lexical content. The findings suggest, first, that there is a progressive disruption in language integrity, detectable from the prodromal stage in a subset of patients with Alzheimer’s disease, and secondly that measures of semantic and lexical content and syntactic complexity best capture the global progression of linguistic impairment through the successive clinical stages of disease. The identification of disease-specific language impairment in prodromal Alzheimer’s disease could enhance clinicians’ ability to distinguish probable Alzheimer’s disease from changes attributable to ageing, while longitudinal assessment could provide a simple approach to disease monitoring in therapeutic trials. PMID:24142144
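
    Lexical-content measures of the kind applied to the Cookie Theft samples above can be approximated from a transcript with simple token counts. This is a deliberately crude sketch; the study's actual measure set was much richer, and the helper name and example sentence here are illustrative only.

```python
import re

def lexical_measures(transcript):
    """Crude lexical-content measures from a connected-speech transcript."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    types = set(tokens)
    n = len(tokens)
    return {
        "tokens": n,                                              # running word count
        "types": len(types),                                      # distinct words
        "type_token_ratio": len(types) / n if n else 0.0,         # lexical diversity
        "mean_word_length": sum(map(len, tokens)) / n if n else 0.0,
    }

# Illustrative fragment in the spirit of a Cookie Theft description
sample = "the boy is taking a cookie and the stool is falling over"
m = lexical_measures(sample)
print(m)
```

    Tracking such measures across successive samples from the same speaker is one simple way to quantify the longitudinal decline in lexical content that the study reports.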
