Sample records for minute speech sample

  1. Automatic Method of Pause Measurement for Normal and Dysarthric Speech

    ERIC Educational Resources Information Center

    Rosen, Kristin; Murdoch, Bruce; Folker, Joanne; Vogel, Adam; Cahill, Louise; Delatycki, Martin; Corben, Louise

    2010-01-01

    This study proposes an automatic method for the detection of pauses and identification of pause types in conversational speech for the purpose of measuring the effects of Friedreich's Ataxia (FRDA) on speech. Speech samples of approximately 3 minutes were recorded from 13 speakers with FRDA and 18 healthy controls. Pauses were measured from the…
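    The record above describes automatic pause detection but not its algorithm. A minimal sketch of one common approach (an assumption for illustration, not the paper's actual method): mark frames whose RMS energy falls below a threshold as silent, and treat sufficiently long silent runs as pauses. The threshold and minimum-duration values are illustrative.

    ```python
    # Hypothetical threshold-based pause detector (not the paper's method).
    # `frames` is a list of per-frame RMS energies; a run of sub-threshold
    # frames at least `min_pause_frames` long counts as one pause.

    def detect_pauses(frames, threshold=0.01, min_pause_frames=3):
        """Return (start, end) frame-index pairs for detected pauses."""
        pauses = []
        start = None
        for i, energy in enumerate(frames):
            if energy < threshold:
                if start is None:
                    start = i  # silent run begins
            else:
                if start is not None and i - start >= min_pause_frames:
                    pauses.append((start, i))
                start = None
        # close a silent run that extends to the end of the sample
        if start is not None and len(frames) - start >= min_pause_frames:
            pauses.append((start, len(frames)))
        return pauses

    # Example: frames 3-7 are below threshold, so one pause is reported.
    energies = [0.5, 0.4, 0.6, 0.001, 0.002, 0.001, 0.003, 0.002, 0.5, 0.6]
    print(detect_pauses(energies))  # [(3, 8)]
    ```

    In practice the threshold would be calibrated per recording, and classifying pause types (as the study does) would require further analysis beyond duration alone.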

  2. Echolalic and Spontaneous Phrase Speech in Autistic Children.

    ERIC Educational Resources Information Center

    Howlin, Patricia

    1982-01-01

    Investigates the syntactical level of spontaneous and echolalic utterances of 26 autistic boys at different stages of phrase speech development. Speech samples were collected over a 90-minute period in unstructured settings in participants' homes. Imitations were not deliberately elicited, and only unprompted, noncommunicative echoes were…

  3. Speech rate and fluency in children with phonological disorder.

    PubMed

    Novaes, Priscila Maronezi; Nicolielo-Carrilho, Ana Paola; Lopes-Herrera, Simone Aparecida

    2015-01-01

    To identify and describe the speech rate and fluency of children with phonological disorder (PD) with and without speech-language therapy. Thirty children, aged 5-8 years, of both genders, were divided into three groups: experimental group 1 (G1) — 10 children with PD in intervention; experimental group 2 (G2) — 10 children with PD without intervention; and control group (CG) — 10 children with typical development. Speech samples were collected and analyzed according to the parameters of a specific protocol. The children in the CG produced a higher number of words per minute than those in G1, who in turn performed better on this measure than the children in G2. Regarding the number of syllables per minute, the CG again showed the best result, and the children in G1 outperformed those in G2. Comparing the groups across the tests, the children with PD in intervention produced longer speech samples with an adequate speech rate, which may indicate greater auditory monitoring of their own speech as a result of the intervention.

  4. Modification and preliminary use of the five-minute speech sample in the postpartum: associations with postnatal depression and posttraumatic stress.

    PubMed

    Iles, Jane; Spiby, Helen; Slade, Pauline

    2014-10-01

    Little is known about what constitutes key components of partner support during the childbirth experience. This study modified the five-minute speech sample, a measure of expressed emotion (EE), for use with new parents in the immediate postpartum. A coding framework was developed to rate the speech samples on dimensions of couple support. Associations were explored between these codes and subsequent symptoms of postnatal depression and posttraumatic stress. A total of 372 couples were recruited in the early postpartum and individually provided short speech samples. Posttraumatic stress and postnatal depression symptoms were assessed via questionnaire measures at six and thirteen weeks. Two hundred and twelve couples completed all time-points. Key elements of supportive interactions were identified and reliably categorised. Mothers' posttraumatic stress was associated with criticisms of the partner during childbirth, general relationship criticisms and men's perception of helplessness. Postnatal depression was associated with absence of partner empathy and any positive comments regarding the partner's support. The content of new parents' descriptions of labour and childbirth, their partner during labour and birth and their relationship within the immediate postpartum may have significant implications for later psychological functioning. Interventions to enhance specific supportive elements between couples during the antenatal period merit development and evaluation.

  5. 76 FR 44326 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-25

    ... Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities; Structure and Practices of the Video Relay Service Program AGENCY: Federal Communications Commission. ACTION...-minute video relay service (``VRS'') compensation rates, and adopts per-minute compensation rates for the...

  6. [Speech fluency developmental profile in Brazilian Portuguese speakers].

    PubMed

    Martins, Vanessa de Oliveira; Andrade, Claudia Regina Furquim de

    2008-01-01

    Speech fluency varies from one individual to the next, fluent or stuttering, depending on several factors. Studies investigating the influence of age on fluency patterns exist; however, these differences were examined in isolated age groups, and no studies of fluency variation across the life span were found. This study aimed to verify the speech fluency developmental profile. Speech samples of 594 fluent participants of both genders, aged between 2:0 and 99:11 years, all speakers of Brazilian Portuguese, were analyzed. Participants were grouped as pre-schoolers, schoolchildren, early adolescents, late adolescents, adults, and elderly adults. Speech samples were analyzed according to the Speech Fluency Profile variables and compared regarding: typology of speech disruptions (typical and less typical), speech rate (words and syllables per minute), and frequency of speech disruptions (percentage of speech discontinuity). Although isolated variations were identified, overall there was no significant difference between the age groups for the speech disruption indexes (typical and less typical speech disruptions and percentage of speech discontinuity). Significant differences between the groups were observed for speech rate. The development of the neurolinguistic system for speech fluency, in terms of speech disruptions, seems to stabilize during the first years of life, presenting no alterations across the life span. Indexes of speech rate vary across the age groups, indicating patterns of acquisition, development, stabilization, and degeneration.

  7. Linguistic Analysis of the Preschool Five Minute Speech Sample: What the Parents of Preschool Children with Early Signs of ADHD Say and How They Say It?

    PubMed Central

    Perez, Elvira; Turner, Melody; Fisher, Anthony; Lockwood, Joanna; Daley, David

    2014-01-01

    A linguistic analysis was performed on the Preschool Five Minute Speech Sample (PFMSS) of 42 parents. PFMSS is a validated measure for Expressed Emotion (EE) to assess parent-child relationship. Half of these parents (n = 21, clinical group) had preschool children with early symptoms of attention deficit hyperactivity disorder (ADHD), the rest had typically developing children. Early symptoms of ADHD were identified with the Werry-Weiss Peters Rating Scale. The linguistic component of the PFMSS was analysed with keyword and linguistic pattern identification. The results of these two complementary analyses (i.e., EE and linguistic analysis) provided relevant recommendations that may improve the efficacy of psychological treatment for ADHD such as parenting interventions. We discuss the practical implications of these findings. PMID:25184287

  8. Speech fluency profile on different tasks for individuals with Parkinson's disease.

    PubMed

    Juste, Fabiola Staróbole; Andrade, Claudia Regina Furquim de

    2017-07-20

    To characterize the speech fluency profile of patients with Parkinson's disease. Study participants were 40 individuals of both genders, aged 40 to 80 years, divided into two groups: Research Group - RG (20 individuals with a diagnosis of Parkinson's disease) and Control Group - CG (20 individuals with no communication or neurological disorders). For each participant, three speech samples involving different tasks were collected: monologue, individual reading, and automatic speech. The RG presented a significantly larger number of speech disruptions, both stuttering-like and typical dysfluencies, and a higher percentage of speech discontinuity in the monologue and individual reading tasks compared with the CG. Both groups presented a reduced number of speech disruptions (stuttering-like and typical dysfluencies) in the automatic speech task, in which the groups performed similarly. Regarding speech rate, individuals in the RG produced fewer words and syllables per minute than those in the CG in all speech tasks. Participants in the RG presented altered parameters of speech fluency compared with those in the CG; however, this change in fluency cannot be considered a stuttering disorder.

  9. Speech Entrainment Compensates for Broca's Area Damage

    PubMed Central

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
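    The outcome measure above, "number of different words per minute," amounts to counting unique word types in a transcript and normalizing by duration. The naive whitespace tokenization below is an assumption for illustration, not the study's actual transcription procedure.

    ```python
    # Sketch of a different-words-per-minute measure (illustrative only).
    # Word types are lowercased, whitespace-split tokens with edge
    # punctuation stripped; duration normalizes the type count to minutes.

    def different_words_per_minute(transcript, duration_seconds):
        """Count unique word types per minute of speech."""
        types = {w.strip(".,!?").lower() for w in transcript.split()}
        types.discard("")
        return len(types) / (duration_seconds / 60.0)

    # Hypothetical one-minute samples: richer output under entrainment (SE)
    # versus repetitive spontaneous speech.
    se = "the boy walked to the store and bought bread"   # 8 unique types
    spont = "the the boy boy went went"                   # 3 unique types
    improvement = (different_words_per_minute(se, 60)
                   - different_words_per_minute(spont, 60))
    print(improvement)  # 5.0
    ```

    The study's fluency-improvement measure is exactly this kind of difference: the SE-condition rate minus the spontaneous-speech rate.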

  10. Neural Recruitment for the Production of Native and Novel Speech Sounds

    PubMed Central

    Moser, Dana; Fridriksson, Julius; Bonilha, Leonardo; Healy, Eric W.; Baylis, Gordon; Baker, Julie; Rorden, Chris

    2010-01-01

    Two primary areas of damage have been implicated in apraxia of speech (AOS) based on the time post-stroke: (1) the left inferior frontal gyrus (IFG) in acute patients, and (2) the left anterior insula (aIns) in chronic patients. While AOS is widely characterized as a disorder in motor speech planning, little is known about the specific contributions of each of these regions in speech. The purpose of this study was to investigate cortical activation during speech production with a specific focus on the aIns and the IFG in normal adults. While undergoing sparse fMRI, 30 normal adults completed a 30-minute speech-repetition task consisting of three-syllable nonwords that contained either (a) English (native) syllables or (b) Non-English (novel) syllables. When the novel syllable productions were compared to the native syllable productions, greater neural activation was observed in the aIns and IFG, particularly during the first 10 minutes of the task when novelty was the greatest. Although activation in the aIns remained high throughout the task for novel productions, greater activation was clearly demonstrated when the initial 10 minutes were compared to the final 10 minutes of the task. These results suggest increased activity within an extensive neural network, including the aIns and IFG, when the motor speech system is taxed, such as during the production of novel speech. We speculate that the amount of left aIns recruitment during speech production may be related to the internal construction of the motor speech unit, such that greater novelty or automaticity would result in more or fewer demands, respectively. The role of the IFG as a storehouse and integrative processor for previously acquired routines is also discussed. PMID:19385020

  11. Speech entrainment compensates for Broca's area damage.

    PubMed

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-08-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to SE. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during SE versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed that damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of SE to improve speech production and may help select patients for SE treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. A comparison of speech outcomes using radical intravelar veloplasty or furlow palatoplasty for the treatment of velopharyngeal insufficiency associated with occult submucous cleft palate.

    PubMed

    Afrooz, Paul Nader; MacIsaac, Zoe; Rottgers, Stephen Alex; Ford, Matthew; Grunwaldt, Lorelei J; Kumar, Anand R

    2015-02-01

    The safety, efficacy, and direct comparison of various surgical treatments for velopharyngeal insufficiency (VPI) associated with occult submucous cleft palate (OSMCP) are poorly characterized. The aim of this study was to report and analyze the safety and efficacy of Furlow palatoplasty (FP) versus radical intravelar veloplasty (IVV) for treatment of VPI associated with OSMCP. A retrospective review of one institution's experience treating VPI associated with OSMCP using IVV (group 1) or FP (group 2) over a 24-month period was performed. Statistical significance was determined by the Wilcoxon matched-pair test, the independent-samples Mann-Whitney U test, and analysis of variance (SPSS 20.0.0). In group 1 (IVV), 18 patients were identified from August 2010 to 2011 (12 male and 6 female patients; average age, 5.39 years); 7 patients were syndromic and 11 were nonsyndromic. In group 2 (FP), 17 patients were identified from August 2009 to 2011 (8 male and 9 female patients; average age, 8.37 years); 3 patients were syndromic and 14 were nonsyndromic. There was a statistically significant difference between the average pretreatment Pittsburgh Weighted Speech Scores (PWSS) of the two groups (group 1 and 2 averages 19.06 and 11.05, respectively, P=0.002), but no significant difference postoperatively (group 1 and 2 averages 4.50 and 4.69, respectively, P=0.405). One patient from each group required secondary speech surgery. Average operative time was greater for FP (140 minutes; range, 93-177 minutes) than for IVV (95 minutes; range, 58-135 minutes), P<0.001. Average hospital stay was 3.9 days for IVV (range, 2-9 days) and 3.2 days for FP (range, 2-6 days), with no significant difference (P=0.116). There were no postsurgical wound infections, oral-nasal fistulas, postoperative bleeding complications, or mortalities. Nonsyndromic patients with hypernasal speech are treated effectively and safely with either IVV or FP.
Intravelar veloplasty trended toward lower speech scores than FP (76% IVV, 58% FP PWSS absolute reduction). Syndromic patients with OSMCP may be more effectively treated with FP (72% IVV vs 79% FP PWSS absolute reduction). Intravelar veloplasty is associated with shorter operative times. Both techniques are associated with low morbidity, improved speech scores, and low reoperative rates.

  13. Expressed Emotion Displayed by the Mothers of Inhibited and Uninhibited Preschool-Aged Children

    ERIC Educational Resources Information Center

    Raishevich, Natoshia; Kennedy, Susan J.; Rapee, Ronald M.

    2010-01-01

    In the current study, the Five Minute Speech Sample was used to assess the association between parent attitudes and children's behavioral inhibition in mothers of 120 behaviorally inhibited (BI) and 37 behaviorally uninhibited preschool-aged children. Mothers of BI children demonstrated significantly higher levels of emotional over-involvement…

  14. Quality of communication in interpreted versus noninterpreted PICU family meetings.

    PubMed

    Van Cleave, Alisa C; Roosen-Runge, Megan U; Miller, Alison B; Milner, Lauren C; Karkazis, Katrina A; Magnus, David C

    2014-06-01

    To describe the quality of physician-family communication during interpreted and noninterpreted family meetings in the PICU. Prospective, exploratory, descriptive observational study of noninterpreted English family meetings and interpreted Spanish family meetings in the pediatric intensive care setting. A single, university-based, tertiary children's hospital. Participants in PICU family meetings, including medical staff, family members, ancillary staff, and interpreters. Thirty family meetings (21 English and 9 Spanish) were audio-recorded, transcribed, de-identified, and analyzed using the qualitative method of directed content analysis. Quality of communication was analyzed in three ways: 1) presence of elements of shared decision-making, 2) balance between physician and family speech, and 3) complexity of physician speech. Of the 11 elements of shared decision-making, only four occurred in more than half of English meetings, and only three occurred in more than half of Spanish meetings. Physicians spoke for a mean of 20.7 minutes, while families spoke for 9.3 minutes during English meetings. During Spanish meetings, physicians spoke for a mean of 14.9 minutes versus just 3.7 minutes of family speech. Physician speech complexity received a mean grade level score of 8.2 in English meetings compared to 7.2 in Spanish meetings. The quality of physician-family communication during PICU family meetings is poor overall. Interpreted meetings had poorer communication quality as evidenced by fewer elements of shared decision-making and greater imbalance between physician and family speech. However, physician speech may be less complex during interpreted meetings. Our data suggest that physicians can improve communication in both interpreted and noninterpreted family meetings by increasing the use of elements of shared decision-making, improving the balance between physician and family speech, and decreasing the complexity of physician speech.
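    The abstract above reports physician speech complexity as a "grade level score" without naming the readability index used. As an illustration only, the Flesch-Kincaid grade level is one widely used choice: grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59.

    ```python
    # Flesch-Kincaid grade level (an assumed example index; the study does
    # not specify which readability formula produced its scores).

    def fk_grade(n_words, n_sentences, n_syllables):
        """Readability grade from word, sentence, and syllable counts."""
        return (0.39 * (n_words / n_sentences)
                + 11.8 * (n_syllables / n_words)
                - 15.59)

    # A hypothetical transcript: 15-word sentences, 1.5 syllables per word.
    print(round(fk_grade(n_words=300, n_sentences=20, n_syllables=450), 2))  # 7.96
    ```

    Scores near 8, like those reported for the physicians here, correspond roughly to eighth-grade reading material under this formula.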

  15. Disentangling Child and Family Influences on Maternal Expressed Emotion toward Children with Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Cartwright, Kim L.; Bitsakou, Paraskevi; Daley, David; Gramzow, Richard H.; Psychogiou, Lamprini; Simonoff, Emily; Thompson, Margaret J.; Sonuga-Barke, Edmund J. S.

    2011-01-01

    Objective: We used multi-level modelling of sibling-pair data to disentangle the influence of proband-specific and more general family influences on maternal expressed emotion (MEE) toward children and adolescents with attention-deficit/hyperactivity disorder (ADHD). Method: MEE was measured using the Five Minute Speech Sample (FMSS) for 60…

  16. Developmental Patterns of Past-Tense Acquisition among Foreign Language Learners of French.

    ERIC Educational Resources Information Center

    Kaplan, Marsha A.

    The patterns of acquisition of the passe compose and imperfect tenses in French among 16 adult beginning and intermediate students were studied. Based on 15-minute speech samples in which both verb tenses were elicited by seeded questions and cues for descriptive and narrative monologues, the intermediate learners had greater success with the…

  17. Methods of analysis speech rate: a pilot study.

    PubMed

    Costa, Luanna Maria Oliveira; Martins-Reis, Vanessa de Oliveira; Celeste, Letícia Côrrea

    2016-01-01

    To describe the performance of fluent adults on different measures of speech rate. The study included 24 fluent adults, of both genders, speakers of Brazilian Portuguese, who were born and still living in the metropolitan region of Belo Horizonte, state of Minas Gerais, aged between 18 and 59 years. Participants were grouped by age: G1 (18-29 years), G2 (30-39 years), G3 (40-49 years), and G4 (50-59 years). The speech samples were obtained following the methodology of the Speech Fluency Assessment Protocol. In addition to the measures of speech rate proposed by the protocol (speech rate in words and syllables per minute), speech rate in phonemes per second and articulation rates with and without disfluencies were calculated. We used the nonparametric Friedman test and the Wilcoxon test for multiple comparisons. Groups were compared using the nonparametric Kruskal-Wallis test. The significance level was 5%. There were significant differences between the measures of speech rate involving syllables; multiple comparisons showed that all three measures differed. There was no effect of age on the studied measures. These findings corroborate previous studies. The inclusion of temporal acoustic measures such as speech rate in phonemes per second and articulation rates with and without disfluencies can be a complementary approach in the evaluation of speech rate.
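    Several of these records report speech rate in words and syllables per minute. The arithmetic is simple normalization by duration; the counting conventions of the Speech Fluency Assessment Protocol itself are not reproduced here, so the counts below are illustrative.

    ```python
    # Speech-rate arithmetic: counts normalized to per-minute rates.
    # Word and syllable counts are assumed inputs from a transcribed sample.

    def speech_rates(n_words, n_syllables, duration_seconds):
        """Return (words per minute, syllables per minute)."""
        minutes = duration_seconds / 60.0
        return n_words / minutes, n_syllables / minutes

    # A two-minute sample with 200 words and 360 syllables:
    wpm, spm = speech_rates(n_words=200, n_syllables=360, duration_seconds=120)
    print(wpm, spm)  # 100.0 180.0
    ```

    An articulation rate "without disfluencies," as in the pilot study above, would use the same formula after subtracting disfluent words or syllables and disfluent time from the counts.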

  18. Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    DTIC Science & Technology

    1981-03-01

    adjusting the metronome beats to coincide with the stressed syllables. The sentences were constructed to have a regular rhythm. They were: "I think that it…rate was 92 beats per minute, the conversational rate was 120 beats per minute, and the fast rate was 160 beats per minute. Both sentences were recorded…shown in Figure 6 also suggests amplitude modulation (von Holst's superimposition effect). Thus on some coinciding cycles a "beat" phenomenon can be…

  19. Tampa Bay International Business Summit Keynote Speech

    NASA Technical Reports Server (NTRS)

    Clary, Christina

    2011-01-01

    A keynote speech outlining the importance of collaboration and diversity in the workplace. The 20-minute speech describes NASA's challenges and accomplishments over the years and what lies ahead. Topics include: diversity and inclusion principles, international cooperation, Kennedy Space Center planning and development, opportunities for cooperation, and NASA's vision for exploration.

  20. Threat Interference Biases Predict Socially Anxious Behavior: The Role of Inhibitory Control and Minute of Stressor.

    PubMed

    Gorlin, Eugenia I; Teachman, Bethany A

    2015-07-01

    The current study brings together two typically distinct lines of research. First, social anxiety is inconsistently associated with behavioral deficits in social performance, and the factors accounting for these deficits remain poorly understood. Second, research on selective processing of threat cues, termed cognitive biases, suggests these biases typically predict negative outcomes, but may sometimes be adaptive, depending on the context. Integrating these research areas, the current study examined whether conscious and/or unconscious threat interference biases (indexed by the unmasked and masked emotional Stroop) can explain unique variance, beyond self-reported anxiety measures, in behavioral avoidance and observer-rated anxious behavior during a public speaking task. Minute of speech and general inhibitory control (indexed by the color-word Stroop) were examined as within-subject and between-subject moderators, respectively. Highly socially anxious participants (N=135) completed the emotional and color-word Stroop blocks prior to completing a 4-minute videotaped speech task, which was later coded for anxious behaviors (e.g., speech dysfluency). Mixed-effects regression analyses revealed that general inhibitory control moderated the relationship between both conscious and unconscious threat interference bias and anxious behavior (though not avoidance), such that lower threat interference predicted higher levels of anxious behavior, but only among those with relatively weaker (versus stronger) inhibitory control. Minute of speech further moderated this relationship for unconscious (but not conscious) social-threat interference, such that lower social-threat interference predicted a steeper increase in anxious behaviors over the course of the speech (but only among those with weaker inhibitory control). Thus, both trait and state differences in inhibitory control resources may influence the behavioral impact of threat biases in social anxiety. 
Copyright © 2015. Published by Elsevier Ltd.

  21. Feasibility of through-time spiral generalized autocalibrating partial parallel acquisition for low latency accelerated real-time MRI of speech.

    PubMed

    Lingala, Sajan Goud; Zhu, Yinghua; Lim, Yongwan; Toutios, Asterios; Ji, Yunhua; Lo, Wei-Ching; Seiberlich, Nicole; Narayanan, Shrikanth; Nayak, Krishna S

    2017-12-01

    To evaluate the feasibility of through-time spiral generalized autocalibrating partial parallel acquisition (GRAPPA) for low-latency accelerated real-time MRI of speech. Through-time spiral GRAPPA (spiral GRAPPA), a fast linear reconstruction method, is applied to spiral (k-t) data acquired from an eight-channel custom upper-airway coil. Fully sampled data were retrospectively down-sampled to evaluate spiral GRAPPA at undersampling factors R = 2 to 6. Pseudo-golden-angle spiral acquisitions were used for prospective studies. Three subjects were imaged while performing a range of speech tasks that involved rapid articulator movements, including fluent speech and beat-boxing. Spiral GRAPPA was compared with view sharing, and a parallel imaging and compressed sensing (PI-CS) method. Spiral GRAPPA captured spatiotemporal dynamics of vocal tract articulators at undersampling factors ≤4. Spiral GRAPPA at 18 ms/frame and 2.4 mm²/pixel outperformed view sharing in depicting rapidly moving articulators. Spiral GRAPPA and PI-CS provided equivalent temporal fidelity. Reconstruction latency per frame was 14 ms for view sharing and 116 ms for spiral GRAPPA, using a single processor. Spiral GRAPPA kept up with the MRI data rate of 18 ms/frame with eight processors. PI-CS required 17 minutes to reconstruct 5 seconds of dynamic data. Spiral GRAPPA enabled 4-fold accelerated real-time MRI of speech with low reconstruction latency. This approach is applicable to a wide range of speech RT-MRI experiments that benefit from real-time feedback while visualizing rapid articulator movement. Magn Reson Med 78:2275-2282, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  22. Study of accent-based music speech protocol development for improving voice problems in stroke patients with mixed dysarthria.

    PubMed

    Kim, Soo Ji; Jo, Uiri

    2013-01-01

    Based on the anatomical and functional commonality between singing and speech, various types of musical elements have been employed in music therapy research for speech rehabilitation. The aim of this study was to develop an accent-based music speech protocol to address voice problems in stroke patients with mixed dysarthria. Subjects were six stroke patients with mixed dysarthria, each of whom received individual music therapy sessions. Each session lasted 30 minutes, and 12 sessions, including pre- and post-tests, were administered per patient. To examine the protocol's efficacy, measures of maximum phonation time (MPT), fundamental frequency (F0), average intensity (dB), jitter, shimmer, noise-to-harmonics ratio (NHR), and diadochokinesis (DDK) were compared between pre- and post-test and analyzed with a paired-sample t-test. The results showed that the measures of MPT, F0, dB, and sequential motion rates (SMR) were significantly increased after administering the protocol. There were also statistically significant differences in the measures of shimmer and alternating motion rates (AMR) of the syllable /kʌ/ between pre- and post-test. The results indicated that the accent-based music speech protocol may improve speech motor coordination, including respiration, phonation, articulation, resonance, and prosody, in patients with dysarthria. This suggests the possibility of utilizing the music speech protocol to maximize immediate treatment effects in the course of long-term treatment for patients with dysarthria.

  23. Efficacy of continuous positive airway pressure for treatment of hypernasality.

    PubMed

    Kuehn, David P; Imrey, Peter B; Tomes, Lucrezia; Jones, David L; O'Gara, Mary M; Seaver, Earl J; Smith, Bonnie E; Van Demark, D R; Wachtel, Jayne M

    2002-05-01

    To determine whether speech hypernasality in subjects born with cleft palate can be reduced by graded velopharyngeal resistance training against continuous positive airway pressure (CPAP). Pretreatment versus immediate posttreatment comparison study. Eight university and hospital speech clinics. Forty-three subjects born with cleft palate, aged 3 years 10 months to 23 years 8 months, diagnosed with speech hypernasality. Eight weeks of in-home speech exercise sessions, 6 days per week, increasing from 10 to 24 minutes, speaking against transnasal CPAP increasing from 4 to 8.5 cm H2O. Main outcome measures: pretreatment to immediate posttherapy change in perceptual nasality score, based on blinded comparisons of subjects' speech samples to standard reference samples by six expert clinician-investigators. Participating clinical centers treated from two to nine eligible subjects, and results differed significantly across centers (interaction p = .004). Overall, there was a statistically significant reduction in mean nasality score after 8 weeks of CPAP therapy, whether weighted equally across patients (mean reduction = 0.20 units on a scale of 1.0 to 7.0, p = .016) or across clinical centers (mean = 0.19, p = .046). This change was about one-sixth the maximum possible reduction from pretreatment. Nine patients showed reductions of at least half the maximum possible, but hypernasality of eight patients increased at least 30% above the pretreatment level. Most improvement was seen during the second month, when therapy was more intense (p = .045 for nonlinearity). No interactions with age or sex were detected. Patients receiving 8 weeks of velopharyngeal CPAP resistance training showed a net overall reduction in speech hypernasality, although response was quite variable across patients and clinical centers. The net reduction in hypernasality is not readily explainable by random variability, subject maturation, placebo effect, or regression to the mean.
CPAP appears capable of substantially reducing speech hypernasality for some subjects with cleft palate.
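The two weighting schemes reported above (mean reduction weighted equally across patients versus equally across clinical centers) diverge whenever center sizes differ. A minimal sketch of the distinction, using invented per-patient reductions (the abstract reports only the two overall means):

```python
from statistics import mean

# Hypothetical pre-to-post nasality reductions, grouped by clinical center.
# The study reports only the two overall means; these per-patient values
# are invented to illustrate the two weighting schemes.
reductions_by_center = {
    "center_A": [0.5, 0.1],
    "center_B": [0.3, 0.2, 0.0, -0.1],
    "center_C": [0.4, 0.1, 0.2],
}

# Weighted equally across patients: pool every patient into one mean,
# so large centers dominate the result.
all_patients = [r for vals in reductions_by_center.values() for r in vals]
mean_by_patient = mean(all_patients)

# Weighted equally across centers: average within each center first, then
# average the center means, so small centers count as much as large ones.
center_means = [mean(vals) for vals in reductions_by_center.values()]
mean_by_center = mean(center_means)

print(round(mean_by_patient, 3), round(mean_by_center, 3))  # prints 0.189 0.211
```

With unequal center sizes the two means differ, which is why the study quotes both (0.20 and 0.19) alongside the significant center interaction.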

  4. Depressed mood and speech in Chilean mothers of 5½-year-old children

    PubMed Central

    Clark, Katy M.; Su, Jing; Kaciroti, Niko; Castillo, Marcela; Millan, Rebeca; Rule, Heather; Lozoff, Betsy

    2010-01-01

    Previous research on maternal speech and depression has focused almost exclusively on how depressed mothers talk to their infants and toddlers in the U.S. and U.K., two English-speaking countries. This study considered how depressed Spanish-speaking mothers from a Latin American country talk about their preschool-age children. Five-minute speech samples were provided by 178 Chilean mothers who were asked to talk about their 5½-year-old children to a project psychologist. Maternal depressive symptomatology was measured by the Spanish-language version of the Center for Epidemiologic Studies Depression Scale (CES-D). In multivariate analysis of covariance (MANCOVA), higher maternal depressed mood showed statistically significant associations with the following maternal speech characteristics: more criticisms, less laughter, fewer medium pauses, less positive satisfaction with the child’s behavior or characteristics, a rating of a negative overall relationship with the child, and more crying (suggestive trend). A structural equation model confirmed these findings and found an indirect effect between laughter and criticisms: mothers with higher depressed mood who laughed less criticized their children less. The findings illustrate that depressed mood adversely affects how a group of Chilean mothers speak about their children. PMID:21785514

  5. A Randomized Controlled Trial on The Beneficial Effects of Training Letter-Speech Sound Integration on Reading Fluency in Children with Dyslexia

    PubMed Central

    Fraga González, Gorka; Žarić, Gojko; Tijms, Jurgen; Bonte, Milene; van der Molen, Maurits W.

    2015-01-01

    A recent account of dyslexia assumes that a failure to develop automated letter-speech sound integration might be responsible for the observed lack of reading fluency. This study uses a pretest-training-posttest design to evaluate the effects of a training program based on letter-speech sound associations, with a special focus on gains in reading fluency. A sample of 44 children with dyslexia and 23 typical readers, aged 8 to 9, was recruited. Children with dyslexia were randomly allocated to either the training program group (n = 23) or a waiting-list control group (n = 21). The training intensively focused on letter-speech sound mapping and consisted of 34 individual sessions of 45 minutes over a five-month period. The children with dyslexia showed substantial reading gains for the main word reading and spelling measures after training, improving at a faster rate than typical readers and waiting-list controls. The results are interpreted within the conceptual framework assuming a multisensory integration deficit as the most proximal cause of dysfluent reading in dyslexia. Trial Registration: ISRCTN register ISRCTN12783279 PMID:26629707

  6. Families with Limited English Proficiency Receive Less Information and Support in Interpreted ICU Family Conferences

    PubMed Central

    Thornton, J. Daryl; Pham, Kiemanh; Engelberg, Ruth A.; Jackson, J. Carey; Curtis, J. Randall

    2009-01-01

    Objective Family communication is important for delivering high-quality end-of-life care in the ICU, yet little research has been conducted to describe and evaluate clinician-family communication with non-English-speaking family members. We assessed clinician-family communication during ICU family conferences involving interpreters and compared it to conferences without interpreters. Design Cross-sectional descriptive study. Setting Family conferences in the ICUs of four hospitals during which discussions about withdrawing life support or delivery of bad news were likely to occur. Participants 70 family members from 10 interpreted conferences and 214 family members from 51 non-interpreted conferences. Nine different physicians led interpreted conferences and 36 different physicians led non-interpreted conferences. Measurements All 61 conferences were audiotaped. We measured the duration of time that families, interpreters, and clinicians spoke during the conference, and we tallied the number of supportive statements issued by clinicians in each conference. Results The mean conference time was 26.3 ± 13 minutes for interpreted and 32 ± 15 minutes for non-interpreted conferences (p=0.25). The duration of clinician speech was 10.9 ± 5.8 minutes for interpreted conferences and 19.6 ± 10.2 minutes for non-interpreted conferences (p=0.001). The amount of clinician speech as a proportion of total speech time was 42.7% in interpreted conferences and 60.5% in non-interpreted conferences (p=0.004). Interpreter speech accounted for 7.9 ± 4.4 minutes and 32% of speech in interpreted conferences. Interpreted conferences contained fewer clinician statements providing support for families, including valuing families' input (p=0.01), easing emotional burdens (p<0.01), and active listening (p<0.01).
Conclusions This study suggests that families with non-English speaking members may be at increased risk of receiving less information about their loved one’s critical illness as well as less emotional support from their clinicians. Future studies should identify ways to improve communication with, and support for, non-English speaking families of critically ill patients. PMID:19050633

  7. Immediate effects of the semi-occluded vocal tract exercise with LaxVox® tube in singers.

    PubMed

    Fadel, Congeta Bruniere Xavier; Dassie-Leite, Ana Paula; Santos, Rosane Sampaio; Santos, Celso Gonçalves Dos; Dias, Cláudio Antônio Sorondo; Sartori, Denise Jussara

    The purpose of this study was to analyze the immediate effects of the semi-occluded vocal tract exercise (SOVTE) using the LaxVox® tube in singers. Participants were 23 singers, classical singing students, aged 18 to 47 years (mean age = 27.2 years). First, data were collected through the application of a demographic questionnaire and the recording of a sustained emission (vowel /ε/), counting from 1 to 10, and a section of music from the participants' current repertoire. After that, the participants were instructed in and performed the SOVTE using the LaxVox® tube for three minutes. Finally, the same vocal samples were collected immediately after SOVTE performance, and the singers responded to a questionnaire on their perception of vocal changes after the exercise. The vocal samples were analyzed by referees (speech-language pathologists and singing teachers) and by means of acoustic analysis. Most of the singers reported improved voice post-exercise in both tasks, speech and singing. Regarding the perceptual assessment (sustained vowel, speech, and singing), the referees found no difference between pre- and post-exercise emissions. The acoustic analysis of the sustained vowel showed increased fundamental frequency (F0) and a reduced Glottal to Noise Excitation (GNE) ratio post-exercise. The semi-occluded vocal tract exercise with the LaxVox® tube promotes immediate positive effects on the self-assessment and acoustic analysis of voice in professional singers without vocal complaints. No immediate significant changes were observed with respect to auditory-perceptual evaluation of speech and singing.

  8. Media Criticism Group Speech

    ERIC Educational Resources Information Center

    Ramsey, E. Michele

    2004-01-01

    Objective: To integrate speaking practice with rhetorical theory. Type of speech: Persuasive. Point value: 100 points (i.e., 30 points based on peer evaluations, 30 points based on individual performance, 40 points based on the group presentation), which is 25% of course grade. Requirements: (a) References: 7-10; (b) Length: 20-30 minutes; (c)…

  9. Phonetic measures of reduced tongue movement correlate with negative symptom severity in hospitalized patients with first-episode schizophrenia-spectrum disorders.

    PubMed

    Covington, Michael A; Lunden, S L Anya; Cristofaro, Sarah L; Wan, Claire Ramsay; Bailey, C Thomas; Broussard, Beth; Fogarty, Robert; Johnson, Stephanie; Zhang, Shayi; Compton, Michael T

    2012-12-01

    Aprosody, or flattened speech intonation, is a recognized negative symptom of schizophrenia, though it has rarely been studied from a linguistic/phonological perspective. To bring the latest advances in computational linguistics to the phenomenology of schizophrenia and related psychotic disorders, a clinical first-episode psychosis research team joined with a phonetics/computational linguistics team to conduct a preliminary, proof-of-concept study. Video recordings from a semi-structured clinical research interview were available from 47 first-episode psychosis patients. Audio tracks of the video recordings were extracted, and after review of quality, 25 recordings were available for phonetic analysis. These files were de-noised and a trained phonologist extracted a 1-minute sample of each patient's speech. WaveSurfer 1.8.5 was used to create, from each speech sample, a file of formant values (F0, F1, F2, where F0 is the fundamental frequency and F1 and F2 are resonance bands indicating the moment-by-moment shape of the oral cavity). Variability in these phonetic indices was correlated with severity of Positive and Negative Syndrome Scale negative symptom scores using Pearson correlations. A measure of variability of tongue front-to-back position (the standard deviation of F2) was statistically significantly correlated with the severity of negative symptoms (r = -0.446, p = 0.03). This study demonstrates a statistically significant and meaningful correlation between negative symptom severity and phonetically measured reductions in tongue movements during speech in a sample of first-episode patients just initiating treatment. Further studies of negative symptoms, applying computational linguistics methods, are warranted. Copyright © 2012 Elsevier B.V. All rights reserved.
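The core analysis (per-speaker variability of the second formant correlated with symptom severity) can be sketched as follows. The per-speaker values below are invented for illustration; only the design of the analysis, and the expected negative direction of the effect, mirrors the study:

```python
import numpy as np

# Hypothetical per-speaker data: standard deviation of F2 (Hz) measured
# over a 1-minute speech sample, and PANSS negative symptom totals.
# The numbers are invented; only the analysis design follows the study.
f2_sd = np.array([310.0, 280.0, 250.0, 340.0, 220.0, 200.0, 330.0, 260.0])
negative_symptoms = np.array([12, 15, 19, 10, 22, 25, 11, 17])

# Pearson correlation: less tongue front-to-back movement (a smaller SD
# of F2) is expected to accompany more severe negative symptoms (r < 0).
r = np.corrcoef(f2_sd, negative_symptoms)[0, 1]
print(f"r = {r:.3f}")
```

With these invented values the correlation comes out strongly negative; the study's reported effect (r = -0.446) is more modest, as real clinical data are noisier.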

  10. Phonetic Measures of Reduced Tongue Movement Correlate with Negative Symptom Severity in Hospitalized Patients with First-Episode Schizophrenia-Spectrum Disorders

    PubMed Central

    Covington, Michael A.; Lunden, S.L. Anya; Cristofaro, Sarah L.; Wan, Claire Ramsay; Bailey, C. Thomas; Broussard, Beth; Fogarty, Robert; Johnson, Stephanie; Zhang, Shayi; Compton, Michael T.

    2012-01-01

    Background Aprosody, or flattened speech intonation, is a recognized negative symptom of schizophrenia, though it has rarely been studied from a linguistic/phonological perspective. To bring the latest advances in computational linguistics to the phenomenology of schizophrenia and related psychotic disorders, a clinical first-episode psychosis research team joined with a phonetics/computational linguistics team to conduct a preliminary, proof-of-concept study. Methods Video recordings from a semi-structured clinical research interview were available from 47 first-episode psychosis patients. Audio tracks of the video recordings were extracted, and after review of quality, 25 recordings were available for phonetic analysis. These files were de-noised and a trained phonologist extracted a 1-minute sample of each patient’s speech. WaveSurfer 1.8.5 was used to create, from each speech sample, a file of formant values (F0, F1, F2, where F0 is the fundamental frequency and F1 and F2 are resonance bands indicating the moment-by-moment shape of the oral cavity). Variability in these phonetic indices was correlated with severity of Positive and Negative Syndrome Scale negative symptom scores using Pearson correlations. Results A measure of variability of tongue front-to-back position—the standard deviation of F2—was statistically significantly correlated with the severity of negative symptoms (r=−0.446, p=0.03). Conclusion This study demonstrates a statistically significant and meaningful correlation between negative symptom severity and phonetically measured reductions in tongue movements during speech in a sample of first-episode patients just initiating treatment. Further studies of negative symptoms, applying computational linguistics methods, are warranted. PMID:23102940

  11. A preliminary comparison of speech recognition functionality in dental practice management systems.

    PubMed

    Irwin, Jeannie Y; Schleyer, Titus

    2008-11-06

    In this study, we examined speech recognition functionality in four leading dental practice management systems. Twenty dental students used voice input to chart a simulated patient with 18 findings in each system. Results show that it can take over a minute to chart one finding and that users frequently have to repeat commands. Limited functionality, poor usability, and a high error rate appear to retard adoption of speech recognition in dentistry.

  12. Once More, With Feeling: Reagan and "The Speech" in 1980.

    ERIC Educational Resources Information Center

    Henry, David

    Ronald Reagan's rise from political neophyte to Republican candidate for governor of California in 1966 was characterized by a public relations strategy, which was bolstered by "The Speech," a 30-minute anti-big government, defense-of-freedom message. He presented this message appropriately to each audience to identify himself with…

  13. Business Communication Students Learn to Hear a Bad Speech Habit

    ERIC Educational Resources Information Center

    Bell, Reginald L.; Liang-Bell, Lei Paula; Deselle, Bettye

    2006-01-01

    Students were trained to perceive filled pauses (FP) as a bad speech habit. In a series of classroom sensitivity training activities, followed by students being rewarded to observe twenty minutes of live television from the public media, no differences between male and female Business Communication students were revealed. The practice of teaching…

  14. Teaching mindfulness meditation to adults with severe speech and physical impairments: An exploratory study.

    PubMed

    Goodrich, Elena; Wahbeh, Helané; Mooney, Aimee; Miller, Meghan; Oken, Barry S

    2015-01-01

    People with severe speech and physical impairments may benefit from mindfulness meditation training because it has the potential to enhance their ability to cope with anxiety, depression and pain and improve their attentional capacity to use brain-computer interface systems. Seven adults with severe speech and physical impairments (SSPI) - defined as speech that is understood less than 25% of the time and/or severely reduced hand function for writing/typing - participated in this exploratory, uncontrolled intervention study. The objectives were to describe the development and implementation of a six-week mindfulness meditation intervention and to identify feasible outcome measures in this population. The weekly intervention was delivered by an instructor in the participant's home, and participants were encouraged to practise daily using audio recordings. The objective adherence to home practice was 10.2 minutes per day. Exploratory outcome measures were an n-back working memory task, the Attention Process Training-II Attention Questionnaire, the Pittsburgh Sleep Quality Index, the Perceived Stress Scale, the Positive and Negative Affect Schedule, and a qualitative feedback survey. There were no statistically significant pre-post results in this small sample, yet administration of the measures proved feasible, and qualitative reports were overall positive. Obstacles to teaching mindfulness meditation to persons with SSPI are reported, and solutions are proposed.

  15. The hidden effect of hearing acuity on speech recall, and compensatory effects of self-paced listening

    PubMed Central

    Piquado, Tepring; Benichov, Jonathan I.; Brownell, Hiram; Wingfield, Arthur

    2013-01-01

    Objective The purpose of this research was to determine whether negative effects of hearing loss on recall accuracy for spoken narratives can be mitigated by allowing listeners to control the rate of speech input. Design Paragraph-length narratives were presented for recall under two listening conditions in a within-participants design: presentation without interruption (continuous) at an average speech-rate of 150 words per minute; and presentation interrupted at periodic intervals at which participants were allowed to pause before initiating the next segment (self-paced). Study sample Participants were 24 adults ranging from 21 to 33 years of age. Half had age-normal hearing acuity and half had mild-to-moderate hearing loss. The two groups were comparable for age, years of formal education, and vocabulary. Results When narrative passages were presented continuously, without interruption, participants with hearing loss recalled significantly fewer story elements, both main ideas and narrative details, than those with age-normal hearing. The recall difference was eliminated when the two groups were allowed to self-pace the speech input. Conclusion Results support the hypothesis that the listening effort associated with reduced hearing acuity can slow processing operations and increase demands on working memory, with consequent negative effects on accuracy of narrative recall. PMID:22731919

  16. Interrelations of maternal expressed emotion, maltreatment, and separation/divorce and links to family conflict and children's externalizing behavior.

    PubMed

    Narayan, Angela; Cicchetti, Dante; Rogosch, Fred A; Toth, Sheree L

    2015-02-01

    Research has documented that maternal expressed emotion-criticism (EE-Crit) from the Five-Minute Speech Sample (FMSS) predicts family conflict and children's externalizing behavior in clinical and community samples. However, studies have not examined EE-Crit in maltreating or separated/divorced families, or whether these family risks exacerbate the links between EE-Crit and family conflict and externalizing behavior. The current study examined the associations between maternal EE-Crit, maltreatment, and separation/divorce, and whether maltreatment and separation/divorce moderated the associations between EE-Crit and children's externalizing problems and between EE-Crit and family conflict. Participants included 123 children (M = 8.01 years, SD = 1.58; 64.2% males) from maltreating (n = 83) or low-income comparison (n = 40) families, and 123 mothers (n = 48 separated/divorced). Mothers completed the FMSS for EE-Crit and the Family Environment Scale for family conflict. Maltreatment was coded with the Maltreatment Classification System using information from official Child Protection Services (CPS) reports from the Department of Human Services (DHS). Trained summer camp counselors rated children's externalizing behavior. Maltreatment was directly associated with higher externalizing problems, and separation/divorce, but not maltreatment, moderated the association between EE-Crit and externalizing behavior. Analyses pertaining to family conflict were not significant. Findings indicate that maltreatment is a direct risk factor for children's externalizing behavior and that separation/divorce is a vulnerability factor for externalizing behavior in family contexts with high maternal EE-Crit. Intervention, prevention, and policy efforts to promote resilience in high-risk families may be effective in targeting maltreating and critical parents, especially those with co-occurring separation/divorce.
Key Words: expressed emotion, EE-Crit, Five-Minute Speech Sample, maltreatment, divorce, externalizing behavior.

  17. Expressed emotion displayed by the mothers of inhibited and uninhibited preschool-aged children.

    PubMed

    Raishevich, Natoshia; Kennedy, Susan J; Rapee, Ronald M

    2010-01-01

    In the current study, the Five Minute Speech Sample was used to assess the association between parent attitudes and children's behavioral inhibition in mothers of 120 behaviorally inhibited (BI) and 37 behaviorally uninhibited preschool-aged children. Mothers of BI children demonstrated significantly higher levels of emotional over-involvement (EOI) and self-sacrificing/overprotective behavior (SS/OP). However, there was no significant relationship between inhibition status and maternal criticism. Multiple regression also indicated that child temperament, but not maternal anxiety, was a significant predictor of both EOI and SS/OP.

  18. A Test of Attention Control Theory in Public Speaking: Cognitive Load Influences the Relationship between State Anxiety and Verbal Production

    ERIC Educational Resources Information Center

    King, Paul E.; Finn, Amber N.

    2017-01-01

    This study investigated the relationship between public-speaking state anxiety (PSA) and verbal communication performance when delivering a speech. In Study 1, participants delivered an extemporaneous five-minute classroom speech behind a lectern, and in Study 2, to increase cognitive load, participants delivered an extemporaneous five-minute…

  19. The Role of Expressed Emotion in Relationships Between Psychiatric Staff and People With a Diagnosis of Psychosis: A Review of the Literature

    PubMed Central

    Berry, Katherine; Barrowclough, Christine; Haddock, Gillian

    2011-01-01

    The concept of expressed emotion (EE) has been extended to the study of staff-patient relationships in schizophrenia. A comprehensive review of the literature identified a total of 27 studies investigating EE in this group published between 1990 and 2008. The article aims to assess whether the concept of EE is a useful and valid measure of the quality of professional caregiver and patient relationships, given that staff may be less emotionally invested in relationships than relatives. In doing so, it summarizes methods of measuring EE, the nature of professional EE compared with familial EE, associations between high EE and patient outcomes, associations between EE and both patient and staff variables, and intervention studies to reduce staff high EE. The available evidence suggests that the Camberwell Family Interview is an acceptable measure of EE in staff-patient relationships, although the Five Minute Speech Sample may provide a less resource intensive alternative. However, in contrast to familial research, neither the EE status on the Camberwell Family Interview nor the Five Minute Speech Sample show a robust relationship with outcomes. The presence or absence of a positive staff-patient relationship may have more predictive validity in this group. There is relatively consistent evidence of associations between staff criticism and poorer patient social functioning. Consistent with findings in familial research, staff attributions may play a key role in driving critical responses, and it may be possible to reduce staff high EE by modifying negative appraisals. PMID:20056685

  20. THE COMPREHENSION OF RAPID SPEECH BY THE BLIND, PART III.

    ERIC Educational Resources Information Center

    FOULKE, EMERSON

    A REVIEW OF THE RESEARCH ON THE COMPREHENSION OF RAPID SPEECH BY THE BLIND IDENTIFIES FIVE METHODS OF SPEECH COMPRESSION--SPEECH CHANGING, ELECTROMECHANICAL SAMPLING, COMPUTER SAMPLING, SPEECH SYNTHESIS, AND FREQUENCY DIVIDING WITH THE HARMONIC COMPRESSOR. THE SPEECH CHANGING AND ELECTROMECHANICAL SAMPLING METHODS AND THE NECESSARY APPARATUS HAVE…

  1. Dosage dependent effect of high-resistance straw exercise in dysphonic and non-dysphonic women.

    PubMed

    Paes, Sabrina Mazzer; Behlau, Mara

    2017-03-09

    To study the dosage-dependent effect of high-resistance straw exercise in women with behavioral dysphonia and in vocally healthy women. Participants were 25 dysphonic women (DG), with an average age of 35 years (SD = 10.5), and 30 vocally healthy women (VHG), with an average age of 31.6 years (SD = 10.3). The participants produced a continuous sound into a thin high-resistance straw for seven minutes, being interrupted after the first, third, fifth, and seventh minutes. At each interval, speech samples were recorded (sustained vowel and counting up to 20) and subsequently analyzed acoustically. Each participant reported the effort necessary to perform the exercise and to speak, indicating their ratings on visual analog scales (VAS). With regard to the DG, the exercise caused positive vocal changes, especially between the third and fifth minutes: less phonatory effort, increased maximum phonation time (MPT), and reduced F0 variability; these voice parameters deteriorated after five minutes. This fact, associated with the increased effort to perform the exercise, indicates a possible overload of the phonatory system. As to the VHG, MPT improved after one minute of exercise, while the other parameters did not change over time, probably because the voices were not deviant; seven minutes did not seem to impose an overload in this population. Positive vocal changes were observed with the high-resistance straw exercise; however, there are dosage restrictions, especially for dysphonic women.

  2. Construct-related validity of the TOCS measures: comparison of intelligibility and speaking rate scores in children with and without speech disorders.

    PubMed

    Hodge, Megan M; Gotzke, Carrie L

    2014-01-01

    This study evaluated construct-related validity of the Test of Children's Speech (TOCS). Intelligibility scores obtained using open-set word identification tasks (orthographic transcription) for the TOCS word and sentence tests and rate scores for the TOCS sentence test (words per minute or WPM and intelligible words per minute or IWPM) were compared for a group of 15 adults (18-30 years of age) with normal speech production and three groups of children: 48 3-6 year-olds with typical speech development and neurological histories (TDS), 48 3-6 year-olds with a speech sound disorder of unknown origin and no identified neurological impairment (SSD-UNK), and 22 3-10 year-olds with dysarthria and cerebral palsy (DYS). As expected, mean intelligibility scores and rates increased with age in the TDS group. However, word test intelligibility, WPM and IWPM scores for the 6 year-olds in the TDS group were significantly lower than those for the adults. The DYS group had significantly lower word and sentence test intelligibility and WPM and IWPM scores than the TDS and SSD-UNK groups. Compared to the TDS group, the SSD-UNK group also had significantly lower intelligibility scores for the word and sentence tests, and significantly lower IWPM, but not WPM scores on the sentence test. The results support the construct-related validity of TOCS as a tool for obtaining intelligibility and rate scores that are sensitive to group differences in 3-6 year-old children, with and without speech sound disorders, and to 3+ year-old children with speech disorders, with and without dysarthria. Readers will describe the word and sentence intelligibility and speaking rate performance of children with typically developing speech at age levels of 3, 4, 5 and 6 years, as measured by the Test of Children's Speech, and how these compare with adult speakers and two groups of children with speech disorders. 
They will also recognize what measures on this test differentiate children with speech sound disorders of unknown origin from children with cerebral palsy and dysarthria. Copyright © 2014 Elsevier Inc. All rights reserved.
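The rate and intelligibility measures named above reduce to simple ratios over speaking time. A minimal sketch of the WPM and IWPM definitions, with invented tallies for a single hypothetical recording:

```python
# Hypothetical tallies for one child's sentence-test recording; the counts
# are invented, and only the WPM/IWPM definitions follow the measures
# described in the abstract.
total_words = 120          # words attempted across the sentence set
intelligible_words = 90    # words listeners transcribed correctly
speaking_time_sec = 75.0   # total speaking time in seconds

minutes = speaking_time_sec / 60.0
wpm = total_words / minutes                          # words per minute
iwpm = intelligible_words / minutes                  # intelligible words per minute
intelligibility = intelligible_words / total_words   # proportion understood

print(wpm, iwpm, intelligibility)  # prints 96.0 72.0 0.75
```

IWPM combines the two constructs: a child can have a typical WPM yet a low IWPM if many words are not transcribed correctly, which is why the sentence test reports both.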

  3. Impact of auditory training for perceptual assessment of voice executed by undergraduate students in Speech-Language Pathology.

    PubMed

    Silva, Regiane Serafim Abreu; Simões-Zenari, Marcia; Nemr, Nair Kátia

    2012-01-01

    To analyze the impact of auditory training on the auditory-perceptual assessment carried out by Speech-Language Pathology undergraduate students. During two semesters, 17 undergraduate students enrolled in theoretical subjects on phonation (Phonation/Phonation Disorders) analyzed samples of altered and unaltered voices (selected for this purpose) using the GRBAS scale. All subjects received auditory training during nine 15-minute meetings. In each meeting, a different parameter was presented using the voice sample set, with emphasis on the aspect trained in that session. Assessment of the samples using the scale was carried out before and after training, and on four other occasions throughout the meetings. The students' assessments were compared to an assessment carried out by three judges, speech-language pathologists with expertise in voice. To verify training effectiveness, Friedman's test and the Kappa index were used. The rate of correct answers before training was considered between regular and good. The number of correct answers was maintained throughout the assessments for most of the scale parameters. After training, the students showed improvement in the analysis of asthenia, a parameter that was emphasized during training after the students reported difficulty analyzing it. There was a decrease in the number of correct answers for the roughness parameter after it was approached as two subtypes, hoarseness and harshness, and examined in association with different diagnoses and acoustic parameters. Auditory training enhances students' initial abilities to perform the evaluation, in addition to guiding adjustments to the dynamics of the course.

  4. Local television news coverage of President Clinton's introduction of the Health Security Act.

    PubMed

    Dorfman, L; Schauffler, H H; Wilkerson, J; Feinson, J

    1996-04-17

    To investigate how local television news reported on health system reform during the week President Clinton presented his health system reform bill. Retrospective content analysis of the 1342-page Health Security Act of 1993, the printed text of President Clinton's speech before Congress on September 22, 1993, and a sample of local television news stories on health system reform broadcast during the week of September 19 through 25, 1993. The study setting was the state of California. During the week, 316 television news stories on health system reform were aired during the 166 local news broadcasts sampled. Health system reform was the second most frequently reported topic, second only to stories on violent crime. News stories on health system reform averaged 1 minute 38 seconds in length, compared with 57 seconds for violent crime. Fifty-seven percent of the local news stories focused on interest group politics. Compared with the content of the Health Security Act, local news broadcasts devoted a significantly greater portion of their stories to financing, eligibility, and preventive services. Local news stories gave significantly less attention to cost-saving mechanisms, long-term care benefits, and changes in Medicare and Medicaid, and less than 2% of stories mentioned quality assurance mechanisms, malpractice reform, or new public health initiatives. Of the 316 televised news stories, 53 reported on the president's speech, covering many of the same topics emphasized in the speech (financing, organization and administration, and eligibility) and de-emphasizing many of the same topics (Medicare and Medicaid, quality assurance, and malpractice reform). Two percent of the president's speech covered partisan politics; 45% of the local news stories on the speech featured challenges from partisan politicians. Although health system reform was the focus of a large number of local television news stories during the week, in-depth explanation was scarce. In general, the news stories provided superficial coverage framed largely in terms of the risks and costs of reform to specific stakeholders.

  5. [Oral and written affective expression in children of low socioeconomic status].

    PubMed

    Larraguibel, M; Lolas Stepke, F

    1991-06-01

    Descriptive data on the affective expression of 58 children (33 girls and 25 boys) of low socioeconomic status (Graffar index), aged between 8 and 12, are presented. Intelligence was assessed by means of the Raven Progressive Matrices Test, with all subjects exhibiting an average level. The six forms of anxiety and the four forms of hostility defined by the Gottschalk method of verbal content analysis were evaluated. Hope scores, positive and negative, were also obtained from the same verbal samples. The oral sample consisted of speech produced spontaneously during 5 minutes in response to a standard instruction, and the written sample consisted of brief stories produced under standardized conditions during 15 minutes. The most frequently expressed form of anxiety was separation anxiety, while the most frequently expressed form of hostility was covert hostility directed outward. "Positive" hope was expressed more frequently than "negative" hope. Data are discussed in terms of their contribution to the establishment of norms for Spanish-speaking populations on the psychological constructs explored. It is concluded that the method of content analysis of verbal behavior may represent a useful tool for the study of child psychology in different contexts.

  6. A STUDY OF THE EFFECTS OF PRESENTING INFORMATIVE SPEECHES WITH AND WITHOUT THE USE OF VISUAL AIDS TO VOLUNTARY ADULT AUDIENCES.

    ERIC Educational Resources Information Center

    BODENHAMER, SCHELL H.

    To determine the comparative amount of learning that occurred and the audience reaction to meeting effectiveness, a 20-minute informative speech, "The Weather," was presented with visual aids to 23 and without visual aids to 23 informal, voluntary, adult audiences. The audiences were randomly divided, and controls were used to assure identical…

  7. Application of advanced speech technology in manned penetration bombers

    NASA Astrophysics Data System (ADS)

    North, R.; Lea, W.

    1982-03-01

    This report documents research on the potential use of speech technology in a manned penetration bomber aircraft (B-52G and H). The objectives of the project were to analyze the pilot/copilot crewstation tasks over a 3-hour-and-40-minute mission, determine the tasks that would benefit most from conversion to speech recognition/generation, determine the technological feasibility of each of the identified tasks, and prioritize the tasks on these criteria. Secondary objectives were to enunciate research strategies for applying speech technologies in airborne environments and to develop guidelines for briefing user commands on the potential of using speech technologies in the cockpit. The results of this study indicated that, for the B-52 crewmember, speech recognition would be most beneficial for retrieving chart and procedural data contained in the flight manuals. The feasibility analysis indicated that the checklist and procedural retrieval tasks would be highly feasible for a speech recognition system.

  8. Translation of incremental talk test responses to steady-state exercise training intensity.

    PubMed

    Lyon, Ellen; Menke, Miranda; Foster, Carl; Porcari, John P; Gibson, Mark; Bubbers, Terresa

    2014-01-01

    The Talk Test (TT) is a submaximal, incremental exercise test that has been shown to be useful in prescribing exercise training intensity. It is based on a subject's ability to speak comfortably during exercise. This study defined the reduction in absolute workload from an incremental exercise test using the TT that yields an appropriate absolute training intensity for cardiac rehabilitation patients. Patients in an outpatient rehabilitation program (N = 30) performed an incremental exercise test with the TT administered at every 2-minute stage. Patients rated their speech comfort after reciting a standardized paragraph. Any response other than an unequivocal "yes" marked the "equivocal" stage, while all preceding stages were "positive" stages. The last stage with unequivocally positive speech comfort was the Last Positive (LP) stage, and the two stages preceding it were LP-1 and LP-2. Subsequently, three 20-minute steady-state training bouts were performed in random order at the absolute workloads of the LP, LP-1, and LP-2 stages of the incremental test. Speech comfort, heart rate (HR), and rating of perceived exertion (RPE) were recorded every 5 minutes. The 20-minute exercise training bout was completed fully by LP (n = 19), LP-1 (n = 28), and LP-2 (n = 30). Heart rate, RPE, and speech comfort were similar through the LP-1 and LP-2 bouts, but the LP bout was markedly more difficult. Steady-state exercise training intensity was easily and appropriately prescribed at the intensity associated with the LP-1 and LP-2 stages of the TT. The LP stage may be too difficult for patients in a cardiac rehabilitation program.
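    The staging logic described above is simple enough to sketch in code (a minimal illustration, not the study's software; the stage workloads and responses below are hypothetical):

    ```python
    # Sketch of the Talk Test staging logic (illustrative only): each
    # incremental 2-minute stage records a workload and the patient's
    # speech-comfort response. The Last Positive (LP) stage is the last
    # stage with an unequivocal "yes"; training intensity is prescribed
    # at the workloads of the two preceding stages (LP-1 and LP-2).

    def talk_test_stages(stages):
        """stages: list of (workload, response) pairs in ascending order.
        Returns the LP, LP-1, and LP-2 workloads (None if unavailable)."""
        lp_index = None
        for i, (_, response) in enumerate(stages):
            if response == "yes":
                lp_index = i
            else:
                break  # the first non-"yes" stage is the equivocal stage
        if lp_index is None:
            return {"LP": None, "LP-1": None, "LP-2": None}

        def workload(i):
            return stages[i][0] if i >= 0 else None

        return {"LP": workload(lp_index),
                "LP-1": workload(lp_index - 1),
                "LP-2": workload(lp_index - 2)}

    result = talk_test_stages([(25, "yes"), (50, "yes"), (75, "yes"), (100, "unsure")])
    # result == {"LP": 75, "LP-1": 50, "LP-2": 25}
    ```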

  9. Influence of speech sample on perceptual rating of hypernasality.

    PubMed

    Medeiros, Maria Natália Leite de; Fukushiro, Ana Paula; Yamashita, Renata Paciello

    2016-07-07

    To investigate the influence of speech sample type (spontaneous conversation vs. sentence repetition) on intra- and inter-rater reliability of hypernasality ratings. One hundred and twenty audio-recorded speech samples (60 of spontaneous conversation and 60 of repeated sentences) from individuals with repaired cleft palate±lip, of both genders, aged between 6 and 52 years (mean=21±10), were selected and edited. Three experienced speech-language pathologists rated hypernasality according to their own criteria on a 4-point scale: 1=absence of hypernasality, 2=mild hypernasality, 3=moderate hypernasality, and 4=severe hypernasality, first in the spontaneous conversation samples and, 30 days later, in the sentence repetition samples. Intra- and inter-rater agreements were calculated for both speech sample types and compared statistically by the Z test at a significance level of 5%. Comparison of intra-rater agreement between the two speech sample types showed higher coefficients for sentence repetition than for spontaneous conversation. Comparison of inter-rater agreement showed no significant difference among the three raters for the two speech sample types. Sentence repetition improved intra-rater reliability of perceptual judgments of hypernasality. However, the speech sample type had no influence on reliability among different raters.
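    The abstract does not specify which agreement statistic was compared; as one hedged illustration, if the agreements were correlation-type coefficients from independent sets of 60 samples, the Z test could take the familiar Fisher r-to-z form (the coefficient values below are invented for illustration):

    ```python
    import math

    def fisher_z_compare(r1, n1, r2, n2):
        """Compare two independent correlation-type coefficients with a
        Z test via the Fisher r-to-z transform. |Z| > 1.96 indicates a
        significant difference at the 5% level."""
        z1 = 0.5 * math.log((1 + r1) / (1 - r1))
        z2 = 0.5 * math.log((1 + r2) / (1 - r2))
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
        return (z1 - z2) / se

    # e.g. hypothetical intra-rater agreement of 0.85 on sentence
    # repetition vs. 0.60 on spontaneous conversation, 60 samples each:
    z = fisher_z_compare(0.85, 60, 0.60, 60)
    # |z| exceeds 1.96, so the difference would be significant at 5%
    ```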

  10. Learning the language of time: Children's acquisition of duration words.

    PubMed

    Tillman, Katharine A; Barner, David

    2015-05-01

    Children use time words like minute and hour early in development, but take years to acquire their precise meanings. Here we investigate whether children assign meaning to these early usages, and if so, how. To do this, we test their interpretation of seven time words: second, minute, hour, day, week, month, and year. We find that preschoolers infer the orderings of time words (e.g., hour>minute), but have little to no knowledge of the absolute durations they encode. Knowledge of absolute duration is learned much later in development - many years after children first start using time words in speech - and in many children does not emerge until they have acquired formal definitions for the words. We conclude that associating words with the perception of duration does not come naturally to children, and that early intuitive meanings of time words are instead rooted in relative orderings, which children may infer from their use in speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. The Effects of a Brief Acceptance-based Behavior Therapy vs. Traditional Cognitive Behavior Therapy for Public Speaking Anxiety: Differential Effects on Performance and Verbal Working Memory

    NASA Astrophysics Data System (ADS)

    Glassman, Lisa Hayley

    Individuals with public speaking phobia experience fear and avoidance that can cause extreme distress, impaired speaking performance, and associated problems in psychosocial functioning. Most extant interventions for public speaking phobia focus on the reduction of anxiety and avoidance, but neglect performance. Additionally, very little is known about the relationship between verbal working memory and social performance under conditions of high anxiety. The current study compared the efficacy of two cognitive behavioral treatments, traditional Cognitive Behavioral Therapy (tCBT) and acceptance-based behavior therapy (ABBT), in enhancing public speaking performance via coping with anxiety. Verbal working memory performance, as measured by the backwards digit span (BDS), was assessed to explore the relationships between treatment type, anxiety, performance, and verbal working memory. We randomized 30 individuals with high public speaking anxiety to a 90-minute ABBT or tCBT intervention. As this pilot study was underpowered, results are examined in terms of effect sizes as well as statistical significance. Assessments took place at pre- and post-intervention and included self-rated and objective anxiety measurements, a behavioral assessment, ABBT and tCBT process measures, and backwards digit span verbal working memory tests. In order to examine verbal working memory during different levels of anxiety and performance pressure, we gave each participant a backwards digit span task three times during each assessment: once under calm conditions, then again while experiencing anticipatory anxiety, and finally under conditions of acute social performance anxiety in front of an audience. Participants were asked to give a video-recorded speech in front of the audience at pre- and post-intervention to examine speech performance. 
Results indicated that all participants experienced a very large and statistically significant decrease in anxiety (both during the speech and the BDS), as well as an improvement in speech performance, regardless of intervention received. While not statistically significant, participants who received the acceptance-based intervention exhibited considerably larger improvements in observer-rated speech performance at post-treatment than those who received tCBT (F(1,21) = 1.91, p = .18, ηp² = .08). There was no differential impact of treatment condition on subjective speech anxiety or working memory task performance. Potential mediators and moderators of treatment were also examined. Results provide support for a brief 90-minute intervention for public speaking anxiety, but a larger, adequately powered study is needed to fully understand the relationship between ABBT strategies and improvements in behavioral performance.

  12. Monitoring Progress in Vocal Development in Young Cochlear Implant Recipients: Relationships between Speech Samples and Scores from the Conditioned Assessment of Speech Production (CASP)

    PubMed Central

    Ertmer, David J.; Jung, Jongmin

    2012-01-01

    Background Evidence of auditory-guided speech development can be heard as the prelinguistic vocalizations of young cochlear implant recipients become increasingly complex, phonetically diverse, and speech-like. In research settings, these changes are most often documented by collecting and analyzing speech samples. Sampling, however, may be too time-consuming and impractical for widespread use in clinical settings. The Conditioned Assessment of Speech Production (CASP; Ertmer & Stoel-Gammon, 2008) is an easily administered and time-efficient alternative to speech sample analysis. The current investigation examined the concurrent validity of the CASP and data obtained from speech samples recorded at the same intervals. Methods Nineteen deaf children who received CIs before their third birthdays participated in the study. Speech samples and CASP scores were gathered at 6, 12, 18, and 24 months post-activation. Correlation analyses were conducted to assess the concurrent validity of CASP scores and data from samples. Results CASP scores showed strong concurrent validity with scores from speech samples gathered across all recording sessions (6–24 months). Conclusions The CASP was found to be a valid, reliable, and time-efficient tool for assessing progress in vocal development during young CI recipients' first 2 years of device experience. PMID:22628109

  13. Sex and family history of cardiovascular disease influence heart rate variability during stress among healthy adults.

    PubMed

    Emery, Charles F; Stoney, Catherine M; Thayer, Julian F; Williams, DeWayne; Bodine, Andrew

    2018-07-01

    Studies of sex differences in heart rate variability (HRV) typically have not accounted for the influence of family history (FH) of cardiovascular disease (CVD). This study evaluated sex differences in HRV response to speech stress among men and women (age range 30-49 years) with and without a documented FH of CVD. Participants were 77 adults (mean age = 39.8 ± 6.2 years; range: 30-49 years; 52% female) with positive FH (FH+, n = 32) and negative FH (FH-, n = 45) of CVD, verified with relatives of participants. Cardiac activity for all participants was recorded via electrocardiogram during a standardized speech stress task with three phases: 5-minute rest, 5-minute speech, and 5-minute recovery. Outcomes included time domain and frequency domain indicators of HRV and heart rate (HR) at rest and during stress. Data were analyzed with repeated measures analysis of variance, with sex and FH as between-subjects variables and time/phase as a within-subjects variable. Women exhibited higher HR than did men and greater HR reactivity in response to the speech stress. However, women also exhibited greater HRV in both the time and frequency domains. FH+ women generally exhibited elevated HRV, despite the elevated risk of CVD associated with FH+. Although women participants exhibited higher HR at rest and during stress, women (both FH+ and FH-) also exhibited elevated HRV reactivity, reflecting greater autonomic control. Thus, enhanced autonomic function observed in prior studies of HRV among women is also evident among FH+ women during a standardized stress task. Copyright © 2018 Elsevier Inc. All rights reserved.
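    The abstract names time- and frequency-domain HRV indicators without defining them; as a minimal sketch, two standard time-domain indices (SDNN and RMSSD) and mean HR can be computed from successive RR intervals like so (the interval values are illustrative, not study data):

    ```python
    import math

    def hrv_time_domain(rr_intervals_ms):
        """Standard time-domain HRV indicators from successive RR
        intervals in milliseconds: mean HR (beats/min), SDNN (overall
        variability, ms), and RMSSD (beat-to-beat variability, ms,
        reflecting vagal control)."""
        n = len(rr_intervals_ms)
        mean_rr = sum(rr_intervals_ms) / n
        hr = 60_000 / mean_rr  # 60,000 ms per minute
        sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals_ms) / (n - 1))
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
        return hr, sdnn, rmssd

    # Five hypothetical RR intervals (mean 805 ms ≈ 74.5 beats/min):
    hr, sdnn, rmssd = hrv_time_domain([800, 820, 790, 810, 805])
    ```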

  14. Obstructive sleep apnea severity estimation: Fusion of speech-based systems.

    PubMed

    Ben Or, D; Dafna, E; Tarasiuk, A; Zigel, Y

    2016-08-01

    Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder. Previous studies associated OSA with anatomical abnormalities of the upper respiratory tract that may be reflected in the acoustic characteristics of speech. We tested the hypothesis that the speech signal carries essential information that can assist in early assessment of OSA severity by estimating apnea-hypopnea index (AHI). 198 men referred to routine polysomnography (PSG) were recorded shortly prior to sleep onset while reading a one-minute speech protocol. The different parts of the speech recordings, i.e., sustained vowels, short-time frames of fluent speech, and the speech recording as a whole, underwent separate analyses, using sustained vowels features, short-term features, and long-term features, respectively. Applying support vector regression and regression trees, these features were used in order to estimate AHI. The fusion of the outputs of the three subsystems resulted in a diagnostic agreement of 67.3% between the speech-estimated AHI and the PSG-determined AHI, and an absolute error rate of 10.8 events/hr. Speech signal analysis may assist in the estimation of AHI, thus allowing the development of a noninvasive tool for OSA screening.
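    The fusion rule itself is not described in the abstract; a minimal sketch, assuming a simple average of the three subsystems' AHI estimates and the conventional AHI severity cutoffs (5, 15, and 30 events/hr), might look like:

    ```python
    # Sketch of output fusion for speech-based AHI estimation
    # (illustrative; the paper's actual fusion rule is not given in the
    # abstract). Each subsystem (sustained vowels, short-term frames,
    # long-term features) yields its own AHI estimate; here the fusion
    # averages them, and "diagnostic agreement" means the fused estimate
    # falls in the same severity category as the PSG-determined AHI.

    SEVERITY_CUTOFFS = [(5, "normal"), (15, "mild"), (30, "moderate")]

    def severity(ahi):
        for cutoff, label in SEVERITY_CUTOFFS:
            if ahi < cutoff:
                return label
        return "severe"

    def fuse_and_compare(subsystem_estimates, psg_ahi):
        fused = sum(subsystem_estimates) / len(subsystem_estimates)
        return fused, severity(fused) == severity(psg_ahi)

    fused, agrees = fuse_and_compare([12.0, 18.0, 15.0], psg_ahi=17.0)
    # fused == 15.0; both estimates fall in the "moderate" category
    ```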

  15. Quantification and Systematic Characterization of Stuttering-Like Disfluencies in Acquired Apraxia of Speech.

    PubMed

    Bailey, Dallin J; Blomgren, Michael; DeLong, Catharine; Berggren, Kiera; Wambaugh, Julie L

    2017-06-22

    The purpose of this article is to quantify and describe stuttering-like disfluencies in speakers with acquired apraxia of speech (AOS), utilizing the Lidcombe Behavioural Data Language (LBDL). Additional purposes include measuring test-retest reliability and examining the effect of speech sample type on disfluency rates. Two types of speech samples were elicited from 20 persons with AOS and aphasia: repetition of mono- and multisyllabic words from a protocol for assessing AOS (Duffy, 2013), and connected speech tasks (Nicholas & Brookshire, 1993). Sampling was repeated at 1 and 4 weeks following initial sampling. Stuttering-like disfluencies were coded using the LBDL, which is a taxonomy that focuses on motoric aspects of stuttering. Disfluency rates ranged from 0% to 13.1% for the connected speech task and from 0% to 17% for the word repetition task. There was no significant effect of speech sampling time on disfluency rate in the connected speech task, but there was a significant effect of time for the word repetition task. There was no significant effect of speech sample type. Speakers demonstrated both major types of stuttering-like disfluencies as categorized by the LBDL (fixed postures and repeated movements). Connected speech samples yielded more reliable tallies over repeated measurements. Suggestions are made for modifying the LBDL for use in AOS in order to further add to systematic descriptions of motoric disfluencies in this disorder.

  16. Relations among questionnaire and experience sampling measures of inner speech: a smartphone app study

    PubMed Central

    Alderson-Day, Ben; Fernyhough, Charles

    2015-01-01

    Inner speech is often reported to be a common and central part of inner experience, but its true prevalence is unclear. Many questionnaire-based measures appear to lack convergent validity and it has been claimed that they overestimate inner speech in comparison to experience sampling methods (which involve collecting data at random timepoints). The present study compared self-reporting of inner speech collected via a general questionnaire and experience sampling, using data from a custom-made smartphone app (Inner Life). Fifty-one university students completed a generalized self-report measure of inner speech (the Varieties of Inner Speech Questionnaire, VISQ) and responded to at least seven random alerts to report on incidences of inner speech over a 2-week period. Correlations and pairwise comparisons were used to compare generalized endorsements and randomly sampled scores for each VISQ subscale. Significant correlations were observed between general and randomly sampled measures for only two of the four VISQ subscales, and endorsements of inner speech with evaluative or motivational characteristics did not correlate at all across different measures. Endorsement of inner speech items was significantly lower for random sampling compared to generalized self-report, for all VISQ subscales. Exploratory analysis indicated that specific inner speech characteristics were also related to anxiety and future-oriented thinking. PMID:25964773

  17. Differentiating primary progressive aphasias in a brief sample of connected speech

    PubMed Central

    Evans, Emily; O'Shea, Jessica; Powers, John; Boller, Ashley; Weinberg, Danielle; Haley, Jenna; McMillan, Corey; Irwin, David J.; Rascovsky, Katya; Grossman, Murray

    2013-01-01

    Objective: A brief speech expression protocol that can be administered and scored without special training would aid in the differential diagnosis of the 3 principal forms of primary progressive aphasia (PPA): nonfluent/agrammatic PPA, logopenic variant PPA, and semantic variant PPA. Methods: We used a picture-description task to elicit a short speech sample, and we evaluated impairments in speech-sound production, speech rate, lexical retrieval, and grammaticality. We compared the results with those obtained by a longer, previously validated protocol and further validated performance with multimodal imaging to assess the neuroanatomical basis of the deficits. Results: We found different patterns of impaired grammar in each PPA variant, and additional language production features were impaired in each: nonfluent/agrammatic PPA was characterized by speech-sound errors; logopenic variant PPA by dysfluencies (false starts and hesitations); and semantic variant PPA by poor retrieval of nouns. Strong correlations were found between this brief speech sample and a lengthier narrative speech sample. A composite measure of grammaticality and other measures of speech production were correlated with distinct regions of gray matter atrophy and reduced white matter fractional anisotropy in each PPA variant. Conclusions: These findings provide evidence that large-scale networks are required for fluent, grammatical expression; that these networks can be selectively disrupted in PPA syndromes; and that quantitative analysis of a brief speech sample can reveal the corresponding distinct speech characteristics. PMID:23794681

  18. Treatment for Vocal Polyps: Lips and Tongue Trill.

    PubMed

    de Vasconcelos, Daniela; Gomes, Adriana de Oliveira Camargo; de Araújo, Cláudia Marina Tavares

    2017-03-01

    Vocal polyps do not have a well-defined therapeutic indication. The recommended treatment is often laryngeal microsurgery, followed by postoperative speech therapy. Speech therapy as the initial treatment for polyps is a new concept and aims to modify inappropriate vocal behavior, adjust voice quality, and encourage regression of the lesion. This study aimed to determine the effectiveness of the sonorous lips and tongue trill technique in the treatment of vocal polyps. The sample consisted of 10 adults diagnosed with a polyp, divided into two subgroups: treatment and control. Ten speech therapy sessions were conducted, each lasting 30-45 minutes, based on the sonorous lips and tongue trill technique and accompanied by continuous guidance about vocal health. Speech therapy was effective in three of the five treated participants. The number of symptoms reported by the participants decreased significantly after voice therapy (P = 0.034), as did vocal self-evaluation scores (P = 0.034). The acoustic evaluation showed improvement in noise (P = 0.028) and jitter (P = 0.034) parameters. The size of the polyp and the degrees of severity of dysphonia, hoarseness, and breathiness showed significant reductions after treatment (P = 0.043). Of the remaining two participants, one opted out of laryngeal surgery, indicating that the improvement obtained was sufficient to avoid surgery. The sonorous lips and tongue trill technique was thus considered effective in 60% of the participants, and as laryngeal surgery was avoided in 80% of them, it should be considered a treatment option for vocal polyps. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Ten-year follow-up of a consecutive series of children with multichannel cochlear implants.

    PubMed

    Uziel, Alain S; Sillon, Martine; Vieu, Adrienne; Artieres, Françoise; Piron, Jean-Pierre; Daures, Jean-Pierre; Mondain, Michel

    2007-08-01

    To assess a consecutive series of children with cochlear implants, more than 10 years after implantation, with regard to speech perception, speech intelligibility, receptive language level, and academic/occupational status. A prospective longitudinal study. Pediatric referral center for cochlear implantation. Eighty-two prelingually deafened children received the Nucleus multichannel cochlear implant. Cochlear implantation with the Cochlear Nucleus CI22 implant. The main outcome measures were the open-set Phonetically Balanced Kindergarten word test, discrimination of sentences in noise, connected discourse tracking (CDT) by voice and by telephone, speech intelligibility rating (SIR), vocabulary knowledge measured with the Peabody Picture Vocabulary Test (Revised), academic performance in French language, foreign language, and mathematics, and academic/occupational status. After 10 years of implant experience, 79 children (96%) reported that they always wear the device; 79% (65 of 82 children) could use the telephone. The mean scores were 72% for the Phonetically Balanced Kindergarten word test, 44% for word recognition in noise, 55.3 words per minute for CDT, and 33 words per minute for CDT via telephone. Thirty-three children (40%) developed speech intelligible to the average listener (SIR 5), and 22 (27%) developed speech intelligible to a listener with little experience of deaf persons' speech (SIR 4). The measures of vocabulary showed that most (76%) of the children who received implants scored below the median value of their normally hearing peers. Age at implantation was the most important factor influencing postimplant outcomes. 
Regarding educational/vocational status, 6 subjects attend universities, 3 already have a professional activity, 14 are currently at high school level, 32 are at junior high school level, 6 additional children are enrolled in a special unit for children with disability, and 3 children are still attending elementary schools. Seventeen are in further noncompulsory education studying a range of subjects at vocational level. This long-term report shows that many profoundly hearing-impaired children using cochlear implants can develop functional levels of speech perception and production, attain age-appropriate oral language, develop competency level in a language other than their primary language, and achieve satisfactory academic performance.

  20. The Prevalence of Speech Disorders among University Students in Jordan

    ERIC Educational Resources Information Center

    Alaraifi, Jehad Ahmad; Amayreh, Mousa Mohammad; Saleh, Mohammad Yusef

    2014-01-01

    Problem: There are no available studies on the prevalence, and distribution of speech disorders among Arabic speaking undergraduate students in Jordan. Method: A convenience sample of 400 undergraduate students at the University of Jordan was screened for speech disorders. Two spontaneous speech samples and an oral reading of a passage were…

  1. Methodological Choices in Rating Speech Samples

    ERIC Educational Resources Information Center

    O'Brien, Mary Grantham

    2016-01-01

    Much pronunciation research critically relies upon listeners' judgments of speech samples, but researchers have rarely examined the impact of methodological choices. In the current study, 30 German native listeners and 42 German L2 learners (L1 English) rated speech samples produced by English-German L2 learners along three continua: accentedness,…

  2. Speech Characteristics of Patients with Pallido-Ponto-Nigral Degeneration and Their Application to Presymptomatic Detection in At-Risk Relatives

    ERIC Educational Resources Information Center

    Liss, Julie M.; Krein-Jones, Kari; Wszolek, Zbigniew K.; Caviness, John N.

    2006-01-01

    Purpose: This report describes the speech characteristics of individuals with a neurodegenerative syndrome called pallido-ponto-nigral degeneration (PPND) and examines the speech samples of at-risk, but asymptomatic, relatives for possible preclinical detection. Method: Speech samples of 9 members of a PPND kindred were subjected to perceptual…

  3. Real-time classification of auditory sentences using evoked cortical activity in humans

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  4. Electrophysiological Correlates of Semantic Dissimilarity Reflect the Comprehension of Natural, Narrative Speech.

    PubMed

    Broderick, Michael P; Anderson, Andrew J; Di Liberto, Giovanni M; Crosse, Michael J; Lalor, Edmund C

    2018-03-05

    People routinely hear and understand speech at rates of 120-200 words per minute [1, 2]. Thus, speech comprehension must involve rapid, online neural mechanisms that process words' meanings in an approximately time-locked fashion. However, electrophysiological evidence for such time-locked processing has been lacking for continuous speech. Although valuable insights into semantic processing have been provided by the "N400 component" of the event-related potential [3-6], this literature has been dominated by paradigms using incongruous words within specially constructed sentences, with less emphasis on natural, narrative speech comprehension. Building on the discovery that cortical activity "tracks" the dynamics of running speech [7-9] and psycholinguistic work demonstrating [10-12] and modeling [13-15] how context impacts on word processing, we describe a new approach for deriving an electrophysiological correlate of natural speech comprehension. We used a computational model [16] to quantify the meaning carried by words based on how semantically dissimilar they were to their preceding context and then regressed this measure against electroencephalographic (EEG) data recorded from subjects as they listened to narrative speech. This produced a prominent negativity at a time lag of 200-600 ms on centro-parietal EEG channels, characteristics common to the N400. Applying this approach to EEG datasets involving time-reversed speech, cocktail party attention, and audiovisual speech-in-noise demonstrated that this response was very sensitive to whether or not subjects understood the speech they heard. These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion. Copyright © 2018 Elsevier Ltd. All rights reserved.
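    As a hedged sketch of the dissimilarity measure (the study used a specific computational model's word embeddings; the two-dimensional vectors below are toy values), a word's dissimilarity can be computed as 1 minus the cosine similarity between its vector and the average of its preceding context vectors:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))

    def semantic_dissimilarity(word_vec, context_vecs):
        """1 - cosine similarity between a word's vector and the
        average vector of its preceding context words."""
        n = len(context_vecs)
        context_avg = [sum(v[i] for v in context_vecs) / n
                       for i in range(len(word_vec))]
        return 1.0 - cosine(word_vec, context_avg)

    # A word similar to its context yields low dissimilarity:
    low = semantic_dissimilarity([1.0, 0.0], [[1.0, 0.1], [0.9, 0.0]])
    # An unrelated word yields high dissimilarity:
    high = semantic_dissimilarity([0.0, 1.0], [[1.0, 0.1], [0.9, 0.0]])
    ```

    In the study's approach, this per-word measure (rather than the raw word identity) is what gets regressed against the EEG signal.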

  5. Feasibility of automated speech sample collection with stuttering children using interactive voice response (IVR) technology.

    PubMed

    Vogel, Adam P; Block, Susan; Kefalianos, Elaina; Onslow, Mark; Eadie, Patricia; Barth, Ben; Conway, Laura; Mundt, James C; Reilly, Sheena

    2015-04-01

    To investigate the feasibility of adopting automated interactive voice response (IVR) technology for remotely capturing standardized speech samples from stuttering children. Participants were ten 6-year-old stuttering children. Their parents called a toll-free number from their homes and were prompted to elicit speech from their children using a standard protocol involving conversation, picture description and games. The automated IVR system was implemented using an off-the-shelf telephony software program and delivered by a standard desktop computer. The software infrastructure utilizes voice over internet protocol. Speech samples were automatically recorded during the calls. Video recordings were simultaneously acquired in the home at the time of the call to evaluate the fidelity of the telephone-collected samples. Key outcome measures included syllables spoken, percentage of syllables stuttered and an overall rating of stuttering severity using a 10-point scale. Data revealed a high level of relative reliability in terms of intra-class correlation between the video and telephone acquired samples on all outcome measures during the conversation task. Findings were less consistent for speech samples during picture description and games. Results suggest that IVR technology can be used successfully to automate remote capture of child speech samples.
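    The quantitative outcome measures named above follow standard formulas; a minimal sketch (the syllable counts are invented for illustration, and treating severity as a 1-10 integer is an assumption based on the abstract's "10-point scale"):

    ```python
    # Sketch of the stuttering outcome measures named in the abstract.

    def percent_syllables_stuttered(stuttered_syllables, total_syllables):
        """%SS: the proportion of spoken syllables that were stuttered."""
        return 100.0 * stuttered_syllables / total_syllables

    def valid_severity_rating(rating):
        """Overall stuttering severity on the 10-point scale."""
        return 1 <= rating <= 10

    # e.g. 12 stuttered syllables in a 300-syllable conversation sample:
    pss = percent_syllables_stuttered(12, 300)
    # pss == 4.0
    ```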

  6. The Auditory-Brainstem Response to Continuous, Non-repetitive Speech Is Modulated by the Speech Envelope and Reflects Speech Processing

    PubMed Central

    Reichenbach, Chagit S.; Braiman, Chananel; Schiff, Nicholas D.; Hudspeth, A. J.; Reichenbach, Tobias

    2016-01-01

    The auditory-brainstem response (ABR) to short and simple acoustical signals is an important clinical tool used to diagnose the integrity of the brainstem. The ABR is also employed to investigate the auditory brainstem in a multitude of tasks related to hearing, such as processing speech or selectively focusing on one speaker in a noisy environment. Such research measures the response of the brainstem to short speech signals such as vowels or words. Because the voltage signal of the ABR has a tiny amplitude, several hundred to a thousand repetitions of the acoustic signal are needed to obtain a reliable response. The large number of repetitions poses a challenge to assessing cognitive functions due to neural adaptation. Here we show that continuous, non-repetitive speech, lasting several minutes, may be employed to measure the ABR. Because the speech is not repeated during the experiment, the precise temporal form of the ABR cannot be determined. We show, however, that important structural features of the ABR can nevertheless be inferred. In particular, the brainstem responds at the fundamental frequency of the speech signal, and this response is modulated by the envelope of the voiced parts of speech. We accordingly introduce a novel measure that assesses the ABR as modulated by the speech envelope, at the fundamental frequency of speech and at the characteristic latency of the response. This measure has a high signal-to-noise ratio and can hence be employed effectively to measure the ABR to continuous speech. We use this novel measure to show that the ABR is weaker to intelligible speech than to unintelligible, time-reversed speech. The methods presented here can be employed for further research on speech processing in the auditory brainstem and can lead to the development of future clinical diagnosis of brainstem function. PMID:27303286

  7. Does computer-synthesized speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction.

    PubMed

    Nass, C; Lee, K M

    2001-09-01

Would people exhibit similarity-attraction and consistency-attraction toward unambiguously computer-generated speech even when personality is clearly not relevant? In Experiment 1, participants (extrovert or introvert) heard a synthesized voice (extrovert or introvert) on a book-buying Web site. Participants accurately recognized personality cues in text-to-speech output and showed similarity-attraction in their evaluation of the computer voice, the book reviews, and the reviewer. Experiment 2, in a Web auction context, added personality of the text to the previous design. The results replicated Experiment 1 and demonstrated consistency (voice and text personality)-attraction. To maximize liking and trust, designers should set parameters, for example, words per minute or frequency range, that create a personality that is consistent with the user and the content being presented.

  8. Evaluation of NASA speech encoder

    NASA Technical Reports Server (NTRS)

    1976-01-01

Techniques developed by NASA for spaceflight instrumentation were used in the design of a quantizer for speech decoding. Computer simulation of the actions of the quantizer was tested with synthesized and real speech signals. Results were evaluated by a phonetician. Topics discussed include the relationship between the number of quantizer levels and the required sampling rate; reconstruction of signals; digital filtering; speech recording, sampling, and storage; and processing results.

  9. Writing Effectively as Counseling Center Directors and Administrators: Lessons Learned from a 2-Minute Speech

    ERIC Educational Resources Information Center

    Sevig, Todd; Bogan, Yolanda; Dunkle, John; Gong-Guy, Elizabeth

    2015-01-01

    Administrative writing is a crucial skill needed for the counseling center professional to be able to transmit knowledge and values for the rest of the campus community. This article highlights both conceptual and technical aspects of effective writing.

  10. 49 CFR 382.603 - Training for supervisors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... minutes of training on controlled substances use. The training will be used by the supervisors to determine whether reasonable suspicion exists to require a driver to undergo testing under § 382.307. The training shall include the physical, behavioral, speech, and performance indicators of probable alcohol...

  11. [Combining speech sample and feature bilateral selection algorithm for classification of Parkinson's disease].

    PubMed

    Zhang, Xiaoheng; Wang, Lirui; Cao, Yao; Wang, Pin; Zhang, Cheng; Yang, Liuyang; Li, Yongming; Zhang, Yanling; Cheng, Oumei

    2018-02-01

Diagnosis of Parkinson's disease (PD) based on speech data has proved effective in recent years. However, current research focuses on feature extraction and classifier design and does not consider instance selection. The authors' previous research showed that instance selection can improve classification accuracy; however, no attention has yet been paid to the relationship between speech samples and features. Therefore, a new diagnosis algorithm for PD is proposed in this paper that simultaneously selects speech samples and features, based on a relevant-feature-weighting algorithm and a multiple-kernel method, so as to find their synergy effects and thereby improve classification accuracy. Experimental results showed that the proposed algorithm obtained a clear improvement in classification accuracy. It achieved a mean classification accuracy of 82.5%, which was 30.5% higher than that of the relevant algorithm. Besides, the proposed algorithm detected synergy effects between speech samples and features, which is valuable for speech-marker extraction.
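The abstract does not spell out the joint selection procedure, so the following is only a toy sketch of the general idea of selecting features and samples together before classifying. The synthetic data, the Fisher-score feature weighting, and the centroid-distance instance selection are all illustrative stand-ins, not the paper's relevant-feature-weighting or multiple-kernel method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a speech-feature dataset: 2 classes, 40 samples,
# 10 features of which only the first 3 are informative.
n_per_class, n_feat = 20, 10
X0 = rng.normal(0.0, 1.0, (n_per_class, n_feat))
X1 = rng.normal(0.0, 1.0, (n_per_class, n_feat))
X1[:, :3] += 2.0                                   # informative features
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

def fisher_weights(X, y):
    """Per-feature Fisher score: between-class over within-class spread."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

# Feature selection: keep the top-k weighted features.
k = 3
keep_feat = np.argsort(fisher_weights(X, y))[::-1][:k]
Xf = X[:, keep_feat]

# Instance selection: drop the samples farthest from their own class centroid
# (a crude stand-in for the paper's sample-selection step).
dist = np.empty(len(y))
for c in (0, 1):
    centroid = Xf[y == c].mean(0)
    dist[y == c] = np.linalg.norm(Xf[y == c] - centroid, axis=1)
keep_samp = dist < np.quantile(dist, 0.9)          # discard the worst 10%
Xs, ys = Xf[keep_samp], y[keep_samp]

# Nearest-centroid classification using the reduced training data.
c0, c1 = Xs[ys == 0].mean(0), Xs[ys == 1].mean(0)
pred = (np.linalg.norm(Xf - c1, axis=1) < np.linalg.norm(Xf - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

In this toy setup classification on the reduced data is substantially better than chance; the paper's contribution is to couple the two selection steps rather than run them in sequence as done here.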

  12. Asynchronous sampling of speech with some vocoder experimental results

    NASA Technical Reports Server (NTRS)

    Babcock, M. L.

    1972-01-01

    The method of asynchronously sampling speech is based upon the derivatives of the acoustical speech signal. The following results are apparent from experiments to date: (1) It is possible to represent speech by a string of pulses of uniform amplitude, where the only information contained in the string is the spacing of the pulses in time; (2) the string of pulses may be produced in a simple analog manner; (3) the first derivative of the original speech waveform is the most important for the encoding process; (4) the resulting pulse train can be utilized to control an acoustical signal production system to regenerate the intelligence of the original speech.
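Result (1), speech represented purely by the spacing of uniform pulses, can be illustrated with a minimal sketch. This is a loose reconstruction, not Babcock's encoder: it assumes a finite-difference derivative and places one uniform-amplitude pulse at each extremum, i.e. wherever the first derivative changes sign.

```python
import numpy as np

def asynchronous_pulse_train(signal, fs):
    """Toy sketch: emit a uniform-amplitude pulse at each extremum of the
    waveform, i.e. wherever the first derivative changes sign. The only
    information carried by the pulse train is the spacing of the pulses."""
    d = np.diff(signal)                       # first derivative (finite difference)
    sign_change = np.diff(np.sign(d)) != 0    # extrema: derivative changes sign
    pulse_times = (np.nonzero(sign_change)[0] + 1) / fs   # pulse times in seconds
    return pulse_times

# A 10 Hz sine sampled at 1 kHz has an extremum every half period (50 ms).
fs = 1000
t = np.arange(0, 1, 1 / fs)
times = asynchronous_pulse_train(np.sin(2 * np.pi * 10 * t), fs)
spacing = np.diff(times)
```

For a pure 10 Hz tone the pulses land every half period, so the tone is recoverable from spacing alone; real speech would produce an irregular, information-bearing spacing pattern.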

  13. Speech and language development in 2-year-old children with cerebral palsy.

    PubMed

    Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2014-06-01

    We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.

  14. Investigation of habitual pitch during free play activities for preschool-aged children.

    PubMed

    Chen, Yang; Kimelman, Mikael D Z; Micco, Katie

    2009-01-01

This study is designed to compare the habitual pitch measured in two different speech activities (free play and a traditionally used structured speech activity) in normally developing preschool-aged children, to explore to what extent preschoolers vary their vocal pitch across speech environments. Habitual pitch measurements were conducted for 10 normally developing children (2 boys, 8 girls) between the ages of 31 months and 71 months during two different activities: (1) free play; and (2) structured speech. Speech samples were recorded in both activities using a throat microphone connected to a wireless transmitter. The habitual pitch (in Hz) was measured for all collected speech samples using voice analysis software (Real-Time Pitch). Significantly higher habitual pitch was found during free play than during structured speech activities. In addition, no significant difference in habitual pitch was found across the variety of structured speech activities. Findings suggest that the vocal usage of preschoolers is more effortful during free play than during structured activities. It is recommended that a comprehensive evaluation of a young child's voice be based on speech/voice samples collected from both free play and structured activities.

  15. Speech Recognition as a Transcription Aid: A Randomized Comparison With Standard Transcription

    PubMed Central

    Mohr, David N.; Turner, David W.; Pond, Gregory R.; Kamath, Joseph S.; De Vos, Cathy B.; Carpenter, Paul C.

    2003-01-01

    Objective. Speech recognition promises to reduce information entry costs for clinical information systems. It is most likely to be accepted across an organization if physicians can dictate without concerning themselves with real-time recognition and editing; assistants can then edit and process the computer-generated document. Our objective was to evaluate the use of speech-recognition technology in a randomized controlled trial using our institutional infrastructure. Design. Clinical note dictations from physicians in two specialty divisions were randomized to either a standard transcription process or a speech-recognition process. Secretaries and transcriptionists also were assigned randomly to each of these processes. Measurements. The duration of each dictation was measured. The amount of time spent processing a dictation to yield a finished document also was measured. Secretarial and transcriptionist productivity, defined as hours of secretary work per minute of dictation processed, was determined for speech recognition and standard transcription. Results. Secretaries in the endocrinology division were 87.3% (confidence interval, 83.3%, 92.3%) as productive with the speech-recognition technology as implemented in this study as they were using standard transcription. Psychiatry transcriptionists and secretaries were similarly less productive. Author, secretary, and type of clinical note were significant (p < 0.05) predictors of productivity. Conclusion. When implemented in an organization with an existing document-processing infrastructure (which included training and interfaces of the speech-recognition editor with the existing document entry application), speech recognition did not improve the productivity of secretaries or transcriptionists. PMID:12509359

  16. Do parents lead their children by the hand?

    PubMed

    Ozçalişkan, Seyda; Goldin-Meadow, Susan

    2005-08-01

The types of gesture+speech combinations children produce during the early stages of language development change over time. This change, in turn, predicts the onset of two-word speech and thus might reflect a cognitive transition that the child is undergoing. An alternative, however, is that the change merely reflects changes in the types of gesture+speech combinations that their caregivers produce. To explore this possibility, we videotaped 40 American child-caregiver dyads in their homes for 90 minutes when the children were 1;2, 1;6, and 1;10. Each gesture was classified according to type (deictic, conventional, representational) and the relation it held to speech (reinforcing, disambiguating, supplementary). Children and their caregivers produced the same types of gestures and in approximately the same distribution. However, the children differed from their caregivers in the way they used gesture in relation to speech. Over time, children produced many more REINFORCING (bike+point at bike), DISAMBIGUATING (that one+point at bike), and SUPPLEMENTARY combinations (ride+point at bike). In contrast, the frequency and distribution of caregivers' gesture+speech combinations remained constant over time. Thus, the changing relation between gesture and speech observed in the children cannot be traced back to the gestural input the children receive. Rather, it appears to reflect changes in the children's own skills, illustrating once again gesture's ability to shed light on developing cognitive and linguistic processes.

  17. Parental depressive symptoms, children’s emotional and behavioural problems, and parents’ expressed emotion—Critical and positive comments

    PubMed Central

    Parry, Elizabeth; Nath, Selina; Kallitsoglou, Angeliki; Russell, Ginny

    2017-01-01

    This longitudinal study examined whether mothers’ and fathers’ depressive symptoms predict, independently and interactively, children’s emotional and behavioural problems. It also examined bi-directional associations between parents’ expressed emotion constituents (parents’ child-directed positive and critical comments) and children’s emotional and behavioural problems. At time 1, the sample consisted of 160 families in which 50 mothers and 40 fathers had depression according to the Structured Clinical Interview for DSM-IV. Children’s mean age at Time 1 was 3.9 years (SD = 0.8). Families (n = 106) were followed up approximately 16 months later (Time 2). Expressed emotion constituents were assessed using the Preschool Five Minute Speech Sample. In total, 144 mothers and 158 fathers at Time 1 and 93 mothers and 105 fathers at Time 2 provided speech samples. Fathers’ depressive symptoms were concurrently associated with more child emotional problems when mothers had higher levels of depressive symptoms. When controlling for important confounders (children’s gender, baseline problems, mothers’ depressive symptoms and parents’ education and age), fathers’ depressive symptoms independently predicted higher levels of emotional and behavioural problems in their children over time. There was limited evidence for a bi-directional relationship between fathers’ positive comments and change in children’s behavioural problems over time. Unexpectedly, there were no bi-directional associations between parents’ critical comments and children’s outcomes. We conclude that the study provides evidence to support a whole family approach to prevention and intervention strategies for children’s mental health and parental depression. PMID:29045440

  18. The McGurk effect in children with autism and Asperger syndrome.

    PubMed

    Bebko, James M; Schroeder, Jessica H; Weiss, Jonathan A

    2014-02-01

    Children with autism may have difficulties in audiovisual speech perception, which has been linked to speech perception and language development. However, little has been done to examine children with Asperger syndrome as a group on tasks assessing audiovisual speech perception, despite this group's often greater language skills. Samples of children with autism, Asperger syndrome, and Down syndrome, as well as a typically developing sample, were presented with an auditory-only condition, a speech-reading condition, and an audiovisual condition designed to elicit the McGurk effect. Children with autism demonstrated unimodal performance at the same level as the other groups, yet showed a lower rate of the McGurk effect compared with the Asperger, Down and typical samples. These results suggest that children with autism may have unique intermodal speech perception difficulties linked to their representations of speech sounds. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  19. Language Sampling for Preschoolers With Severe Speech Impairments

    PubMed Central

Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-01-01

Purpose The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Method Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Results Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur–Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Conclusion Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information. PMID:27552110

  20. Language Sampling for Preschoolers With Severe Speech Impairments.

    PubMed

    Binger, Cathy; Ragsdale, Jamie; Bustos, Aimee

    2016-11-01

The purposes of this investigation were to determine if measures such as mean length of utterance (MLU) and percentage of comprehensible words can be derived reliably from language samples of children with severe speech impairments and if such measures correlate with tools that measure constructs assumed to be related. Language samples of 15 preschoolers with severe speech impairments (but receptive language within normal limits) were transcribed independently by 2 transcribers. Nonparametric statistics were used to determine which measures, if any, could be transcribed reliably and to determine if correlations existed between language sample measures and standardized measures of speech, language, and cognition. Reliable measures were extracted from the majority of the language samples, including MLU in words, mean number of syllables per utterance, and percentage of comprehensible words. Language sample comprehensibility measures were correlated with a single-word comprehensibility task. Also, language sample MLUs and mean length of the participants' 3 longest sentences from the MacArthur-Bates Communicative Development Inventory (Fenson et al., 2006) were correlated. Language sampling, given certain modifications, may be used for some 3- to 5-year-old children with normal receptive language who have severe speech impairments to provide reliable expressive language and comprehensibility information.

  1. Talking Wheelchair

    NASA Technical Reports Server (NTRS)

    1981-01-01

Communication is made possible for disabled individuals by means of an electronic system, developed at Stanford University's School of Medicine, which produces highly intelligible synthesized speech. The system is familiarly known as the "talking wheelchair" and formally as the Versatile Portable Speech Prosthesis (VPSP). The wheelchair-mounted system consists of a word processor, a video screen, a voice synthesizer, and a computer program which instructs the synthesizer how to produce intelligible sounds in response to user commands. The computer's memory contains 925 words plus a number of common phrases and questions; it can also store several thousand other words of the user's choice. Message units are selected by operating a simple switch, joystick, or keyboard. The completed message appears on the video screen; the user then activates the speech synthesizer, which generates a voice with a somewhat mechanical tone. With the keyboard, an experienced user can construct messages as rapidly as 30 words per minute.

  2. Speech Characteristics Associated with Three Genotypes of Ataxia

    ERIC Educational Resources Information Center

    Sidtis, John J.; Ahn, Ji Sook; Gomez, Christopher; Sidtis, Diana

    2011-01-01

    Purpose: Advances in neurobiology are providing new opportunities to investigate the neurological systems underlying motor speech control. This study explores the perceptual characteristics of the speech of three genotypes of spino-cerebellar ataxia (SCA) as manifest in four different speech tasks. Methods: Speech samples from 26 speakers with SCA…

  3. Automated Speech Rate Measurement in Dysarthria

    ERIC Educational Resources Information Center

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  4. Speech and Language Development in 2 Year Old Children with Cerebral Palsy

    PubMed Central

    Hustad, Katherine C.; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2013-01-01

Objective We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Methods Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Results Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the 7 dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. Conclusion 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment to identify and treat those with delays at or before 2 years of age. PMID:23627373

  5. LANGUAGE TEACHING WITH CARTOONS.

    ERIC Educational Resources Information Center

    FLEMING, GERALD

Short, well-made cartoons, carefully matched to oral and written texts, command the language student's attention because of their novelty and multisensory appeal. They are also ideal vehicles for the dynamic presentation of everyday situations which can serve as settings for normal speech patterns. These four-minute cartoons lend themselves to a…

  6. Speech Analysis of Bengali Speaking Children with Repaired Cleft Lip & Palate

    ERIC Educational Resources Information Center

    Chakrabarty, Madhushree; Kumar, Suman; Chatterjee, Indranil; Maheshwari, Neha

    2012-01-01

    The present study aims at analyzing speech samples of four Bengali speaking children with repaired cleft palates with a view to differentiate between the misarticulations arising out of a deficit in linguistic skills and structural or motoric limitations. Spontaneous speech samples were collected and subjected to a number of linguistic analyses…

  7. Applications of Text Analysis Tools for Spoken Response Grading

    ERIC Educational Resources Information Center

    Crossley, Scott; McNamara, Danielle

    2013-01-01

    This study explores the potential for automated indices related to speech delivery, language use, and topic development to model human judgments of TOEFL speaking proficiency in second language (L2) speech samples. For this study, 244 transcribed TOEFL speech samples taken from 244 L2 learners were analyzed using automated indices taken from…

  8. Automated Speech Rate Measurement in Dysarthria.

    PubMed

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-06-01

    In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. The new algorithm was trained and tested using Dutch speech samples of 36 speakers with no history of speech impairment and 40 speakers with mild to moderate dysarthria. We tested the algorithm under various conditions: according to speech task type (sentence reading, passage reading, and storytelling) and algorithm optimization method (speaker group optimization and individual speaker optimization). Correlations between automated and human SR determination were calculated for each condition. High correlations between automated and human SR determination were found in the various testing conditions. The new algorithm measures SR in a sufficiently reliable manner. It is currently being integrated in a clinical software tool for assessing and managing prosody in dysarthric speech. Further research is needed to fine-tune the algorithm to severely dysarthric speech, to make the algorithm less sensitive to background noise, and to evaluate how the algorithm deals with syllabic consonants.

  9. Using on-line altered auditory feedback treating Parkinsonian speech

    NASA Astrophysics Data System (ADS)

    Wang, Emily; Verhagen, Leo; de Vries, Meinou H.

    2005-09-01

Patients with advanced Parkinson's disease tend to have dysarthric speech that is hesitant, accelerated, and repetitive, and that is often resistant to behavioral speech therapy. In this pilot study, the speech disturbances were treated using on-line altered feedbacks (AF) provided by SpeechEasy (SE), an in-the-ear device registered with the FDA for use in humans to treat chronic stuttering. Eight PD patients participated in the study. All had moderate to severe speech disturbances. In addition, two patients had moderate recurring stuttering at the onset of PD after long remission since adolescence, two had bilateral STN DBS, and two bilateral pallidal DBS. An effective combination of delayed auditory feedback and frequency-altered feedback was selected for each subject and provided via SE worn in one ear. All subjects produced speech samples (structured monologue and reading) under three conditions: baseline, with SE worn but without feedback, and with SE providing feedback. The speech samples were randomly presented and rated for speech intelligibility using UPDRS-III item 18 and for speaking rate. The results indicated that SpeechEasy is well tolerated and that AF can improve speech intelligibility in spontaneous speech. Further investigational use of this device for treating speech disorders in PD is warranted [Work partially supported by Janus Dev. Group, Inc.].

  10. Is Presurgery and Early Postsurgery Performance Related to Speech and Language Outcomes at 3 Years of Age for Children with Cleft Palate?

    ERIC Educational Resources Information Center

    Chapman, Kathy L.

    2004-01-01

    This study examined the relationship between presurgery speech measures and speech and language performance at 39 months as well as the relationship between early postsurgery speech measures and speech and language performance at 39 months of age. Fifteen children with cleft lip and palate participated in the study. Spontaneous speech samples were…

  11. Measurement of trained speech patterns in stuttering: interjudge and intrajudge agreement of experts by means of modified time-interval analysis.

    PubMed

    Alpermann, Anke; Huber, Walter; Natke, Ulrich; Willmes, Klaus

    2010-09-01

    Improved fluency after stuttering therapy is usually measured by the percentage of stuttered syllables. However, outcome studies rarely evaluate the use of trained speech patterns that speakers use to manage stuttering. This study investigated whether the modified time interval analysis can distinguish between trained speech patterns, fluent speech, and stuttered speech. Seventeen German experts on stuttering judged a speech sample on two occasions. Speakers of the sample were stuttering adults, who were not undergoing therapy, as well as participants in a fluency shaping and a stuttering modification therapy. Results showed satisfactory inter-judge and intra-judge agreement above 80%. Intervals with trained speech patterns were identified as consistently as stuttered and fluent intervals. We discuss limitations of the study, as well as implications of our findings for the development of training for identification of trained speech patterns and future outcome studies. The reader will be able to (a) explain different methods to measure the use of trained speech patterns, (b) evaluate whether German experts are able to discriminate intervals with trained speech patterns reliably from fluent and stuttered intervals and (c) describe how the measurement of trained speech patterns can contribute to outcome studies.

  12. Transitioning from analog to digital audio recording in childhood speech sound disorders.

    PubMed

    Shriberg, Lawrence D; McSweeny, Jane L; Anderson, Bruce E; Campbell, Thomas F; Chial, Michael R; Green, Jordan R; Hauner, Katherina K; Moore, Christopher A; Rusiewicz, Heather L; Wilson, David L

    2005-06-01

Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice.

  13. Transitioning from analog to digital audio recording in childhood speech sound disorders

    PubMed Central

    Shriberg, Lawrence D.; McSweeny, Jane L.; Anderson, Bruce E.; Campbell, Thomas F.; Chial, Michael R.; Green, Jordan R.; Hauner, Katherina K.; Moore, Christopher A.; Rusiewicz, Heather L.; Wilson, David L.

    2014-01-01

Few empirical findings or technical guidelines are available on the current transition from analog to digital audio recording in childhood speech sound disorders. Of particular concern in the present context was whether a transition from analog- to digital-based transcription and coding of prosody and voice features might require re-standardizing a reference database for research in childhood speech sound disorders. Two research transcribers with different levels of experience glossed, transcribed, and prosody-voice coded conversational speech samples from eight children with mild to severe speech disorders of unknown origin. The samples were recorded, stored, and played back using representative analog and digital audio systems. Effect sizes calculated for an array of analog versus digital comparisons ranged from negligible to medium, with a trend for participants' speech competency scores to be slightly lower for samples obtained and transcribed using the digital system. We discuss the implications of these and other findings for research and clinical practice. PMID:16019779

  14. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    ERIC Educational Resources Information Center

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  15. Speech-Language Pathologists' Assessment Practices for Children with Suspected Speech Sound Disorders: Results of a National Survey

    ERIC Educational Resources Information Center

    Skahan, Sarah M.; Watson, Maggie; Lof, Gregory L.

    2007-01-01

    Purpose: This study examined assessment procedures used by speech-language pathologists (SLPs) when assessing children suspected of having speech sound disorders (SSD). This national survey also determined the information participants obtained from clients' speech samples, evaluation of non-native English speakers, and time spent on assessment.…

  16. Attitudes toward Speech Disorders: Sampling the Views of Cantonese-Speaking Americans.

    ERIC Educational Resources Information Center

    Bebout, Linda; Arthur, Bradford

    1997-01-01

    A study of 60 Chinese Americans and 46 controls found the Chinese Americans were more likely to believe persons with speech disorders could improve speech by "trying hard," to view people using deaf speech and people with cleft palates as perhaps being emotionally disturbed, and to regard deaf speech as a limitation. (Author/CR)

  17. Preliminary comparison of infants' speech with and without hearing loss

    NASA Astrophysics Data System (ADS)

    McGowan, Richard S.; Nittrouer, Susan; Chenausky, Karen

    2005-04-01

    The speech of ten children with hearing loss and ten children without hearing loss aged 12 months is examined. All the children with hearing loss were identified before six months of age, and all have parents who wish them to become oral communicators. The data are from twenty-minute sessions with the caregiver and child, with their normal prostheses in place, in semi-structured settings. These data are part of a larger test battery applied to both caregiver and child that is part of a project comparing the development of children with hearing loss to those without hearing loss, known as the Early Development of Children with Hearing Loss. The speech comparisons are in terms of number of utterances, syllable shapes, and segment type. A subset of the data was given a detailed acoustic analysis, including formant frequencies and voice quality measures. [Work supported by NIDCD R01 006237 to Susan Nittrouer.]

  18. [Vocal recognition in dental and oral radiology].

    PubMed

    La Fianza, A; Giorgetti, S; Marelli, P; Campani, R

    1993-10-01

    Speech reporting benefits from units that can recognize sentences in any natural language in real time. The use of this method in the everyday practice of radiology departments shows its possible application fields. We used the speech recognition method to report orthopantomographic exams in order to evaluate the advantages the method offers to the management and quality of reporting exams that are difficult to fit into other closed computerized reporting systems. Both speech recognition and the conventional reporting method (tape recording and typewriting) were used to report 760 orthopantomographs. The average time needed to make the report, the legibility (or Flesch) index, as adapted for the Italian language, and finally a clinical index (the subjective opinion of 4 odontostomatologists) were evaluated for each exam, with both techniques. Moreover, errors in speech reporting (crude, human and overall errors) were also evaluated. The advantages of speech reporting were the shorter time needed for the report to become available (2.24 vs 2.99 minutes) (p < 0.0005), the improved Flesch index (30.62 vs 28.9), and the improved clinical index. The data obtained from speech reporting in odontostomatologic radiology were useful not only to reduce the mean reporting time of orthopantomographic exams but also to improve report quality by reducing both grammar and transmission mistakes. However, the basic condition for such results to be obtained is the speaker's skill in dictating a good report.
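The legibility measure in the record above is a Flesch index. For reference, the sketch below shows the standard English-language Flesch Reading Ease formula; the study used an adaptation for Italian, whose coefficients differ, and the example counts are invented.

```python
# Standard English-language Flesch Reading Ease formula. The study above
# used an Italian adaptation with different coefficients; this version is
# shown only to illustrate what a legibility index computes.
def flesch_reading_ease(total_words: int, total_sentences: int,
                        total_syllables: int) -> float:
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

# e.g. a hypothetical 100-word report in 8 sentences with 140 syllables:
print(round(flesch_reading_ease(100, 8, 140), 2))  # → 75.71
```

Higher scores indicate easier text: shorter sentences and fewer syllables per word both raise the index.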

  19. Speech Volume Indexes Sex Differences in the Social-Emotional Effects of Alcohol

    PubMed Central

    Fairbairn, Catharine E.; Sayette, Michael A.; Amole, Marlissa C.; Dimoff, John D.; Cohn, Jeffrey F.; Girard, Jeffrey M.

    2015-01-01

    Men and women differ dramatically in their rates of alcohol use disorder (AUD), and researchers have long been interested in identifying mechanisms underlying male vulnerability to problem drinking. Surveys suggest that social processes underlie sex differences in drinking patterns, with men reporting greater social enhancement from alcohol than women, and all-male social drinking contexts being associated with particularly high rates of hazardous drinking. But experimental evidence for sex differences in social-emotional response to alcohol has heretofore been lacking. Research using larger sample sizes, a social context, and more sensitive measures of alcohol’s rewarding effects may be necessary to better understand sex differences in the etiology of AUD. This study explored the acute effects of alcohol during social exchange on speech volume, an objective measure of social-emotional experience that was reliably captured at the group level. Social drinkers (360 male; 360 female) consumed alcohol (0.82 g/kg for males; 0.74 g/kg for females), placebo, or a no-alcohol control beverage in groups of three over 36 minutes. Within each of the three beverage conditions, equal numbers of groups consisted of all males, all females, 2 females and 1 male, and 1 female and 2 males. Speech volume was monitored continuously throughout the drink period, and group volume emerged as a robust correlate of self-report and facial indexes of social reward. Notably, alcohol-related increases in group volume were observed selectively in all-male groups but not in groups containing any females. Results point to social enhancement as a promising direction for research exploring factors underlying sex differences in problem drinking. PMID:26237323

  20. Things I'll Never Say: Stories of Growing up Undocumented in the United States

    ERIC Educational Resources Information Center

    Hernandez, Ingrid; Mendoza, Fermin; Lio, Mario; Latthi, Jirayut; Eusebio, Catherine

    2011-01-01

    Debate goes on about the proposed Development, Relief and Education for Alien Minors (DREAM) Act. In presidential speeches, one-minute congressional floor statements, and intermittent media coverage, we hear passionate arguments for and against this federal legislation that would provide a path toward citizenship for hundreds of thousands of…

  1. Demonstrating a Web-Design Technique in a Distance-Learning Environment

    ERIC Educational Resources Information Center

    Zdenek, Sean

    2004-01-01

    Objective: To lead a brief training session over a distance-learning network. Type of speech: Informative. Point value: 20% of course grade. Requirements: (a) References: Not specified; (b) Length: 15 minutes; (c) Visual aid: Yes; (d) Outline: No; (e) Prerequisite reading: Chapters 12-16, 18 (Bailey, 2002); (f) Additional requirements: None. This…

  2. The End of the Brezhnev Era: Stasis and Succession.

    DTIC Science & Technology

    1984-06-01

    appearances and speeches. On June 22, Andropov attended ceremonies associated with the Dimitrov centenary at the Kremlin, and on June 24 he gave a short... Kirilenko *Not present at the Kremlin meeting, but possibly consulted by telephone. Of course, awaiting the publication of the minutes to the meeting

  3. Stimulus Shift: A Demonstration Motion Picture. Final Report.

    ERIC Educational Resources Information Center

    McLean, James E.

    The purposes of the project described were to demonstrate the use of innovative stimulus shift techniques in articulation and language training, and to experiment with use of motion picture photography in the education and instruction of clinicians and therapists working with speech handicapped children. A 37-minute sound color film was produced…

  4. Countering the Reactionary Federal Program for Education.

    ERIC Educational Resources Information Center

    McIntosh, Peggy

    This conference address, which originally concerned "gender issues in the schools," was modified at the last minute to contain arguments that counter and criticize a federal program for education put forth by President Ronald Reagan in a speech delivered earlier at the same conference, and the text of which is included here. The key…

  5. Changes in Speech Production Associated with Alphabet Supplementation

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Lee, Jimin

    2008-01-01

    Purpose: This study examined the effect of alphabet supplementation (AS) on temporal and spectral features of speech production in individuals with cerebral palsy and dysarthria. Method: Twelve speakers with dysarthria contributed speech samples using habitual speech and while using AS. One hundred twenty listeners orthographically transcribed…

  6. Criterion-related validity of the Test of Children's Speech sentence intelligibility measure for children with cerebral palsy and dysarthria.

    PubMed

    Hodge, Megan; Gotzke, Carrie Lynne

    2014-08-01

    To evaluate the criterion-related validity of the TOCS+ sentence measure (TOCS+; Hodge, Daniels & Gotzke, 2009) for children with dysarthria and CP by comparing intelligibility and rate scores obtained concurrently from the TOCS+ and from a conversational sample. Twenty children (3 to 10 years old) diagnosed with spastic cerebral palsy (CP) participated. Nineteen children also had a confirmed diagnosis of dysarthria. Children's intelligibility and speaking rate scores obtained from the TOCS+, which uses imitation of sets of randomly selected items ranging from 2-7 words (80 words in total), and from a contiguous 100-word conversational speech sample were compared. Mean intelligibility scores were 46.5% (SD = 26.4%) and 50.9% (SD = 19.1%) and mean rates in words per minute (WPM) were 90.2 (SD = 22.3) and 94.1 (SD = 25.6), respectively, for the TOCS+ and conversational samples. No significant differences were found between the two conditions for intelligibility or rate scores. Strong correlations were found between the TOCS+ and conversational samples for intelligibility (r = 0.86; p < 0.001) and WPM (r = 0.77; p < 0.001), supporting the criterion validity of the TOCS+ sentence task as a time-efficient procedure for measuring intelligibility and rate in children with CP, with and without confirmed dysarthria. Children varied in their relative performance on the two speaking tasks, reflecting the complexity of factors that influence intelligibility and rate scores.
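The measures in this record reduce to simple arithmetic, and the reported agreement between the two tasks is a Pearson correlation. A minimal sketch with invented scores (not the study's data; the function names are illustrative):

```python
# Illustrative sketch, not the authors' code: intelligibility percent,
# words-per-minute rate, and the Pearson correlation used to compare
# two speaking tasks across children. All numbers below are made up.
import math

def intelligibility(words_understood: int, words_attempted: int) -> float:
    """Percent of attempted words a listener transcribed correctly."""
    return 100.0 * words_understood / words_attempted

def words_per_minute(word_count: int, duration_seconds: float) -> float:
    return 60.0 * word_count / duration_seconds

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical intelligibility scores for five children on the two tasks:
tocs = [30.0, 45.0, 50.0, 70.0, 85.0]
conv = [35.0, 48.0, 47.0, 66.0, 80.0]
print(round(pearson_r(tocs, conv), 3))  # → 0.993
```

A strong correlation of this kind is what supports substituting the shorter imitation task for a full conversational sample.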

  7. Analysis of Documentation Speed Using Web-Based Medical Speech Recognition Technology: Randomized Controlled Trial.

    PubMed

    Vogel, Markus; Kaisers, Wolfgang; Wassmuth, Ralf; Mayatepek, Ertan

    2015-11-03

    Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area and the time to complete the documentation including all necessary corrections, correction effort, number of characters, and mood of participant were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. The number of clinical reports eligible for further analysis stood at 1455. Out of 1455 reports, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not assisted by ASR. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). 
Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted by ASR. Participants' average mood rating was 1.3 (SD 0.6) using ASR assistance compared to 1.6 (SD 0.7) without ASR assistance (P<.001). We conclude that medical documentation with the assistance of Web-based speech recognition leads to an increase in documentation speed, document length, and participant mood when compared to self-typing. Speech recognition is a meaningful and effective tool for the clinical documentation process.
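The trial above analyzed its speed data with permutation tests. As a rough illustration of the idea only (not the authors' code; group sizes and characters-per-minute values below are invented), one can ask whether the observed difference in mean documentation speed exceeds what random relabeling of reports would produce:

```python
# Minimal two-sample permutation test sketch (assumed form, not the
# study's implementation): p-value for the absolute difference in
# group means under random relabeling. Data below are hypothetical.
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided p-value for the difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical characters-per-minute samples:
asr = [210, 230, 195, 240, 225, 218]
typed = [170, 160, 185, 175, 168, 180]
print(permutation_test(asr, typed))  # small p-value: the groups differ
```

Because the test only relabels the observed data, it makes no normality assumption, which suits skewed timing measurements.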

  8. Continuous Speech Recognition for Clinicians

    PubMed Central

    Zafar, Atif; Overhage, J. Marc; McDonald, Clement J.

    1999-01-01

    The current generation of continuous speech recognition systems claims to offer high accuracy (greater than 95 percent) speech recognition at natural speech rates (150 words per minute) on low-cost (under $2000) platforms. This paper presents a state-of-the-technology summary, along with insights the authors have gained through testing one such product extensively and other products superficially. The authors have identified a number of issues that are important in managing accuracy and usability. First, for efficient recognition users must start with a dictionary containing the phonetic spellings of all words they anticipate using. The authors dictated 50 discharge summaries using one inexpensive internal medicine dictionary ($30) and found that they needed to add an additional 400 terms to get recognition rates of 98 percent. However, if they used either of two more expensive and extensive commercial medical vocabularies ($349 and $695), they did not need to add terms to get a 98 percent recognition rate. Second, users must speak clearly and continuously, distinctly pronouncing all syllables. Users must also correct errors as they occur, because accuracy improves with error correction by at least 5 percent over two weeks. Users may find it difficult to train the system to recognize certain terms, regardless of the amount of training, and appropriate substitutions must be created. For example, the authors had to substitute “twice a day” for “bid” when using the less expensive dictionary, but not when using the other two dictionaries. From trials they conducted in settings ranging from an emergency room to hospital wards and clinicians' offices, they learned that ambient noise has minimal effect. Finally, they found that a minimal “usable” hardware configuration (which keeps up with dictation) comprises a 300-MHz Pentium processor with 128 MB of RAM and a “speech quality” sound card (e.g., SoundBlaster, $99). 
Anything less powerful will result in the system lagging behind the speaking rate. The authors obtained 97 percent accuracy with just 30 minutes of training when using the latest edition of one of the speech recognition systems supplemented by a commercial medical dictionary. This technology has advanced considerably in recent years and is now a serious contender to replace some or all of the increasingly expensive alternative methods of dictation with human transcription. PMID:10332653

  9. Minimal Pair Distinctions and Intelligibility in Preschool Children with and without Speech Sound Disorders

    ERIC Educational Resources Information Center

    Hodge, Megan M.; Gotzke, Carrie L.

    2011-01-01

    Listeners' identification of young children's productions of minimally contrastive words and predictive relationships between accurately identified words and intelligibility scores obtained from a 100-word spontaneous speech sample were determined for 36 children with typically developing speech (TDS) and 36 children with speech sound disorders…

  10. Participation of the Classical Speech Areas in Auditory Long-Term Memory

    PubMed Central

    Karabanov, Anke Ninija; Paine, Rainer; Chao, Chi Chao; Schulze, Katrin; Scott, Brian; Hallett, Mark; Mishkin, Mortimer

    2015-01-01

    Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant, trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results. PMID:25815813

  12. Articulatory-acoustic vowel space: application to clear speech in individuals with Parkinson's disease.

    PubMed

    Whitfield, Jason A; Goberman, Alexander M

    2014-01-01

    Individuals with Parkinson disease (PD) often exhibit decreased range of movement secondary to the disease process, which has been shown to affect articulatory movements. A number of investigations have failed to find statistically significant differences between control and disordered groups, and between speaking conditions, using traditional vowel space area measures. The purpose of the current investigation was to evaluate both between-group (PD versus control) and within-group (habitual versus clear) differences in articulatory function using a novel vowel space measure, the articulatory-acoustic vowel space (AAVS). The novel AAVS is calculated from continuously sampled formant trajectories of connected speech. In the current study, habitual and clear speech samples from twelve individuals with PD along with habitual control speech samples from ten neurologically healthy adults were collected and acoustically analyzed. In addition, a group of listeners completed perceptual ratings of speech clarity for all samples. Individuals with PD were perceived to exhibit decreased speech clarity compared to controls. Similarly, the novel AAVS measure was significantly lower in individuals with PD. In addition, the AAVS measure significantly tracked changes between the habitual and clear conditions that were confirmed by perceptual ratings. In the current study, the novel AAVS measure is shown to be sensitive to disease-related group differences and within-person changes in articulatory function of individuals with PD. Additionally, these data confirm that individuals with PD can modulate the speech motor system to increase articulatory range of motion and speech clarity when given a simple prompt. 
The reader will be able to (i) describe articulatory behavior observed in the speech of individuals with Parkinson disease; (ii) describe traditional measures of vowel space area and how they relate to articulation; (iii) describe a novel measure of vowel space, the articulatory-acoustic vowel space and its relationship to articulation and the perception of speech clarity. Copyright © 2014 Elsevier Inc. All rights reserved.
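The "traditional vowel space area measures" this record contrasts with the AAVS are typically computed as the area of the corner-vowel polygon in F1-F2 space (the AAVS itself is derived differently, from continuously sampled formant trajectories). A sketch of the traditional baseline only, with made-up formant values:

```python
# Illustrative sketch of the *traditional* vowel space area measure the
# abstract contrasts with the novel AAVS: the area of the polygon formed
# by corner vowels in F1-F2 space, via the shoelace formula. The formant
# values below are rough, invented adult averages in Hz.
def polygon_area(points):
    """Shoelace formula; points must be given in order around the polygon."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Corner vowels as (F1, F2) pairs: /i/, /ae/, /a/, /u/
corners = [(300, 2300), (700, 1700), (750, 1100), (350, 900)]
print(polygon_area(corners))  # → 390000.0 (vowel space area in Hz^2)
```

A smaller polygon indicates a reduced articulatory working space, which is the pattern such measures attempt, not always successfully, to detect in dysarthric speech.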

  13. No Racial Difference in Rehabilitation Therapy Across All Post-Acute Care Settings in the Year Following a Stroke.

    PubMed

    Skolarus, Lesli E; Feng, Chunyang; Burke, James F

    2017-12-01

    Black stroke survivors experience greater poststroke disability than whites. Differences in post-acute rehabilitation may contribute to this disparity. Therefore, we estimated racial differences in rehabilitation therapy utilization, intensity, and the number of post-acute care settings in the first year after a stroke. We used national Medicare data to study 186,168 elderly black and white patients hospitalized with a primary diagnosis of stroke in 2011. We tabulated the proportion of stroke survivors receiving physical, occupational, and speech and language therapy in each post-acute care setting (inpatient rehabilitation facility, skilled nursing facility, and home health agency), minutes of therapy, and number of transitions between settings. We then used generalized linear models to determine whether racial differences in minutes of physical therapy were influenced by demographics, comorbidities, thrombolysis, and markers of stroke severity. Black stroke patients were more likely to receive each type of therapy than white stroke patients. Compared with white stroke patients, black stroke patients received more minutes of physical therapy (897.8 versus 743.4; P <0.01), occupational therapy (752.7 versus 648.9; P <0.01), and speech and language therapy (865.7 versus 658.1; P <0.01). There were no clinically significant differences in physical therapy minutes after adjustment. Blacks had more transitions (median, 3; interquartile range, 1-5) than whites (median, 2; interquartile range, 1-5; P <0.01). There are no clinically significant racial differences in rehabilitation therapy utilization or intensity after accounting for patient characteristics. It is unlikely that differences in rehabilitation utilization or intensity are important contributors to racial disparities in poststroke disability. © 2017 American Heart Association, Inc.

  14. Variability and Diagnostic Accuracy of Speech Intelligibility Scores in Children

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Oakes, Ashley; Allison, Kristen

    2015-01-01

    Purpose: We examined variability of speech intelligibility scores and how well intelligibility scores predicted group membership among 5-year-old children with speech motor impairment (SMI) secondary to cerebral palsy and an age-matched group of typically developing (TD) children. Method: Speech samples varying in length from 1-4 words were…

  15. The Suitability of Cloud-Based Speech Recognition Engines for Language Learning

    ERIC Educational Resources Information Center

    Daniels, Paul; Iwago, Koji

    2017-01-01

    As online automatic speech recognition (ASR) engines become more accurate and more widely implemented with CALL (computer-assisted language learning) software, it becomes important to evaluate the effectiveness and the accuracy of these recognition engines using authentic speech samples. This study investigates two of the most prominent cloud-based speech recognition engines--Apple's…

  16. Speech recognition systems on the Cell Broadband Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Jones, H; Vaidya, S

    In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine (TM) (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time--a channel density that is orders-of-magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  17. Changes in Maternal Expressed Emotion toward Clinically Anxious Children following Cognitive Behavioral Therapy

    ERIC Educational Resources Information Center

    Gar, Natalie S.; Hudson, Jennifer L.

    2009-01-01

    The aim of this study was to determine whether maternal expressed emotion (criticism and emotional overinvolvement) decreased across treatment for childhood anxiety. Mothers of 48 clinically anxious children (aged 6-14 years) were rated on levels of criticism (CRIT) and emotional overinvolvement (EOI), as measured by a Five Minute Speech Sample…

  18. American Association of Community Colleges 75th Annual Convention: Clinton Presidential Address. [Videotape].

    ERIC Educational Resources Information Center

    American Association of Community Colleges, Washington, DC.

    This 60 minute videotape is a live satellite presentation of the American Association of Community Colleges' 75th Annual Convention in 1995. Speeches by former Secretary of Labor, Robert Reich, and former Secretary of Education, Richard Riley, are followed by the presidential address to community colleges by former President Bill Clinton. He…

  19. Improved naming after TMS treatments in a chronic, global aphasia patient — case report

    PubMed Central

    NAESER, MARGARET A.; MARTIN, PAULA I; NICHOLAS, MARJORIE; BAKER, ERROL H.; SEEKINS, HEIDI; HELM-ESTABROOKS, NANCY; CAYER-MEADE, CAROL; KOBAYASHI, MASAHITO; THEORET, HUGO; FREGNI, FELIPE; TORMOS, JOSE MARIA; KURLAND, JACQUIE; DORON, KARL W.; PASCUAL-LEONE, ALVARO

    2005-01-01

    We report improved ability to name pictures at 2 and 8 months after repetitive transcranial magnetic stimulation (rTMS) treatments to the pars triangularis portion of right Broca’s homologue in a 57-year-old woman with severe nonfluent/global aphasia (6.5 years post left basal ganglia bleed, subcortical lesion). TMS was applied at 1 Hz, 20 minutes a day, 10 days, over a two-week period. She received no speech therapy during the study. One year after her TMS treatments, she entered speech therapy with continued improvement. TMS may have modulated activity in the remaining left and right hemisphere neural network for naming. PMID:16006338

  20. Describing Speech Usage in Daily Activities in Typical Adults.

    PubMed

    Anderson, Laine; Baylor, Carolyn R; Eadie, Tanya L; Yorkston, Kathryn M

    2016-01-01

    "Speech usage" refers to what people want or need to do with their speech to meet communication demands in life roles. The purpose of this study was to contribute to validation of the Levels of Speech Usage scale by providing descriptive data from a sample of adults without communication disorders, comparing this scale to a published Occupational Voice Demands scale and examining predictors of speech usage levels. This is a survey design. Adults aged ≥25 years without reported communication disorders were recruited nationally to complete an online questionnaire. The questionnaire included the Levels of Speech Usage scale, questions about relevant occupational and nonoccupational activities (eg, socializing, hobbies, childcare, and so forth), and demographic information. Participants were also categorized according to Koufman and Isaacson occupational voice demands scale. A total of 276 participants completed the questionnaires. People who worked for pay tended to report higher levels of speech usage than those who do not work for pay. Regression analyses showed employment to be the major contributor to speech usage; however, considerable variance left unaccounted for suggests that determinants of speech usage and the relationship between speech usage, employment, and other life activities are not yet fully defined. The Levels of Speech Usage may be a viable instrument to systematically rate speech usage because it captures both occupational and nonoccupational speech demands. These data from a sample of typical adults may provide a reference to help in interpreting the impact of communication disorders on speech usage patterns. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Cleft audit protocol for speech (CAPS-A): a comprehensive training package for speech analysis.

    PubMed

    Sell, D; John, A; Harding-Bell, A; Sweeney, T; Hegarty, F; Freeman, J

    2009-01-01

    The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been paid to this issue. To design, execute, and evaluate a training programme for speech and language therapists on the systematic and reliable use of the Cleft Audit Protocol for Speech-Augmented (CAPS-A), addressing issues of standardized speech samples, data acquisition, recording, playback, and listening guidelines. Thirty-six specialist speech and language therapists undertook the training programme over four days. This consisted of two days' training on the CAPS-A tool followed by a third day, making independent ratings and transcriptions on ten new cases which had been previously recorded during routine audit data collection. This task was repeated on day 4, a minimum of one month later. Ratings were made using the CAPS-A record form with the CAPS-A definition table. An analysis was made of the speech and language therapists' CAPS-A ratings at occasion 1 and occasion 2 and the intra- and inter-rater reliability calculated. Trained therapists showed consistency in individual judgements on specific sections of the tool. Intraclass correlation coefficients were calculated for each section with good agreement on eight of 13 sections. There were only fair levels of agreement on anterior oral cleft speech characteristics, non-cleft errors/immaturities and voice. This was explained, at least in part, by their low prevalence which affects the calculation of the intraclass correlation coefficient statistic. Speech and language therapists benefited from training on the CAPS-A, focusing on specific aspects of speech using definitions of parameters and scalar points, in order to apply the tool systematically and reliably. 
Ratings are enhanced by ensuring a high degree of attention to the nature of the data, standardizing the speech sample, data acquisition, the listening process together with the use of high-quality recording and playback equipment. In addition, a method is proposed for maintaining listening skills following training as part of an individual's continuing education.
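    The intraclass correlation coefficients reported in this study can be computed directly from a subjects-by-raters matrix of scores. Below is a minimal sketch of the single-rater, absolute-agreement form, ICC(2,1); the listener ratings in the example are hypothetical, purely for illustration:

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)
    computed from variance components. `ratings` is an
    (n subjects x k raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 6 speech samples each rated by 3 trained listeners
ratings = np.array([[4, 4, 5],
                    [2, 2, 2],
                    [5, 4, 5],
                    [1, 2, 1],
                    [3, 3, 4],
                    [2, 3, 2]])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

Low-prevalence categories shrink the between-subject variance (MSR), which is why sections with few occurrences can show only fair agreement even when raters rarely disagree.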

  2. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    PubMed Central

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2010-01-01

    In a sample of 46 children aged 4 to 7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants’ speech, prosody, and voice were compared with data from 40 typically-developing children, 13 preschool children with Speech Delay, and 15 participants aged 5 to 49 years with CAS in neurogenetic disorders. Speech Delay and Speech Errors, respectively, were modestly and substantially more prevalent in participants with ASD than reported population estimates. Double dissociations in speech, prosody, and voice impairments in ASD were interpreted as consistent with a speech attunement framework, rather than with the motor speech impairments that define CAS. Key Words: apraxia, dyspraxia, motor speech disorder, speech sound disorder PMID:20972615

  3. Characteristics of speaking style and implications for speech recognition.

    PubMed

    Shinozaki, Takahiro; Ostendorf, Mari; Atlas, Les

    2009-09-01

    Differences in speaking style are associated with more or less spectral variability, as well as different modulation characteristics. The greater variation in some styles (e.g., spontaneous speech and infant-directed speech) poses challenges for recognition but possibly also opportunities for learning more robust models, as evidenced by prior work and motivated by child language acquisition studies. In order to investigate this possibility, this work proposes a new method for characterizing speaking style (the modulation spectrum), examines spontaneous, read, adult-directed, and infant-directed styles in this space, and conducts pilot experiments in style detection and sampling for improved speech recognizer training. Speaking style classification is improved by using the modulation spectrum in combination with standard pitch and energy variation. Speech recognition experiments on a small vocabulary conversational speech recognition task show that sampling methods for training with a small amount of data benefit from the new features.
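    As a rough illustration of the modulation-spectrum idea (not the authors' implementation), the sequence of short-time frame energies can itself be Fourier-analysed; the dominant modulation frequency then tracks properties such as syllable rate. The signal below is a synthetic stand-in for speech:

```python
import numpy as np

def modulation_spectrum(signal, sr, frame_ms=25, hop_ms=10):
    """Crude modulation spectrum: magnitude spectrum of the frame-RMS
    energy envelope. Speaking styles differ in where envelope energy
    concentrates (syllable-rate modulations sit around 3-8 Hz)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + (len(signal) - frame) // hop
    env = np.array([np.sqrt(np.mean(signal[i * hop:i * hop + frame] ** 2))
                    for i in range(n_frames)])
    env -= env.mean()                       # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=hop / sr)
    return freqs, spec

# Synthetic "speech": a 200 Hz carrier amplitude-modulated at 4 Hz,
# mimicking a four-syllable-per-second delivery
sr = 8000
t = np.arange(2 * sr) / sr
sig = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spec = modulation_spectrum(sig, sr)
print(f"dominant modulation: {freqs[spec.argmax()]:.1f} Hz")
```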

  4. A characterization of verb use in Turkish agrammatic narrative speech.

    PubMed

    Arslan, Seçkin; Bamyacı, Elif; Bastiaanse, Roelien

    2016-01-01

    This study investigates the characteristics of narrative-speech production and the use of verbs in Turkish agrammatic speakers (n = 10) compared to non-brain-damaged controls (n = 10). To elicit narrative-speech samples, personal interviews and storytelling tasks were conducted. Turkish has a large and regular verb inflection paradigm where verbs are inflected for evidentiality (i.e. direct versus indirect evidence available to the speaker). Particularly, we explored the general characteristics of the speech samples (e.g. utterance length) and the uses of lexical, finite and non-finite verbs and direct and indirect evidentials. The results show that speech rate is slow, verbs per utterance are lower than normal and the verb diversity is reduced in the agrammatic speakers. Verb inflection is relatively intact; however, a trade-off pattern between inflection for direct evidentials and verb diversity is found. The implications of the data are discussed in connection with narrative-speech production studies on other languages.

  5. Intimate insight: MDMA changes how people talk about significant others.

    PubMed

    Baggott, Matthew J; Kirkpatrick, Matthew G; Bedi, Gillinder; de Wit, Harriet

    2015-06-01

    ±3,4-methylenedioxymethamphetamine (MDMA) is widely believed to increase sociability. The drug alters speech production and fluency, and may influence speech content. Here, we investigated the effect of MDMA on speech content, which may reveal how this drug affects social interactions. Thirty-five healthy volunteers with prior MDMA experience completed this two-session, within-subjects, double-blind study during which they received 1.5 mg/kg oral MDMA and placebo. Participants completed a five-minute standardized talking task during which they discussed a close personal relationship (e.g. a friend or family member) with a research assistant. The conversations were analyzed for selected content categories (e.g. words pertaining to affect, social interaction, and cognition), using both a standard dictionary method (Pennebaker's Linguistic Inquiry and Word Count: LIWC) and a machine learning method using random forest classifiers. Both analytic methods revealed that MDMA altered speech content relative to placebo. Using LIWC scores, the drug increased use of social and sexual words, consistent with reports that MDMA increases willingness to disclose. Using the machine learning algorithm, we found that MDMA increased use of social words and words relating to both positive and negative emotions. These findings are consistent with reports that MDMA acutely alters speech content, specifically increasing emotional and social content during a brief semistructured dyadic interaction. Studying effects of psychoactive drugs on speech content may offer new insights into drug effects on mental states, and on emotional and psychosocial interaction. © The Author(s) 2015.

  6. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown to provide an effective, non-invasive mode of classification in recent years. Thus, there has been increased interest in speech pattern analysis methods applicable to parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied iteratively to select optimal training speech samples, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using recently deposited public datasets and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the highest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm exhibited higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
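    The two-stage design (instance selection, then ensemble learning) can be sketched with off-the-shelf tools. The following is a simplified multi-edit nearest-neighbour pass followed by a random forest; the data are a synthetic stand-in for vocal features (jitter, shimmer, MFCCs, ...), and the editing loop is an illustrative reading of MENN, not the authors' code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def multi_edit_nn(X, y, n_splits=3, max_iter=10, random_state=0):
    """Simplified multi-edit nearest-neighbour instance selection:
    repeatedly drop training samples that a 1-NN trained on the other
    partitions misclassifies, until a full pass removes nothing."""
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X), np.asarray(y)
    for _ in range(max_iter):
        folds = np.array_split(rng.permutation(len(X)), n_splits)
        keep = np.ones(len(X), dtype=bool)
        for i, fold in enumerate(folds):
            others = np.concatenate([folds[j] for j in range(n_splits) if j != i])
            knn = KNeighborsClassifier(n_neighbors=1).fit(X[others], y[others])
            keep[fold] = knn.predict(X[fold]) == y[fold]
        if keep.all():
            break
        X, y = X[keep], y[keep]
    return X, y

# Synthetic stand-in for speech features, with 10% label noise
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           flip_y=0.1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
X_ed, y_ed = multi_edit_nn(X_tr, y_tr)           # stage 1: instance selection
rf = RandomForestClassifier(n_estimators=200,
                            random_state=42).fit(X_ed, y_ed)  # stage 2: ensemble
print(f"kept {len(X_ed)}/{len(X_tr)} samples, "
      f"test accuracy {rf.score(X_te, y_te):.2f}")
```

The editing stage discards samples near the class boundary (including mislabelled ones), which is what gives the remaining set its higher separability.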

  7. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snjeiana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver-Teacher Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of the variance in speech comprehension is explained by 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and children with normal language development who exhibit ADHD symptoms.

  8. The Hypothesis of Apraxia of Speech in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Shriberg, Lawrence D.; Paul, Rhea; Black, Lois M.; van Santen, Jan P.

    2011-01-01

    In a sample of 46 children aged 4-7 years with Autism Spectrum Disorder (ASD) and intelligible speech, there was no statistical support for the hypothesis of concomitant Childhood Apraxia of Speech (CAS). Perceptual and acoustic measures of participants' speech, prosody, and voice were compared with data from 40 typically-developing children, 13…

  9. The Effect of Background Noise on Intelligibility of Dysphonic Speech

    ERIC Educational Resources Information Center

    Ishikawa, Keiko; Boyce, Suzanne; Kelchner, Lisa; Powell, Maria Golla; Schieve, Heidi; de Alarcon, Alessandro; Khosla, Sid

    2017-01-01

    Purpose: The aim of this study is to determine the effect of background noise on the intelligibility of dysphonic speech and to examine the relationship between intelligibility in noise and an acoustic measure of dysphonia--cepstral peak prominence (CPP). Method: A study of speech perception was conducted using speech samples from 6 adult speakers…

  10. Autonomic and Emotional Responses of Graduate Student Clinicians in Speech-Language Pathology to Stuttered Speech

    ERIC Educational Resources Information Center

    Guntupalli, Vijaya K.; Nanjundeswaran, Chayadevie; Dayalu, Vikram N.; Kalinowski, Joseph

    2012-01-01

    Background: Fluent speakers and people who stutter manifest alterations in autonomic and emotional responses as they view stuttered relative to fluent speech samples. These reactions are indicative of an aroused autonomic state and are hypothesized to be triggered by the abrupt breakdown in fluency exemplified in stuttered speech. Furthermore,…

  11. The Effectiveness of SpeechEasy during Situations of Daily Living

    ERIC Educational Resources Information Center

    O'Donnell, Jennifer J.; Armson, Joy; Kiefte, Michael

    2008-01-01

    A multiple single-subject design was used to examine the effects of SpeechEasy on stuttering frequency in the laboratory and in longitudinal samples of speech produced in situations of daily living (SDL). Seven adults who stutter participated, all of whom had exhibited at least 30% reduction in stuttering frequency while using SpeechEasy during…

  12. The minor third communicates sadness in speech, mirroring its use in music.

    PubMed

    Curtis, Meagan E; Bharucha, Jamshed J

    2010-06-01

    There is a long history of attempts to explain why music is perceived as expressing emotion. The relationship between pitches serves as an important cue for conveying emotion in music. The musical interval referred to as the minor third is generally thought to convey sadness. We reveal that the minor third also occurs in the pitch contour of speech conveying sadness. Bisyllabic speech samples conveying four emotions were recorded by 9 actresses. Acoustic analyses revealed that the relationship between the 2 salient pitches of the sad speech samples tended to approximate a minor third. Participants rated the speech samples for perceived emotion, and the use of numerous acoustic parameters as cues for emotional identification was modeled using regression analysis. The minor third was the most reliable cue for identifying sadness. Additional participants rated musical intervals for emotion, and their ratings verified the historical association between the musical minor third and sadness. These findings support the theory that human vocal expressions and music share an acoustic code for communicating sadness.
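    The interval measure underlying this analysis is simple to compute: the distance between two pitches in equal-tempered semitones is 12 * log2(f_high / f_low), and a minor third corresponds to 3 semitones. A minimal sketch with hypothetical F0 values:

```python
import math

def interval_semitones(f_low, f_high):
    """Interval between two pitches in equal-tempered semitones."""
    return 12 * math.log2(f_high / f_low)

# Hypothetical pair of salient F0 values from a sad utterance:
# the upper pitch sits exactly a minor third (3 semitones) above the lower
f1 = 220.0
f2 = 220.0 * 2 ** (3 / 12)
print(f"{interval_semitones(f1, f2):.2f} semitones")  # prints "3.00 semitones"
```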

  13. Attributions, criticism and warmth in mothers of children with intellectual disability and challenging behaviour: a pilot study.

    PubMed

    Lancaster, R L; Balling, K; Hastings, R; Lloyd, T J

    2014-11-01

    Associations between parental expressed emotion (EE) or parental attributions and the problem behaviours of children with intellectual disability (ID) have been explored in ID research. However, a more detailed examination of the attributional model of EE has not been reported. In the present study, we partially replicated and extended research focused on mothers of typically developing children with behaviour problems. Twenty-seven mothers of children with ID and behaviour problems aged 4-9 years were interviewed about the most problematic behaviours exhibited by their child, and completed a Five Minute Speech Sample. Interview transcripts and speech samples were coded for maternal EE and spontaneous causal attributions regarding the child's behaviour problems. Data were also collected on maternal well-being and the child's behaviour problems. Mothers typically made attributions that were internal to the child, controllable by the child, personal to the child and stable for the child. Maternal attributions that the child's behaviour was controllable were associated with high maternal criticism and low warmth. Maternal depression was more strongly associated with the child's behaviour problems when mothers were coded as high in criticism or low in warmth. Patterns of maternal attributions about their child's behaviour problems and their consequences for maternal well-being and maternal-child relationships require more research attention. Implications for practice are discussed, including the potential for maternal attributions to be incompatible with the focus of positive behaviour supports offered to families. © 2013 The Authors. Journal of Intellectual Disability Research © 2013 John Wiley & Sons Ltd, MENCAP & IASSIDD.

  14. Sound frequency affects speech emotion perception: results from congenital amusia

    PubMed Central

    Lolli, Sydney L.; Lewenstein, Ari D.; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or “tone-deaf” individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech. PMID:26441718
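    The low-pass manipulation can be sketched in a few lines. The version below uses a simple brick-wall FFT filter on a synthetic signal rather than the study's actual stimuli, and the 500 Hz cutoff is an assumption for illustration: it preserves F0/prosodic information while removing most higher-frequency segmental detail.

```python
import numpy as np

def low_pass(signal, sr, cutoff_hz=500.0):
    """Brick-wall FFT low-pass filter (a simple stand-in for the kind of
    filtering used in such studies): zero all bins above the cutoff."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1 / sr)
    spec[freqs > cutoff_hz] = 0
    return np.fft.irfft(spec, n=signal.size)

# Synthetic stand-in for speech: 200 Hz "voicing" plus 3 kHz "frication"
sr = 16000
t = np.arange(sr) / sr                      # 1 second of signal
speech = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
filtered = low_pass(speech, sr)             # only the 200 Hz component survives
print(f"RMS before {np.std(speech):.2f}, after {np.std(filtered):.2f}")
```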

  15. Use of Language Sample Analysis by School-Based SLPs: Results of a Nationwide Survey

    ERIC Educational Resources Information Center

    Pavelko, Stacey L.; Owens, Robert E., Jr.; Ireland, Marie; Hahs-Vaughn, Debbie L.

    2016-01-01

    Purpose: This article examines use of language sample analysis (LSA) by school-based speech-language pathologists (SLPs), including characteristics of language samples, methods of transcription and analysis, barriers to LSA use, and factors affecting LSA use, such as American Speech-Language-Hearing Association certification, number of years'…

  16. Institutional Variation in Traumatic Brain Injury Acute Rehabilitation Practice.

    PubMed

    Seel, Ronald T; Barrett, Ryan S; Beaulieu, Cynthia L; Ryser, David K; Hammond, Flora M; Cullen, Nora; Garmoe, William; Sommerfeld, Teri; Corrigan, John D; Horn, Susan D

    2015-08-01

    To describe institutional variation in traumatic brain injury (TBI) inpatient rehabilitation program characteristics and evaluate to what extent patient factors and center effects explain how TBI inpatient rehabilitation services are delivered. Secondary analysis of a prospective, multicenter, cohort database. TBI inpatient rehabilitation programs. Patients with complicated mild, moderate, or severe TBI (N=2130). Not applicable. Mean minutes; number of treatment activities; use of groups in occupational therapy, physical therapy, speech therapy, therapeutic recreation, and psychology inpatient rehabilitation sessions; and weekly hours of treatment. A wide variation was observed between the 10 TBI programs, including census size, referral flow, payer mix, number of dedicated beds, clinician experience, and patient characteristics. At the centers with the longest weekday therapy sessions, the average session durations were 41.5 to 52.2 minutes. At centers with the shortest weekday sessions, the average session durations were approximately 30 minutes. The centers with the highest mean total weekday hours of occupational, physical, and speech therapies delivered twice as much therapy as the lowest center. Ordinary least-squares regression modeling found that center effects explained substantially more variance than patient factors for duration of therapy sessions, number of activities administered per session, use of group therapy, and amount of psychological services provided. This study provides preliminary evidence that there is significant institutional variation in rehabilitation practice and that center effects play a stronger role than patient factors in determining how TBI inpatient rehabilitation is delivered. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  17. Long-Term Trajectories of the Development of Speech Sound Production in Pediatric Cochlear Implant Recipients

    PubMed Central

    Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson

    2011-01-01

    Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
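    The piecewise (segmented) regression used here amounts to adding a hinge term at the knot year, so the pre- and post-knot slopes can be estimated in one least-squares fit. A sketch on hypothetical accuracy data with the knot fixed at 6 years of device experience:

```python
import numpy as np

def fit_piecewise(years, accuracy, knot=6.0):
    """Fit accuracy ~ b0 + b1*years + b2*max(0, years - knot).
    b1 is the pre-knot slope; b1 + b2 is the post-knot slope."""
    X = np.column_stack([np.ones_like(years), years,
                         np.maximum(0.0, years - knot)])
    coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
    return coef  # b0, b1, b2

# Hypothetical trajectory: steady gains for 6 years, then a plateau
years = np.arange(1, 11, dtype=float)
acc = np.where(years <= 6, 30 + 8 * years, 30 + 8 * 6) \
      + np.random.default_rng(0).normal(0, 1, years.size)
b0, b1, b2 = fit_piecewise(years, acc)
print(f"slope before knot: {b1:.1f}, after knot: {b1 + b2:.1f}")
```

A post-knot slope (b1 + b2) that is not significantly different from 0, alongside a clearly positive b1, is the pattern the study reports.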

  18. The influence of speaking rate on nasality in the speech of hearing-impaired individuals.

    PubMed

    Dwyer, Claire H; Robb, Michael P; O'Beirne, Greg A; Gilbert, Harvey R

    2009-10-01

    The purpose of this study was to determine whether deliberate increases in speaking rate would serve to decrease the amount of nasality in the speech of severely hearing-impaired individuals. The participants were 11 severely to profoundly hearing-impaired students, ranging in age from 12 to 19 years (M = 16 years). Each participant provided a baseline speech sample (R1) followed by 3 training sessions during which participants were trained to increase their speaking rate. Following the training sessions, a second speech sample was obtained (R2). Acoustic and perceptual analyses of the speech samples obtained at R1 and R2 were undertaken. The acoustic analysis focused on changes in first (F1) and second (F2) formant frequencies and formant bandwidths. The perceptual analysis involved listener ratings of the speech samples (at R1 and R2) for perceived nasality. Findings indicated a significant increase in speaking rate at R2. In addition, significantly narrower F2 bandwidth and lower perceptual rating scores of nasality were obtained at R2 across all participants, suggesting a decrease in nasality as speaking rate increases. The nasality demonstrated by hearing-impaired individuals is amenable to change when speaking rate is increased. The influences of speaking rate changes on the perception and production of nasality in hearing-impaired individuals are discussed.

  19. Investigation of Preservice Teachers' Speech Anxiety with Different Points of View

    ERIC Educational Resources Information Center

    Kana, Fatih

    2015-01-01

    The purpose of this study is to find out the level of speech anxiety of last year students at Education Faculties and the effects of speech anxiety. For this purpose, speech anxiety inventory was delivered to 540 pre-service teachers at 2013-2014 academic year using stratified sampling method. Relational screening model was used in the study. To…

  20. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    ERIC Educational Resources Information Center

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  1. The Prevalence of Speech and Language Disorders in French-Speaking Preschool Children From Yaoundé (Cameroon).

    PubMed

    Tchoungui Oyono, Lilly; Pascoe, Michelle; Singh, Shajila

    2018-05-17

    The purpose of this study was to determine the prevalence of speech and language disorders in French-speaking preschool-age children in Yaoundé, the capital city of Cameroon. A total of 460 participants aged 3-5 years were recruited from the 7 communes of Yaoundé using a 2-stage cluster sampling method. Speech and language assessment was undertaken using a standardized speech and language test, the Evaluation du Langage Oral (Khomsi, 2001), which was purposefully renormed on the sample. A predetermined cutoff of 2 SDs below the normative mean was applied to identify articulation, expressive language, and receptive language disorders. Fluency and voice disorders were identified using clinical judgment by a speech-language pathologist. Overall prevalence was calculated as follows: speech disorders, 14.7%; language disorders, 4.3%; and speech and language disorders, 17.1%. In terms of disorders, prevalence findings were as follows: articulation disorders, 3.6%; expressive language disorders, 1.3%; receptive language disorders, 3%; fluency disorders, 8.4%; and voice disorders, 3.6%. Prevalence figures are higher than those reported for other countries and emphasize the urgent need to develop speech and language services for the Cameroonian population.

  2. Multimedia Storybooks: Supporting Vocabulary for Students Who Are Deaf/Hard-of-Hearing

    ERIC Educational Resources Information Center

    Donne, Vicki; Briley, Margaret L.

    2015-01-01

    A single case study examined the use of multimedia storybooks on the vocabulary acquisition of 7 preschool students who are deaf/hard of hearing in two classrooms at a school for the deaf in the U.S. Participants also included 3 speech-language pathologists. Students spent an average of 7.1 minutes daily working with the multimedia storybooks and…

  3. Minutes of the Speech Understanding Workshop Convened on 13 November 1975 in Washington, D.C.

    DTIC Science & Technology

    1975-11-13


  4. Estimation of Teacher Practices Based on Text Transcripts of Teacher Speech Using a Support Vector Machine Algorithm

    ERIC Educational Resources Information Center

    Araya, Roberto; Plana, Francisco; Dartnell, Pablo; Soto-Andrade, Jorge; Luci, Gina; Salinas, Elena; Araya, Marylen

    2012-01-01

    Teacher practice is normally assessed by observers who watch classes or videos of classes. Here, we analyse an alternative strategy that uses text transcripts and a support vector machine classifier. For each one of the 710 videos of mathematics classes from the 2005 Chilean National Teacher Assessment Programme, a single 4-minute slice was…

  5. Crafting your Elevator Pitch: Key Features of an Elevator Speech to Help You Reach the Top Floor

    EPA Science Inventory

    You never know when you will end up talking to someone who will end up helping to shape your career. Many of these chance meetings are brief and when you only get 2-3 minutes to make your case everything that you say has to count. This presentation will cover the key features o...

  6. Assessing Disfluencies in School-Age Children Who Stutter: How Much Speech Is Enough?

    ERIC Educational Resources Information Center

    Gregg, Brent A.; Sawyer, Jean

    2015-01-01

    The question of what size speech sample is sufficient to accurately identify stuttering and its myriad characteristics is a valid one. Short samples have a risk of over- or underrepresenting disfluency types or characteristics. In recent years, there has been a trend toward using shorter samples because they are less time-consuming for…

  7. Speech Rhythms and Multiplexed Oscillatory Sensory Coding in the Human Brain

    PubMed Central

    Gross, Joachim; Hoogenboom, Nienke; Thut, Gregor; Schyns, Philippe; Panzeri, Stefano; Belin, Pascal; Garrod, Simon

    2013-01-01

    Cortical oscillations are likely candidates for segmentation and coding of continuous speech. Here, we monitored continuous speech processing with magnetoencephalography (MEG) to unravel the principles of speech segmentation and coding. We demonstrate that speech entrains the phase of low-frequency (delta, theta) and the amplitude of high-frequency (gamma) oscillations in the auditory cortex. Phase entrainment is stronger in the right and amplitude entrainment is stronger in the left auditory cortex. Furthermore, edges in the speech envelope phase reset auditory cortex oscillations thereby enhancing their entrainment to speech. This mechanism adapts to the changing physical features of the speech envelope and enables efficient, stimulus-specific speech sampling. Finally, we show that within the auditory cortex, coupling between delta, theta, and gamma oscillations increases following speech edges. Importantly, all couplings (i.e., brain-speech and also within the cortex) attenuate for backward-presented speech, suggesting top-down control. We conclude that segmentation and coding of speech relies on a nested hierarchy of entrained cortical oscillations. PMID:24391472

  8. Measuring word complexity in speech screening: single-word sampling to identify phonological delay/disorder in preschool children.

    PubMed

    Anderson, Carolyn; Cohen, Wendy

    2012-01-01

    Children's speech sound development is assessed by comparing speech production with the typical development of speech sounds based on a child's age and developmental profile. One widely used method of sampling is to elicit a single-word sample along with connected speech. Words produced spontaneously rather than imitated may give a more accurate indication of a child's speech development. A published word complexity measure can be used to score later-developing speech sounds and more complex word patterns. There is a need for a screening word list that is quick to administer and reliably differentiates children with typically developing speech from children with patterns of delayed/disordered speech. To identify a short word list based on word complexity that could be spontaneously named by most typically developing children aged 3;00-5;05 years. One hundred and five children aged between 3;00 and 5;05 years from three local authority nursery schools took part in the study. Items from a published speech assessment were modified and extended to include a range of phonemic targets in different word positions in 78 monosyllabic and polysyllabic words. The 78 words were ranked both by phonemic/phonetic complexity as measured by word complexity and by ease of spontaneous production. The ten most complex words (hereafter Triage 10) were named spontaneously by more than 90% of the children. There was no significant difference between the complexity measures for five identified age groups when the data were examined in 6-month groups. A qualitative analysis revealed eight children with profiles of phonological delay or disorder. When these children were considered separately, there was a statistically significant difference (p < 0.005) between the mean word complexity measure of the group compared with the mean for the remaining children in all other age groups. 
The Triage 10 words reliably differentiated children with typically developing speech from those with delayed or disordered speech patterns. The Triage 10 words can be used as a screening tool for triage and general assessment and have the potential to monitor progress during intervention. Further testing is being undertaken to establish reliability with children referred to speech and language therapy services. © 2012 Royal College of Speech and Language Therapists.

  9. Building an Interdepartmental Major in Speech Communication.

    ERIC Educational Resources Information Center

    Litterst, Judith K.

    This paper describes a popular and innovative major program of study in speech communication at St. Cloud University in Minnesota: the Speech Communication Interdepartmental Major. The paper provides background on the program, discusses overall program requirements, presents sample student options, identifies ingredients for program success,…

  10. Automatic initial and final segmentation in cleft palate speech of Mandarin speakers

    PubMed Central

    Liu, Yin; Yin, Heng; Zhang, Junpeng; Zhang, Jing; Zhang, Jiang

    2017-01-01

The speech unit segmentation is an important pre-processing step in the analysis of cleft palate speech. In Mandarin, one syllable is composed of two parts: initial and final. In cleft palate speech, resonance disorders occur at the finals and the voiced initials, while articulation disorders occur at the unvoiced initials. Thus, the initials and finals are the minimum speech units that can reflect the characteristics of cleft palate speech disorders. In this work, an automatic initial/final segmentation method is proposed as a preprocessing step for cleft palate speech signal processing. The tested cleft palate speech utterances were collected from the Cleft Palate Speech Treatment Center in the Hospital of Stomatology, Sichuan University, which treats the largest number of cleft palate patients in China. The cleft palate speech data include 824 speech segments, and the control samples contain 228 speech segments. First, syllables are extracted from the speech utterances. The proposed syllable extraction method avoids a training stage and achieves good performance for both voiced and unvoiced speech. The syllables are then classified as having “quasi-unvoiced” or “quasi-voiced” initials, and respective initial/final segmentation methods are proposed for these two types of syllables. Moreover, a two-step segmentation method is proposed: the rough locations of the syllable and initial/final boundaries are refined in the second segmentation step, improving the robustness of the segmentation accuracy. The experiments show that initial/final segmentation accuracy is higher for syllables with quasi-unvoiced initials than for those with quasi-voiced initials. For the cleft palate speech, the mean time error is 4.4 ms for syllables with quasi-unvoiced initials and 25.7 ms for syllables with quasi-voiced initials, and the correct segmentation accuracy P30 over all syllables is 91.69%. For the control samples, P30 over all syllables is 91.24%. 
PMID:28926572
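The abstract reports P30 without defining it; a common convention for boundary-detection metrics (assumed here, not stated in the paper) is the fraction of reference boundaries matched by a detected boundary within 30 ms. A minimal sketch of that reading:

```python
def boundary_accuracy(detected, reference, tolerance=0.030):
    """Fraction of reference boundaries (in seconds) matched by a detected
    boundary within +/- tolerance (a P30-style score when tolerance = 30 ms)."""
    hits = sum(
        1 for ref in reference
        if any(abs(det - ref) <= tolerance for det in detected)
    )
    return hits / len(reference)
```

For example, `boundary_accuracy([0.102, 0.514], [0.100, 0.500, 0.900])` matches two of the three reference boundaries and returns 2/3.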

  12. [The vocal behavior of telemarketing operators before and after a working day].

    PubMed

    Amorim, Geová Oliveira de; Bommarito, Silvana; Kanashiro, Célia Akemi; Chiari, Brasilia Maria

    2011-01-01

To evaluate the vocal behavior of receptive telemarketing operators before and after the work shift, and to relate the results to gender. Participants were 55 telemarketing operators (11 men and 44 women) working in receptive mode in the city of Maceió (Alagoas, Brazil). A questionnaire was applied before the work shift to identify initial vocal complaints. Vocal samples comprising sustained emissions and connected speech were then recorded 10 minutes before and 10 minutes after the workday for later evaluation. Auditory-perceptual and acoustic analyses of voice were conducted. Vocal complaints and symptoms reported by the operators after the work shift were: dry throat (64%); neck pain (33%); hoarseness (31%); voice failure (26%); and vocal fatigue (22%). Telemarketing operators presented reduced maximum phonation time both before and after the working day (p=0.645). Data from the auditory-perceptual assessment of voice were similar in pre- and post-shift moments (p=0.645), and no difference between moments was found in the acoustic analysis data (p=0.738). Telemarketing operators report high rates of vocal symptoms after the work shift, although auditory-perceptual and acoustic assessments of voice show no pre- to post-shift differences.

  13. A new method to sample stuttering in preschool children.

    PubMed

    O'Brian, Sue; Jones, Mark; Pilowsky, Rachel; Onslow, Mark; Packman, Ann; Menzies, Ross

    2010-06-01

    This study reports a new method for sampling the speech of preschool stuttering children outside the clinic environment. Twenty parents engaged their stuttering children in an everyday play activity in the home with a telephone handset nearby. A remotely located researcher telephoned the parent and recorded the play session with a phone-recording jack attached to a digital audio recorder at the remote location. The parent placed an audio recorder near the child for comparison purposes. Children as young as 2 years complied with the remote method of speech sampling. The quality of the remote recordings was superior to that of the in-home recordings. There was no difference in means or reliability of stutter-count measures made from the remote recordings compared with those made in-home. Advantages of the new method include: (1) cost efficiency of real-time measurement of percent syllables stuttered in naturalistic situations, (2) reduction of bias associated with parent-selected timing of home recordings, (3) standardization of speech sampling procedures, (4) improved parent compliance with sampling procedures, (5) clinician or researcher on-line control of the acoustic and linguistic quality of recordings, and (6) elimination of the need to lend equipment to parents for speech sampling.

  14. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    PubMed

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
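Percent syllables stuttered is a simple ratio, which makes the reported ~18% rise easy to interpret: it is a relative increase in the count of stuttered syllables, with the total syllable count unchanged. A sketch with illustrative numbers (not the study's data):

```python
def percent_syllables_stuttered(stuttered_syllables, total_syllables):
    """%SS: stuttered syllables as a percentage of all syllables spoken."""
    return 100.0 * stuttered_syllables / total_syllables

# Hypothetical sample: 1000 syllables, 50 stutters counted from audio only,
# 59 counted when the same sample is scored audiovisually.
audio_only = percent_syllables_stuttered(50, 1000)    # 5.0 %SS
audiovisual = percent_syllables_stuttered(59, 1000)   # 5.9 %SS
relative_increase = (audiovisual - audio_only) / audio_only  # ~0.18
```

The denominator is identical in both modes, so the %SS difference is driven entirely by the stutter count, matching the study's attribution.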

  15. Expressed emotion is not associated with disorder severity in first-episode mental disorder.

    PubMed

    Heikkilä, Jyrki; Karlsson, Hasse; Taiminen, Tero; Lauerma, Hannu; Ilonen, Tuula; Leinonen, Kirsi-Marja; Wallenius, Elina; Virtanen, Hilkka; Heinimaa, Markus; Koponen, Salla; Jalo, Päivi; Kaljonen, Anne; Salakangas, Raimo K R

    2002-08-30

    A family atmosphere characterized by expressed emotion (EE) is a robust predictor of clinical outcome of patients with schizophrenia and mood disorders. However, there is ongoing discussion as to whether EE is more a cause of clinical outcome or a parental reaction to disorder severity. This cross-sectional study examines a sample of 42 consecutive first-episode patients from a defined geographical area with severe mental disorders (schizophrenia-related disorders, psychotic mood disorders, and non-psychotic mood disorders). Their 42 relatives were interviewed, and the relationships between EE variables derived with the five-minute speech sample method (FMSS) and the patients' demographic, premorbid and clinical measures were analyzed. A high EE score was found in 40% of the relatives. High EE was associated with the interviewed relative's not being a spouse and the patient's being young and unmarried. It was not associated with premorbid characteristics, symptom dimensions or the diagnostic group of the patient. These results do not support the hypothesis that EE is a reaction to the clinical features of the patient. Instead, demographic factors may partly mediate the effect of EE on prognosis.

  16. Shame and guilt/self-blame as predictors of expressed emotion in family members of patients with schizophrenia

    PubMed Central

    Wasserman, Stephanie; Weisman de Mamani, Amy; Suro, Giulia

    2012-01-01

    Expressed emotion (EE) is a measure of the family environment reflecting the amount of criticism and emotional over-involvement expressed by a key relative towards a family member with a disorder or impairment. Patients from high EE homes have a poorer illness prognosis than do patients from low EE homes. Despite EE's well-established predictive validity, questions remain regarding why some family members express high levels of EE attitudes while others do not. Based on indirect evidence from previous research, the current study tested whether shame and guilt/self-blame about having a relative with schizophrenia serve as predictors of EE. A sample of 72 family members of patients with schizophrenia completed the Five Minute Speech Sample to measure EE, along with questionnaires assessing self-directed emotions. In line with the hypotheses, higher levels of both shame and guilt/self-blame about having a relative with schizophrenia predicted high EE. Results of the current study elucidate the EE construct and have implications for working with families of patients with schizophrenia. PMID:22357355

  17. Discourse Analysis of the Political Speeches of the Ousted Arab Presidents during the Arab Spring Revolution Using Halliday and Hasan's Framework of Cohesion

    ERIC Educational Resources Information Center

    Al-Majali, Wala'

    2015-01-01

    This study is designed to explore the salient linguistic features of the political speeches of the ousted Arab presidents during the Arab Spring Revolution. The sample of the study is composed of seven political speeches delivered by the ousted Arab presidents during the period from December 2010 to December 2012. Three speeches were delivered by…

  18. The Prompt Book for...Teaching the Art of Speech and Drama To Children: A Resource Guide for Teachers of Children in the Art of Speech and Drama.

    ERIC Educational Resources Information Center

    Dugger, Anita; And Others

    Providing for individual differences in ability, interest, and cultural values among students, this guide contains resources, goals, objectives, sample lesson plans, and activities for teaching speech and drama to elementary school students. The first section of the guide offers advice on the organization of a speech arts curriculum, approaches to…

  19. The Influence of Native Language on Auditory-Perceptual Evaluation of Vocal Samples Completed by Brazilian and Canadian SLPs.

    PubMed

    Chaves, Cristiane Ribeiro; Campbell, Melanie; Côrtes Gama, Ana Cristina

    2017-03-01

This study aimed to determine the influence of native language on the auditory-perceptual assessment of voice, as completed by Brazilian and Anglo-Canadian listeners using Brazilian vocal samples and the grade, roughness, breathiness, asthenia, strain (GRBAS) scale. This is an analytical, observational, comparative, cross-sectional study conducted at the Speech Language Pathology Department of the Federal University of Minas Gerais in Brazil and at the Communication Sciences and Disorders Department of the University of Alberta in Canada. The GRBAS scale, connected speech, and a sustained vowel were used. The vocal samples were drawn randomly from a database of recorded speech of Brazilian adults, some with healthy voices and some with voice disorders, housed at the Federal University of Minas Gerais. Forty-six samples of connected speech (recitation of the days of the week), produced by 35 women and 11 men, and 46 samples of the sustained vowel /a/, produced by 37 women and 9 men, were used. The listeners were two groups of three speech therapists each, divided by nationality (Brazilian or Anglo-Canadian) and matched on years of professional experience. Weighted kappa with 95% confidence intervals was used to calculate intra- and inter-rater agreement. The intra-rater analysis showed that Brazilians and Canadians had similar results in the auditory-perceptual evaluation of both the sustained vowel and connected speech. For inter-rater agreement on connected speech and the sustained vowel, respectively, Brazilians and Canadians had moderate agreement on overall severity (0.57 and 0.50), breathiness (0.45 and 0.45), and asthenia (0.50 and 0.46), and poor agreement on roughness (0.19 and 0.007); agreement on strain was weak for connected speech (0.22) but moderate for the sustained vowel (0.50). 
In general, auditory-perceptual evaluation is not influenced by the native language on most dimensions of the perceptual parameters of the GRBAS scale. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
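The agreement values above are weighted kappa statistics, which discount chance agreement and penalize disagreements by their distance on the ordinal rating scale. A minimal pure-Python sketch (quadratic weights assumed; the paper does not state its weighting scheme):

```python
def weighted_kappa(rater1, rater2, n_levels, weights="quadratic"):
    """Weighted Cohen's kappa for two raters' ordinal scores in 0..n_levels-1."""
    n = len(rater1)
    # Observed joint rating distribution.
    obs = [[0.0] * n_levels for _ in range(n_levels)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1.0 / n
    # Marginals give the chance-expected distribution p1[i] * p2[j].
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(n_levels)) for j in range(n_levels)]
    num = den = 0.0
    for i in range(n_levels):
        for j in range(n_levels):
            if weights == "quadratic":
                w = (i - j) ** 2 / (n_levels - 1) ** 2
            else:  # linear weights
                w = abs(i - j) / (n_levels - 1)
            num += w * obs[i][j]       # weighted observed disagreement
            den += w * p1[i] * p2[j]   # weighted chance disagreement
    return 1.0 - num / den
```

Perfect agreement yields 1.0; systematic maximal disagreement drives the value negative, matching the usual interpretation of the kappa bands cited above.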

  20. The Speech of Hyperactive Children and Their Mothers: Comparison with Normal Children and Stimulant Drug Effects.

    ERIC Educational Resources Information Center

    Barkley, Russell A.; And Others

    1983-01-01

    Verbal interactions of 18 hyperactive boys (8 to 11 years old) with their mothers during 15-minute free play and task periods were studied and compared to interactions of 18 normal boys with their mothers. Both hyperactive boys and their mothers were found to use significantly more utterances in free play than normal mother-child dyads.…

  1. Speech Intelligibility in Severe Adductor Spasmodic Dysphonia

    ERIC Educational Resources Information Center

    Bender, Brenda K.; Cannito, Michael P.; Murry, Thomas; Woodson, Gayle E.

    2004-01-01

    This study compared speech intelligibility in nondisabled speakers and speakers with adductor spasmodic dysphonia (ADSD) before and after botulinum toxin (Botox) injection. Standard speech samples were obtained from 10 speakers diagnosed with severe ADSD prior to and 1 month following Botox injection, as well as from 10 age- and gender-matched…

  2. Movement of the velum during speech and singing in classically trained singers.

    PubMed

    Austin, S F

    1997-06-01

The present study addresses two questions: (a) Is the action and/or posture of the velopharyngeal valve conducive to significant resonance during classical singing in the Western tradition? (b) How do the actions of the velopharyngeal valve observed in this style of singing compare with those in normal speech? A photodetector system was used to observe the area function of the velopharyngeal port during speech and classical-style singing. Identical speech samples were produced by each subject in a normal speaking voice and then in the low, medium, and high singing ranges. Results indicate that in these four singers the velopharyngeal port was closed significantly longer in singing than in speaking samples. The amount of time the velopharyngeal port was open was greatest in speech and diminished as the singer ascended in pitch. In the high voice condition, little or no opening of the velopharyngeal port was measured.

  3. Listeners' Perceptions of Speech and Language Disorders

    ERIC Educational Resources Information Center

    Allard, Emily R.; Williams, Dale F.

    2008-01-01

    Using semantic differential scales with nine trait pairs, 445 adults rated five audio-taped speech samples, one depicting an individual without a disorder and four portraying communication disorders. Statistical analyses indicated that the no disorder sample was rated higher with respect to the trait of employability than were the articulation,…

  4. Fluency variation in adolescents.

    PubMed

    Furquim de Andrade, Claudia Regina; de Oliveira Martins, Vanessa

    2007-10-01

The speech fluency profile of fluent adolescent speakers of Brazilian Portuguese was examined with respect to gender and neurolinguistic variation. Speech samples of 130 male and female adolescents, aged between 12;0 and 17;11 years, were gathered and analysed according to type of speech disruption, speech rate, and frequency of speech disruptions. Statistical analysis found no significant differences between genders for the variables studied. However, regarding the phases of adolescence (early: 12;0-14;11 years; late: 15;0-17;11 years), statistical differences were observed for all of the variables. As for neurolinguistic maturation, a decrease in the number of speech disruptions and an increase in speech rate occurred during the final phase of adolescence, indicating that the maturation of motor and linguistic processes influences the speech fluency profile.
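The fluency variables named above reduce to simple rates over the duration of the sample. A sketch of how such a profile can be tabulated (the field names are illustrative, not taken from the protocol):

```python
def fluency_profile(word_count, syllable_count, disruptions, duration_seconds):
    """Basic fluency-profile rates computed from counts over a timed sample."""
    minutes = duration_seconds / 60.0
    return {
        "words_per_minute": word_count / minutes,
        "syllables_per_minute": syllable_count / minutes,
        "disruptions_per_100_words": 100.0 * disruptions / word_count,
    }

# A 2-minute sample with 200 words, 320 syllables, and 10 disruptions
# yields 100 wpm, 160 spm, and 5 disruptions per 100 words.
profile = fluency_profile(200, 320, 10, 120)
```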

  5. Comparison of voice-automated transcription and human transcription in generating pathology reports.

    PubMed

    Al-Aynati, Maamoun M; Chorneyko, Katherine A

    2003-06-01

Software that can convert spoken words into written text has been available since the early 1980s. Early continuous speech systems were developed in 1994, with the latest commercially available editions claiming speech recognition accuracy of up to 98% at natural speech rates. To evaluate the efficacy of one commercially available voice-recognition software system with pathology vocabulary in generating pathology reports and to compare this with human transcription. To draw cost analysis conclusions regarding human versus computer-based transcription. Two hundred six routine pathology reports from the surgical pathology material handled at St Joseph's Healthcare, Hamilton, Ontario, were generated simultaneously using computer-based transcription and human transcription. The following hardware and software were used: a desktop 450-MHz Intel Pentium III processor with 192 MB of RAM, a speech-quality sound card (Sound Blaster), noise-canceling headset microphone, and IBM ViaVoice Pro version 8 with pathology vocabulary support (Voice Automated, Huntington Beach, Calif). The cost of the hardware and software used was approximately Can$2250. A total of 23 458 words were transcribed using both methods, with a mean of 114 words per report. The mean accuracy rate was 93.6% (range, 87.4%-96%) using the computer software, compared to a mean accuracy of 99.6% (range, 99.4%-99.8%) for human transcription (P <.001). Time needed to edit documents by the primary evaluator (M.A.) using the computer was on average twice that needed for editing the documents produced by human transcriptionists (range, 1.4-3.5 times). The extra time needed to edit documents was 67 minutes per week (13 minutes per day). Computer-based continuous speech-recognition systems in pathology can be successfully used in pathology practice even during the handling of gross pathology specimens. 
The relatively low accuracy rate of this voice-recognition software with resultant increased editing burden on pathologists may not encourage its application on a wide scale in pathology departments with sufficient human transcription services, despite significant potential financial savings. However, computer-based transcription represents an attractive and relatively inexpensive alternative to human transcription in departments where there is a shortage of transcription services, and will no doubt become more commonly used in pathology departments in the future.
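A transcription accuracy rate like the 93.6% above is typically computed from a word-level edit distance between the reference text and the transcript (the complement of word error rate); the study does not specify its exact metric, so the following is a sketch of that standard approach:

```python
def word_accuracy(reference, hypothesis):
    """Word-level accuracy as 1 - WER, with WER from Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return 1.0 - dp[len(ref)][len(hyp)] / len(ref)
```

One substituted word in a four-word reference, for instance, gives an accuracy of 0.75.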

  6. Comparing Measures of Voice Quality From Sustained Phonation and Continuous Speech.

    PubMed

    Gerratt, Bruce R; Kreiman, Jody; Garellek, Marc

    2016-10-01

The question of whether a sustained vowel or continuous speech is the better utterance type for voice quality analysis has been extensively studied, but with equivocal results. This study examines whether previously reported differences derive from the articulatory and prosodic factors occurring in continuous speech versus sustained phonation. Speakers with voice disorders sustained vowels and read sentences. Vowel samples were excerpted from the steadiest portion of each vowel in the sentences. In addition to sustained and excerpted vowels, a third set of stimuli was created by shortening sustained vowel productions to match the duration of vowels excerpted from continuous speech. Acoustic measures were made on the stimuli, and listeners judged the severity of vocal quality deviation. Sustained vowels and those extracted from continuous speech contain essentially the same acoustic and perceptual information about vocal quality deviation. Perceived and/or measured differences between continuous speech and sustained vowels derive largely from voice source variability across segmental and prosodic contexts, not from variations in vocal fold vibration in the quasi-steady portion of the vowels. Approaches to voice quality assessment that use continuous speech samples average across utterances and may not adequately quantify the variability they are intended to assess.

  7. Impairments of speech fluency in Lewy body spectrum disorder.

    PubMed

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Development of early comprehensive stroke inpatient rehabilitation in Poland - current status and future requirements.

    PubMed

    Sarzyńska-Długosz, Iwona; Krawczyk, Maciej; Członkowska, Anna

    2011-01-01

Every stroke patient should undergo early rehabilitation. We aimed to evaluate the accessibility, development and needs of early stroke inpatient rehabilitation in Poland. A questionnaire evaluating rehabilitation departments was prepared and sent (in 2004 and 2008) to rehabilitation wards in Poland where stroke patients are treated and undergo early rehabilitation. We divided departments into classes: class A - comprehensive rehabilitation (physiotherapy minimum 60 minutes/day, speech therapy minimum 30 minutes/5 days/week, rehabilitation of other cognitive impairments minimum 30 minutes/5 days/week, group physiotherapy); B - all types of therapy available, but delivered less frequently; C - physiotherapy and speech therapy; D - physiotherapy and cognitive rehabilitation; E - physiotherapy only. In 2004, we obtained responses from 115 of 172 (66.9%) rehabilitation departments. According to the prespecified criteria there were 11 class A, 31 class B, 28 class C, 4 class D, and 41 class E wards. In 2008, we received responses from 89 of 149 (59.7%) rehabilitation departments: 17 class A, 40 class B, 22 class C, 0 class D, and 10 class E wards. In 2004, 159 beds, and in 2008, 294 beds, were available for stroke patients in class A departments. The minimal number of needed but lacking beds was 604 in 2004 and 469 in 2008. The growth in departments providing early comprehensive stroke rehabilitation from 2004 to 2008 is marked but still insufficient: in 2008, only 19% of rehabilitation departments could provide comprehensive stroke rehabilitation, amounting to 38.5% of the beds actually needed.

  9. A Wavelet Model for Vocalic Speech Coarticulation

    DTIC Science & Technology

    1994-10-01

control vowel’s signal as the mother wavelet. A practical experiment is conducted to evaluate the coarticulation channel using samples of real speech...transformation from a control speech state (input) to an effected speech state (output). Specifically, a vowel produced in isolation is transformed into an...the wavelet transform of the effected vowel’s signal, using the control vowel’s signal as the mother wavelet. A practical experiment is conducted to

  10. White noise speech illusion and psychosis expression: An experimental investigation of psychosis liability.

    PubMed

    Pries, Lotta-Katrin; Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Wichers, Marieke; Simons, Claudia J P; Rutten, Bart P F; van Os, Jim

    2017-01-01

An association between white noise speech illusion and psychotic symptoms has been reported in patients and their relatives. This supports the theory that bottom-up and top-down perceptual processes are involved in the mechanisms underlying perceptual abnormalities. However, findings in nonclinical populations have been conflicting. The aim of this study was to examine the association between white noise speech illusion and subclinical expression of psychotic symptoms in a nonclinical sample. Findings were compared to previous results to investigate potential methodology dependent differences. In a general population adolescent and young adult twin sample (n = 704), the association between white noise speech illusion and subclinical psychotic experiences, using the Structured Interview for Schizotypy-Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE), was analyzed using multilevel logistic regression analyses. Perception of any white noise speech illusion was not associated with either positive or negative schizotypy in the general population twin sample, using the method of Galdos et al. (2011) (positive: adjusted OR 0.82, 95% CI 0.6-1.12, p = 0.217; negative: adjusted OR 0.75, 95% CI 0.56-1.02, p = 0.065) or the method of Catalan et al. (2014) (positive: adjusted OR 1.11, 95% CI 0.79-1.57, p = 0.557). No association was found between CAPE scores and speech illusion (adjusted OR 1.25, 95% CI 0.88-1.79, p = 0.220). For the Catalan et al. (2014) but not the Galdos et al. (2011) method, a negative association was apparent between positive schizotypy and speech illusion with positive or negative affective valence (adjusted OR 0.44, 95% CI 0.24-0.81, p = 0.008). Contrary to findings in clinical populations, white noise speech illusion may not be associated with psychosis proneness in nonclinical populations.
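The adjusted odds ratios and confidence intervals above come from logistic regression on the log-odds scale; an OR and its interval are obtained by exponentiating the coefficient and its bounds. A minimal sketch (the function name and the 1.96 critical value for a 95% CI are illustrative conventions, not taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio with a 95% CI from a logistic-regression coefficient
    (log-odds) and its standard error."""
    return (math.exp(beta),           # point estimate
            math.exp(beta - z * se),  # lower bound
            math.exp(beta + z * se))  # upper bound
```

A coefficient of 0 maps to an OR of 1, so any CI straddling 1 on the OR scale (as in most results above) corresponds to a non-significant coefficient.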

  11. Effect of 24 hours of sleep deprivation on auditory and linguistic perception: a comparison among young controls, sleep-deprived participants, dyslexic readers, and aging adults.

    PubMed

    Fostick, Leah; Babkoff, Harvey; Zukerman, Gil

    2014-06-01

    To test the effects of 24 hr of sleep deprivation on auditory and linguistic perception and to assess the magnitude of this effect by comparing such performance with that of aging adults on speech perception and with that of dyslexic readers on phonological awareness. Fifty-five sleep-deprived young adults were compared with 29 aging adults (older than 60 years) and with 18 young controls on auditory temporal order judgment (TOJ) and on speech perception tasks (Experiment 1). The sleep deprived were also compared with 51 dyslexic readers and with the young controls on TOJ and phonological awareness tasks (One-Minute Test for Pseudowords, Phoneme Deletion, Pig Latin, and Spoonerism; Experiment 2). Sleep deprivation resulted in longer TOJ thresholds, poorer speech perception, and poorer nonword reading compared with controls. The TOJ thresholds of the sleep deprived were comparable to those of the aging adults, but their pattern of speech performance differed. They also performed better on TOJ and phonological awareness than dyslexic readers. A variety of linguistic skills are affected by sleep deprivation. The comparison of sleep-deprived individuals with other groups with known difficulties in these linguistic skills might suggest that different groups exhibit common difficulties.

  12. Singing can improve speech function in aphasics associated with intact right basal ganglia and preserve right temporal glucose metabolism: Implications for singing therapy indication.

    PubMed

    Akanuma, Kyoko; Meguro, Kenichi; Satoh, Masayuki; Tashiro, Manabu; Itoh, Masatoshi

    2016-01-01

    Clinically, we know that some aphasic patients can sing well despite their speech disturbances. Herein, we report ten patients with non-fluent aphasia who complained of difficulty finding words, half of whom improved their speech function after singing training. All had lesions in the left basal ganglia or temporal lobe. The patients selected melodies they knew well but could not sing, and we set new lyrics, built from words each patient could not name, to these familiar melodies. Singing training using the new lyrics was performed for 30 minutes once a week for 10 weeks, and speech function was assessed with language tests before and after training. At baseline, six patients underwent positron emission tomography to evaluate glucose metabolism. Five patients improved after the intervention; all but one had intact right basal ganglia and left temporal lobes, but all had left basal ganglia lesions. Among them, three showed preserved glucose metabolism in the right temporal lobe. We suggest that intact right basal ganglia and left temporal lobes, together with preserved right-hemispheric glucose metabolism, might indicate that singing therapy will be effective.

  13. The Role of Threat Level and Intolerance of Uncertainty (IU) in Anxiety: An Experimental Test of IU Theory.

    PubMed

    Oglesby, Mary E; Schmidt, Norman B

    2017-07-01

    Intolerance of uncertainty (IU) has been proposed as an important transdiagnostic variable within mood- and anxiety-related disorders. The extant literature suggests that individuals high in IU interpret uncertainty more negatively. Furthermore, theoretical models of IU posit that those elevated in IU may experience an uncertain threat as more anxiety-provoking than a certain threat. However, no research to date has experimentally manipulated the certainty of an impending threat while utilizing an in vivo stressor. In the current study, undergraduate participants (N = 79) were randomized to one of two conditions: certain threat (participants were told that later in the study they would give a 3-minute speech) or uncertain threat (participants were told that later in the study they would flip a coin to determine whether or not they would give a 3-minute speech). Participants also completed self-report questionnaires measuring their baseline state anxiety, baseline trait IU, and prespeech state anxiety. Results indicated that trait IU was associated with greater state anticipatory anxiety when the prospect of giving a speech was made uncertain (i.e., the uncertain condition). However, there was no significant difference in anticipatory state anxiety among individuals high in IU between the uncertain and certain threat conditions, and no significant interaction between condition and trait IU in predicting state anticipatory anxiety. This investigation is the first to test a crucial component of IU theory while utilizing an ecologically valid paradigm. Results are discussed in terms of theoretical models of IU and directions for future work.

  14. Production Variability and Single Word Intelligibility in Aphasia and Apraxia of Speech

    ERIC Educational Resources Information Center

    Haley, Katarina L.; Martin, Gwenyth

    2011-01-01

    This study was designed to estimate test-retest reliability of orthographic speech intelligibility testing in speakers with aphasia and AOS and to examine its relationship to the consistency of speaker and listener responses. Monosyllabic single word speech samples were recorded from 13 speakers with coexisting aphasia and AOS. These words were…

  15. Audiovisual Matching in Speech and Nonspeech Sounds: A Neurodynamical Model

    ERIC Educational Resources Information Center

    Loh, Marco; Schmid, Gabriele; Deco, Gustavo; Ziegler, Wolfram

    2010-01-01

    Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult…

  16. Phonology and Vocal Behavior in Toddlers with Autism Spectrum Disorders

    PubMed Central

    Schoen, Elizabeth; Paul, Rhea; Chawarska, Katarzyna

    2011-01-01

    The purpose of this study is to examine the phonological and other vocal productions of children, 18-36 months, with autism spectrum disorder (ASD) and to compare these productions to those of age-matched and language-matched controls. Speech samples were obtained from 30 toddlers with ASD, 11 age-matched toddlers and 23 language-matched toddlers during either parent-child or clinician-child play sessions. Samples were coded for a variety of speech-like and non-speech vocalization productions. Toddlers with ASD produced speech-like vocalizations similar to those of language-matched peers, but produced significantly more atypical non-speech vocalizations when compared to both control groups. Toddlers with ASD show speech-like sound production that is linked to their language level, in a manner similar to that seen in typical development. The main area of difference in vocal development in this population is in the production of atypical vocalizations. Findings suggest that toddlers with autism spectrum disorders might not tune into the language model of their environment. Failure to attend to the ambient language environment negatively impacts the ability to acquire spoken language. PMID:21308998

  17. Emotional and physiological responses of fluent listeners while watching the speech of adults who stutter.

    PubMed

    Guntupalli, Vijaya K; Everhart, D Erik; Kalinowski, Joseph; Nanjundeswaran, Chayadevie; Saltuklaroglu, Tim

    2007-01-01

    People who stutter produce speech that is characterized by intermittent, involuntary part-word repetitions and prolongations. In addition to these signature acoustic manifestations, those who stutter often display repetitive and fixated behaviours outside the speech producing mechanism (e.g. in the head, arm, fingers, nares, etc.). Previous research has examined the attitudes and perceptions of those who stutter and people who frequently interact with them (e.g. relatives, parents, employers). Results have shown an unequivocal, powerful and robust negative stereotype despite a lack of defined differences in personality structure between people who stutter and normally fluent individuals. However, physiological investigations of listener responses during moments of stuttering are limited. There is a need for data that simultaneously examine physiological responses (e.g. heart rate and galvanic skin conductance) and subjective behavioural responses to stuttering. The pairing of these objective and subjective data may provide information that casts light on the genesis of negative stereotypes associated with stuttering, the development of compensatory mechanisms in those who stutter, and the true impact of stuttering on senders and receivers alike. The aim was to compare the emotional and physiological responses of fluent speakers while listening to and observing fluent and severely stuttered speech samples. Twenty adult participants (mean age = 24.15 years, standard deviation = 3.40) observed speech samples of two fluent speakers and two speakers who stutter reading aloud. Participants' skin conductance and heart rate changes were measured as physiological responses to stuttered or fluent speech samples. Participants' subjective responses on arousal (excited-calm) and valence (happy-unhappy) dimensions were assessed via the Self-Assessment Manikin (SAM) rating scale with an additional questionnaire comprising a set of nine bipolar adjectives. Results showed significantly increased skin conductance and lower mean heart rate during the presentation of stuttered speech relative to the presentation of fluent speech samples (p < 0.05). Listeners also rated themselves as being more aroused, unhappy, nervous, uncomfortable, sad, tense, unpleasant, avoiding, embarrassed, and annoyed while viewing stuttered speech relative to the fluent speech. These data support the notion that stutter-filled speech can elicit physiological and emotional responses in listeners. Clinicians who treat stuttering should be aware that listeners show involuntary physiological responses to moderate-severe stuttering that probably remain salient over time and contribute to the evolution of negative stereotypes of people who stutter. With this in mind, it is hoped that clinicians can work with people who stutter to develop appropriate coping strategies. The role of the amygdala and mirror neural mechanisms in physiological and subjective responses to stuttering is discussed.

  18. Decoding spectrotemporal features of overt and covert speech from the human cortex

    PubMed Central

    Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2014-01-01

    Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticography recordings from epileptic patients performing an out loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and post-central gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
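The covert-condition evaluation described above (realign a reconstructed trace to its reference with dynamic time warping, then correlate) can be sketched for one-dimensional feature traces. This is a plain textbook DTW, not the authors' pipeline, which operated on multichannel spectrotemporal features:

```python
# Sketch: align a reconstructed feature trace to a reference with dynamic time
# warping (DTW), then score the aligned pair with Pearson correlation.
# Textbook DTW on 1-D sequences; illustrative synthetic data only.
import numpy as np

def dtw_path(x, y):
    """Return index pairs of the minimum-cost alignment between x and y."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # backtrack from the corner to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

rng = np.random.default_rng(1)
ref = np.sin(np.linspace(0, 6, 80))                    # "overt" reference trace
recon = np.sin(np.linspace(0, 6, 60)) + 0.1 * rng.normal(size=60)  # compressed

path = dtw_path(ref, recon)
i_idx, j_idx = np.array(path).T
r = np.corrcoef(ref[i_idx], recon[j_idx])[0, 1]
print(f"aligned correlation r = {r:.2f}")
```

Without the warping step, the direct correlation of the two unequal-length traces would not even be defined, which is why the alignment precedes scoring.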

  19. High-frequency energy in singing and speech

    NASA Astrophysics Data System (ADS)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
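A first-pass version of the acoustical analysis described, measuring how much of a signal's energy lies above roughly 5 kHz, can be sketched with a plain FFT. The signal and cutoff below are invented for illustration; real analyses would use calibrated high-fidelity recordings:

```python
# Sketch: fraction of signal energy above a 5 kHz cutoff, via the FFT.
# The synthetic "voice" is a strong low-frequency partial plus a weak
# high-frequency component; purely illustrative.
import numpy as np

def high_frequency_energy_ratio(signal, fs, cutoff_hz=5000.0):
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    energy = np.abs(spectrum) ** 2
    return energy[freqs >= cutoff_hz].sum() / energy.sum()

fs = 44100
t = np.arange(fs) / fs                      # 1 second of samples
low = np.sin(2 * np.pi * 220 * t)           # strong partial at 220 Hz
high = 0.1 * np.sin(2 * np.pi * 8000 * t)   # weak energy near 8 kHz
ratio = high_frequency_energy_ratio(low + high, fs)
print(f"fraction of energy above 5 kHz: {ratio:.4f}")
```

As the abstract notes, this high-frequency fraction is typically small, which is one reason the band has been neglected despite carrying perceptual information.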

  20. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments

    PubMed Central

    Goswami, Usha; Cumming, Ruth; Chait, Maria; Huss, Martina; Mead, Natasha; Wilson, Angela M.; Barnes, Lisa; Fosker, Tim

    2016-01-01

    Here we use two filtered speech tasks to investigate children’s processing of slow (<4 Hz) versus faster (∼33 Hz) temporal modulations in speech. We compare groups of children with either developmental dyslexia (Experiment 1) or speech and language impairments (SLIs, Experiment 2) to groups of typically-developing (TD) children age-matched to each disorder group. Ten nursery rhymes were filtered so that their modulation frequencies were either low-pass filtered (<4 Hz) or band-pass filtered (22–40 Hz). Recognition of the filtered nursery rhymes was tested in a picture recognition multiple choice paradigm. Children with dyslexia aged 10 years showed equivalent recognition overall to TD controls for both the low-pass and band-pass filtered stimuli, but showed significantly impaired acoustic learning during the experiment from low-pass filtered targets. Children with oral SLIs aged 9 years showed significantly poorer recognition of band-pass filtered targets compared to their TD controls, and showed comparable acoustic learning effects to TD children during the experiment. The SLI samples were also divided into children with and without phonological difficulties. The children with both SLI and phonological difficulties were impaired in recognizing both kinds of filtered speech. These data are suggestive of impaired temporal sampling of the speech signal at different modulation rates by children with different kinds of developmental language disorder. Both SLI and dyslexic samples showed impaired discrimination of amplitude rise times. Implications of these findings for a temporal sampling framework for understanding developmental language disorders are discussed. PMID:27303348
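The stimulus manipulation described, keeping only slow (<4 Hz) or faster (22–40 Hz) amplitude modulations, can be sketched by filtering an amplitude envelope in the frequency domain. This brick-wall FFT filter is a simplified stand-in for the processing used to build real filtered-speech stimuli:

```python
# Sketch: isolate slow (<4 Hz) vs faster (22-40 Hz) amplitude modulations of
# an envelope with an FFT brick-wall band filter. Simplified stand-in for the
# filtered-speech stimuli described in the abstract; synthetic envelope only.
import numpy as np

def filter_envelope(envelope, fs, lo_hz, hi_hz):
    spec = np.fft.rfft(envelope - envelope.mean())     # drop the DC offset
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0      # zero bins outside band
    return np.fft.irfft(spec, n=len(envelope))

fs = 1000                                   # envelope sample rate, Hz
t = np.arange(4 * fs) / fs                  # 4 seconds
# envelope with a 2 Hz (syllable-rate) and a 30 Hz (phoneme-rate) component
env = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 30 * t)

slow = filter_envelope(env, fs, 0.0, 4.0)    # keeps only the 2 Hz component
fast = filter_envelope(env, fs, 22.0, 40.0)  # keeps only the 30 Hz component
print(np.std(slow), np.std(fast))
```

Each output retains only the modulation rate inside its band, mirroring the low-pass versus band-pass nursery-rhyme conditions.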

  1. Determining stability in connected speech in primary progressive aphasia and Alzheimer's disease.

    PubMed

    Beales, Ashleigh; Whitworth, Anne; Cartwright, Jade; Panegyres, Peter K; Kane, Robert T

    2018-03-08

    Using connected speech to assess progressive language disorders is confounded by uncertainty around whether connected speech is stable over successive sampling, and therefore representative of an individual's performance, and whether some contexts and/or language behaviours show greater stability than others. A repeated-measures, within-groups research design was used to investigate stability of a range of behaviours in the connected speech of six individuals with primary progressive aphasia and three individuals with Alzheimer's disease. Stability was evaluated, at a group and individual level, across three samples, collected over 3 weeks, involving everyday monologue, narrative and picture description, and analysed for lexical content, fluency and communicative informativeness and efficiency. Excellent and significant stability was found on the majority of measures, at a group and individual level, across all genres, with isolated measures (e.g. noun use, communicative efficiency) showing good, though more variable, stability within one of the three genres. Findings provide evidence of stability on measures of lexical content, fluency and communicative informativeness and efficiency. While preliminary evidence suggests that task selection is influential when considering stability of particular connected speech measures, replication over a larger sample is necessary to reproduce findings.

  2. Minutes: Annual Meeting of the President's Committee on Employment of the Handicapped (Washington, D.C., May 1-2, 1969).

    ERIC Educational Resources Information Center

    President's Committee on Employment of the Handicapped, Washington, DC.

    Reporting the events of the meeting of the President's Committee on Employment of the Handicapped, the text includes speeches by Chairman Russell, Senator Bob Dole, Secretary of Labor Schultz, Rene Carpenter, Mr. Lustenberger of the W.T. Grant Company, W.F. Schnitzler of the AFL-CIO, Mrs. Koontz of the Department of Labor, Dr. Harlem, President of…

  3. Submental island pedicled flap vs radial forearm free flap for oral reconstruction: comparison of outcomes.

    PubMed

    Paydarfar, Joseph A; Patel, Urjeet A

    2011-01-01

    To compare intraoperative, postoperative, and functional results of submental island pedicled flap (SIPF) against radial forearm free flap (RFFF) reconstruction for tongue and floor-of-mouth reconstruction. Multi-institutional retrospective review. Academic tertiary referral center. Consecutive patients from February 2003 to December 2009 undergoing resection of oral tongue or floor of mouth followed by reconstruction with SIPF or RFFF. Two groups: SIPF vs RFFF. Duration of operation, hospital stay, surgical complications, and speech and swallowing function. The study included 60 patients, 27 with SIPF reconstruction and 33 with RFFF reconstruction. Sex, age, and TNM stage were similar for both groups. Mean flap size was smaller for SIPF (36 cm²) than for RFFF (50 cm²) (P < .001). Patients undergoing SIPF reconstruction had shorter operations (mean, 8 hours 44 minutes vs 13 hours 00 minutes; P < .001) and shorter hospitalization (mean, 10.6 days vs 14.0 days; P < .008) compared with patients who underwent RFFF. Donor site, flap-related, and other surgical complications were comparable between groups, as was speech and swallowing function. Reconstruction of oral cavity defects with the SIPF results in shorter operative time and hospitalization without compromising functional outcomes. The SIPF may be a preferable option in reconstruction of oral cavity defects less than 40 cm².

  4. Post-treatment speech naturalness of comprehensive stuttering program clients and differences in ratings among listener groups.

    PubMed

    Teshima, Shelli; Langevin, Marilyn; Hagler, Paul; Kully, Deborah

    2010-03-01

    The purposes of this study were to investigate naturalness of the post-treatment speech of Comprehensive Stuttering Program (CSP) clients and differences in naturalness ratings by three listener groups. Listeners were 21 student speech-language pathologists, 9 community members, and 15 listeners who stutter. Listeners rated perceptually fluent speech samples of CSP clients obtained immediately post-treatment (Post) and at 5-year follow-up (F5), and speech samples of matched typically fluent (TF) speakers. A 9-point interval rating scale was used. A 3 (listener group) × 2 (time) × 2 (speaker) mixed ANOVA was used to test for differences among mean ratings. The difference between CSP Post and F5 mean ratings was statistically significant. The F5 mean rating was within the range reported for typically fluent speakers. Student speech-language pathologists were found to be less critical than community members and listeners who stutter in rating naturalness; however, there were no significant differences in ratings made by community members and listeners who stutter. Results indicate that the naturalness of post-treatment speech of CSP clients improves in the post-treatment period and that it is possible for clients to achieve levels of naturalness that appear to be acceptable to adults who stutter and that are within the range of naturalness ratings given to typically fluent speakers. Readers will be able to (a) summarize key findings of studies that have investigated naturalness ratings, and (b) interpret the naturalness ratings of Comprehensive Stuttering Program speaker samples and the ratings made by the three listener groups in this study.

  5. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity

    PubMed Central

    Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh

    2016-01-01

    Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814

  6. Using the Electrocorticographic Speech Network to Control a Brain-Computer Interface in Humans

    PubMed Central

    Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin

    2013-01-01

    Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical signals investigated and utilized for device control in humans have been those from sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulations can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within high gamma frequency oscillations and enabled users to achieve final target accuracies between 68% and 91% within 15 minutes. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1-mm-spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive. PMID:21471638
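The control signal described, phoneme-related power in a high gamma band (roughly 70–150 Hz), can be sketched as a simple windowed band-power feature. This is a generic illustration on synthetic data, not the study's decoding pipeline:

```python
# Sketch: high-gamma (70-150 Hz) band power from one signal window, computed
# as mean FFT power in the band after Hanning windowing. Generic feature
# extraction on synthetic data; not the study's actual decoder.
import numpy as np

def band_power(x, fs, lo=70.0, hi=150.0):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].mean()

fs = 1000
t = np.arange(fs) / fs                                  # one 1-second window
rng = np.random.default_rng(2)
rest = rng.normal(size=fs)                              # baseline noise
speech = rest + 0.8 * np.sin(2 * np.pi * 100 * t)       # added in-band power

print(band_power(speech, fs) > band_power(rest, fs))
```

A BCI of the kind described would compare such features between conditions (e.g. articulation vs. rest) to drive cursor movement; here the "speech" window simply carries extra power inside the band.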

  7. Two Different Communication Genres and Implications for Vocabulary Development and Learning to Read

    ERIC Educational Resources Information Center

    Massaro, Dominic W.

    2015-01-01

    This study examined potential differences in vocabulary found in picture books and adult's speech to children and to other adults. Using a small sample of various sources of speech and print, Hayes observed that print had a more extensive vocabulary than speech. The current analyses of two different spoken language databases and an assembled…

  8. An Experimental Investigation of the Effect of Altered Auditory Feedback on the Conversational Speech of Adults Who Stutter

    ERIC Educational Resources Information Center

    Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark

    2010-01-01

    Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…

  9. School-Based Speech-Language Pathologists' Use of iPads

    ERIC Educational Resources Information Center

    Romane, Garvin Philippe

    2017-01-01

    This study explored school-based speech-language pathologists' (SLPs') use of iPads and apps for speech and language instruction, specifically for articulation, language, and vocabulary goals. A mostly quantitative survey was administered to approximately 2,800 SLPs in a K-12 setting; the final sample consisted of 189 licensed SLPs. Overall,…

  10. The Measurement of the Oral and Nasal Sound Pressure Levels of Speech

    ERIC Educational Resources Information Center

    Clarke, Wayne M.

    1975-01-01

    A nasal separator was used to measure the oral and nasal components in the speech of a normal adult Australian population. Results indicated no difference in oral and nasal sound pressure levels for read versus spontaneous speech samples; however, females tended to have a higher nasal component than did males. (Author/TL)

  11. Effects of Culture and Gender in Comprehension of Speech Acts of Indirect Request

    ERIC Educational Resources Information Center

    Shams, Rabe'a; Afghari, Akbar

    2011-01-01

    This study investigates the comprehension of indirect request speech act used by Iranian people in daily communication. The study is an attempt to find out whether different cultural backgrounds and the gender of the speakers affect the comprehension of the indirect request of speech act. The sample includes thirty males and females in Gachsaran…

  12. Phonological Memory, Attention Control, and Musical Ability: Effects of Individual Differences on Rater Judgments of Second Language Speech

    ERIC Educational Resources Information Center

    Isaacs, Talia; Trofimovich, Pavel

    2011-01-01

    This study examines how listener judgments of second language speech relate to individual differences in listeners' phonological memory, attention control, and musical ability. Sixty native English listeners (30 music majors, 30 nonmusic majors) rated 40 nonnative speech samples for accentedness, comprehensibility, and fluency. The listeners were…

  13. The Influence of Social Class and Race on Language Test Performance and Spontaneous Speech of Preschool Children.

    ERIC Educational Resources Information Center

    Johnson, Dale L.

    This investigation compares child language obtained with standardized tests and samples of spontaneous speech obtained in natural settings. It was hypothesized that differences would exist between social class and racial groups on the unfamiliar standard tests, but such differences would not be evident on spontaneous speech measures. Also, higher…

  14. Speech disorders in neurofibromatosis type 1: a sample survey.

    PubMed

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10,000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. This study serves as a pilot to identify key issues in the speech of NF1 patients. In particular, the aim is to explore further the occurrence and nature of problems associated with speech as perceived by the patients themselves. A questionnaire was sent to 149 patients with NF1 registered at the Department of Genetics, Ghent University Hospital. The questionnaire inquired about articulation, hearing, breathing, voice, resonance and fluency. Sixty individuals ranging in age from 4.5 to 61.3 years returned completed questionnaires and these served as the database for the study. The results of this sample survey were compared with data of the normal population. About two-thirds of participants experienced at least one speech or speech-related problem of any type. Compared with the normal population, the NF1 group indicated more articulation difficulties, hearing impairment, abnormalities in loudness, and stuttering. The results indicate that speech difficulties are an area of interest in the NF1 population. Further research to elucidate these findings is needed.

  15. Relationships among psychoacoustic judgments, speech understanding ability and self-perceived handicap in tinnitus subjects.

    PubMed

    Newman, C W; Wharton, J A; Shivapuja, B G; Jacobson, G P

    1994-01-01

    Tinnitus is often a disturbing symptom which affects 6-20% of the population. Relationships among tinnitus pitch and loudness judgments, audiometric speech understanding measures and self-perceived handicap were evaluated in a sample of subjects with tinnitus and hearing loss (THL). Data obtained from the THL sample on the audiometric speech measures were compared to the performance of an age-matched hearing loss only (HL) group. Both groups had normal hearing through 1 kHz with a sloping configuration of ≤20 dB/octave between 2 and 12 kHz. The THL subjects performed more poorly on the low-predictability items of the Speech Perception in Noise Test, suggesting that tinnitus may interfere with the perception of speech signals having reduced linguistic redundancy. The THL subjects rated their tinnitus as annoying at relatively low sensation levels using the pitch-match frequency as the reference tone. Further, significant relationships were found between loudness judgment measures and self-rated annoyance. No predictable relationships were observed between the audiometric speech measures and perceived handicap using the Tinnitus Handicap Questionnaire. These findings support the use of self-report measures in tinnitus patients in that audiometric speech tests alone may be insufficient in describing an individual's reaction to his/her communication breakdowns.

  16. Intelligibility assessment in developmental phonological disorders: accuracy of caregiver gloss.

    PubMed

    Kwiatkowski, J; Shriberg, L D

    1992-10-01

    Fifteen caregivers each glossed a simultaneously videotaped and audiotaped sample of their child with speech delay engaged in conversation with a clinician. One of the authors generated a reference gloss for each sample, aided by (a) prior knowledge of the child's speech-language status and error patterns, (b) glosses from the child's clinician and the child's caregiver, (c) unlimited replays of the taped sample, and (d) the information gained from completing a narrow phonetic transcription of the sample. Caregivers glossed an average of 78% of the utterances and 81% of the words. A comparison of their glosses to the reference glosses suggested that they accurately understood an average of 58% of the utterances and 73% of the words. Discussion considers the implications of such findings for methodological and theoretical issues underlying children's moment-to-moment intelligibility breakdowns during speech-language processing.

  17. Are the Literacy Difficulties That Characterize Developmental Dyslexia Associated with a Failure to Integrate Letters and Speech Sounds?

    ERIC Educational Resources Information Center

    Nash, Hannah M.; Gooch, Debbie; Hulme, Charles; Mahajan, Yatin; McArthur, Genevieve; Steinmetzger, Kurt; Snowling, Margaret J.

    2017-01-01

    The "automatic letter-sound integration hypothesis" (Blomert, [Blomert, L., 2011]) proposes that dyslexia results from a failure to fully integrate letters and speech sounds into automated audio-visual objects. We tested this hypothesis in a sample of English-speaking children with dyslexic difficulties (N = 13) and samples of…

  18. Attitudes toward speech disorders: sampling the views of Cantonese-speaking Americans.

    PubMed

    Bebout, L; Arthur, B

    1997-01-01

    Speech-language pathologists who serve clients from cultural backgrounds that are not familiar to them may encounter culturally influenced attitudinal differences. A questionnaire with statements about 4 speech disorders (dysfluency, cleft palate, speech of the deaf, and misarticulations) was given to a focus group of Chinese Americans and a comparison group of non-Chinese Americans. The focus group was much more likely to believe that persons with speech disorders could improve their own speech by "trying hard," was somewhat more likely to say that people who use deaf speech and people with cleft palates might be "emotionally disturbed," and was generally more likely to view deaf speech as a limitation. The comparison group was more pessimistic about stuttering children's acceptance by their peers than was the focus group. The two subject groups agreed about other items, such as the likelihood that older children with articulation problems are "less intelligent" than their peers.

  19. Longitudinal decline in speech production in Parkinson's disease spectrum disorders.

    PubMed

    Ash, Sharon; Jester, Charles; York, Collin; Kofman, Olga L; Langey, Rachel; Halpin, Amy; Firn, Kim; Dominguez Perez, Sophia; Chahine, Lama; Spindler, Meredith; Dahodwala, Nabila; Irwin, David J; McMillan, Corey; Weintraub, Daniel; Grossman, Murray

    2017-08-01

    We examined narrative speech production longitudinally in non-demented (n=15) and mildly demented (n=8) patients with Parkinson's disease spectrum disorder (PDSD), and we related increasing impairment to structural brain changes in specific language and motor regions. Patients provided semi-structured speech samples, describing a standardized picture at two time points (mean ± SD interval = 38 ± 24 months). The recorded speech samples were analyzed for fluency, grammar, and informativeness. PDSD patients with dementia exhibited significant decline in their speech, unrelated to changes in overall cognitive or motor functioning. Regression analysis in a subset of patients with MRI scans (n=11) revealed that impaired language performance at Time 2 was associated with reduced gray matter (GM) volume at Time 1 in regions of interest important for language functioning, but not with reduced GM volume in motor brain areas. These results dissociate language and motor systems and highlight the importance of non-motor brain regions for declining language in PDSD. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function: a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby using the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
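The forward-backward weighting idea can be sketched with ordinary least squares. This is an illustrative sketch only, not the authors' QCP-FB implementation: the function name `wlp_forward_backward`, the uniform default weights, and the plain normal-equation setup are assumptions for demonstration.

```python
import numpy as np

def wlp_forward_backward(x, order, weights=None):
    """Weighted linear prediction with a forward-backward error criterion.

    Each sample x[t] is predicted both from its `order` past samples and
    from its `order` future samples; the squared prediction errors are
    weighted per sample and the combined normal equations are solved.
    """
    n = len(x)
    if weights is None:
        weights = np.ones(n)
    R = np.zeros((order, order))
    r = np.zeros(order)
    # Forward prediction: x[t] from x[t-1], ..., x[t-order]
    for t in range(order, n):
        past = x[t - order:t][::-1]
        R += weights[t] * np.outer(past, past)
        r += weights[t] * x[t] * past
    # Backward prediction: x[t] from x[t+1], ..., x[t+order]
    for t in range(0, n - order):
        future = x[t + 1:t + 1 + order]
        R += weights[t] * np.outer(future, future)
        r += weights[t] * x[t] * future
    return np.linalg.solve(R, r)  # prediction coefficients
```

With a hard-limiting (0/1) weight vector, the backward terms keep the effective sample count up even when many forward terms are zeroed out, which is the motivation described in the abstract.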

  1. The Atlanta Motor Speech Disorders Corpus: Motivation, Development, and Utility.

    PubMed

    Laures-Gore, Jacqueline; Russell, Scott; Patel, Rupal; Frankel, Michael

    2016-01-01

    This paper describes the design and collection of a comprehensive spoken language dataset from speakers with motor speech disorders in Atlanta, Ga., USA. This collaborative project aimed to gather a spoken database consisting of nonmainstream American English speakers residing in the Southeastern US in order to provide a more diverse perspective of motor speech disorders. Ninety-nine adults with an acquired neurogenic disorder resulting in a motor speech disorder were recruited. Stimuli include isolated vowels, single words, sentences with contrastive focus, sentences with emotional content and prosody, sentences with acoustic and perceptual sensitivity to motor speech disorders, as well as 'The Caterpillar' and 'The Grandfather' passages. Utility of this data in understanding the potential interplay of dialect and dysarthria was demonstrated with a subset of the speech samples existing in the database. The Atlanta Motor Speech Disorders Corpus will enrich our understanding of motor speech disorders through the examination of speech from a diverse group of speakers. © 2016 S. Karger AG, Basel.

  2. Measuring Speech Comprehensibility in Students with Down Syndrome

    PubMed Central

    Woynaroski, Tiffany; Camarata, Stephen

    2016-01-01

    Purpose There is an ongoing need to develop assessments of spontaneous speech that focus on whether the child's utterances are comprehensible to listeners. This study sought to identify the attributes of a stable ratings-based measure of speech comprehensibility, which enabled examining the criterion-related validity of an orthography-based measure of the comprehensibility of conversational speech in students with Down syndrome. Method Participants were 10 elementary school students with Down syndrome and 4 unfamiliar adult raters. Averaged across-observer Likert ratings of speech comprehensibility were called a ratings-based measure of speech comprehensibility. The proportion of utterance attempts fully glossed constituted an orthography-based measure of speech comprehensibility. Results Averaging across 4 raters on four 5-min segments produced a reliable (G = .83) ratings-based measure of speech comprehensibility. The ratings-based measure was strongly (r > .80) correlated with the orthography-based measure for both the same and different conversational samples. Conclusion Reliable and valid measures of speech comprehensibility are achievable with the resources available to many researchers and some clinicians. PMID:27299989

  3. Speech and pause characteristics in multiple sclerosis: A preliminary study of speakers with high and low neuropsychological test performance

    PubMed Central

    FEENAUGHTY, LYNDA; TJADEN, KRIS; BENEDICT, RALPH H.B.; WEINSTOCK-GUTTMAN, BIANCA

    2017-01-01

    This preliminary study investigated how cognitive-linguistic status in multiple sclerosis (MS) is reflected in two speech tasks (i.e. oral reading, narrative) that differ in cognitive-linguistic demand. Twenty individuals with MS were selected to comprise High and Low performance groups based on clinical tests of executive function and information processing speed and efficiency. Ten healthy controls were included for comparison. Speech samples were audio-recorded and measures of global speech timing were obtained. Results indicated predicted differences in global speech timing (i.e. speech rate and pause characteristics) for speech tasks differing in cognitive-linguistic demand, but the magnitude of these task-related differences was similar for all speaker groups. Findings suggest that assumptions concerning the cognitive-linguistic demands of reading aloud as compared to spontaneous speech may need to be re-considered for individuals with cognitive impairment. Qualitative trends suggest that additional studies investigating the association between cognitive-linguistic and speech motor variables in MS are warranted. PMID:23294227

  4. Monkey vocal tracts are speech-ready.

    PubMed

    Fitch, W Tecumseh; de Boer, Bart; Mathur, Neil; Ghazanfar, Asif A

    2016-12-01

    For four decades, the inability of nonhuman primates to produce human speech sounds has been claimed to stem from limitations in their vocal tract anatomy, a conclusion based on plaster casts made from the vocal tract of a monkey cadaver. We used x-ray videos to quantify vocal tract dynamics in living macaques during vocalization, facial displays, and feeding. We demonstrate that the macaque vocal tract could easily produce an adequate range of speech sounds to support spoken language, showing that previous techniques based on postmortem samples drastically underestimated primate vocal capabilities. Our findings imply that the evolution of human speech capabilities required neural changes rather than modifications of vocal anatomy. Macaques have a speech-ready vocal tract but lack a speech-ready brain to control it.

  5. "The caterpillar": a novel reading passage for assessment of motor speech disorders.

    PubMed

    Patel, Rupal; Connaghan, Kathryn; Franco, Diana; Edsall, Erika; Forgit, Dory; Olsen, Laura; Ramage, Lianna; Tyler, Emily; Russell, Scott

    2013-02-01

    A review of the salient characteristics of motor speech disorders and common assessment protocols revealed the need for a novel reading passage tailored specifically to differentiate between and among the dysarthrias (DYSs) and apraxia of speech (AOS). "The Caterpillar" passage was designed to provide a contemporary, easily read, contextual speech sample with specific tasks (e.g., prosodic contrasts, words of increasing length and complexity) targeted to inform the assessment of motor speech disorders. Twenty-two adults, 15 with DYS or AOS and 7 healthy controls (HC), were recorded reading "The Caterpillar" passage to demonstrate its utility in examining motor speech performance. Analysis of performance across a subset of segmental and prosodic variables illustrated that "The Caterpillar" passage showed promise for extracting individual profiles of impairment that could augment current assessment protocols and inform treatment planning in motor speech disorders.

  6. Perceptual Measures of Speech from Individuals with Parkinson's Disease and Multiple Sclerosis: Intelligibility and beyond

    ERIC Educational Resources Information Center

    Sussman, Joan E.; Tjaden, Kris

    2012-01-01

    Purpose: The primary purpose of this study was to compare percent correct word and sentence intelligibility scores for individuals with multiple sclerosis (MS) and Parkinson's disease (PD) with scaled estimates of speech severity obtained for a reading passage. Method: Speech samples for 78 talkers were judged, including 30 speakers with MS, 16…

  7. Do Native Speakers of North American and Singapore English Differentially Perceive Comprehensibility in Second Language Speech?

    ERIC Educational Resources Information Center

    Saito, Kazuya; Shintani, Natsuko

    2016-01-01

    The current study examined the extent to which native speakers of North American and Singapore English differentially perceive the comprehensibility (ease of understanding) of second language (L2) speech. Spontaneous speech samples elicited from 50 Japanese learners of English with various proficiency levels were first rated by 10 Canadian and 10…

  8. Assessing Children's Home Language Environments Using Automatic Speech Recognition Technology

    ERIC Educational Resources Information Center

    Greenwood, Charles R.; Thiemann-Bourque, Kathy; Walker, Dale; Buzhardt, Jay; Gilkerson, Jill

    2011-01-01

    The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make the current approach to speech processing possible by researchers and clinicians working on a daily basis with families and…

  9. Music and Speech Perception in Children Using Sung Speech

    PubMed Central

    Nie, Yingjiu; Galvin, John J.; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet was significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners. PMID:29609496

  10. Music and Speech Perception in Children Using Sung Speech.

    PubMed

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet was significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  11. Familial expressed emotion: outcome and course of Israeli patients with schizophrenia.

    PubMed

    Marom, Sofi; Munitz, Hanan; Jones, Peter B; Weizman, Abraham; Hermesh, Haggai

    2002-01-01

    We investigated the validity of expressed emotion (EE) in Israel. The study sample consisted of 108 patients with schizophrenia and 15 with schizoaffective disorder, and their key relatives. EE was rated with the Five Minute Speech Sample (FMSS). Patient households were categorized by EE and its two components: criticism and emotional overinvolvement. Patients were rated with the Brief Psychiatric Rating Scale (BPRS) at admission, at discharge, and 6 months after discharge. Readmissions were determined over a 9-month period. High EE, and particularly high criticism, were significantly associated with poorer outcome (a higher rate of and earlier readmissions, and higher BPRS scores at follow-up) and worse illness course (a higher annual number of prior psychiatric hospital admissions). Odds ratios between high EE and high criticism and readmission were 2.6 and 3.5, respectively. The strongest predictor of earlier readmission was the interaction of high criticism × poor compliance with medication. The results converge to further confirm the notion that familial EE is a valid cross-cultural predictor of the clinical course of schizophrenia. Moreover, EE has predictive power in very chronic samples. Criticism appears to be the crucial EE component linked with short-term outcome. Treatment aimed at reducing high criticism is warranted. The FMSS appears to have predictive validity.

  12. TOEFL iBT Speaking Test Scores as Indicators of Oral Communicative Language Proficiency

    ERIC Educational Resources Information Center

    Bridgeman, Brent; Powers, Donald; Stone, Elizabeth; Mollaun, Pamela

    2012-01-01

    Scores assigned by trained raters and by an automated scoring system (SpeechRater[TM]) on the speaking section of the TOEFL iBT[TM] were validated against a communicative competence criterion. Specifically, a sample of 555 undergraduate students listened to speech samples from 184 examinees who took the Test of English as a Foreign Language…

  13. An Analysis of the Use and Structure of Logic in Japanese Argument.

    ERIC Educational Resources Information Center

    Hazen, Michael David

    A study was conducted to determine if the Japanese use logic and argument in different ways than do Westerners. The study analyzed sample rebuttal speeches (in English) of 14 Japanese debaters using the Toulmin model of argument. In addition, it made comparisons with a sample of speeches made by 5 American high school debaters. Audiotapes of the…

  14. Risk and Protective Factors Associated with Speech and Language Impairment in a Nationally Representative Sample of 4- to 5-Year-Old Children

    ERIC Educational Resources Information Center

    Harrison, Linda J.; McLeod, Sharynne

    2010-01-01

    Purpose: To determine risk and protective factors for speech and language impairment in early childhood. Method: Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community…

  15. The prediction of speech intelligibility in classrooms using computer models

    NASA Astrophysics Data System (ADS)

    Dance, Stephen; Dentoni, Roger

    2005-04-01

    Two classrooms were measured and modeled using the industry-standard CATT model and the Web model CISM. Sound levels, reverberation times, and speech intelligibility were predicted in these rooms using data for 7 octave bands. It was found that overall sound levels could be predicted to within 2 dB by both models. Overall reverberation time, however, was accurately predicted by CATT (14% prediction error) but not by CISM (41% prediction error); classical theory gave a 30% prediction error. As for the Speech Transmission Index (STI), CATT predicted within 11%, CISM within 3%, and Sabine within 28% of the measured value. It should be noted that CISM took approximately 15 seconds to calculate, while CATT took 15 minutes. CISM is freely available on-line at www.whyverne.co.uk/acoustics/Pages/cism/cism.html
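The "classical theory" baseline referred to here is typically Sabine's reverberation formula, RT60 = 0.161·V/A. A minimal sketch follows; the function name and the flat surface/coefficient lists are illustrative assumptions, not part of the CISM or CATT models.

```python
def sabine_rt60(volume_m3, surface_areas_m2, absorption_coeffs):
    """Classical Sabine reverberation time: RT60 = 0.161 * V / A.

    A is the total equivalent absorption area in metric sabins,
    i.e. the sum of each surface area times its absorption coefficient.
    """
    A = sum(s * a for s, a in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / A

# Example: a 200 m^3 classroom whose 300 m^2 of surfaces average alpha = 0.15
rt60 = sabine_rt60(200.0, [300.0], [0.15])  # about 0.72 s
```

In practice this would be evaluated per octave band, since absorption coefficients are frequency-dependent, which is why the abstract reports predictions across 7 octave bands.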

  16. Stuttering on function words in bilingual children who stutter: A preliminary study.

    PubMed

    Gkalitsiou, Zoi; Byrd, Courtney T; Bedore, Lisa M; Taliancich-Klinger, Casey L

    2017-01-01

    Evidence suggests young monolingual children who stutter (CWS) are more disfluent on function than content words, particularly when produced in the initial utterance position. The purpose of the present preliminary study was to investigate whether young bilingual CWS present with this same pattern. The narrative and conversational samples of four bilingual Spanish- and English-speaking CWS were analysed. All four bilingual participants produced significantly more stuttering on function words compared to content words, irrespective of their position in the utterance, in their Spanish narrative and conversational speech samples. Three of the four participants also demonstrated more stuttering on function compared to content words in their narrative speech samples in English, but only one participant produced more stuttering on function than content words in her English conversational sample. These preliminary findings are discussed relative to linguistic planning and language proficiency and their potential contribution to stuttered speech.

  17. Afraid to be there? Evaluating the relation between presence, self-reported anxiety, and heart rate in a virtual public speaking task.

    PubMed

    Felnhofer, Anna; Kothgassner, Oswald D; Hetterle, Thomas; Beutl, Leon; Hlavacs, Helmut; Kryspin-Exner, Ilse

    2014-05-01

    The link between anxiety and presence in a virtual environment (VE) is still the subject of an unresolved debate, with little empirical research to support theoretical claims. Thus, the current study analyzed presence, self-reported anxiety, and a physiological parameter (heart rate [HR]) in a sample of 30 high-anxious and 35 low-anxious participants. Both groups delivered a 5-minute speech in a virtual lecture hall. Results indicate no mediating influences of presence on group differences in self-reported state anxiety during the speech, but point toward negative correlations between state anxiety and the iGroup Presence Questionnaire (IPQ) scales "sense of being there" and "realism." Furthermore, HR was found to be unrelated to self-reported presence. Only the IPQ scale "spatial presence" showed a marginally significant influence on group differences in state anxiety. The present results support the assumption that presence and anxiety are logically distinct, meaning that presence does not directly influence the intensity of an emotion felt in a VE. Rather, it constitutes a precondition for an emotion to be elicited by a VE at all. Also, HR has proven not to be an adequate substitute measure for presence, since it assesses only anxiety, not presence. It may, however, mediate the interplay between trait anxiety and state anxiety. Possible implications of the current findings are discussed alongside the problem of using presence questionnaires that seem to be prone to subjective bias (i.e., participants confusing presence and emotion).

  18. Software use in the (re)habilitation of hearing impaired children.

    PubMed

    Silva, Mariane Perin da; Comerlatto Junior, Ademir Antonio; Balen, Sheila Andreoli; Bevilacqua, Maria Cecília

    2012-01-01

    To verify the applicability of software in the (re)habilitation of hearing impaired children. The sample comprised 17 children with hearing impairment, ten with cochlear implants (CI) and seven with hearing aids (HA). The Software Auxiliar na Reabilitação de Distúrbios Auditivos - SARDA (Auxiliary Software for the Rehabilitation of Hearing Disorders) was used. The training protocol was applied for 30 minutes, twice a week, for the time necessary to complete the strategies proposed in the software. To measure the software's applicability for training the speech perception ability in quiet and in noise, subjects were assessed through the Hearing in Noise Test (HINT), before and after the auditory training. Data were statistically analyzed. The group of CI users needed, on average, 12.2 days to finish the strategies, and the group of HA users, on average, 10.14 days. Both groups presented differences between pre- and post-assessments, both in quiet and in noise. Younger children showed more difficulty executing the strategies; however, there was no correlation between age and performance. The type of electronic device did not influence the training. Children presented greater difficulty in the strategy involving non-verbal stimuli and in the strategy with verbal stimuli that trains the sustained attention ability. Children's attention and motivation during stimulation were fundamental for a successful auditory training. The auditory training using the SARDA was effective, providing improvement of the speech perception ability, both in quiet and in noise, for the hearing impaired children.

  19. White noise speech illusion and psychosis expression: An experimental investigation of psychosis liability

    PubMed Central

    Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Wichers, Marieke; Simons, Claudia J. P.; Rutten, Bart P. F.; van Os, Jim

    2017-01-01

    Background An association between white noise speech illusion and psychotic symptoms has been reported in patients and their relatives. This supports the theory that bottom-up and top-down perceptual processes are involved in the mechanisms underlying perceptual abnormalities. However, findings in nonclinical populations have been conflicting. Objectives The aim of this study was to examine the association between white noise speech illusion and subclinical expression of psychotic symptoms in a nonclinical sample. Findings were compared to previous results to investigate potential methodology dependent differences. Methods In a general population adolescent and young adult twin sample (n = 704), the association between white noise speech illusion and subclinical psychotic experiences, using the Structured Interview for Schizotypy—Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE), was analyzed using multilevel logistic regression analyses. Results Perception of any white noise speech illusion was not associated with either positive or negative schizotypy in the general population twin sample, using the method by Galdos et al. (2011) (positive: ORadjusted: 0.82, 95% CI: 0.6–1.12, p = 0.217; negative: ORadjusted: 0.75, 95% CI: 0.56–1.02, p = 0.065) and the method by Catalan et al. (2014) (positive: ORadjusted: 1.11, 95% CI: 0.79–1.57, p = 0.557). No association was found between CAPE scores and speech illusion (ORadjusted: 1.25, 95% CI: 0.88–1.79, p = 0.220). For the Catalan et al. (2014) but not the Galdos et al. (2011) method, a negative association was apparent between positive schizotypy and speech illusion with positive or negative affective valence (ORadjusted: 0.44, 95% CI: 0.24–0.81, p = 0.008). Conclusion Contrary to findings in clinical populations, white noise speech illusion may not be associated with psychosis proneness in nonclinical populations. PMID:28832672

  20. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddler’s speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
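The idea of differentially weighting speech sound errors can be sketched as follows. The specific labels and weights here are hypothetical illustrations; the published WSSA derives its weights from levels of phonetic accuracy in narrow transcription, not from this simple scheme.

```python
# Hypothetical weights: heavier penalties for errors judged further from
# the target sound. The real WSSA weighting differs.
ERROR_WEIGHTS = {
    "correct": 0.0,
    "distortion": 1.0,     # close approximation of the target
    "substitution": 2.0,   # different phoneme produced
    "omission": 3.0,       # target sound absent
}

def weighted_accuracy(labels):
    """Return a 0-100 score from per-sound error labels in a speech sample."""
    if not labels:
        return 100.0
    worst = max(ERROR_WEIGHTS.values()) * len(labels)
    penalty = sum(ERROR_WEIGHTS[lab] for lab in labels)
    return 100.0 * (1.0 - penalty / worst)
```

Under these assumed weights, a sample with two correct sounds, one substitution, and one omission scores 100 × (1 − 5/12) ≈ 58.3, whereas a simple percent-correct measure would score 50: the weighted score distinguishes error severity, not just error count.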

  1. The Future of Software Engineering for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, G

    DOE ASCR requested that, from May through mid-July 2015, a study group identify issues and recommend solutions, from a software engineering perspective, for the transition to the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: a short and concise write-up done as if the author were a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance with DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one-liner has also been added to each topic to allow future risk tracking and mitigation.

  2. Connected word recognition using a cascaded neuro-computational model

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  3. Speech after Radial Forearm Free Flap Reconstruction of the Tongue: A Longitudinal Acoustic Study of Vowel and Diphthong Sounds

    ERIC Educational Resources Information Center

    Laaksonen, Juha-Pertti; Rieger, Jana; Happonen, Risto-Pekka; Harris, Jeffrey; Seikaly, Hadi

    2010-01-01

    The purpose of this study was to use acoustic analyses to describe speech outcomes over the course of 1 year after radial forearm free flap (RFFF) reconstruction of the tongue. Eighteen Canadian English-speaking females and males with reconstruction for oral cancer had speech samples recorded (pre-operative, and 1 month, 6 months, and 1 year…

  4. Speech Sound Disorders in Preschool Children: Correspondence between Clinical Diagnosis and Teacher and Parent Report

    ERIC Educational Resources Information Center

    Harrison, Linda J.; McLeod, Sharynne; McAllister, Lindy; McCormack, Jane

    2017-01-01

    This study sought to assess the level of correspondence between parent and teacher report of concern about young children's speech and specialist assessment of speech sound disorders (SSD). A sample of 157 children aged 4-5 years was recruited in preschools and long day care centres in Victoria and New South Wales (NSW). SSD was assessed…

  5. Regular/Irregular is Not the Whole Story: The Role of Frequency and Generalization in the Acquisition of German Past Participle Inflection

    ERIC Educational Resources Information Center

    Szagun, Gisela

    2011-01-01

    The acquisition of German participle inflection was investigated using spontaneous speech samples from six children between 1;4 and 3;8 and ten children between 1;4 and 2;10, recorded longitudinally at regular intervals. Child-directed speech was also analyzed. In adult and child speech, weak participles were significantly more frequent than…

  6. Loss of regional accent after damage to the speech production network.

    PubMed

    Berthier, Marcelo L; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael

    2015-01-01

    Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca's aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires's accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriate long pauses, and monotonous intonation. Phonemic production remained similar to those of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic normal prosodic features of Spanish prosody were preserved, features intrinsic to melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody. 
Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features.
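Findings such as "inappropriate long pauses" and slowed speech rate are typically derived from the recording itself. As a minimal illustration (not the authors' pipeline), a frame-energy threshold can flag pauses in a mono signal; `find_pauses`, the frame length, and both thresholds below are assumptions, with NumPy the only dependency:

```python
import numpy as np

def find_pauses(x, fs, frame_ms=20, thresh=0.01, min_pause_ms=250):
    """Energy-threshold pause detection: return (start_s, end_s) pauses."""
    n = int(fs * frame_ms / 1000)
    frames = x[: len(x) // n * n].reshape(-1, n)   # non-overlapping frames
    rms = np.sqrt((frames ** 2).mean(axis=1))      # per-frame energy
    silent = rms < thresh
    pauses, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i                              # pause begins
        elif not s and start is not None:
            if (i - start) * frame_ms >= min_pause_ms:
                pauses.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None
    if start is not None and (len(silent) - start) * frame_ms >= min_pause_ms:
        pauses.append((start * frame_ms / 1000, len(silent) * frame_ms / 1000))
    return pauses
```

On a synthetic signal (1 s of tone, 0.5 s of silence, 1 s of tone at 8 kHz) this returns a single pause spanning roughly 1.0-1.5 s.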

  7. Loss of regional accent after damage to the speech production network

    PubMed Central

    Berthier, Marcelo L.; Dávila, Guadalupe; Moreno-Torres, Ignacio; Beltrán-Corbellini, Álvaro; Santana-Moreno, Daniel; Roé-Vellvé, Núria; Thurnhofer-Hemsi, Karl; Torres-Prioris, María José; Massone, María Ignacia; Ruiz-Cruces, Rafael

    2015-01-01

    Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from Parisian accent to Alsatian accent), stronger regional accent, or re-emergence of a previously learned and dormant regional accent. Here, we report loss of regional accent after rapidly regressive Broca’s aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in components of the speech production network. All patients were monolingual speakers with three different native Spanish accents (Cordobés or central, Guaranítico or northeast, and Bonaerense). Samples of speech production from the patient with native Córdoba accent were compared with previous recordings of his voice, whereas data from the patient with native Guaranítico accent were compared with speech samples from one healthy control matched for age, gender, and native accent. Speech samples from the patient with native Buenos Aires’s accent were compared with data obtained from four healthy control subjects with the same accent. Analysis of speech production revealed discrete slowing in speech rate, inappropriate long pauses, and monotonous intonation. Phonemic production remained similar to those of healthy Spanish speakers, but phonetic variants peculiar to each accent (e.g., intervocalic aspiration of /s/ in Córdoba accent) were absent. While basic normal prosodic features of Spanish prosody were preserved, features intrinsic to melody of certain geographical areas (e.g., rising end F0 excursion in declarative sentences intoned with Córdoba accent) were absent. All patients were also unable to produce sentences with different emotional prosody. 
Brain imaging disclosed focal left hemisphere lesions involving the middle part of the motor cortex, the post-central cortex, the posterior inferior and/or middle frontal cortices, insula, anterior putamen and supplementary motor area. Our findings suggest that lesions affecting the middle part of the left motor cortex and other components of the speech production network disrupt neural processes involved in the production of regional accent features. PMID:26594161

  8. Immediate effects of AAF devices on the characteristics of stuttering: a clinical analysis.

    PubMed

    Unger, Julia P; Glück, Christian W; Cholewa, Jürgen

    2012-06-01

    The present study investigated the immediate effects of altered auditory feedback (AAF) and one Inactive Condition (AAF parameters set to 0) on clinical attributes of stuttering during scripted and spontaneous speech. Two commercially available, portable AAF devices were used to create the combined delayed auditory feedback (DAF) and frequency altered feedback (FAF) effects. Thirty adults who stutter, aged 18-68 years (M=36.5; SD=15.2), participated in this investigation. Each subject produced four sets of 5-min oral readings, three sets of 5-min monologs, as well as 10-min dialogs. These speech samples were analyzed to detect changes in descriptive features of stuttering (frequency, duration, speech/articulatory rate, core behaviors) across the various speech samples and within two SSI-4 (Riley, 2009) based severity ratings. A statistically significant difference was found in the frequency of stuttered syllables (%SS) during both Active Device conditions (p=.000) for all speech samples. The most sizable reductions in %SS occurred within scripted speech. In the analysis of stuttering type, it was found that blocks were reduced significantly (Device A: p=.017; Device B: p=.049). To evaluate the impact on severe and mild stuttering, participants were grouped into two SSI-4 based categories: mild and moderate-severe. During the Inactive Condition, those participants within the moderate-severe group (p=.024) showed a statistically significant reduction in overall disfluencies. This result indicates that active AAF parameters alone may not be the sole cause of fluency enhancement when using a technical speech aid. 
The reader will learn and be able to describe: (1) currently available scientific evidence on the use of altered auditory feedback (AAF) during scripted and spontaneous speech, (2) which characteristics of stuttering are impacted by an AAF device (frequency, duration, core behaviors, speech & articulatory rate, stuttering severity), (3) the effects of an Inactive Condition on people who stutter (PWS) falling into two severity groups, and (4) how the examined participants perceived the use of AAF devices. Copyright © 2012 Elsevier Inc. All rights reserved.
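The devices studied combine DAF with frequency-altered feedback in hardware. The delay component alone is simple to illustrate; this sketch (the frequency-shift component is omitted, and `apply_daf` with its 75 ms default is a hypothetical name/value, assuming NumPy) shows what a speaker's ears receive:

```python
import numpy as np

def apply_daf(x, fs, delay_ms=75.0):
    """Return the signal delayed by delay_ms, as fed back to the speaker."""
    d = int(round(fs * delay_ms / 1000.0))
    return np.concatenate([np.zeros(d), x])  # leading silence = the delay

fs = 16000
speech = np.random.default_rng(0).standard_normal(fs)  # 1 s stand-in for speech
fed_back = apply_daf(speech, fs, delay_ms=75.0)
print(len(fed_back) - len(speech))  # 1200 extra samples = 75 ms at 16 kHz
```

Setting the delay (and any frequency shift) to zero corresponds to the study's Inactive Condition, in which the device is worn but alters nothing.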

  9. Do not throw out the baby with the bath water: choosing an effective baseline for a functional localizer of speech processing.

    PubMed

    Stoppelman, Nadav; Harpaz, Tamar; Ben-Shachar, Michal

    2013-05-01

    Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in left inferior frontal gyrus decays significantly faster than the response to speech, thus suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.

  10. Temporal modulations in speech and music.

    PubMed

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32Hz) temporal modulations in sound intensity and compare the modulation properties of speech and music. We analyze these modulations using over 25h of speech and over 39h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
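The modulation spectrum described here is, in outline, the spectrum of the slow intensity envelope of the sound. A minimal sketch, assuming NumPy/SciPy and a Hilbert-envelope method (the paper's exact filterbank procedure may differ):

```python
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(x, fs, fmin=0.25, fmax=32.0):
    """Power spectrum of the slow intensity envelope, restricted to fmin-fmax Hz."""
    env = np.abs(hilbert(x))          # amplitude envelope of the waveform
    env = env - env.mean()            # remove DC so 0 Hz does not dominate
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band], spec[band]

# Synthetic "speech-like" signal: 200 Hz carrier amplitude-modulated at 5 Hz,
# the rate the paper reports as the dominant time scale for speech.
fs = 8000
t = np.arange(0, 4.0, 1.0 / fs)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)

freqs, spec = modulation_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec)]  # modulation peak near 5 Hz
```

Averaging such spectra over many recordings, per the abstract, yields the consistent ~5 Hz (speech) and ~2 Hz (music) peaks.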

  11. The Role of Clinical Experience in Speech-Language Pathologists' Perception of Subphonemic Detail in Children's Speech

    PubMed Central

    Munson, Benjamin; Johnson, Julie M.; Edwards, Jan

    2013-01-01

    Purpose This study examined whether experienced speech-language pathologists differ from inexperienced people in their perception of phonetic detail in children's speech. Method Convenience samples comprising 21 experienced speech-language pathologists and 21 inexperienced listeners participated in a series of tasks in which they made visual-analog scale (VAS) ratings of children's natural productions of target /s/-/θ/, /t/-/k/, and /d/-/ɡ/ in word-initial position. Listeners rated the perceptual distance between individual productions and ideal productions. Results The experienced listeners' ratings differed from inexperienced listeners' in four ways: they had higher intra-rater reliability, they showed less bias toward a more frequent sound, their ratings were more closely related to the acoustic characteristics of the children's speech, and their responses were related to a different set of predictor variables. Conclusions Results suggest that experience working as a speech-language pathologist leads to better perception of phonetic detail in children's speech. Limitations and future research are discussed. PMID:22230182

  12. Developmental profile of speech-language and communicative functions in an individual with the Preserved Speech Variant of Rett syndrome

    PubMed Central

    Marschik, Peter B.; Vollmann, Ralf; Bartl-Pokorny, Katrin D.; Green, Vanessa A.; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2018-01-01

    Objective We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant (PSV) of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. Methods For this study we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples, and picture stories to elicit narrative competences. Results Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Conclusion Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note. PMID:23870013

  13. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    PubMed

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.

  14. Hearing impaired speech in noisy classrooms

    NASA Astrophysics Data System (ADS)

    Shahin, Kimary; McKellin, William H.; Jamieson, Janet; Hodgson, Murray; Pichora-Fuller, M. Kathleen

    2005-04-01

    Noisy classrooms have been shown to induce among students patterns of interaction similar to those used by hearing impaired people [W. H. McKellin et al., GURT (2003)]. In this research, the speech of children in a noisy classroom setting was investigated to determine if noisy classrooms have an effect on students' speech. Audio recordings were made of the speech of students during group work in their regular classrooms (grades 1-7), and of the speech of the same students in a sound booth. Noise level readings in the classrooms were also recorded. Each student's noisy and quiet environment speech samples were acoustically analyzed for prosodic and segmental properties (f0, pitch range, pitch variation, phoneme duration, vowel formants), and compared. The analysis showed that the students' speech in the noisy classrooms had characteristics of the speech of hearing-impaired persons [e.g., R. O'Halpin, Clin. Ling. and Phon. 15, 529-550 (2001)]. Some educational implications of our findings were identified. [Work supported by the Peter Wall Institute for Advanced Studies, University of British Columbia.]

  15. Computer-Mediated Assessment of Intelligibility in Aphasia and Apraxia of Speech

    PubMed Central

    Haley, Katarina L.; Roth, Heidi; Grindstaff, Enetta; Jacks, Adam

    2011-01-01

    Background Previous work indicates that single word intelligibility tests developed for dysarthria are sensitive to segmental production errors in aphasic individuals with and without apraxia of speech. However, potential listener learning effects and difficulties adapting elicitation procedures to coexisting language impairments limit their applicability to left hemisphere stroke survivors. Aims The main purpose of this study was to examine basic psychometric properties for a new monosyllabic intelligibility test developed for individuals with aphasia and/or AOS. A related purpose was to examine clinical feasibility and potential to standardize a computer-mediated administration approach. Methods & Procedures A 600-item monosyllabic single word intelligibility test was constructed by assembling sets of phonetically similar words. Custom software was used to select 50 target words from this test in a pseudo-random fashion and to elicit and record production of these words by 23 speakers with aphasia and 20 neurologically healthy participants. To evaluate test-retest reliability, two identical sets of 50-word lists were elicited by requesting repetition after a live speaker model. To examine the effect of a different word set and auditory model, an additional set of 50 different words was elicited with a pre-recorded model. The recorded words were presented to normal-hearing listeners for identification via orthographic and multiple-choice response formats. To examine construct validity, production accuracy for each speaker was estimated via phonetic transcription and rating of overall articulation. Outcomes & Results Recording and listening tasks were completed in less than six minutes for all speakers and listeners. Aphasic speakers were significantly less intelligible than neurologically healthy speakers and displayed a wide range of intelligibility scores. Test-retest and inter-listener reliability estimates were strong. 
No significant difference was found in scores based on recordings from a live model versus a pre-recorded model, but some individual speakers favored the live model. Intelligibility test scores correlated highly with segmental accuracy derived from broad phonetic transcription of the same speech sample and a motor speech evaluation. Scores correlated moderately with rated articulation difficulty. Conclusions We describe a computerized, single-word intelligibility test that yields clinically feasible, reliable, and valid measures of segmental speech production in adults with aphasia. This tool can be used in clinical research to facilitate appropriate participant selection and to establish matching across comparison groups. For a majority of speakers, elicitation procedures can be standardized by using a pre-recorded auditory model for repetition. This assessment tool has potential utility for both clinical assessment and outcomes research. PMID:22215933

  16. Missouri Assessment Program, Spring 2002: Social Studies, Grade 8. Released Items [and] Scoring Guide.

    ERIC Educational Resources Information Center

    Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

    This booklet contains sample items from the Missouri social studies test for eighth graders. The first sample is based on a speech delivered by Elizabeth Cady Stanton in the mid-1880s, which proposed a new approach to raising girls. Students are directed to use their own knowledge and the speech excerpt to do three activities. The second sample…

  17. Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age

    PubMed Central

    Skoog Waller, Sara; Eriksson, Mårten; Sörqvist, Patrik

    2015-01-01

    Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or decreased speech rate. In Experiment 1, listeners were presented with audio samples of read speech from three different speaker age groups (young, middle aged, and old adults). They estimated the speakers as younger when speech rate was faster than normal and as older when speech rate was slower than normal. This speech rate effect was slightly greater in magnitude for older (60–65 years) speakers in comparison with younger (20–25 years) speakers, suggesting that speech rate may gain greater importance as a perceptual age cue with increased speaker age. This pattern was more pronounced in Experiment 2, in which listeners estimated age from spontaneous speech. Faster speech rate was associated with lower age estimates, but only for older and middle aged (40–45 years) speakers. Taken together, speakers of all age groups were estimated as older when speech rate decreased, except for the youngest speakers in Experiment 2. The absence of a linear speech rate effect in estimates of younger speakers, for spontaneous speech, implies that listeners use different age estimation strategies or cues (possibly vocabulary) depending on the age of the speaker and the spontaneity of the speech. Potential implications for forensic investigations and other applied domains are discussed. PMID:26236259

  18. The relations among maternal depressive disorder, maternal Expressed Emotion, and toddler behavior problems and attachment

    PubMed Central

    Gravener, Julie A.; Rogosch, Fred A.; Oshri, Assaf; Narayan, Angela J.; Cicchetti, Dante; Toth, Sheree L.

    2015-01-01

    Direct and indirect relations among maternal depression, maternal Expressed Emotion (EE: Self- and Child-Criticism), child internalizing and externalizing symptoms, and child attachment were examined. Participants were mothers with depression (n = 130) and comparison mothers (n = 68) and their toddlers (M age = 20 mo.; 53% male). Assessments included the Diagnostic Interview Schedule (maternal depression); the Five Minute Speech Sample (EE); the Child Behavior Checklist (toddler behavior problems); the Strange Situation (child attachment). Direct relations were significant linking: 1) maternal depression with both EE and child functioning; 2) Child-Criticism with child internalizing and externalizing symptoms; 3) Self-Criticism with child attachment. Significant indirect relations were found linking maternal depression with: 1) child externalizing behaviors via Child-Criticism; 2) child internalizing behaviors via Self- and Child-Criticism; and 3) child attachment via Self-Criticism. Findings are consistent with a conceptual model in which maternal EE mediates relations between maternal depression and toddler socio-emotional functioning. PMID:22146899

  19. Affective Properties of Mothers' Speech to Infants With Hearing Impairment and Cochlear Implants

    PubMed Central

    Bergeson, Tonya R.; Xu, Huiping; Kitamura, Christine

    2015-01-01

    Purpose The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. Method Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. Results Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting compared with such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. Conclusions Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants. PMID:25679195

  20. Perception and analysis of Spanish accents in English speech

    NASA Astrophysics Data System (ADS)

    Chism, Cori; Lass, Norman

    2002-05-01

    The purpose of the present study was to determine what relates most closely to the degree of perceived foreign accent in the English speech of native Spanish speakers: intonation, vowel length, stress, voice onset time (VOT), or segmental accuracy. Nineteen native English speaking listeners rated speech samples from 7 native English speakers and 15 native Spanish speakers for comprehensibility and degree of foreign accent. The speech samples were analyzed spectrographically and perceptually to obtain numerical values for each variable. Correlation coefficients were computed to determine the relationship between these values and the average foreign accent scores. Results showed that the average foreign accent scores were statistically significantly correlated with three variables: the length of stressed vowels (r = -0.48, p = 0.05), voice onset time (r = -0.62, p = 0.01), and segmental accuracy (r = 0.92, p = 0.001). Implications of these findings and suggestions for future research are discussed.

  1. Speech vs. singing: infants choose happier sounds

    PubMed Central

    Corbeil, Marieve; Trehub, Sandra E.; Peretz, Isabelle

    2013-01-01

    Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4–13 months of age were exposed to happy-sounding infant-directed speech vs. hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken vs. sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song vs. a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age. PMID:23805119

  2. The Relationship Between Apraxia of Speech and Oral Apraxia: Association or Dissociation?

    PubMed

    Whiteside, Sandra P; Dyson, Lucy; Cowell, Patricia E; Varley, Rosemary A

    2015-11-01

    Acquired apraxia of speech (AOS) is a motor speech disorder that affects the implementation of articulatory gestures and the fluency and intelligibility of speech. Oral apraxia (OA) is an impairment of nonspeech volitional movement. Although many speakers with AOS also display difficulties with volitional nonspeech oral movements, the relationship between the 2 conditions is unclear. This study explored the relationship between speech and volitional nonspeech oral movement impairment in a sample of 50 participants with AOS. We examined levels of association and dissociation between speech and OA using a battery of nonspeech oromotor, speech, and auditory/aphasia tasks. There was evidence of a moderate positive association between the 2 impairments across participants. However, individual profiles revealed patterns of dissociation between the 2 in a few cases, with evidence of double dissociation of speech and oral apraxic impairment. We discuss the implications of these relationships for models of oral motor and speech control. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. The speech naturalness of people who stutter speaking under delayed auditory feedback as perceived by different groups of listeners.

    PubMed

    Van Borsel, John; Eeckhout, Hannelore

    2008-09-01

    This study investigated listeners' perception of the speech naturalness of people who stutter (PWS) speaking under delayed auditory feedback (DAF) with particular attention for possible listener differences. Three panels of judges consisting of 14 stuttering individuals, 14 speech language pathologists, and 14 naive listeners rated the naturalness of speech samples of stuttering and non-stuttering individuals using a 9-point interval scale. Results clearly indicate that these three groups evaluate naturalness differently. Naive listeners appear to be more severe in their judgements than speech language pathologists and stuttering listeners, and speech language pathologists are apparently more severe than PWS. The three listener groups showed similar trends with respect to the relationship between speech naturalness and speech rate. Results of all three indicated that for PWS, the slower a speaker's rate was, the less natural speech was judged to sound. The three listener groups also showed similar trends with regard to naturalness of the stuttering versus the non-stuttering individuals. All three panels considered the speech of the non-stuttering participants more natural. The reader will be able to: (1) discuss the speech naturalness of people who stutter speaking under delayed auditory feedback, (2) discuss listener differences about the naturalness of people who stutter speaking under delayed auditory feedback, and (3) discuss the importance of speech rate for the naturalness of speech.

  4. Scores on Riley's stuttering severity instrument versions three and four for samples of different length and for different types of speech material.

    PubMed

    Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter

    2014-12-01

    Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This claim was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to the reader and non-reader procedures. Analysis of variance supported the use of 200-syllable samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples of at least 200 syllables are the minimum appropriate for obtaining stable Riley severity scores. The procedural variants provide similar severity scores.
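    A back-of-the-envelope sketch of the frequency component behind such severity estimates (the helper below is hypothetical and not Riley's published scoring tables, which also weight stuttering-event duration and physical concomitants): score the percentage of stuttered syllables over a 200-syllable sample.

```python
def percent_stuttered(syllable_flags):
    """Percent stuttered syllables over the first 200 syllables.

    syllable_flags: iterable of booleans, True where a syllable was stuttered.
    Enforces the 200-syllable minimum supported by the study above.
    """
    flags = list(syllable_flags)
    if len(flags) < 200:
        raise ValueError("need at least 200 syllables for a stable estimate")
    window = flags[:200]
    return 100.0 * sum(window) / len(window)

# Example: 6 stuttered syllables in a 200-syllable sample.
sample = [True] * 6 + [False] * 194
print(percent_stuttered(sample))  # 3.0
```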

  5. Prolonged Orientation to Pictorial Novelty in Severely Speech-Disordered Children. Papers and Reports on Child Language Development, No. 4.

    ERIC Educational Resources Information Center

    Mackworth, Norman H.; And Others

    1972-01-01

    The Mackworth wide-angle reflection eye camera was used to record the position of the gaze on a display of 16 white symbols. One of these symbols changed to red after 30 seconds, remained red for a minute of testing, and then became white again. The subjects were 10 aphasic children (aged 5-9), who were compared with a group of 10 normal children,…

  6. Department of Defense Training Technology Technical Group (T2TG) Minutes and Briefings of 6th Meeting

    DTIC Science & Technology

    1992-03-01

    the Services or "What are the Research Issues in the use of Virtual Reality in Training?" 173 Visual Communication In Multi-Media Virtual Realities...This basic research project in visual communication examines how visual knowledge should be structured to take full advantage of advanced computer...theoretical framework to begin to analyze the comparative strengths of speech communication versus visual communication in the exchange of shared mental

  7. Impact of cognitive function and dysarthria on spoken language and perceived speech severity in multiple sclerosis

    NASA Astrophysics Data System (ADS)

    Feenaughty, Lynda

    Purpose: The current study sought to investigate the separate effects of dysarthria and cognitive status on global speech timing, speech hesitation, and linguistic complexity characteristics, and how these speech behaviors influence listener impressions, across three connected speech tasks presumed to differ in cognitive-linguistic demand, for four carefully defined speaker groups: (1) MS with cognitive deficits (MSCI), (2) MS with clinically diagnosed dysarthria and intact cognition (MSDYS), (3) MS without dysarthria or cognitive deficits (MS), and (4) healthy talkers (CON). The relationship between neuropsychological test scores and speech-language production and perceptual variables for speakers with cognitive deficits was also explored. Methods: 48 speakers, including 36 individuals reporting a neurological diagnosis of MS and 12 healthy talkers, participated. The three MS groups and control group each contained 12 speakers (8 women and 4 men). Cognitive function was quantified using standard clinical tests of memory, information processing speed, and executive function. A standard z-score of ≤ -1.50 indicated deficits in a given cognitive domain. Three certified speech-language pathologists determined the clinical diagnosis of dysarthria for speakers with MS. Experimental speech tasks of interest included audio-recordings of an oral reading of the Grandfather passage and two spontaneous speech samples in the form of Familiar and Unfamiliar descriptive discourse. Various measures of spoken language were of interest. Suprasegmental acoustic measures included speech and articulatory rate. Linguistic speech hesitation measures included pause frequency (i.e., silent and filled pauses), mean silent pause duration, grammatical appropriateness of pauses, and interjection frequency. For the two discourse samples, three standard measures of language complexity were obtained including subordination index, inter-sentence cohesion adequacy, and lexical diversity. 
Ten listeners judged each speech sample on the perceptual construct of Speech Severity using a visual analog scale. Additional measures obtained to describe participants included the Sentence Intelligibility Test (SIT), the 10-item Communication Participation Item Bank (CPIB), and standard biopsychosocial measures of depression (Beck Depression Inventory-Fast Screen; BDI-FS), fatigue (Fatigue Severity Scale; FSS), and overall disease severity (Expanded Disability Status Scale; EDSS). Healthy controls completed all measures, with the exception of the CPIB and EDSS. All data were analyzed using standard descriptive and parametric statistics. For the MSCI group, the relationship between neuropsychological test scores and speech-language variables was explored for each speech task using Pearson correlations. The relationship between neuropsychological test scores and Speech Severity also was explored. Results and Discussion: Topic familiarity for descriptive discourse did not strongly influence speech production or perceptual variables; however, results indicated predicted task-related differences for some spoken language measures. With the exception of the MSCI group, all speaker groups produced the same or slower global speech timing (i.e., speech and articulatory rates), more silent and filled pauses, more grammatically appropriate pauses, and longer silent pause durations in spontaneous discourse compared to reading aloud. Results revealed no appreciable task differences for linguistic complexity measures. Results indicated group differences for speech rate. The MSCI group produced significantly faster speech rates compared to the MSDYS group. Both the MSDYS and the MSCI groups were judged to have significantly poorer perceived Speech Severity compared to typically aging adults. The Task x Group interaction was only significant for the number of silent pauses. 
The MSDYS group produced fewer silent pauses in spontaneous speech and more silent pauses in the reading task compared to other groups. Finally, correlation analysis revealed moderate relationships between neuropsychological test scores and speech hesitation measures, within the MSCI group. Slower information processing and poorer memory were significantly correlated with more silent pauses and poorer executive function was associated with fewer filled pauses in the Unfamiliar discourse task. Results have both clinical and theoretical implications. Overall, clinicians should demonstrate caution when interpreting global measures of speech timing and perceptual measures in the absence of information about cognitive ability. Results also have implications for a comprehensive model of spoken language incorporating cognitive, linguistic, and motor variables.
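    The z ≤ -1.50 deficit criterion used above is easy to make concrete. The sketch below (function name and normative values are illustrative, not taken from the study) standardizes a raw test score against normative data and flags a deficit in that cognitive domain.

```python
def cognitive_deficit(raw_score, norm_mean, norm_sd, cutoff=-1.5):
    """Return True when the standardized score falls at or below the cutoff,
    mirroring the z-score <= -1.50 criterion described above."""
    z = (raw_score - norm_mean) / norm_sd
    return z <= cutoff

# Hypothetical normative mean 50, SD 10.
print(cognitive_deficit(35.0, 50.0, 10.0))  # True  (z = -1.5)
print(cognitive_deficit(45.0, 50.0, 10.0))  # False (z = -0.5)
```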

  8. An Analysis of the Variations from Standard English Pronunciation in the Phonetic Performance of Two Groups of Nonstandard-English-Speaking Children. Final Report.

    ERIC Educational Resources Information Center

    Williams, Frederick, Ed.; And Others

    In this second of two studies conducted with portions of the National Speech and Hearing Survey data, the investigators analyzed the phonetic variants from standard American English in the speech of two groups of nonstandard-English-speaking children. The study used samples of free speech and performance on the Goldman-Fristoe Test of Articulation…

  9. Risk and protective factors associated with speech and language impairment in a nationally representative sample of 4- to 5-year-old children.

    PubMed

    Harrison, Linda J; McLeod, Sharynne

    2010-04-01

    To determine risk and protective factors for speech and language impairment in early childhood. Data are presented for a nationally representative sample of 4,983 children participating in the Longitudinal Study of Australian Children (described in McLeod & Harrison, 2009). Thirty-one child, parent, family, and community factors previously reported as being predictors of speech and language impairment were tested as predictors of (a) parent-rated expressive speech/language concern and (b) receptive language concern, (c) use of speech-language pathology services, and (d) low receptive vocabulary. Bivariate logistic regression analyses confirmed 29 of the identified factors. However, when tested concurrently with other predictors in multivariate analyses, only 19 remained significant: 9 for 2-4 outcomes and 10 for 1 outcome. Consistent risk factors were being male, having ongoing hearing problems, and having a more reactive temperament. Protective factors were having a more persistent and sociable temperament and higher levels of maternal well-being. Results differed by outcome for having an older sibling, parents speaking a language other than English, and parental support for children's learning at home. Identification of children requiring speech and language assessment requires consideration of the context of family life as well as biological and psychosocial factors intrinsic to the child.

  10. Differential effects of speech situations on mothers' and fathers' infant-directed and dog-directed speech: An acoustic analysis.

    PubMed

    Gergely, Anna; Faragó, Tamás; Galambos, Ágoston; Topál, József

    2017-10-23

    There is growing evidence that dog-directed and infant-directed speech have similar acoustic characteristics, like high overall pitch, wide pitch range, and attention-getting devices. However, it is still unclear whether dog- and infant-directed speech have gender- or context-dependent acoustic features. In the present study, we collected comparable infant-, dog-, and adult-directed speech samples (IDS, DDS, and ADS) in four different speech situations (Storytelling, Task solving, Teaching, and Fixed sentences situations); we obtained the samples from parents whose infants were younger than 30 months of age and who also had a pet dog at home. We found that ADS was different from IDS and DDS, independently of the speakers' gender and the given situation. Higher overall pitch in DDS than in IDS during free situations was also found. Our results show that both parents hyperarticulate their vowels when talking to children but not when addressing dogs: this result is consistent with the goal of hyperspeech in language tutoring. Mothers, however, exaggerate their vowels for their infants under 18 months more than fathers do. Our findings suggest that IDS and DDS have context-dependent features and support the notion that people adapt their prosodic features to the acoustic preferences and emotional needs of their audience.

  11. An acoustical assessment of pitch-matching accuracy in relation to speech frequency, speech frequency range, age and gender in preschool children

    NASA Astrophysics Data System (ADS)

    Trollinger, Valerie L.

    This study investigated the relationship between acoustical measurement of singing accuracy and speech fundamental frequency, speech fundamental frequency range, age, and gender in preschool-aged children. Seventy subjects from Southeastern Pennsylvania; the San Francisco Bay Area, California; and Terre Haute, Indiana, participated in the study. Speech frequency was measured by having the subjects participate in spontaneous and guided speech activities with the researcher, with 18 diverse samples extracted from each subject's recording for acoustical analysis of fundamental frequency in Hz with the CSpeech computer program. The fundamental frequencies were averaged together to derive a mean speech frequency score for each subject. Speech range was calculated by subtracting the lowest fundamental frequency produced from the highest fundamental frequency produced, resulting in a speech range measured in increments of Hz. Singing accuracy was measured by having the subjects each echo-sing six randomized patterns using the pitches Middle C, D, E, F♯, G and A (440), using the solfege syllables of Do and Re, which were recorded by a 5-year-old female model. For each subject, 18 samples of singing were recorded. All samples were analyzed with CSpeech for fundamental frequency. For each subject, deviation scores in Hz were derived by calculating the difference between what the model sang in Hz and what the subject sang in response in Hz. Individual scores for each child consisted of an overall mean total deviation frequency, mean frequency deviations for each pattern, and mean frequency deviation for each pitch. 
Pearson correlations, MANOVA and ANOVA analyses, Multiple Regressions and Discriminant Analysis revealed the following findings: (1) moderate but significant (p < .001) relationships emerged between mean speech frequency and the ability to sing the pitches E, F♯, G and A in the study; (2) mean speech frequency also emerged as the strongest predictor of subjects' ability to sing the notes E and F♯; (3) mean speech frequency correlated moderately and significantly (p < .001) with sharpness and flatness of singing response accuracy in Hz; (4) speech range was the strongest predictor of singing accuracy for the pitches G and A in the study (p < .001); (5) gender emerged as a significant, but not the strongest, predictor for ability to sing the pitches in the study above C and D; (6) gender did not correlate with mean speech frequency and speech range; (7) age in months emerged as a low but significant predictor of ability to sing the lower notes (C and D) in the study; (8) age correlated significantly but negatively low (r = -.23, p < .05, two-tailed) with mean speech frequency; and (9) age did not emerge as a significant predictor of overall singing accuracy. Ancillary findings indicated that there were significant differences in singing accuracy based on geographic location by gender, and that siblings and fraternal twins in the study generally performed similarly. In addition, reliability for using the CSpeech for acoustical analysis revealed test/retest correlations of .99, with one exception at .94. Based on these results, suggestions were made concerning future research concerned with studying the use of voice in speech and how it may affect singing development, overall use in singing, and pitch-matching accuracy.
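    The deviation scoring described above reduces to subtracting each model pitch from the corresponding echoed pitch and averaging; a minimal sketch follows (the sign convention, positive = sharp and negative = flat, is our assumption, since the abstract does not specify one):

```python
def mean_deviation_hz(model_f0s, sung_f0s):
    """Mean signed pitch deviation in Hz between model and echoed pitches.

    Positive values indicate singing sharp of the model on average, negative
    values flat (assumed convention, not stated in the abstract).
    """
    diffs = [sung - model for model, sung in zip(model_f0s, sung_f0s)]
    return sum(diffs) / len(diffs)

# Middle C and D (approx. 261.63 Hz and 293.66 Hz) echoed slightly off-pitch.
print(mean_deviation_hz([261.63, 293.66], [270.0, 290.0]))  # ≈ 2.36 (net sharp)
```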

  12. Longitudinal follow-up to evaluate speech disorders in early-treated patients with infantile-onset Pompe disease.

    PubMed

    Zeng, Yin-Ting; Hwu, Wuh-Liang; Torng, Pao-Chuan; Lee, Ni-Chung; Shieh, Jeng-Yi; Lu, Lu; Chien, Yin-Hsiu

    2017-05-01

    Patients with infantile-onset Pompe disease (IOPD) can be treated by recombinant human acid alpha glucosidase (rhGAA) replacement beginning at birth with excellent survival rates, but they still commonly present with speech disorders. This study investigated the progress of speech disorders in these early-treated patients and ascertained the relationship with treatments. Speech disorders, including hypernasal resonance, articulation disorders, and speech intelligibility, were scored by speech-language pathologists using auditory perception in seven early-treated patients over a period of 6 years. Statistical analysis of the first and last evaluations of the patients was performed with the Wilcoxon signed-rank test. A total of 29 speech samples were analyzed. All the patients suffered from hypernasality, articulation disorder, and impairment in speech intelligibility at the age of 3 years. The conditions were stable, and 2 patients developed normal or near-normal speech during follow-up. Speech therapy and a high dose of rhGAA appeared to improve articulation in 6 of the 7 patients (86%, p = 0.028) by decreasing the omission of consonants, which consequently increased speech intelligibility (p = 0.041). Severity of hypernasality was greatly reduced in only 2 patients (29%, p = 0.131). Speech disorders were common even in early and successfully treated patients with IOPD; however, aggressive speech therapy and high-dose rhGAA could improve their speech disorders.

  13. Developing a weighted measure of speech sound accuracy.

    PubMed

    Preston, Jonathan L; Ramsdell, Heather L; Oller, D Kimbrough; Edwards, Mary Louise; Tobin, Stephen J

    2011-02-01

    To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound Accuracy (WSSA) score. The authors then evaluate the reliability and validity of this measure. Phonetic transcriptions were analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy was validated against existing measures, was used to discriminate typical and disordered speech production, and was evaluated to examine sensitivity to changes in phonetic accuracy over time. Reliability between transcribers and consistency of scores among different word sets and testing points are compared. Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners' judgments of the severity of a child's speech disorder. The measure separates children with and without speech sound disorders and captures growth in phonetic accuracy in toddlers' speech over time. The measure correlates highly across transcribers, word lists, and testing points. Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children's speech.
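    The core idea of differentially weighting errors by level of phonetic accuracy can be sketched as follows; the error categories and weights here are hypothetical stand-ins for illustration, not the published WSSA weights.

```python
# Hypothetical error weights: errors closer to the target cost less.
ERROR_WEIGHTS = {"correct": 0, "distortion": 1, "substitution": 2, "omission": 3}

def weighted_accuracy(productions):
    """productions: one error category per target sound.

    Returns a 0-100 score; 100 = every sound produced correctly, and
    near-misses (distortions) are penalized less than omissions.
    """
    max_cost = max(ERROR_WEIGHTS.values()) * len(productions)
    cost = sum(ERROR_WEIGHTS[p] for p in productions)
    return 100.0 * (1.0 - cost / max_cost)

print(weighted_accuracy(["correct", "distortion", "omission", "correct"]))  # ≈ 66.7
```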

  14. Measurement of speech levels in the presence of time varying background noise

    NASA Technical Reports Server (NTRS)

    Pearsons, K. S.; Horonjeff, R.

    1982-01-01

    Short-term speech level measurements that could be used to note changes in vocal effort in a time-varying noise environment were studied. Knowing the changes in speech level would in turn allow prediction of intelligibility in the presence of aircraft flyover noise. Tests indicated that it is possible to use two-second samples of speech to estimate long-term root mean square speech levels. Other tests were also performed in which people read out loud during aircraft flyover noise. Results of these tests indicate that people do indeed raise their voice during flyovers at a rate of about 3-1/2 dB for each 10 dB increase in background level. This finding is in agreement with other tests of speech levels in the presence of steady state background noise.
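    Estimating a long-term level from short segments, as described above, amounts to an RMS measurement per segment plus power-averaging across segments. A minimal sketch under that assumption (the reference level and helper names are ours):

```python
import math

def rms_db(samples, ref=1.0):
    """RMS level of one short segment, in dB relative to `ref`."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(rms / ref)

def long_term_level_db(segments):
    """Power-average per-segment levels (e.g. two-second speech samples)
    to estimate the long-term RMS speech level."""
    powers = [10.0 ** (rms_db(seg) / 10.0) for seg in segments]
    return 10.0 * math.log10(sum(powers) / len(powers))

segment = [0.5, -0.5, 0.5, -0.5]               # toy segment, RMS = 0.5
print(rms_db(segment))                          # ≈ -6.02 dB
print(long_term_level_db([segment, segment]))   # same material: ≈ -6.02 dB
```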

  15. Individual differences in children’s private speech: The role of imaginary companions

    PubMed Central

    Davis, Paige E.; Meins, Elizabeth; Fernyhough, Charles

    2013-01-01

    Relations between children’s imaginary companion status and their engagement in private speech during free play were investigated in a socially diverse sample of 5-year-olds (N = 148). Controlling for socioeconomic status, receptive verbal ability, total number of utterances, and duration of observation, there was a main effect of imaginary companion status on type of private speech. Children who had imaginary companions were more likely to engage in covert private speech compared with their peers who did not have imaginary companions. These results suggest that the private speech of children with imaginary companions is more internalized than that of their peers who do not have imaginary companions and that social engagement with imaginary beings may fulfill a similar role to social engagement with real-life partners in the developmental progression of private speech. PMID:23978382

  16. Effect of Environmental and Behavioral Interventions on Pain Intensity in Preterm Infants for Heel Prick Blood Sampling in the Neonatal Intensive Care Unit.

    PubMed

    Baharlooei, Fatemeh; Marofi, Maryam; Abdeyazdan, Zahra

    2017-01-01

    Recent research suggests that preterm infants perceive pain and stress. Because of the wide range of effects of pain on infants, the present study was conducted on the effect of environmental and behavioral interventions on pain due to heel-prick blood sampling in preterm infants. A clinical trial was conducted among 32 infants with gestational age of 32-37 weeks in the intervention and control groups. The effects of noise reduction by earplugs, light reduction by blindfolds, reduction of nursing manipulation, and creation of an intrauterine position for neonates, from 30 minutes before taking blood samples until 30 minutes after, were measured during the intervention stage. Data were collected using the Neonatal Infant Pain Scale (NIPS) in 5 stages (before intervention, 2 minutes before sampling, during the sampling, and 5 minutes and 30 minutes after the sampling). The data were analyzed using analysis of variance (ANOVA) and paired t-tests in SPSS software. The paired t-test results showed no significant differences between the control and intervention stages in terms of pain scores at base time (P = 0.42) and 2 minutes before sampling (P = 0.12). However, at the sampling time (P = 0.0), and 5 minutes (P = 0.001) and 30 minutes after the sampling (P = 0.001), mean pain score in the intervention stage was significantly less than that in the control stage. Based on the findings, environmental and behavioral interventions reduced pain and facilitated heel-prick blood sampling in preterm infants.

  17. Effects of text-to-speech software use on the reading proficiency of high school struggling readers.

    PubMed

    Park, Hye Jin; Takahashi, Kiriko; Roberts, Kelly D; Delise, Danielle

    2017-01-01

    The literature highlights the benefits of text-to-speech (TTS) software when used as an assistive technology facilitating struggling readers' access to print. However, the effects of TTS software use on students' unassisted reading proficiency have remained relatively unexplored. The researchers utilized an experimental design to investigate whether 9th grade struggling readers who use TTS software to read course materials demonstrate significant improvements in unassisted reading performance. A total of 164 students of 30 teachers in Hawaii participated in the study. Analyses of covariance results indicated that the TTS intervention had a significant, positive effect on student reading vocabulary and reading comprehension after 10 weeks of TTS software use (average 582 minutes). The study has several limitations; nevertheless, it opens discussion of, and points to the need for, further studies investigating TTS software as a viable reading intervention for adolescent struggling readers.

  18. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
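    For readers unfamiliar with the HMM baseline discussed above, the likelihood computation at the heart of HMM-based recognition is the forward algorithm; here is a minimal discrete-output sketch with toy parameters (no training step shown; in practice the parameters would be estimated from data, e.g. via Baum-Welch).

```python
def forward_likelihood(obs, pi, A, B):
    """P(observation sequence | HMM) for a discrete-output HMM.

    pi[s]   -- initial probability of state s
    A[r][s] -- transition probability from state r to state s
    B[s][o] -- probability of emitting symbol o in state s
    """
    n = len(pi)
    # Initialize with the first observation, then recurse over time.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[r] * A[r][s] for r in range(n)) * B[s][obs[t]]
                 for s in range(n)]
    return sum(alpha)

# Toy 2-state, 2-symbol model.
pi = [1.0, 0.0]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood([0, 1], pi, A, B))  # ≈ 0.279
```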

  19. Automatic intelligibility classification of sentence-level pathological speech

    PubMed Central

    Kim, Jangwon; Kumar, Naveen; Tsiartas, Andreas; Li, Ming; Narayanan, Shrikanth S.

    2014-01-01

    Pathological speech usually refers to the condition of speech distortion resulting from atypicalities in voice and/or in the articulatory mechanisms owing to disease, illness or other physical or biological insult to the production system. Although automatic evaluation of speech intelligibility and quality could come in handy in these scenarios to assist experts in diagnosis and treatment design, the many sources and types of variability often make it a very challenging computational processing problem. In this work we propose novel sentence-level features to capture abnormal variation in the prosodic, voice quality and pronunciation aspects in pathological speech. In addition, we propose a post-classification posterior smoothing scheme which refines the posterior of a test sample based on the posteriors of other test samples. Finally, we perform feature-level fusions and subsystem decision fusion for arriving at a final intelligibility decision. The performances are tested on two pathological speech datasets, the NKI CCRT Speech Corpus (advanced head and neck cancer) and the TORGO database (cerebral palsy or amyotrophic lateral sclerosis), by evaluating classification accuracy without overlapping subjects’ data among training and test partitions. Results show that the feature sets of each of the voice quality subsystem, prosodic subsystem, and pronunciation subsystem, offer significant discriminating power for binary intelligibility classification. We observe that the proposed posterior smoothing in the acoustic space can further reduce classification errors. The smoothed posterior score fusion of subsystems shows the best classification performance (73.5% for unweighted, and 72.8% for weighted, average recalls of the binary classes). PMID:25414544
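    The post-classification smoothing idea above, refining one test sample's posterior using the posteriors of other test samples, can be sketched as a simple convex blend with the mean posterior of nearby samples; the fixed weighting below is a hypothetical simplification, and the paper's actual scheme may differ.

```python
def smooth_posterior(posterior, neighbor_posteriors, self_weight=0.5):
    """Blend a test sample's class posterior with the mean posterior of its
    nearby test samples in acoustic space (hypothetical fixed weighting)."""
    if not neighbor_posteriors:
        return posterior  # nothing to smooth against
    neighbor_mean = sum(neighbor_posteriors) / len(neighbor_posteriors)
    return self_weight * posterior + (1.0 - self_weight) * neighbor_mean

# An outlying posterior of 0.9 is pulled toward its neighbors' mean of 0.5.
print(smooth_posterior(0.9, [0.4, 0.6]))  # ≈ 0.7
```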

  20. How our own speech rate influences our perception of others.

    PubMed

    Bosker, Hans Rutger

    2017-08-01

    In conversation, our own speech and that of others follow each other in rapid succession. Effects of the surrounding context on speech perception are well documented but, despite the ubiquity of the sound of our own voice, it is unknown whether our own speech also influences our perception of other talkers. This study investigated context effects induced by our own speech through 6 experiments, specifically targeting rate normalization (i.e., perceiving phonetic segments relative to surrounding speech rate). Experiment 1 revealed that hearing prerecorded fast or slow context sentences altered the perception of ambiguous vowels, replicating earlier work. Experiment 2 demonstrated that talking at a fast or slow rate prior to target presentation also altered target perception, though the effect of preceding speech rate was reduced. Experiment 3 showed that silent talking (i.e., inner speech) at fast or slow rates did not modulate the perception of others, suggesting that the effect of self-produced speech rate in Experiment 2 arose through monitoring of the external speech signal. Experiment 4 demonstrated that, when participants were played back their own (fast/slow) speech, no reduction of the effect of preceding speech rate was observed, suggesting that the additional task of speech production may be responsible for the reduced effect in Experiment 2. Finally, Experiments 5 and 6 replicate Experiments 2 and 3 with new participant samples. Taken together, these results suggest that variation in speech production may induce variation in speech perception, thus carrying implications for our understanding of spoken communication in dialogue settings.

  1. Questions You May Want to Ask Your Child's Speech-Language Pathologist = Preguntas que usted le podria hacer al patologo del habla y el lenguaje de su hijo

    ERIC Educational Resources Information Center

    Centers for Disease Control and Prevention, 2007

    2007-01-01

    This accordion style pamphlet, dual sided with English and Spanish text, suggests questions for parents to ask their Speech-Language Pathologist and speech and language therapy services for their children. Sample questions include: How will I participate in my child's therapy sessions? How do you decide how much time my child will spend on speech…

  2. RecoverNow: Feasibility of a Mobile Tablet-Based Rehabilitation Intervention to Treat Post-Stroke Communication Deficits in the Acute Care Setting

    PubMed Central

    Corbett, Dale; Finestone, Hillel M.; Hatcher, Simon; Lumsden, Jim; Momoli, Franco; Shamy, Michel C. F.; Stotts, Grant; Swartz, Richard H.; Yang, Christine

    2016-01-01

    Background Approximately 40% of patients diagnosed with stroke experience some degree of aphasia. With limited health care resources, patients’ access to speech and language therapies is often delayed. We propose using mobile-platform technology to initiate early speech-language therapy in the acute care setting. For this pilot, our objective was to assess the feasibility of a tablet-based speech-language therapy for patients with communication deficits following acute stroke. Methods We enrolled consecutive patients admitted with a stroke and communication deficits with NIHSS score ≥1 on the best language and/or dysarthria parameters. We excluded patients with severe comprehension deficits where communication was not possible. Following baseline assessment by a speech-language pathologist (SLP), patients were provided with a mobile tablet programmed with individualized therapy applications based on the assessment, and instructed to use it for at least one hour per day. Our objective was to establish feasibility by measuring recruitment rate, adherence rate, retention rate, protocol deviations and acceptability. Results Over 6 months, 143 patients were admitted with a new diagnosis of stroke: 73 had communication deficits, 44 met inclusion criteria, and 30 were enrolled into RecoverNow (median age 62, 26.6% female) for a recruitment rate of 68% of eligible participants. Participants received mobile tablets at a mean 6.8 days from admission [SEM 1.6], and used them for a mean 149.8 minutes/day [SEM 19.1]. In-hospital retention rate was 97%, and 96% of patients rated the mobile tablet-based communication therapy as at least moderately convenient (3/5 or better, with 5/5 being most convenient). Conclusions Individualized speech-language therapy delivered by mobile tablet technology is feasible in acute care. PMID:28002479

  3. RecoverNow: Feasibility of a Mobile Tablet-Based Rehabilitation Intervention to Treat Post-Stroke Communication Deficits in the Acute Care Setting.

    PubMed

    Mallet, Karen H; Shamloul, Rany M; Corbett, Dale; Finestone, Hillel M; Hatcher, Simon; Lumsden, Jim; Momoli, Franco; Shamy, Michel C F; Stotts, Grant; Swartz, Richard H; Yang, Christine; Dowlatshahi, Dar

    2016-01-01

    Approximately 40% of patients diagnosed with stroke experience some degree of aphasia. With limited health care resources, patients' access to speech and language therapies is often delayed. We propose using mobile-platform technology to initiate early speech-language therapy in the acute care setting. For this pilot, our objective was to assess the feasibility of a tablet-based speech-language therapy for patients with communication deficits following acute stroke. We enrolled consecutive patients admitted with a stroke and communication deficits with an NIHSS score ≥1 on the best language and/or dysarthria parameters. We excluded patients with severe comprehension deficits in whom communication was not possible. Following baseline assessment by a speech-language pathologist (SLP), patients were provided with a mobile tablet programmed with individualized therapy applications based on the assessment, and instructed to use it for at least one hour per day. Feasibility was established by measuring recruitment rate, adherence rate, retention rate, protocol deviations and acceptability. Over 6 months, 143 patients were admitted with a new diagnosis of stroke: 73 had communication deficits, 44 met inclusion criteria, and 30 were enrolled into RecoverNow (median age 62, 26.6% female), for a recruitment rate of 68% of eligible participants. Participants received mobile tablets at a mean 6.8 days from admission [SEM 1.6] and used them for a mean 149.8 minutes/day [SEM 19.1]. In-hospital retention rate was 97%, and 96% of patients rated the tablet-based communication therapy as at least moderately convenient (3/5 or better, with 5/5 being most "convenient"). Individualized speech-language therapy delivered by mobile tablet technology is feasible in acute care.

  4. Methodology for speech assessment in the Scandcleft project--an international randomized clinical trial on palatal surgery: experiences from a pilot study.

    PubMed

    Lohmander, A; Willadsen, E; Persson, C; Henningsson, G; Bowden, M; Hutters, B

    2009-07-01

    To present the methodology for speech assessment in the Scandcleft project and discuss issues from a pilot study. Description of methodology and blinded test for speech assessment. Speech samples and instructions for data collection and analysis for comparisons of speech outcomes across the five included languages were developed and tested. PARTICIPANTS AND MATERIALS: Randomly selected video recordings of ten 5-year-old children from each language (n = 50) were included in the project. Speech material consisted of test consonants in single words, connected speech, and syllable chains with nasal consonants. Five experienced speech and language pathologists participated as observers. Narrow phonetic transcription of test consonants was translated into cleft speech characteristics, with ordinal-scale rating of resonance and perceived velopharyngeal closure (VPC). A velopharyngeal composite score (VPC-sum) was extrapolated from the raw data. Intra-rater agreement comparisons were performed. Intra-rater agreement for consonant analysis ranged from 53% to 89%; for hypernasality on high vowels in single words the range was 20% to 80%, and the agreement between the VPC-sum and the overall rating of VPC was 78%. Pooling data of speakers of different languages in the same trial and comparing speech outcomes across trials seems possible if the assessment of speech concerns consonants and is confined to speech units that are phonetically similar across languages. Agreed conventions and rules are important. A composite variable for perceptual assessment of velopharyngeal function during speech seems usable, whereas the method for hypernasality evaluation requires further testing.

  5. Speech Characteristics and Intelligibility in Adults with Mild and Moderate Intellectual Disabilities

    PubMed Central

    Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.

    2017-01-01

    Purpose: Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method: Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results: The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions: The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637

  6. [Detection of Weak Speech Signals from Strong Noise Background Based on Adaptive Stochastic Resonance].

    PubMed

    Lu, Huanhuan; Wang, Fuzhong; Zhang, Huichun

    2016-04-01

    Traditional speech detection methods treat noise as a jamming signal to be filtered out, but against a strong noise background these methods lose part of the original speech signal while eliminating the noise. Stochastic resonance can instead use noise energy to amplify a weak signal and suppress the noise. Based on stochastic resonance theory, a new method for extracting weak speech signals using adaptive stochastic resonance is proposed. Combined with twice sampling, the method detects weak speech signals in strong noise. The system parameters a and b are adjusted adaptively by evaluating the signal-to-noise ratio of the output signal, so that the weak speech signal is optimally detected. Simulation experiments showed that, under a strong noise background, the output signal-to-noise ratio increased from an initial value of −7 dB to about 0.86 dB, a gain of 7.86 dB. The method markedly raises the signal-to-noise ratio of the output speech signal, offering a new approach to detecting weak speech signals in strong-noise environments.
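The adaptive tuning loop described in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the overdamped bistable system dx/dt = a·x − b·x³ + s(t) is the standard stochastic-resonance model, but the test tone, the (a, b) search grid, and the FFT-bin SNR estimator are all assumptions.

```python
import numpy as np

def bistable_sr(sig, dt, a, b):
    """Euler integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t), the classic stochastic-resonance model."""
    x = np.zeros_like(sig)
    for i in range(1, len(sig)):
        x[i] = x[i - 1] + dt * (a * x[i - 1] - b * x[i - 1] ** 3 + sig[i - 1])
    return x

def snr_db(x, fs, f0):
    """Output SNR in dB: power in the f0 bin vs. mean power of the other bins."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    k = np.argmin(np.abs(np.fft.rfftfreq(len(x), 1 / fs) - f0))
    noise = (spec.sum() - spec[k]) / (len(spec) - 1)
    return 10 * np.log10(spec[k] / noise)

fs, f0 = 1000.0, 5.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
# weak tone buried in strong broadband noise (amplitudes are arbitrary)
weak = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)

# "adaptive" step: grid-search (a, b) for the highest output SNR
grid = [(a, b) for a in (0.5, 1.0, 2.0) for b in (0.5, 1.0, 2.0)]
best = max(grid, key=lambda ab: snr_db(bistable_sr(weak, 1 / fs, *ab), fs, f0))
```

A practical system would refine (a, b) iteratively rather than over a fixed grid, but the principle is the same: the parameters are chosen by maximizing the measured output SNR.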

  7. Audio-visual speech perception: a developmental ERP investigation

    PubMed Central

    Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC

    2014-01-01

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002

  8. Oral motor deficits in speech-impaired children with autism

    PubMed Central

    Belmonte, Matthew K.; Saxena-Chandhok, Tanushree; Cherian, Ruth; Muneer, Reema; George, Lisa; Karanth, Prathibha

    2013-01-01

    Absence of communicative speech in autism has been presumed to reflect a fundamental deficit in the use of language, but at least in a subpopulation may instead stem from motor and oral motor issues. Clinical reports of disparity between receptive vs. expressive speech/language abilities reinforce this hypothesis. Our early-intervention clinic develops skills prerequisite to learning and communication, including sitting, attending, and pointing or reference, in children below 6 years of age. In a cohort of 31 children, gross and fine motor skills and activities of daily living as well as receptive and expressive speech were assessed at intake and after 6 and 10 months of intervention. Oral motor skills were evaluated separately within the first 5 months of the child's enrolment in the intervention programme and again at 10 months of intervention. Assessment used a clinician-rated structured report, normed against samples of 360 (for motor and speech skills) and 90 (for oral motor skills) typically developing children matched for age, cultural environment and socio-economic status. In the full sample, oral and other motor skills correlated with receptive and expressive language both in terms of pre-intervention measures and in terms of learning rates during the intervention. A motor-impaired group comprising a third of the sample was discriminated by an uneven profile of skills with oral motor and expressive language deficits out of proportion to the receptive language deficit. This group learnt language more slowly, and ended intervention lagging in oral motor skills. In individuals incapable of the degree of motor sequencing and timing necessary for speech movements, receptive language may outstrip expressive speech. Our data suggest that autistic motor difficulties could range from more basic skills such as pointing to more refined skills such as articulation, and need to be assessed and addressed across this entire range in each individual. PMID:23847480

  9. Recovering With Acquired Apraxia of Speech: The First 2 Years.

    PubMed

    Haley, Katarina L; Shafer, Jennifer N; Harmon, Tyson G; Jacks, Adam

    2016-12-01

    This study was intended to document speech recovery for 1 person with acquired apraxia of speech quantitatively and on the basis of her lived experience. The second author sustained a traumatic brain injury that resulted in acquired apraxia of speech. Over a 2-year period, she documented her recovery through 22 video-recorded monologues. We analyzed these monologues using a combination of auditory perceptual, acoustic, and qualitative methods. Recovery was evident for all quantitative variables examined. For speech sound production, the recovery was most prominent during the first 3 months, but slower improvement was evident for many months. Measures of speaking rate, fluency, and prosody changed more gradually throughout the entire period. A qualitative analysis of topics addressed in the monologues was consistent with the quantitative speech recovery and indicated a subjective dynamic relationship between accuracy and rate, an observation that several factors made speech sound production variable, and a persisting need for cognitive effort while speaking. Speech features improved over an extended time, but the recovery trajectories differed, indicating dynamic reorganization of the underlying speech production system. The relationship among speech dimensions should be examined in other cases and in population samples. The combination of quantitative and qualitative analysis methods offers advantages for understanding clinically relevant aspects of recovery.

  10. Acoustic Sources of Accent in Second Language Japanese Speech.

    PubMed

    Idemaru, Kaori; Wei, Peipei; Gubbins, Lucy

    2018-05-01

    This study reports an exploratory analysis of the acoustic characteristics of second language (L2) speech which give rise to the perception of a foreign accent. Japanese speech samples were collected from American English and Mandarin Chinese speakers (n = 16 in each group) studying Japanese. The L2 participants and native speakers (n = 10) provided speech samples modeled on six short sentences. Segmental (vowels and stops) and prosodic features (rhythm, tone, and fluency) were examined. Native Japanese listeners (n = 10) rated the samples with regard to degree of foreign accent. The analyses predicting accent ratings based on the acoustic measurements indicated that one of the prosodic features in particular, tone (defined as high and low patterns of pitch accent and intonation in this study), plays an important role in robustly predicting accent ratings in L2 Japanese across the two first language (L1) backgrounds. These results were consistent with the prediction based on phonological and phonetic comparisons between Japanese and English, as well as Japanese and Mandarin Chinese. The results also revealed L1-specific predictors of perceived accent in Japanese. The findings of this study contribute to the growing literature that examines sources of perceived foreign accent.

  11. Behavioral and neurobiological correlates of childhood apraxia of speech in Italian children.

    PubMed

    Chilosi, Anna Maria; Lorenzini, Irene; Fiori, Simona; Graziosi, Valentina; Rossi, Giuseppe; Pasquariello, Rosa; Cipriani, Paola; Cioni, Giovanni

    2015-11-01

    Childhood apraxia of speech (CAS) is a neurogenic Speech Sound Disorder whose etiology and neurobiological correlates are still unclear. In the present study, 32 Italian children with idiopathic CAS underwent a comprehensive speech and language, genetic and neuroradiological investigation aimed at gathering information on the possible behavioral and neurobiological markers of the disorder. The results revealed four main aggregations of behavioral symptoms that indicate a multi-deficit disorder involving both motor-speech and language competence. Six children presented with chromosomal alterations. The familial aggregation rate for speech and language difficulties and the male-to-female ratio were both very high in the whole sample, supporting the hypothesis that genetic factors make a substantial contribution to the risk of CAS. As expected in accordance with the diagnosis of idiopathic CAS, conventional MRI did not reveal macrostructural pathogenic neuroanatomical abnormalities, suggesting that CAS may be due to brain microstructural alterations. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Ultrasound as visual feedback in speech habilitation: exploring consultative use in rural British Columbia, Canada.

    PubMed

    Bernhardt, May B; Bacsfalvi, Penelope; Adler-Bock, Marcy; Shimizu, Reiko; Cheney, Audrey; Giesbrecht, Nathan; O'Connell, Maureen; Sirianni, Jason; Radanov, Bosko

    2008-02-01

    Ultrasound has shown promise as a visual feedback tool in speech therapy. Rural clients, however, often have minimal access to new technologies. The purpose of the current study was to evaluate consultative treatment using ultrasound in rural communities. Two speech-language pathologists (SLPs) trained in ultrasound use provided consultation with ultrasound in rural British Columbia to 13 school-aged children with residual speech impairments. Local SLPs provided treatment without ultrasound before and after the consultation. Speech samples were transcribed phonetically by independent trained listeners. Eleven children showed greater gains in production of the principal target /[image omitted]/ after the ultrasound consultation. Four of the seven participants who received more consultation time with ultrasound showed greatest improvement. Individual client factors also affected outcomes. The current study was a quasi-experimental clinic-based study. Larger, controlled experimental studies are needed to provide ultimate evaluation of the consultative use of ultrasound in speech therapy.

  13. A user-operated test of suprathreshold acuity in noise for adult hearing screening: The SUN (Speech Understanding in Noise) test.

    PubMed

    Paglialonga, Alessia; Tognola, Gabriella; Grandori, Ferdinando

    2014-09-01

    A novel, user-operated test of suprathreshold acuity in noise for use in adult hearing screening (AHS) was developed. The Speech Understanding in Noise test (SUN) is a speech-in-noise test that makes use of a list of vowel-consonant-vowel (VCV) stimuli in background noise presented in a three-alternative forced choice (3AFC) paradigm by means of a touch-sensitive screen. The test is automated, easy to use, and provides self-explanatory results (i.e., 'no hearing difficulties', 'a hearing check would be advisable', or 'a hearing check is recommended'). The test was developed from its building blocks (VCVs and speech-shaped noise) through two main steps: (i) development of the test list through equalization of the intelligibility of test stimuli across the set and (ii) optimization of the test results through maximization of the test sensitivity and specificity. The test had 82.9% sensitivity and 85.9% specificity compared to conventional pure-tone screening, and 83.8% sensitivity and 83.9% specificity to identify individuals with disabling hearing impairment. Results obtained so far showed that the test could be easily performed by adults and older adults in less than one minute per ear and that its results were not influenced by ambient noise (up to 65 dBA), suggesting that the test might be a viable method for AHS in clinical as well as non-clinical settings. Copyright © 2014 Elsevier Ltd. All rights reserved.
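The screening accuracies reported here are conventional sensitivity/specificity figures. For reference, they can be computed from paired screen and reference-standard outcomes as below (the toy counts are made up for illustration, not the SUN data):

```python
def sens_spec(results):
    """Sensitivity and specificity of a screen against a reference standard.
    results: list of (screen_positive, truly_impaired) boolean pairs,
    one per ear/subject."""
    tp = sum(1 for s, t in results if s and t)          # true positives
    fn = sum(1 for s, t in results if not s and t)      # missed impairments
    tn = sum(1 for s, t in results if not s and not t)  # true negatives
    fp = sum(1 for s, t in results if s and not t)      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# toy cohort: 8 TP, 2 FN, 9 TN, 1 FP
toy = ([(True, True)] * 8 + [(False, True)] * 2 +
       [(False, False)] * 9 + [(True, False)])
sens, spec = sens_spec(toy)  # 0.8, 0.9
```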

  14. High-frame-rate full-vocal-tract 3D dynamic speech imaging.

    PubMed

    Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P

    2017-04-01

    To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total-variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
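The partial separability model underlying this reconstruction represents the space-time (Casorati) matrix of the dynamic image series as low rank, i.e. a product of a few spatial and temporal factors. A toy sketch of that low-rank structure follows; the matrix sizes and model order are arbitrary assumptions, and the actual method reconstructs from undersampled (k,t)-space data rather than a complete matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 3                                # model order (assumed toy value)
# Casorati matrix: rows = voxels, columns = time frames
U = rng.standard_normal((64, L))     # spatial factors
V = rng.standard_normal((L, 200))    # temporal factors
C = U @ V                            # exactly rank-L space-time data

# a rank-L truncated SVD recovers the dynamics exactly in this noiseless toy
Us, s, Vt = np.linalg.svd(C, full_matrices=False)
C_hat = (Us[:, :L] * s[:L]) @ Vt[:L]
```

The practical payoff is that only a few temporal basis functions (from the navigators) and their spatial coefficient maps (from the imaging data) need to be estimated, which is what permits the sparse sampling.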

  15. Long-Term Speech and Language Outcomes in Prelingually Deaf Children, Adolescents and Young Adults Who Received Cochlear Implants in Childhood

    PubMed Central

    Ruffin, Chad V.; Kronenberger, William G.; Colson, Bethany G.; Henning, Shirley C.; Pisoni, David B.

    2013-01-01

    This study investigated long-term speech and language outcomes in 51 prelingually deaf children, adolescents, and young adults who received cochlear implants (CIs) prior to 7 years of age and used their implants for at least 7 years. Average speech perception scores were similar to those found in prior research with other samples of experienced CI users. Mean language test scores were lower than norm-referenced scores from nationally representative normal-hearing, typically-developing samples, although a majority of the CI users scored within one standard deviation of the normative mean or higher on the Peabody Picture Vocabulary Test, Fourth Edition (63%) and Clinical Evaluation of Language Fundamentals, Fourth Edition (69%). Speech perception scores were negatively associated with a meningitic etiology of hearing loss, older age at implantation, poorer pre-implant unaided pure tone average thresholds, lower family income, and the use of Total Communication. Users of CIs for 15 years or more were more likely to have these characteristics and were more likely to score lower on measures of speech perception compared to users of CIs for 14 years or less. The aggregation of these risk factors in the > 15 years of CI use subgroup accounts for their lower speech perception scores and may stem from more conservative CI candidacy criteria in use at the beginning of pediatric cochlear implantation. PMID:23988907

  16. Speech Analyses of Four Children with Repaired Cleft Palates.

    ERIC Educational Resources Information Center

    Powers, Gene R.; And Others

    1990-01-01

    Spontaneous speech samples were collected from four three-year-olds with surgically repaired cleft palates. Analyses showed that subjects were similar to one another with respect to their phonetic inventories but differed considerably in the frequency and types of phonological processes used. (Author/JDD)

  17. Speech acoustic markers of early stage and prodromal Huntington's disease: a marker of disease onset?

    PubMed

    Vogel, Adam P; Shirbin, Christopher; Churchyard, Andrew J; Stout, Julie C

    2012-12-01

    Speech disturbances (e.g., altered prosody) have been described in symptomatic Huntington's disease (HD) individuals; however, the extent to which speech changes in gene-positive pre-manifest (PreHD) individuals is largely unknown. The speech of individuals carrying the mutant HTT gene is a behavioural/motor/cognitive marker demonstrating some potential as an objective indicator of early HD onset and disease progression. Speech samples were acquired from 30 individuals carrying the mutant HTT gene (13 PreHD, 17 early stage HD) and 15 matched controls. Participants read a passage, produced a monologue and said the days of the week. Data were analysed acoustically for measures of timing, frequency and intensity. There was a clear effect of group across most acoustic measures, such that speech performance differed in line with disease progression. Comparisons across groups revealed significant differences between the control and the early stage HD group on measures of timing (e.g., speech rate). Participants carrying the mutant HTT gene presented with slower rates of speech, took longer to say words and produced greater silences between and within words compared to healthy controls. Importantly, speech rate showed a significant correlation to burden of disease scores. The speech of early stage HD differed significantly from controls. The speech of PreHD, although not reaching significance, tended to lie between the performance of controls and early stage HD. This suggests that changes in speech production appear to develop prior to diagnosis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. How much is a word? Predicting ease of articulation planning from apraxic speech error patterns.

    PubMed

    Ziegler, Wolfram; Aichert, Ingrid

    2015-08-01

    According to intuitive concepts, 'ease of articulation' is influenced by factors like word length or the presence of consonant clusters in an utterance. Imaging studies of speech motor control use these factors to systematically tax the speech motor system. Evidence from apraxia of speech, a disorder supposed to result from speech motor planning impairment after lesions to speech motor centers in the left hemisphere, supports the relevance of these and other factors in disordered speech planning and the genesis of apraxic speech errors. Yet, there is no unified account of the structural properties rendering a word easy or difficult to pronounce. To model the motor planning demands of word articulation by a nonlinear regression model trained to predict the likelihood of accurate word production in apraxia of speech. We used a tree-structure model in which vocal tract gestures are embedded in hierarchically nested prosodic domains to derive a recursive set of terms for the computation of the likelihood of accurate word production. The model was trained with accuracy data from a set of 136 words averaged over 66 samples from apraxic speakers. In a second step, the model coefficients were used to predict a test dataset of accuracy values for 96 new words, averaged over 120 samples produced by a different group of apraxic speakers. Accurate modeling of the first dataset was achieved in the training study (adjusted R² = .71). In the cross-validation, the test dataset was predicted with high accuracy as well (adjusted R² = .67). The model shape, as reflected by the coefficient estimates, was consistent with current phonetic theories and with clinical evidence. In accordance with phonetic and psycholinguistic work, a strong influence of word stress on articulation errors was found. The proposed model provides a unified and transparent account of the motor planning requirements of word articulation. Copyright © 2015 Elsevier Ltd. All rights reserved.
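The goodness-of-fit statistics quoted in this abstract are adjusted R² values, which penalize R² for the number of model parameters. For reference, the generic formula (not the authors' tree-structure model) is:

```python
def adjusted_r2(y, y_hat, n_params):
    """Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - p - 1),
    where n is the number of observations and p the number of predictors."""
    n = len(y)
    mean_y = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# hypothetical observed vs. predicted word-accuracy values, 2-predictor model
fit = adjusted_r2([0.9, 0.7, 0.5, 0.8, 0.6, 0.4],
                  [0.85, 0.72, 0.48, 0.79, 0.63, 0.45], 2)
```

Unlike plain R², this statistic only increases when an added predictor improves the fit more than chance would, which matters when comparing models of different complexity as done here across training and cross-validation sets.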

  19. An open-label study of sodium oxybate in Spasmodic dysphonia.

    PubMed

    Rumbach, Anna F; Blitzer, Andrew; Frucht, Steven J; Simonyan, Kristina

    2017-06-01

    Spasmodic dysphonia (SD) is a task-specific laryngeal dystonia that affects speech production. Co-occurring voice tremor (VT) often complicates the diagnosis and clinical management of SD. Treatment of SD and VT is largely limited to botulinum toxin injections into laryngeal musculature; other pharmacological options are not sufficiently developed. Open-label study. We conducted an open-label study in 23 SD and 22 SD/VT patients to examine the effects of sodium oxybate (Xyrem), an oral agent with therapeutic effects similar to those of alcohol in these patients. Blinded randomized analysis of voice and speech samples assessed symptom improvement before and after drug administration. Sodium oxybate significantly improved voice symptoms (P = .001), primarily by reducing the number of SD-characteristic voice breaks and the severity of VT. Sodium oxybate further showed a trend for improving VT symptoms (P = .03) in a subset of patients who received successful botulinum toxin injections for the management of their SD symptoms. The drug's effects were observed approximately 30 to 40 minutes after its intake and lasted about 3.5 to 4 hours. Our study demonstrated that sodium oxybate reduced voice symptoms in 82.2% of alcohol-responsive SD patients both with and without co-occurring VT. Our findings suggest that the therapeutic mechanism of sodium oxybate in SD and SD/VT may be linked to that of alcohol, and as such, sodium oxybate might be beneficial for alcohol-responsive SD and SD/VT patients. Level of Evidence: 4. Laryngoscope, 127:1402-1407, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Power in methods: language to infants in structured and naturalistic contexts.

    PubMed

    Tamis-LeMonda, Catherine S; Kuchirko, Yana; Luo, Rufan; Escobar, Kelly; Bornstein, Marc H

    2017-11-01

    Methods can powerfully affect conclusions about infant experiences and learning. Data from naturalistic observations may paint a very different picture of learning and development from those based on structured tasks, as illustrated in studies of infant walking, object permanence, intention understanding, and so forth. Using language as a model system, we compared the speech of 40 mothers to their 13-month-old infants during structured play and naturalistic home routines. The contrasting methods yielded unique portrayals of infant language experiences, while simultaneously underscoring cross-situational correspondence at an individual level. Infants experienced substantially more total words and different words per minute during structured play than they did during naturalistic routines. Language input during structured play was consistently dense from minute to minute, whereas language during naturalistic routines showed striking fluctuations interspersed with silence. Despite these differences, infants' language experiences during structured play mirrored the peak language interactions infants experienced during naturalistic routines, and correlations between language inputs in the two conditions were strong. The implications of developmental methods for documenting the nature of experiences and individual differences are discussed. © 2017 John Wiley & Sons Ltd.
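The per-minute input measures used in this study (total words and different words per minute) can be computed from a timestamped transcript roughly as follows. This is a minimal sketch: the tuple format, whitespace tokenization, and session-span definition are all assumptions, not the authors' coding scheme.

```python
def rate_per_minute(utterances):
    """utterances: list of (start_sec, end_sec, text) tuples from a transcript.
    Returns (total words per minute, different words per minute)
    over the full observed span, silences included."""
    tokens = [w.lower() for _, _, text in utterances for w in text.split()]
    span_min = (max(e for _, e, _ in utterances) -
                min(s for s, _, _ in utterances)) / 60.0
    return len(tokens) / span_min, len(set(tokens)) / span_min

# hypothetical two-utterance transcript spanning 2 minutes
demo = [(0, 5, "look at the ball"),
        (115, 120, "the ball the red ball go")]
wpm, types_pm = rate_per_minute(demo)  # 5.0 tokens/min, 3.0 types/min
```

Computing the same measures minute by minute, rather than over the whole session, would expose the fluctuation-versus-density contrast between naturalistic routines and structured play that the study describes.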

  1. Voice Acoustical Measurement of the Severity of Major Depression

    ERIC Educational Resources Information Center

    Cannizzaro, Michael; Harel, Brian; Reilly, Nicole; Chappell, Phillip; Snyder, Peter J.

    2004-01-01

    A number of empirical studies have documented the relationship between quantifiable and objective acoustical measures of voice and speech, and clinical subjective ratings of severity of Major Depression. To further explore this relationship, speech samples were extracted from videotape recordings of structured interviews made during the…

  2. Perceptions of University Instructors When Listening to International Student Speech

    ERIC Educational Resources Information Center

    Sheppard, Beth; Elliott, Nancy; Baese-Berk, Melissa

    2017-01-01

    Intensive English Program (IEP) Instructors and content faculty both listen to international students at the university. For these two groups of instructors, this study compared perceptions of international student speech by collecting comprehensibility ratings and transcription samples for intelligibility scores. No significant differences were…

  3. Real-time continuous visual biofeedback in the treatment of speech breathing disorders following childhood traumatic brain injury: report of one case.

    PubMed

    Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C

    1999-01-01

    The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks was devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.

  4. Speech Comprehension Difficulties in Chronic Tinnitus and Its Relation to Hyperacusis

    PubMed Central

    Vielsmeier, Veronika; Kreuzer, Peter M.; Haubner, Frank; Steffens, Thomas; Semmler, Philipp R. O.; Kleinjung, Tobias; Schlee, Winfried; Langguth, Berthold; Schecklmann, Martin

    2016-01-01

Objective: Many tinnitus patients complain about difficulties regarding speech comprehension. In spite of the high clinical relevance, little is known about underlying mechanisms and predisposing factors. Here, we performed an exploratory investigation in a large sample of tinnitus patients to (1) estimate the prevalence of speech comprehension difficulties among tinnitus patients, to (2) compare subjective reports of speech comprehension difficulties with behavioral measurements in a standardized speech comprehension test and to (3) explore underlying mechanisms by analyzing the relationship between speech comprehension difficulties and peripheral hearing function (pure tone audiogram), as well as with co-morbid hyperacusis as a central auditory processing disorder. Subjects and Methods: Speech comprehension was assessed in 361 tinnitus patients presenting between 07/2012 and 08/2014 at the Interdisciplinary Tinnitus Clinic at the University of Regensburg. The assessment included standard audiological assessments (pure tone audiometry, tinnitus pitch, and loudness matching), the Goettingen sentence test (in quiet) for speech audiometric evaluation, two questions about hyperacusis, and two questions about speech comprehension in quiet and noisy environments (“How would you rate your ability to understand speech?”; “How would you rate your ability to follow a conversation when multiple people are speaking simultaneously?”). Results: Subjectively-reported speech comprehension deficits are frequent among tinnitus patients, especially in noisy environments (cocktail party situation). 74.2% of all investigated patients showed disturbed speech comprehension (indicated by values above 21.5 dB SPL in the Goettingen sentence test). Subjective speech comprehension complaints (both in general and in noisy environments) were correlated with hearing level and with audiologically-assessed speech comprehension ability. 
In contrast, co-morbid hyperacusis was only correlated with speech comprehension difficulties in noisy environments, but not with speech comprehension difficulties in general. Conclusion: Speech comprehension deficits are frequent among tinnitus patients. Whereas speech comprehension deficits in quiet environments are primarily due to peripheral hearing loss, speech comprehension deficits in noisy environments are related to both peripheral hearing loss and dysfunctional central auditory processing. Disturbed speech comprehension in noisy environments might be modulated by a central inhibitory deficit. In addition, attentional and cognitive aspects may play a role. PMID:28018209
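    The classification rule used in this study is a simple threshold on the Goettingen sentence test result: comprehension counts as disturbed when the speech reception threshold exceeds 21.5 dB SPL. A minimal sketch of that rule, where only the 21.5 dB SPL cutoff comes from the abstract and the example thresholds are hypothetical:

    ```python
    # The 21.5 dB SPL cutoff is from the study; the patient values are invented.
    CUTOFF_DB_SPL = 21.5

    def disturbed_comprehension(srt_db_spl):
        """Classify one patient: disturbed if the speech reception threshold exceeds the cutoff."""
        return srt_db_spl > CUTOFF_DB_SPL

    def prevalence(srts):
        """Fraction of patients whose SRT exceeds the cutoff."""
        flagged = [s for s in srts if disturbed_comprehension(s)]
        return len(flagged) / len(srts)

    example_srts = [18.0, 23.4, 30.1, 21.5, 25.0]  # illustrative values only
    print(prevalence(example_srts))  # 3 of 5 exceed 21.5 -> 0.6
    ```

    In the study itself this proportion came out at 74.2% of 361 patients; the sketch only shows the shape of the computation.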

  6. Association between enterovirus infection and speech and language impairments: A nationwide population-based study.

    PubMed

    Hung, Tai-Hsin; Chen, Vincent Chin-Hung; Yang, Yao-Hsu; Tsai, Ching-Shu; Lu, Mong-Liang; McIntyre, Roger S; Lee, Yena; Huang, Kuo-You

    2018-06-01

Delay and impairment in speech and language are common developmental problems in younger populations. Hitherto, there has been minimal study of the association between common childhood infections (e.g. enterovirus [EV]) and speech and language. The impetus for evaluating this association is provided by evidence linking inflammation to neurodevelopmental disorders. Herein we sought to determine whether an association exists between EV infection and subsequent diagnoses of speech and language impairments in a nationwide population-based sample in Taiwan. Our study acquired data from the Taiwan National Health Insurance Research Database. The sample comprised individuals under 18 years of age with newly diagnosed EV infection during the period from January 1998 to December 2011. 39669 eligible cases were compared to matched controls and assessed during the study period for incident cases of speech and language impairments. Cox regression analyses were applied, adjusting for sex, age and other physical and mental problems. In the fully adjusted Cox regression model for hazard ratios, EV infection was positively associated with speech and language impairments (HR = 1.14, 95% CI: 1.06-1.22) after adjusting for age, sex and other confounds. Compared to the control group, the hazard ratio for speech and language impairments was 1.12 (95% CI: 1.03-1.21) amongst the group with EV infection without hospitalization, and 1.26 (95% CI: 1.10-1.45) amongst the group with EV infection requiring hospitalization. EV infection is temporally associated with incident speech and language impairments. Our findings herein provide rationale for educating families that EV infection may be associated with subsequent speech and language problems in susceptible individuals and that monitoring for such a presentation would be warranted. WHAT THIS PAPER ADDS?: Speech and language impairments associated with central nervous system infections have been reported in the literature. EVs are medically important human pathogens and are associated with select neuropsychiatric diseases. Notwithstanding, relatively few reports have examined the effects of EV infection on speech and language problems. Our study used a nationwide longitudinal dataset and identified that children with EV infection have a greater risk for speech and language impairments as compared with the control group. Infected children with other comorbidities or risk factors may be more likely to develop speech problems. Clinicians should be vigilant for the onset of language developmental abnormalities in preschool children with EV infection. Copyright © 2018 Elsevier Ltd. All rights reserved.
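    The hazard ratios reported in this record come from Cox regression, where HR = exp(beta) and the 95% CI is exp(beta ± 1.96 · SE). As an arithmetic sanity check, the coefficient and standard error can be back-solved from the reported interval; the sketch below uses only the published HR = 1.14 (95% CI 1.06-1.22), not the study's data:

    ```python
    import math

    def hr_ci(beta, se, z=1.96):
        """Hazard ratio and 95% CI from a Cox coefficient and its standard error."""
        return (math.exp(beta),
                math.exp(beta - z * se),
                math.exp(beta + z * se))

    # Back-solve beta and SE from the reported HR = 1.14 (95% CI 1.06-1.22);
    # on the log scale the CI is symmetric around beta.
    beta = math.log(1.14)
    se = (math.log(1.22) - math.log(1.06)) / (2 * 1.96)

    hr, lo, hi = hr_ci(beta, se)
    print(round(hr, 2), round(lo, 2), round(hi, 2))  # 1.14 1.06 1.22
    ```

    The round trip reproducing the published interval is just a consistency check on the exp/log relationship, not a re-analysis.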

  7. SPEECH EVALUATION WITH AND WITHOUT PALATAL OBTURATOR IN PATIENTS SUBMITTED TO MAXILLECTOMY

    PubMed Central

    de Carvalho-Teles, Viviane; Pegoraro-Krook, Maria Inês; Lauris, José Roberto Pereira

    2006-01-01

Most patients who have undergone resection of the maxillae due to benign or malignant tumors in the palatomaxillary region present with speech and swallowing disorders. Coupling of the oral and nasal cavities increases nasal resonance, resulting in hypernasality and unintelligible speech. Prosthodontic rehabilitation of maxillary resections with effective separation of the oral and nasal cavities can improve speech and esthetics, and assist the psychosocial adjustment of the patient as well. The objective of this study was to evaluate the efficacy of the palatal obturator prosthesis on speech intelligibility and resonance of 23 patients with age ranging from 18 to 83 years (Mean = 49.5 years), who had undergone inframedial-structural maxillectomy. The patients were requested to count from 1 to 20, to repeat 21 words and to speak spontaneously for 15 seconds, once with and again without the prosthesis, for tape recording purposes. The resonance and speech intelligibility were judged by 5 speech language pathologists from the tape-recorded samples. The results have shown that the majority of patients (82.6%) significantly improved their speech intelligibility, and 16 patients (69.9%) exhibited a significant hypernasality reduction with the obturator in place. The results of this study indicated that the maxillary obturator prosthesis was effective in improving speech intelligibility and resonance in patients who had undergone maxillectomy. PMID:19089242

  8. Speech Intelligibility in Persian Hearing Impaired Children with Cochlear Implants and Hearing Aids.

    PubMed

    Rezaei, Mohammad; Emadi, Maryam; Zamani, Peyman; Farahani, Farhad; Lotfi, Gohar

    2017-04-01

The aim of the present study is to evaluate and compare speech intelligibility in hearing impaired children with cochlear implants (CI) and hearing aid (HA) users and children with normal hearing (NH). The sample consisted of 45 Persian-speaking children aged 3 to 5 years old. They were divided into three groups of 15 children each: children with NH, children with CI, and children using HAs, all in Hamadan. Participants were evaluated with a test of speech intelligibility level. Results of ANOVA on the speech intelligibility test showed that NH children had significantly better performance than hearing impaired children with CI and HA. Post-hoc analysis, using the Scheffe test, indicated that the mean score of speech intelligibility of normal children was higher than that of the HA and CI groups, but the difference was not significant between the mean speech intelligibility of children with hearing loss who use cochlear implants and those using HAs. It is clear that even with remarkable advances in HA technology, many hearing impaired children continue to find speech production a challenging problem. Given that speech intelligibility is a key element in proper communication and social interaction, educational and rehabilitation programs are essential to improve the speech intelligibility of children with hearing loss.

  9. Recording high quality speech during tagged cine-MRI studies using a fiber optic microphone.

    PubMed

    NessAiver, Moriel S; Stone, Maureen; Parthasarathy, Vijay; Kahana, Yuvi; Paritsky, Alexander; Paritsky, Alex

    2006-01-01

    To investigate the feasibility of obtaining high quality speech recordings during cine imaging of tongue movement using a fiber optic microphone. A Complementary Spatial Modulation of Magnetization (C-SPAMM) tagged cine sequence triggered by an electrocardiogram (ECG) simulator was used to image a volunteer while speaking the syllable pairs /a/-/u/, /i/-/u/, and the words "golly" and "Tamil" in sync with the imaging sequence. A noise-canceling, optical microphone was fastened approximately 1-2 inches above the mouth of the volunteer. The microphone was attached via optical fiber to a laptop computer, where the speech was sampled at 44.1 kHz. A reference recording of gradient activity with no speech was subtracted from target recordings. Good quality speech was discernible above the background gradient sound using the fiber optic microphone without reference subtraction. The audio waveform of gradient activity was extremely stable and reproducible. Subtraction of the reference gradient recording further reduced gradient noise by roughly 21 dB, resulting in exceptionally high quality speech waveforms. It is possible to obtain high quality speech recordings using an optical microphone even during exceptionally loud cine imaging sequences. This opens up the possibility of more elaborate MRI studies of speech including spectral analysis of the speech signal in all types of MRI.
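    The reference-subtraction step described above can be illustrated with synthetic signals: because the gradient waveform is stable and reproducible, a recording of gradient activity alone can be subtracted from the speech-plus-gradient recording. The signals and numbers below are made up; only the 44.1 kHz rate and the subtraction idea come from the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 44100                      # sampling rate used in the study (Hz)
    t = np.arange(fs) / fs          # one second of audio

    gradient = 0.5 * np.sin(2 * np.pi * 1000 * t)   # stand-in for the stable gradient sound
    speech = 0.1 * np.sin(2 * np.pi * 200 * t)      # stand-in for the speech signal
    recording = speech + gradient

    # The reference is the gradient recorded alone, with a little measurement noise;
    # subtracting it removes most of the gradient component from the target.
    reference = gradient + 0.005 * rng.standard_normal(fs)
    cleaned = recording - reference

    def rms_db(x):
        """Root-mean-square level in decibels."""
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    noise_before = rms_db(recording - speech)   # residual gradient noise before subtraction
    noise_after = rms_db(cleaned - speech)      # residual noise after subtraction
    print(round(noise_before - noise_after, 1), "dB reduction")
    ```

    With these synthetic signals the reduction depends on the injected measurement noise; the ~21 dB figure in the abstract is the study's own measurement.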

  10. Everyday listeners' impressions of speech produced by individuals with adductor spasmodic dysphonia.

    PubMed

    Nagle, Kathleen F; Eadie, Tanya L; Yorkston, Kathryn M

    2015-01-01

    Individuals with adductor spasmodic dysphonia (ADSD) have reported that unfamiliar communication partners appear to judge them as sneaky, nervous or not intelligent, apparently based on the quality of their speech; however, there is minimal research into the actual everyday perspective of listening to ADSD speech. The purpose of this study was to investigate the impressions of listeners hearing ADSD speech for the first time using a mixed-methods design. Everyday listeners were interviewed following sessions in which they made ratings of ADSD speech. A semi-structured interview approach was used and data were analyzed using thematic content analysis. Three major themes emerged: (1) everyday listeners make judgments about speakers with ADSD; (2) ADSD speech does not sound normal to everyday listeners; and (3) rating overall severity is difficult for everyday listeners. Participants described ADSD speech similarly to existing literature; however, some listeners inaccurately extrapolated speaker attributes based solely on speech samples. Listeners may draw erroneous conclusions about individuals with ADSD and these biases may affect the communicative success of these individuals. Results have implications for counseling individuals with ADSD, as well as the need for education and awareness about ADSD. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. The Relationship between Binaural Benefit and Difference in Unilateral Speech Recognition Performance for Bilateral Cochlear Implant Users

    PubMed Central

    Yoon, Yang-soo; Li, Yongxin; Kang, Hou-Yong; Fu, Qian-Jie

    2011-01-01

    Objective The full benefit of bilateral cochlear implants may depend on the unilateral performance with each device, the speech materials, processing ability of the user, and/or the listening environment. In this study, bilateral and unilateral speech performances were evaluated in terms of recognition of phonemes and sentences presented in quiet or in noise. Design Speech recognition was measured for unilateral left, unilateral right, and bilateral listening conditions; speech and noise were presented at 0° azimuth. The “binaural benefit” was defined as the difference between bilateral performance and unilateral performance with the better ear. Study Sample 9 adults with bilateral cochlear implants participated. Results On average, results showed a greater binaural benefit in noise than in quiet for all speech tests. More importantly, the binaural benefit was greater when unilateral performance was similar across ears. As the difference in unilateral performance between ears increased, the binaural advantage decreased; this functional relationship was observed across the different speech materials and noise levels even though there was substantial intra- and inter-subject variability. Conclusions The results indicate that subjects who show symmetry in speech recognition performance between implanted ears in general show a large binaural benefit. PMID:21696329
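    The "binaural benefit" defined in this record is simply bilateral performance minus the better unilateral score. A toy sketch with hypothetical percent-correct scores, illustrating the reported trend that the benefit shrinks as the between-ear difference grows:

    ```python
    def binaural_benefit(left, right, bilateral):
        """Bilateral performance minus unilateral performance with the better ear."""
        return bilateral - max(left, right)

    def ear_asymmetry(left, right):
        """Absolute difference in unilateral performance between ears."""
        return abs(left - right)

    # Hypothetical scores (percent correct), not the study's data: the benefit
    # is larger when the two ears perform similarly.
    symmetric = binaural_benefit(left=60, right=62, bilateral=75)    # similar ears
    asymmetric = binaural_benefit(left=30, right=70, bilateral=74)   # unequal ears
    print(symmetric, asymmetric)  # 13 4
    ```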

  12. The brain dynamics of rapid perceptual adaptation to adverse listening conditions.

    PubMed

    Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas

    2013-06-26

    Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.

  13. Development and functional significance of private speech among attention-deficit hyperactivity disordered and normal boys.

    PubMed

    Berk, L E; Potts, M K

    1991-06-01

We compared the development of spontaneous private speech and its relationship to self-controlled behavior in a sample of 6- to 12-year-olds with attention-deficit hyperactivity disorder (ADHD) and matched normal controls. Thirty-eight boys were observed in their classrooms while engaged in math seatwork. Results revealed that ADHD children were delayed in private speech development in that they engaged in more externalized, self-guiding and less inaudible, internalized speech than normal youngsters. Several findings suggest that the developmental lag was a consequence of a highly unmanageable attentional system that prevents ADHD children's private speech from gaining efficient mastery over behavior. First, self-guiding speech was associated with greater attentional focus only among the least distractible ADHD boys. Second, the most mature, internalized speech forms were correlated with self-stimulating behavior for ADHD subjects but not for controls. Third, observations of ADHD children both on and off stimulant medication indicated that reducing their symptoms substantially increased the maturity of private speech and its association with motor quiescence and attention to task. Results suggest that the Vygotskian hypothesis of a unidirectional path of influence from private speech to self-controlled behavior should be expanded into a bidirectional model. These findings may also shed light on why treatment programs that train children with attentional deficits in speech-to-self have shown limited efficacy.

  14. Cardiovascular reactivity to acute psychological stress following sleep deprivation.

    PubMed

    Franzen, Peter L; Gianaros, Peter J; Marsland, Anna L; Hall, Martica H; Siegle, Greg J; Dahl, Ronald E; Buysse, Daniel J

    2011-10-01

    Psychological stress and sleep disturbances are highly prevalent and are both implicated in the etiology of cardiovascular diseases. Given the common co-occurrence of psychological distress and sleep disturbances including short sleep duration, this study examined the combined effects of these two factors on blood pressure reactivity to immediate mental challenge tasks after well-rested and sleep-deprived experimental conditions. Participants (n = 20) were healthy young adults free from current or past sleep, psychiatric, or major medical disorders. Using a within-subjects crossover design, we examined acute stress reactivity under two experimental conditions: after a night of normal sleep in the laboratory and after a night of total sleep deprivation. Two standardized psychological stress tasks were administered, a Stroop color-word naming interference task and a speech task, which were preceded by a prestress baseline period and followed by a poststress recovery period. Each period was 10 minutes in duration, and blood pressure recordings were collected every 2.5 minutes throughout each period. Mean blood pressure responses during stress and recovery periods were examined with a mixed-effects analysis of covariance, controlling for baseline blood pressure. There was a significant interaction between sleep deprivation and stress on systolic blood pressure (F(2,82.7) = 4.05, p = .02). Systolic blood pressure was higher in the sleep deprivation condition compared with the normal sleep condition during the speech task and during the two baseline periods. Sleep deprivation amplified systolic blood pressure increases to psychological stress. Sleep loss may increase cardiovascular risk by dysregulating stress physiology.

  15. Connected speech as a marker of disease progression in autopsy-proven Alzheimer's disease.

    PubMed

    Ahmed, Samrah; Haigh, Anne-Marie F; de Jager, Celeste A; Garrard, Peter

    2013-12-01

    Although an insidious history of episodic memory difficulty is a typical presenting symptom of Alzheimer's disease, detailed neuropsychological profiling frequently demonstrates deficits in other cognitive domains, including language. Previous studies from our group have shown that language changes may be reflected in connected speech production in the earliest stages of typical Alzheimer's disease. The aim of the present study was to identify features of connected speech that could be used to examine longitudinal profiles of impairment in Alzheimer's disease. Samples of connected speech were obtained from 15 former participants in a longitudinal cohort study of ageing and dementia, in whom Alzheimer's disease was diagnosed during life and confirmed at post-mortem. All patients met clinical and neuropsychological criteria for mild cognitive impairment between 6 and 18 months before converting to a status of probable Alzheimer's disease. In a subset of these patients neuropsychological data were available, both at the point of conversion to Alzheimer's disease, and after disease severity had progressed from the mild to moderate stage. Connected speech samples from these patients were examined at later disease stages. Spoken language samples were obtained using the Cookie Theft picture description task. Samples were analysed using measures of syntactic complexity, lexical content, speech production, fluency and semantic content. Individual case analysis revealed that subtle changes in language were evident during the prodromal stages of Alzheimer's disease, with two-thirds of patients with mild cognitive impairment showing significant but heterogeneous changes in connected speech. However, impairments at the mild cognitive impairment stage did not necessarily entail deficits at mild or moderate stages of disease, suggesting non-language influences on some aspects of performance. 
Subsequent examination of these measures revealed significant linear trends over the three stages of disease in syntactic complexity, semantic and lexical content. The findings suggest, first, that there is a progressive disruption in language integrity, detectable from the prodromal stage in a subset of patients with Alzheimer's disease, and, second, that measures of semantic and lexical content and syntactic complexity best capture the global progression of linguistic impairment through the successive clinical stages of disease. The identification of disease-specific language impairment in prodromal Alzheimer's disease could enhance clinicians' ability to distinguish probable Alzheimer's disease from changes attributable to ageing, while longitudinal assessment could provide a simple approach to disease monitoring in therapeutic trials.
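    Lexical measures of connected speech like those mentioned in this record generally start from simple counts over a transcript. A toy illustration of two common indices, type-token ratio and mean length of utterance; the sample sentences are invented and this is not the study's analysis protocol:

    ```python
    def type_token_ratio(words):
        """Distinct words divided by total words: a crude index of lexical diversity."""
        return len(set(words)) / len(words)

    def mean_length_of_utterance(utterances):
        """Average number of words per utterance."""
        return sum(len(u.split()) for u in utterances) / len(utterances)

    # Invented picture-description fragments, not actual patient speech.
    sample = ["the boy is on the stool", "the cookie jar is falling"]
    words = " ".join(sample).lower().split()

    print(round(type_token_ratio(words), 2))   # 8 distinct of 11 tokens -> 0.73
    print(mean_length_of_utterance(sample))    # (6 + 5) / 2 -> 5.5
    ```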

  16. GRIN2A: an aptly named gene for speech dysfunction.

    PubMed

    Turner, Samantha J; Mayes, Angela K; Verhoeven, Andrea; Mandelstam, Simone A; Morgan, Angela T; Scheffer, Ingrid E

    2015-02-10

    To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. © 2015 American Academy of Neurology.

  17. Martin Luther King, Jr. Teacher's Resource Manual.

    ERIC Educational Resources Information Center

    Connecticut State Dept. of Education, Hartford.

    This Connecticut teachers' manual on Martin Luther King, Jr. includes: (1) teacher background information; (2) five excerpts from King's speeches; (3) four themes for lesson plans; and (4) sample lesson plans. The teacher's background information provides biographical sketches of King and his precursors. The five speeches reproduced here are…

  18. Native Reactions to Non-Native Speech: A Review of Empirical Research.

    ERIC Educational Resources Information Center

    Eisenstein, Miriam

    1983-01-01

    Recent research on native speakers' reactions to nonnative speech that views listeners, speakers, and language from a variety of perspectives using both objective and subjective research paradigms is reviewed. Studies of error gravity, relative intelligibility of language samples, the role of accent, speakers' characteristics, and context in which…

  19. PACs: A Framework for Determining Appropriate Service Delivery Options.

    ERIC Educational Resources Information Center

    Blosser, Jean L.; Kratcoski, Annette

    1997-01-01

    Offers speech-language clinicians a framework for team decision making and service delivery by encouraging speech-language pathologists and their colleagues to consider the unique combination of providers, activities, and contexts (PACs) necessary to meet the specific needs of each individual with a communication disorder. Sample cases involving…

  20. Phonological Acquisition of Korean Consonants in Conversational Speech Produced by Young Korean Children

    ERIC Educational Resources Information Center

    Kim, Minjung; Kim, Soo-Jin; Stoel-Gammon, Carol

    2017-01-01

    This study investigates the phonological acquisition of Korean consonants using conversational speech samples collected from sixty monolingual typically developing Korean children aged two, three, and four years. Phonemic acquisition was examined for syllable-initial and syllable-final consonants. Results showed that Korean children acquired stops…

  1. Phonological Development of Monolingual Haitian Creole-Speaking Preschool Children

    ERIC Educational Resources Information Center

    Archer, Justine; Champion, Tempii; Tyrone, Martha E.; Walters, Sylvia

    2018-01-01

    This study provides preliminary data on the phonological development of Haitian Creole-Speaking children. The purpose of this study is to determine phonological acquisition in the speech of normally developing monolingual Haitian Creole-Speaking preschoolers, ages 2 to 4. Speech samples were collected cross-sectionally from 12 Haitian children…

  2. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. Proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n: each trial was resampled with n = 100, repeated 10,000 times, and averaged, and all trials were then averaged to obtain an estimate for each sample interval.
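    The bootstrap procedure described in this abstract, resampling each interval's dataset to equal n and averaging repeated trials, can be sketched as follows. The data are synthetic and the implementation is a simplified reading of the abstract, not the authors' code:

    ```python
    import random

    random.seed(1)

    def bootstrap_mean(data, n=100, trials=2000):
        """Average of `trials` bootstrap resamples of size n (drawn with replacement)."""
        total = 0.0
        for _ in range(trials):
            resample = random.choices(data, k=n)   # sample with replacement
            total += sum(resample) / n
        return total / trials

    # Synthetic per-observation wood volumes; the coarser interval simply
    # keeps every 15th observation of the 1 minute record.
    one_minute = [random.gauss(10, 3) for _ in range(600)]
    fifteen_minute = one_minute[::15]

    est_1min = bootstrap_mean(one_minute)
    est_15min = bootstrap_mean(fifteen_minute)
    print(round(est_1min, 1), round(est_15min, 1))  # both near the true mean of 10
    ```

    With real transport data the coarser interval's estimate would also carry the wider confidence bounds the paper quantifies; this sketch only shows the resample-and-average mechanics.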

  3. Extensions to the Speech Disorders Classification System (SDCS)

    PubMed Central

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for pediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three subtypes of motor speech disorders. Part II describes the Madison Speech Assessment Protocol (MSAP), an approximately two-hour battery of 25 measures that includes 15 speech tests and tasks. Part III describes the Competence, Precision, and Stability Analytics (CPSA) framework, a current set of approximately 90 perceptual- and acoustic-based indices of speech, prosody, and voice used to quantify and classify subtypes of Speech Sound Disorders (SSD). A companion paper, Shriberg, Fourakis, et al. (2010) provides reliability estimates for the perceptual and acoustic data reduction methods used in the SDCS. The agreement estimates in the companion paper support the reliability of SDCS methods and illustrate the complementary roles of perceptual and acoustic methods in diagnostic analyses of SSD of unknown origin. Examples of research using the extensions to the SDCS described in the present report include diagnostic findings for a sample of youth with motor speech disorders associated with galactosemia (Shriberg, Potter, & Strand, 2010) and a test of the hypothesis of apraxia of speech in a group of children with autism spectrum disorders (Shriberg, Paul, Black, & van Santen, 2010). All SDCS methods and reference databases running in the PEPPER (Programs to Examine Phonetic and Phonologic Evaluation Records; [Shriberg, Allen, McSweeny, & Wilson, 2001]) environment will be disseminated without cost when complete. PMID:20831378

  4. Eliciting and maintaining ruminative thought: the role of social-evaluative threat.

    PubMed

    Zoccola, Peggy M; Dickerson, Sally S; Lam, Suman

    2012-08-01

    This study tested whether a performance stressor characterized by social-evaluative threat (SET) elicits more rumination than a stressor without this explicit evaluative component and whether this difference persists minutes, hours, and days later. The mediating role of shame-related cognition and emotion (SRCE) was also examined. During a laboratory visit, 144 undergraduates (50% female) were randomly assigned to complete a speech stressor in a social-evaluative threat condition (SET; n = 86), in which an audience was present, or a nonexplicit social-evaluative threat condition (ne-SET; n = 58), in which they were alone in a room. Participants completed measures of stressor-related rumination 10 and 40 min posttask, later that night, and upon returning to the laboratory 3-5 days later. SRCE and other emotions experienced during the stressor (fear, anger, and sadness) were assessed immediately posttask. As hypothesized, the SET speech stressor elicited more rumination than the ne-SET speech stressor, and these differences persisted for 3-5 days. SRCE, but not other specific negative emotions or general emotional arousal, mediated the effect of stressor context on rumination. Stressors characterized by SET may be likely candidates for eliciting and maintaining ruminative thought immediately and also days later, potentially by eliciting shame-related emotions and cognitions.

  5. Beyond stuttering: Speech disfluencies in normally fluent French-speaking children at age 4.

    PubMed

    Leclercq, Anne-Lise; Suaire, Pauline; Moyse, Astrid

    2018-01-01

    The aim of this study was to establish normative data on the speech disfluencies of normally fluent French-speaking children at age 4, an age at which stuttering has begun in 95% of children who stutter (Yairi & Ambrose, 2013). Fifty monolingual French-speaking children who do not stutter participated in the study. Analyses of a conversational speech sample comprising 250-550 words revealed an average of 10% total disfluencies, 2% stuttering-like disfluencies and around 8% non-stuttered disfluencies. Possible explanations for these high speech disfluency frequencies are discussed, including explanations linked to French in particular. The results shed light on the importance of normative data specific to each language.

  6. Gender differences in identifying emotions from auditory and visual stimuli.

    PubMed

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  7. Factors affecting articulation skills in children with velocardiofacial syndrome and children with cleft palate or velopharyngeal dysfunction: A preliminary report

    PubMed Central

    Baylis, Adriane L.; Munson, Benjamin; Moller, Karlind T.

    2010-01-01

    Objective To examine the influence of speech perception, cognition, and implicit phonological learning on articulation skills of children with Velocardiofacial syndrome (VCFS) and children with cleft palate or velopharyngeal dysfunction (VPD). Design Cross-sectional group experimental design. Participants 8 children with VCFS and 5 children with non-syndromic cleft palate or VPD. Methods and Measures All children participated in a phonetic inventory task, speech perception task, implicit priming nonword repetition task, conversational sample, nonverbal intelligence test, and hearing screening. Speech tasks were scored for percentage of phonemes correctly produced. Group differences and relations among measures were examined using nonparametric statistics. Results Children in the VCFS group demonstrated significantly poorer articulation skills and lower standard scores of nonverbal intelligence compared to the children with cleft palate or VPD. There were no significant group differences in speech perception skills. For the implicit priming task, both groups of children were more accurate in producing primed nonwords than unprimed nonwords. Nonverbal intelligence and severity of velopharyngeal inadequacy for speech were correlated with articulation skills. Conclusions In this study, children with VCFS had poorer articulation skills compared to children with cleft palate or VPD. Articulation difficulties seen in the children with VCFS did not appear to be associated with speech perception skills or the ability to learn new phonological representations. Future research should continue to examine relationships between articulation, cognition, and velopharyngeal dysfunction in a larger sample of children with cleft palate and VCFS. PMID:18333642

  8. Psychoacoustic cues to emotion in speech prosody and music.

    PubMed

    Coutinho, Eduardo; Dibben, Nicola

    2013-01-01

    There is strong evidence of shared acoustic profiles common to the expression of emotions in music and speech, yet relatively limited understanding of the specific psychoacoustic features involved. This study combined a controlled experiment and computational modelling to investigate the perceptual codes associated with the expression of emotion in the acoustic domain. The empirical stage of the study provided continuous human ratings of emotions perceived in excerpts of film music and natural speech samples. The computational stage created a computer model that retrieves the relevant information from the acoustic stimuli and makes predictions about the emotional expressiveness of speech and music close to the responses of human subjects. We show that a significant part of the listeners' second-by-second reported emotions to music and speech prosody can be predicted from a set of seven psychoacoustic features: loudness, tempo/speech rate, melody/prosody contour, spectral centroid, spectral flux, sharpness, and roughness. The implications of these results are discussed in the context of cross-modal similarities in the communication of emotion in the acoustic domain.
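    Two of the listed features, spectral centroid and spectral flux, have standard signal-processing definitions. The sketch below illustrates them on synthetic tones; it is not the study's feature extractor, whose exact parameters are not given here:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency (Hz) of one signal frame."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

def spectral_flux(prev_frame, frame):
    """Euclidean distance between successive magnitude spectra."""
    a = np.abs(np.fft.rfft(prev_frame))
    b = np.abs(np.fft.rfft(frame))
    return float(np.sqrt(np.sum((b - a) ** 2)))

sr = 16000
t = np.arange(1024) / sr
low = np.sin(2 * np.pi * 200 * t)    # dull 200 Hz tone
high = np.sin(2 * np.pi * 2000 * t)  # brighter 2 kHz tone
c_low, c_high = spectral_centroid(low, sr), spectral_centroid(high, sr)
```

    A brighter sound yields a higher centroid; a change in spectrum between successive frames yields a nonzero flux.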

  9. Spectral analysis method and sample generation for real time visualization of speech

    NASA Astrophysics Data System (ADS)

    Hobohm, Klaus

    A method for translating speech signals into optical models, characterized by high sound discriminability and learnability and designed to give deaf persons feedback for controlling their own speech, is presented. Important properties of the speech production and perception processes, and of the organs involved in these mechanisms, are reviewed in order to define requirements for speech visualization: the spectral representation must match the time, frequency and amplitude resolution of hearing, and continuous variations in the acoustic parameters of the speech signal must be depicted by continuous variation of the images. A color table was developed for dynamic illustration, and sonograms were generated with five spectral analysis methods, including Fourier transformation and linear predictive coding. To evaluate sonogram quality, test persons had to recognize consonant/vowel/consonant words; the best results were achieved with a fast Fourier transformation followed by a postprocessor. A hardware concept for a real-time speech visualization system, based on multiprocessor technology in a personal computer, is presented.
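    The sonogram computation described here is, in essence, a short-time Fourier transform. A minimal sketch with assumed frame and hop sizes (the system's actual parameters are not stated in the abstract):

```python
import numpy as np

def sonogram(signal, sr, frame_len=256, hop=128):
    """Short-time Fourier magnitudes: one row per time frame, one column
    per frequency bin, suitable for mapping onto a color table."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

sr = 8000
t = np.arange(sr) / sr             # one second of audio
sig = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone
S = sonogram(sig, sr)              # shape: (frames, frame_len // 2 + 1)
```

    For a 440 Hz tone sampled at 8 kHz with 256-point frames, the energy concentrates near bin 440 / (8000 / 256) ≈ 14 in every frame.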

  10. Smartphone Application for the Analysis of Prosodic Features in Running Speech with a Focus on Bipolar Disorders: System Performance Evaluation and Case Study.

    PubMed

    Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola

    2015-11-06

    Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented.
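    A common way to estimate F0 from a voiced segment, as the app does segment by segment, is the autocorrelation method. The abstract does not name the algorithm used, so the following is an assumption-laden sketch on a synthetic 120 Hz tone:

```python
import numpy as np

def estimate_f0(frame, sr, fmin=60.0, fmax=400.0):
    """Autocorrelation pitch estimate: pick the lag (within a plausible
    voice range) where the signal best matches a shifted copy of itself."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(2048) / sr
voiced = np.sin(2 * np.pi * 120 * t)  # synthetic 120 Hz "voiced" frame
f0 = estimate_f0(voiced, sr)
```

    Averaging such per-segment estimates yields the kind of mean-F0 feature the application reports for each voiced segment.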

  11. Smartphone Application for the Analysis of Prosodic Features in Running Speech with a Focus on Bipolar Disorders: System Performance Evaluation and Case Study

    PubMed Central

    Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola

    2015-01-01

    Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented. PMID:26561811

  12. Speech sound articulation abilities of preschool-age children who stutter.

    PubMed

    Clark, Chagit E; Conture, Edward G; Walden, Tedra A; Lambert, Warren E

    2013-12-01

    The purpose of this study was to assess the association between speech sound articulation and childhood stuttering in a relatively large sample of preschool-age children who do and do not stutter, using the Goldman-Fristoe Test of Articulation-2 (GFTA-2; Goldman & Fristoe, 2000). Participants included 277 preschool-age children who do (CWS; n=128, 101 males) and do not stutter (CWNS; n=149, 76 males). Generalized estimating equations (GEE) were performed to assess between-group (CWS versus CWNS) differences on the GFTA-2. Additionally, within-group correlations were performed to explore the relation between CWS' speech sound articulation abilities and their stuttering frequency and severity, as well as their sound prolongation index (SPI; Schwartz & Conture, 1988). No significant differences were found between the articulation scores of preschool-age CWS and CWNS. However, there was a small gender effect for the 5-year-old age group, with girls generally exhibiting better articulation scores than boys. Additional findings indicated no relation between CWS' speech sound articulation abilities and their stuttering frequency, severity, or SPI. Findings suggest no apparent association between speech sound articulation-as measured by one standardized assessment (GFTA-2)-and childhood stuttering for this sample of preschool-age children (N=277). After reading this article, the reader will be able to: (1) discuss salient issues in the articulation literature relative to children who stutter; (2) compare/contrast the present study's methodologies and main findings to those of previous studies that investigated the association between childhood stuttering and speech sound articulation; (3) identify future research needs relative to the association between childhood stuttering and speech sound development; (4) replicate the present study's methodology to expand this body of knowledge. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Dense home-based recordings reveal typical and atypical development of tense/aspect in a child with delayed language development.

    PubMed

    Chin, Iris; Goodwin, Matthew S; Vosoughi, Soroush; Roy, Deb; Naigles, Letitia R

    2018-01-01

    Studies investigating the development of tense/aspect in children with developmental disorders have focused on production frequency and/or relied on short spontaneous speech samples. How children with developmental disorders use future forms/constructions is also unknown. The current study expands this literature by examining frequency, consistency, and productivity of past, present, and future usage, using the Speechome Recorder, which enables collection of dense, longitudinal audio-video recordings of children's speech. Samples were collected longitudinally in a child who was previously diagnosed with autism spectrum disorder, but at the time of the study exhibited only language delay [Audrey], and a typically developing child [Cleo]. While Audrey was comparable to Cleo in frequency and productivity of tense/aspect use, she was atypical in her consistency and production of an unattested future form. Examining additional measures of densely collected speech samples may reveal subtle atypicalities that are missed when relying on only few typical measures of acquisition.

  14. The effect of group music therapy on mood, speech, and singing in individuals with Parkinson's disease--a feasibility study.

    PubMed

    Elefant, Cochavit; Baker, Felicity A; Lotan, Meir; Lagesen, Simen Krogstie; Skeie, Geir Olve

    2012-01-01

    Parkinson's disease (PD) is a progressive neurodegenerative disorder in which patients exhibit impairments in speech production. Few studies have investigated the influence of music interventions on the vocal abilities of individuals with PD. To evaluate the influence of a group voice and singing intervention on speech, singing, and depressive symptoms in individuals with PD, ten patients diagnosed with PD participated in this one-group, repeated-measures study. Participants received the sixty-minute intervention in a small-group setting once a week for 20 consecutive weeks. Speech and singing quality were acoustically analyzed using the KayPentax Multi-Dimensional Voice Program, voice ability using the Voice Handicap Index (VHI), and depressive symptoms using the Montgomery-Asberg Depression Rating Scale (MADRS). Measures were taken at baseline (Time 1), after 10 weeks of weekly sessions (Time 2), and after 20 weeks of weekly sessions (Time 3). Significant changes were observed for five of the six singing quality outcomes at Times 2 and 3, as well as for voice range and the VHI physical subscale at Time 3. No significant changes were found for speaking quality or depressive symptom outcomes; however, there was an absence of decline on speaking quality outcomes over the intervention period. Significant improvements in singing quality and voice range, coupled with the absence of decline in speaking quality, support group singing as a promising intervention for persons with PD. A two-group randomized controlled study is needed to determine whether the intervention contributes to maintenance of speaking quality in persons with PD.

  15. The neural processing of hierarchical structure in music and speech at different timescales

    PubMed Central

    Farbood, Morwaread M.; Heeger, David J.; Marcus, Gary; Hasson, Uri; Lerner, Yulia

    2015-01-01

    Music, like speech, is a complex auditory signal that contains structures at multiple timescales, and as such is a potentially powerful entry point into the question of how the brain integrates complex streams of information. Using an experimental design modeled after previous studies that used scrambled versions of a spoken story (Lerner et al., 2011) and a silent movie (Hasson et al., 2008), we investigate whether listeners perceive hierarchical structure in music beyond short (~6 s) time windows and whether there is cortical overlap between music and language processing at multiple timescales. Experienced pianists were presented with an extended musical excerpt scrambled at multiple timescales—by measure, phrase, and section—while measuring brain activity with functional magnetic resonance imaging (fMRI). The reliability of evoked activity, as quantified by inter-subject correlation of the fMRI responses, was measured. We found that response reliability depended systematically on musical structure coherence, revealing a topographically organized hierarchy of processing timescales. Early auditory areas (at the bottom of the hierarchy) responded reliably in all conditions. For brain areas at the top of the hierarchy, the original (unscrambled) excerpt evoked more reliable responses than any of the scrambled excerpts, indicating that these brain areas process long-timescale musical structures, on the order of minutes. The topography of processing timescales was analogous with that reported previously for speech, but the timescale gradients for music and speech overlapped with one another only partially, suggesting that temporally analogous structures—words/measures, sentences/musical phrases, paragraphs/sections—are processed separately. PMID:26029037
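    Inter-subject correlation (ISC), the reliability measure used here, is commonly computed leave-one-out: each subject's response timecourse is correlated with the average of all other subjects' timecourses. A minimal sketch on synthetic data, not the study's fMRI pipeline:

```python
import numpy as np

def inter_subject_correlation(timecourses):
    """Leave-one-out ISC: correlate each subject with the mean of the
    others, then average the resulting Pearson r values."""
    X = np.asarray(timecourses, dtype=float)  # shape: (subjects, timepoints)
    rs = []
    for i in range(X.shape[0]):
        others = np.delete(X, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(X[i], others)[0, 1])
    return float(np.mean(rs))

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)  # stimulus-driven signal common to all
subjects = [shared + 0.5 * rng.standard_normal(200) for _ in range(10)]
isc = inter_subject_correlation(subjects)
```

    A coherent (unscrambled) stimulus drives a strong shared response and hence a high ISC; scrambling weakens the shared component and the ISC drops.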

  16. The neural processing of hierarchical structure in music and speech at different timescales.

    PubMed

    Farbood, Morwaread M; Heeger, David J; Marcus, Gary; Hasson, Uri; Lerner, Yulia

    2015-01-01

    Music, like speech, is a complex auditory signal that contains structures at multiple timescales, and as such is a potentially powerful entry point into the question of how the brain integrates complex streams of information. Using an experimental design modeled after previous studies that used scrambled versions of a spoken story (Lerner et al., 2011) and a silent movie (Hasson et al., 2008), we investigate whether listeners perceive hierarchical structure in music beyond short (~6 s) time windows and whether there is cortical overlap between music and language processing at multiple timescales. Experienced pianists were presented with an extended musical excerpt scrambled at multiple timescales—by measure, phrase, and section—while measuring brain activity with functional magnetic resonance imaging (fMRI). The reliability of evoked activity, as quantified by inter-subject correlation of the fMRI responses, was measured. We found that response reliability depended systematically on musical structure coherence, revealing a topographically organized hierarchy of processing timescales. Early auditory areas (at the bottom of the hierarchy) responded reliably in all conditions. For brain areas at the top of the hierarchy, the original (unscrambled) excerpt evoked more reliable responses than any of the scrambled excerpts, indicating that these brain areas process long-timescale musical structures, on the order of minutes. The topography of processing timescales was analogous with that reported previously for speech, but the timescale gradients for music and speech overlapped with one another only partially, suggesting that temporally analogous structures—words/measures, sentences/musical phrases, paragraphs/sections—are processed separately.

  17. Parent-child interactions in children with asthma and anxiety.

    PubMed

    Sicouri, Gemma; Sharpe, Louise; Hudson, Jennifer L; Dudeney, Joanne; Jaffe, Adam; Selvadurai, Hiran; Hunt, Caroline

    2017-10-01

    Anxiety disorders are highly prevalent in children with asthma, yet very little is known about the parenting factors that may underlie this relationship. The aim of the current study was to examine observed parenting behaviours - involvement and negativity - associated with asthma and anxiety in children, using the tangram task and the Five Minute Speech Sample (FMSS). Eighty-nine parent-child dyads were included across four groups of children (8-13 years old): asthma and anxiety, anxiety only, asthma only, and healthy controls. Overall, results from both tasks showed that parenting behaviours of children with and without asthma did not differ significantly. Results from a subcomponent of the FMSS indicated that parents of children with asthma were more overprotective, self-sacrificing, or non-objective than parents of children without asthma, and this difference was greater in the non-anxious groups. The results suggest that some parenting strategies developed for parents of children with anxiety may be useful for parents of children with asthma and anxiety (e.g. strategies targeting involvement); however, others may not be necessary (e.g. those targeting negativity). Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Speech Disorders in Neurofibromatosis Type 1: A Sample Survey

    ERIC Educational Resources Information Center

    Cosyns, Marjan; Vandeweghe, Lies; Mortier, Geert; Janssens, Sandra; Van Borsel, John

    2010-01-01

    Background: Neurofibromatosis type 1 (NF1) is an autosomal-dominant neurocutaneous disorder with an estimated prevalence of two to three cases per 10 000 population. While the physical characteristics have been well documented, speech disorders have not been fully characterized in NF1 patients. Aims: This study serves as a pilot to identify key…

  19. Disfluency Markers in L1 Attrition

    ERIC Educational Resources Information Center

    Schmid, Monika S.; Fagersten, Kristy Beers

    2010-01-01

    Based on an analysis of the speech of long-term emigres of German and Dutch origin, the present investigation discusses to what extent hesitation patterns in language attrition may be the result of the creation of an interlanguage system, on the one hand, or of language-internal attrition patterns on the other. We compare speech samples elicited…

  20. Perceptual Speech and Paralinguistic Skills of Adolescents with Williams Syndrome

    ERIC Educational Resources Information Center

    Hargrove, Patricia M.; Pittelko, Stephen; Fillingane, Evan; Rustman, Emily; Lund, Bonnie

    2013-01-01

    The purpose of this research was to compare selected speech and paralinguistic skills of speakers with Williams syndrome (WS) and typically developing peers and to demonstrate the feasibility of providing preexisting databases to students to facilitate graduate research. In a series of three studies, conversational samples of 12 adolescents with…

  1. Consonant Inventories in the Spontaneous Speech of Young Children: A Bootstrapping Procedure

    ERIC Educational Resources Information Center

    Van Severen, Lieve; Van Den Berg, Renate; Molemans, Inge; Gillis, Steven

    2012-01-01

    Consonant inventories are commonly drawn to assess the phonological acquisition of toddlers. However, the spontaneous speech data that are analysed often vary substantially in size and composition. Consequently, comparisons between children and across studies are fundamentally hampered. This study aims to examine the effect of sample size on the…

  2. Social Biases toward Children with Speech and Language Impairments: A Correlative Causal Model of Language Limitations.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; And Others

    1993-01-01

    In a study of adults' attitudes toward children with limited linguistic competence, four groups of judges listened to audiotaped samples of preschool children's speech and responded to questionnaire items addressing child attributes (e.g., intelligence, social maturity). Systemic biases were revealed toward children with limited communication…

  3. Investigating Prompt Difficulty in an Automatically Scored Speaking Performance Assessment

    ERIC Educational Resources Information Center

    Cox, Troy L.

    2013-01-01

    Speaking assessments for second language learners have traditionally been expensive to administer because of the cost of rating the speech samples. To reduce the cost, many researchers are investigating the potential of using automatic speech recognition (ASR) as a means to score examinee responses to open-ended prompts. This study examined the…

  4. Effects of Length, Complexity, and Grammatical Correctness on Stuttering in Spanish-Speaking Preschool Children

    ERIC Educational Resources Information Center

    Watson, Jennifer B.; Byrd, Courtney T.; Carlo, Edna J.

    2011-01-01

    Purpose: To explore the effects of utterance length, syntactic complexity, and grammatical correctness on stuttering in the spontaneous speech of young, monolingual Spanish-speaking children. Method: Spontaneous speech samples of 11 monolingual Spanish-speaking children who stuttered, ages 35 to 70 months, were examined. Mean number of syllables,…

  5. GRIN2A

    PubMed Central

    Turner, Samantha J.; Mayes, Angela K.; Verhoeven, Andrea; Mandelstam, Simone A.; Morgan, Angela T.

    2015-01-01

    Objective: To delineate the specific speech deficits in individuals with epilepsy-aphasia syndromes associated with mutations in the glutamate receptor subunit gene GRIN2A. Methods: We analyzed the speech phenotype associated with GRIN2A mutations in 11 individuals, aged 16 to 64 years, from 3 families. Standardized clinical speech assessments and perceptual analyses of conversational samples were conducted. Results: Individuals showed a characteristic phenotype of dysarthria and dyspraxia with lifelong impact on speech intelligibility in some. Speech was typified by imprecise articulation (11/11, 100%), impaired pitch (monopitch 10/11, 91%) and prosody (stress errors 7/11, 64%), and hypernasality (7/11, 64%). Oral motor impairments and poor performance on maximum vowel duration (8/11, 73%) and repetition of monosyllables (10/11, 91%) and trisyllables (7/11, 64%) supported conversational speech findings. The speech phenotype was present in one individual who did not have seizures. Conclusions: Distinctive features of dysarthria and dyspraxia are found in individuals with GRIN2A mutations, often in the setting of epilepsy-aphasia syndromes; dysarthria has not been previously recognized in these disorders. Of note, the speech phenotype may occur in the absence of a seizure disorder, reinforcing an important role for GRIN2A in motor speech function. Our findings highlight the need for precise clinical speech assessment and intervention in this group. By understanding the mechanisms involved in GRIN2A disorders, targeted therapy may be designed to improve chronic lifelong deficits in intelligibility. PMID:25596506

  6. Schizophrenia affects speech-induced functional connectivity of the superior temporal gyrus under cocktail-party listening conditions.

    PubMed

    Li, Juanhua; Wu, Chao; Zheng, Yingjun; Li, Ruikeng; Li, Xuanzi; She, Shenglin; Wu, Haibo; Peng, Hongjun; Ning, Yuping; Li, Liang

    2017-09-17

    The superior temporal gyrus (STG) is involved in speech recognition against informational masking under cocktail-party listening conditions. Compared to healthy listeners, people with schizophrenia perform worse in speech recognition under informational speech-on-speech masking conditions. It is not clear whether this schizophrenia-related vulnerability to informational masking is associated with changes in functional connectivity (FC) of the STG with critical brain regions. Using a sparse-sampling fMRI design, this study investigated the differences between people with schizophrenia and healthy controls in FC of the STG during target-speech listening against informational speech-on-speech masking, when a listening condition with either perceived spatial separation (PSS, with a spatial release of informational masking) or perceived spatial co-location (PSC, without the spatial release) between target speech and masking speech was introduced. The results showed that in healthy participants, but not participants with schizophrenia, the contrast of either the PSS or the PSC condition against the masker-only condition induced an enhancement of FC of the STG with the left superior parietal lobule (SPL) and the right precuneus. Compared to healthy participants, participants with schizophrenia showed declined FC of the STG with the bilateral precuneus, right SPL, and right supplementary motor area. Thus, FC of the STG with the parietal areas is normally involved in speech listening against informational masking under either the PSS or PSC condition, and the declined FC between the STG and the parietal areas in people with schizophrenia may be associated with their increased vulnerability to informational masking. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Motor speech signature of behavioral variant frontotemporal dementia: Refining the phenotype.

    PubMed

    Vogel, Adam P; Poole, Matthew L; Pemberton, Hugh; Caverlé, Marja W J; Boonstra, Frederique M C; Low, Essie; Darby, David; Brodtmann, Amy

    2017-08-22

    To provide a comprehensive description of motor speech function in behavioral variant frontotemporal dementia (bvFTD). Forty-eight individuals (24 bvFTD and 24 age- and sex-matched healthy controls) provided speech samples. These varied in complexity and thus cognitive demand. Their language was assessed using the Progressive Aphasia Language Scale and verbal fluency tasks. Speech was analyzed perceptually to describe the nature of deficits and acoustically to quantify differences between patients with bvFTD and healthy controls. Cortical thickness and subcortical volume derived from MRI scans were correlated with speech outcomes in patients with bvFTD. Speech of affected individuals was significantly different from that of healthy controls. The speech signature of patients with bvFTD is characterized by a reduced rate (75%) and accuracy (65%) on alternating syllable production tasks, and prosodic deficits including reduced speech rate (45%), prolonged intervals (54%), and use of short phrases (41%). Groups differed on acoustic measures derived from the reading, unprepared monologue, and diadochokinetic tasks but not the days of the week or sustained vowel tasks. Variability of silence length was associated with cortical thickness of the inferior frontal gyrus and insula and speech rate with the precentral gyrus. One in 8 patients presented with moderate speech timing deficits with a further two-thirds rated as mild or subclinical. Subtle but measurable deficits in prosody are common in bvFTD and should be considered during disease management. Language function correlated with speech timing measures derived from the unprepared monologue only. © 2017 American Academy of Neurology.

  8. Don’t speak too fast! Processing of fast rate speech in children with specific language impairment

    PubMed Central

    Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Caillot-Bascoul, Aurélia; Gonzalez-Monge, Sibylle; Boulenger, Véronique

    2018-01-01

    Background Perception of speech rhythm requires the auditory system to track temporal envelope fluctuations, which carry syllabic and stress information. Reduced sensitivity to rhythmic acoustic cues has been evidenced in children with Specific Language Impairment (SLI), impeding syllabic parsing and speech decoding. Our study investigated whether these children experience specific difficulties processing fast-rate speech as compared with typically developing (TD) children. Method Sixteen French children with SLI (8–13 years old), with mainly expressive phonological disorders and preserved comprehension, and 16 age-matched TD children performed a judgment task on sentences produced (1) at normal rate, (2) at fast rate, or (3) time-compressed. The sensitivity index (d′) to semantically incongruent sentence-final words was measured. Results Overall, children with SLI perform significantly worse than TD children. Importantly, as revealed by the significant Group × Speech Rate interaction, children with SLI find it more challenging than TD children to process both naturally and artificially accelerated speech. The two groups do not significantly differ in normal-rate speech processing. Conclusion In agreement with rhythm-processing deficits in atypical language development, our results suggest that children with SLI face difficulties adjusting to rapid speech rates. These findings are interpreted in light of temporal sampling and prosodic phrasing frameworks and of the oscillatory mechanisms underlying speech perception. PMID:29373610
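    The sensitivity index d′ used in this study has a standard signal-detection formulation: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch follows; the log-linear correction and the example counts are illustrative choices, not details taken from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction (+0.5 per cell) keeps the rates away from
    # exactly 0 or 1, where the inverse normal CDF would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

    For example, a child who detects 18 of 20 incongruent sentence endings while false-alarming on 4 of 20 congruent ones scores d_prime(18, 2, 4, 16) ≈ 1.97; chance performance (hit rate equal to false-alarm rate) yields d′ = 0.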

  9. Speech in 10-Year-Olds Born With Cleft Lip and Palate: What Do Peers Say?

    PubMed

    Nyberg, Jill; Havstam, Christina

    2016-09-01

    The aim of this study was to explore, in their own words, how 10-year-olds describe speech and communicative participation in children born with unilateral cleft lip and palate, whether they perceive signs of velopharyngeal insufficiency (VPI) and articulation errors of different degrees, and, if so, which terminology they use. Methods/Participants: Nineteen 10-year-olds participated in three focus group interviews in which they listened to 10 to 12 speech samples with different types of cleft speech characteristics, as assessed by speech and language pathologists (SLPs), and described what they heard. The interviews were transcribed and analyzed with qualitative content analysis. The analysis resulted in three interlinked categories encompassing different aspects of speech, personality, and social implications: descriptions of speech, thoughts on causes and consequences, and emotional reactions and associations. Each category contains four subcategories exemplified with quotes from the children's statements. More pronounced signs of VPI were perceived but referred to in terms relevant to 10-year-olds. Articulatory difficulties, even minor ones, were noted. Peers reflected on the risk of teasing and bullying and on how children with impaired speech might experience their situation. The SLPs and peers did not agree on minor signs of VPI, but they were unanimous in their analysis of clinically normal and more severely impaired speech. Based on what peers say, articulatory impairments may be more important to treat than minor signs of VPI.

  10. Beginning to Talk Like an Adult: Increases in Speech-like Utterances in Young Cochlear Implant Recipients and Toddlers with Normal Hearing

    PubMed Central

    Ertmer, David J.; Jung, Jongmin; Kloiber, Diana True

    2013-01-01

    Background Speech-like utterances containing rapidly combined consonants and vowels eventually dominate the prelinguistic and early word productions of toddlers who are developing typically (TD). It seems reasonable to expect a similar phenomenon in young cochlear implant (CI) recipients. This study sought to determine the number of months of robust hearing experience needed to achieve a majority of speech-like utterances in both of these groups. Methods Speech samples were recorded at 3-month intervals during the first 2 years of CI experience, and between 6 and 24 months of age in TD children. Speech-like utterances were operationally defined as those belonging to the Basic Canonical Syllables (BCS) or Advanced Forms (AF) levels of the Consolidated Stark Assessment of Early Vocal Development-Revised. Results On average, the CI group achieved a majority of speech-like utterances after 12 months, and the TD group after 18 months, of robust hearing experience. The CI group produced greater percentages of speech-like utterances at each interval until 24 months, when both groups approximated 80%. Conclusion Auditory deprivation did not limit progress in vocal development, as young CI recipients showed more-rapid-than-typical speech development during the first 2 years of device use. Implications for the Infraphonological model of speech development are considered. PMID:23813203

  11. Data-Driven Subclassification of Speech Sound Disorders in Preschool Children

    PubMed Central

    Vick, Jennell C.; Campbell, Thomas F.; Shriberg, Lawrence D.; Green, Jordan R.; Truemper, Klaus; Rusiewicz, Heather Leavy; Moore, Christopher A.

    2015-01-01

    Purpose The purpose of the study was to determine whether distinct subgroups of preschool children with speech sound disorders (SSD) could be identified using a subgroup discovery algorithm (SUBgroup discovery via Alternate Random Processes, or SUBARP). Of specific interest was finding evidence of a subgroup of SSD exhibiting performance consistent with atypical speech motor control. Method Ninety-seven preschool children with SSD completed speech and nonspeech tasks. Fifty-three kinematic, acoustic, and behavioral measures from these tasks were input to SUBARP. Results Two distinct subgroups were identified from the larger sample. The 1st subgroup (76%; population prevalence estimate = 67.8%–84.8%) did not have characteristics that would suggest atypical speech motor control. The 2nd subgroup (10.3%; population prevalence estimate = 4.3%–16.5%) exhibited significantly higher variability in measures of articulatory kinematics and poor ability to imitate iambic lexical stress, suggesting atypical speech motor control. Both subgroups were consistent with classes of SSD in the Speech Disorders Classification System (SDCS; Shriberg et al., 2010a). Conclusion Characteristics of children in the larger subgroup were consistent with the proportionally large SDCS class termed speech delay; characteristics of children in the smaller subgroup were consistent with the SDCS subtype termed motor speech disorder-not otherwise specified. The authors identified candidate measures to identify children in each of these groups. PMID:25076005

  12. Temporal dynamics and the identification of musical key.

    PubMed

    Farbood, Morwaread Mary; Marcus, Gary; Poeppel, David

    2013-08-01

    A central process in music cognition involves the identification of key; however, little is known about how listeners accomplish this task in real time. This study derives from work that suggests overlap between the neural and cognitive resources underlying the analyses of both music and speech, and is the first, to our knowledge, to explore the timescales at which the brain infers musical key. We investigated the temporal psychophysics of key-finding over a wide range of tempi using melodic sequences with strong structural cues, where statistical information about overall key profile was ambiguous. Listeners were able to provide robust judgments within specific limits, at rates as high as 400 beats per minute (bpm; ∼7 Hz) and as low as 30 bpm (0.5 Hz), but not outside those bounds. These boundaries on reliable performance show that the process of key-finding is restricted to timescales that are closely aligned with beat induction and speech processing. © 2013 APA, all rights reserved.

  13. Speech and Voice Response to a Levodopa Challenge in Late-Stage Parkinson's Disease.

    PubMed

    Fabbri, Margherita; Guimarães, Isabel; Cardoso, Rita; Coelho, Miguel; Guedes, Leonor Correia; Rosa, Mario M; Godinho, Catarina; Abreu, Daisy; Gonçalves, Nilza; Antonini, Angelo; Ferreira, Joaquim J

    2017-01-01

    Parkinson's disease (PD) patients are affected by hypokinetic dysarthria, characterized by hypophonia and dysprosody, which worsens with disease progression. The effect of levodopa (l-dopa) on speech quality is inconclusive, and no data are currently available for late-stage PD (LSPD). To assess the modifications of speech and voice in LSPD following an acute l-dopa challenge. LSPD patients [Schwab and England score <50/Hoehn and Yahr stage >3 (MED ON)] performed several vocal tasks before and after an acute l-dopa challenge. The following were assessed: respiratory support for speech, voice quality, stability and variability, speech rate, and motor performance (MDS-UPDRS-III). All voice samples were recorded and analyzed, using Praat 5.1 software, by a speech and language therapist blinded to the patients' therapeutic condition. Twenty-four of 27 LSPD patients (14 men) succeeded in performing the voice tasks. Median age and disease duration were 79 [IQR: 71.5-81.7] and 14.5 [IQR: 11-15.7] years, respectively. In MED OFF, respiratory breath support and pitch break time of LSPD patients were worse than normative values for non-parkinsonian speakers. A correlation was found between disease duration and voice quality (R = 0.51; p = 0.013) and speech rate (R = -0.55; p = 0.008). l-Dopa significantly improved the MDS-UPDRS-III score (by 20%), with no effect on speech as assessed by clinical rating scales and automated analysis. Speech is severely affected in LSPD. Although l-dopa had some effect on motor performance, including axial signs, speech and voice did not improve. The applicability and efficacy of non-pharmacological treatment for speech impairment should be considered for speech disorder management in PD.

  14. A warning to the Brazilian Speech-Language Pathology and Audiology community about the importance of scientific and clinical activities in primary progressive aphasia.

    PubMed

    Beber, Bárbara Costa; Brandão, Lenisa; Chaves, Márcia Lorena Fagundes

    2015-01-01

    This article aims to alert the Brazilian Speech-Language Pathology and Audiology scientific community to the importance and necessity of scientific and clinical activity on Primary Progressive Aphasia. This warning is based on a systematic review of the scientific production on Primary Progressive Aphasia, from which nine Brazilian articles were selected. There is an obvious lack of studies on the subject: all the retrieved articles were published in medical journals, most were based on small samples, and only two described the effectiveness of speech-language therapy in patients with Primary Progressive Aphasia. Perspectives for the future of the area and the characteristics of speech-language therapy for Primary Progressive Aphasia are discussed. In conclusion, the need for greater engagement by Speech-Language Pathology and Audiology with Primary Progressive Aphasia is evident.

  15. The effect of guessing on the speech reception thresholds of children.

    PubMed

    Moodley, A

    1990-01-01

    Speech audiometry is an essential part of the assessment of hearing-impaired children and is now widely used throughout the United Kingdom. Although instructions are universally agreed to be an important aspect of administering any form of audiometric testing, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used. It is argued that the type of instruction normally used in Speech Reception Threshold testing may not sufficiently control for guessing; the implications of this are examined using data obtained in the study.

  16. Rhythmic patterning in Malaysian and Singapore English.

    PubMed

    Tan, Rachel Siew Kuang; Low, Ee-Ling

    2014-06-01

    Previous work on the rhythm of Malaysian English has been based on impressionistic observations. This paper uses acoustic analysis to measure the rhythmic patterns of Malaysian English. Recordings of the read speech and spontaneous speech of 10 Malaysian English speakers were analyzed and compared with recordings of an equivalent sample of Singaporean English speakers. The analysis used two rhythmic indexes, the PVI and VarcoV. It was found that although the rhythm of the read speech of the Singaporean speakers was syllable-based, as described by previous studies, the rhythm of the Malaysian speakers was even more syllable-based. Analysis of syllables in specific utterances showed that Malaysian speakers did not reduce vowels as much as Singaporean speakers did. Results for the spontaneous speech confirmed the findings for the read speech; that is, the same rhythmic patterning was found in contexts that normally trigger vowel reduction.
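    The two rhythm indexes named in this abstract have standard formulations: the Pairwise Variability Index in its normalized form (nPVI) averages the duration differences between successive intervals, and VarcoV is the rate-normalized standard deviation of vocalic interval durations. A minimal sketch, with illustrative durations that are not data from the study:

```python
from statistics import mean, pstdev

def npvi(durations):
    """Normalized Pairwise Variability Index (x100) over successive
    interval durations; higher values indicate more stress-timed rhythm."""
    pairs = list(zip(durations, durations[1:]))
    return 100.0 * mean(abs(a - b) / ((a + b) / 2.0) for a, b in pairs)

def varco_v(durations):
    """VarcoV: standard deviation of vocalic interval durations divided
    by their mean, x100 (normalizes out overall speech rate)."""
    return 100.0 * pstdev(durations) / mean(durations)

# Illustrative vocalic interval durations in milliseconds.
even = [80, 82, 78, 81, 79]            # near-uniform: syllable-timed profile
alternating = [120, 50, 130, 45, 125]  # long/short alternation: stress-timed profile
```

    Near-uniform durations score low on both indexes, while alternating long and short intervals, typical of stress-timed speech with vowel reduction, score high; this is the sense in which a lower index means "more syllable-based" rhythm.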

  17. Automated analysis of connected speech reveals early biomarkers of Parkinson's disease in patients with rapid eye movement sleep behaviour disorder.

    PubMed

    Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan

    2017-02-02

    For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based on rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, who are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.

  18. Telephone-quality pathological speech classification using empirical mode decomposition.

    PubMed

    Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S

    2011-01-01

    This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for the classification of telephone-quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, modified to simulate telephone-quality speech under different levels of noise, a linear classifier applied to the resulting feature vector achieves high classification accuracy, demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% at a signal-to-noise ratio of 30 dB) is a significant improvement over previously reported results for the same task and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.
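    EMD extracts intrinsic mode functions (IMFs) by "sifting": repeatedly subtracting the mean of the signal's upper and lower envelopes until a locally symmetric oscillation remains, then removing that IMF and repeating on the residue. The toy sketch below uses piecewise-linear envelopes for brevity; practical implementations (presumably including the paper's) use cubic-spline envelopes and more careful stopping criteria, and the feature extraction and linear classifier are not shown.

```python
import math

def find_extrema(x):
    """Indices of interior local maxima and minima."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1]:
            maxima.append(i)
        elif x[i] < x[i - 1] and x[i] <= x[i + 1]:
            minima.append(i)
    return maxima, minima

def linear_envelope(idx, x):
    """Piecewise-linear envelope through the extrema at `idx`,
    clamped to the signal values at both ends."""
    n = len(x)
    knots = [0] + idx + [n - 1]
    vals = [x[i] for i in knots]
    env, j = [], 0
    for i in range(n):
        while j < len(knots) - 2 and i > knots[j + 1]:
            j += 1
        i0, i1 = knots[j], knots[j + 1]
        t = 0.0 if i1 == i0 else (i - i0) / (i1 - i0)
        env.append(vals[j] + t * (vals[j + 1] - vals[j]))
    return env

def emd(x, max_imfs=6, sift_iters=8):
    """Toy empirical mode decomposition: returns (imfs, residue)."""
    imfs, residue = [], [float(v) for v in x]
    for _ in range(max_imfs):
        maxima, minima = find_extrema(residue)
        if len(maxima) < 2 or len(minima) < 2:
            break  # residue is (near-)monotonic: stop
        h = residue[:]
        for _ in range(sift_iters):
            mx, mn = find_extrema(h)
            if len(mx) < 2 or len(mn) < 2:
                break
            upper = linear_envelope(mx, h)
            lower = linear_envelope(mn, h)
            # subtract the local mean of the two envelopes
            h = [hv - (u + l) / 2.0 for hv, u, l in zip(h, upper, lower)]
        imfs.append(h)
        residue = [r - hv for r, hv in zip(residue, h)]
    return imfs, residue

# Demo: a 5 Hz + 50 Hz mixture sampled at 1 kHz for 1 second.
signal = [math.sin(2 * math.pi * 5 * i / 1000.0)
          + 0.5 * math.sin(2 * math.pi * 50 * i / 1000.0)
          for i in range(1000)]
imfs, residue = emd(signal)
```

    By construction, the IMFs plus the residue sum back to the original signal, so no information is lost; classification features (temporal and spectral statistics of individual IMFs) are then computed per IMF.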

  19. The Cleft Care UK study. Part 4: perceptual speech outcomes

    PubMed Central

    Sell, D; Mildinhall, S; Albery, L; Wills, A K; Sandy, J R; Ness, A R

    2015-01-01

    Structured Abstract Objectives To describe the perceptual speech outcomes from the Cleft Care UK (CCUK) study and compare them to the 1998 Clinical Standards Advisory Group (CSAG) audit. Setting and sample population A cross-sectional study of 248 children born with complete unilateral cleft lip and palate, between 1 April 2005 and 31 March 2007 who underwent speech assessment. Materials and methods Centre-based specialist speech and language therapists (SLT) took speech audio–video recordings according to nationally agreed guidelines. Two independent listeners undertook the perceptual analysis using the CAPS-A Audit tool. Intra- and inter-rater reliability were tested. Results For each speech parameter of intelligibility/distinctiveness, hypernasality, palatal/palatalization, backed to velar/uvular, glottal, weak and nasalized consonants, and nasal realizations, there was strong evidence that speech outcomes were better in the CCUK children compared to CSAG children. The parameters which did not show improvement were nasal emission, nasal turbulence, hyponasality and lateral/lateralization. Conclusion These results suggest that centralization of cleft care into high volume centres has resulted in improvements in UK speech outcomes in five-year-olds with unilateral cleft lip and palate. This may be associated with the development of a specialized workforce. Nevertheless, there still remains a group of children with significant difficulties at school entry. PMID:26567854

  20. Speech Understanding with a New Implant Technology: A Comparative Study with a New Nonskin Penetrating Baha System

    PubMed Central

    Caversaccio, Marco

    2014-01-01

    Objective. To compare hearing and speech understanding between a new, non-skin-penetrating Baha system (Baha Attract) and the current Baha system, which uses a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz and increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds showed smaller differences, and aided speech understanding in quiet and in noise did not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high-frequency attenuation. This attenuation can be partially compensated for by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314

  1. Letting go of yesterday: Effect of distraction on post-event processing and anticipatory anxiety in a socially anxious sample.

    PubMed

    Blackie, Rebecca A; Kocovski, Nancy L

    2016-01-01

    According to cognitive models, post-event processing (PEP) is a key factor in the maintenance of social anxiety. Given that decreasing PEP can be challenging for socially anxious individuals, it is important to identify potentially useful strategies. Although distraction may help to decrease PEP, the findings have been equivocal. The primary purpose of this study was to examine whether a brief distraction period immediately following a speech would lead to less PEP the next day. The secondary aim was to examine the effect of distraction following an initial speech on anticipatory anxiety for a second speech, via reductions in PEP. Participants (N = 77 undergraduates with elevated social anxiety; 67.53% female) delivered a speech and were randomly assigned to a distraction, rumination, or control condition. The following day, participants reported levels of PEP in relation to the first speech, as well as anxiety regarding a second, upcoming speech. As expected, those in the distraction condition reported less PEP than those in the rumination and control conditions. Additionally, distraction following the first speech was indirectly related to anticipatory anxiety for the second speech, via PEP. Distraction may represent a potentially useful strategy for reducing PEP and other maladaptive processes that may maintain social anxiety.

  2. Vocal Age Disguise: The Role of Fundamental Frequency and Speech Rate and Its Perceived Effects.

    PubMed

    Skoog Waller, Sara; Eriksson, Mårten

    2016-01-01

    The relationship between vocal characteristics and perceived age is of interest in various contexts, as is the possibility of affecting age perception through vocal manipulation. A few examples of such situations are when age is staged by actors, when earwitnesses make age assessments based on vocal cues only, or when offenders (e.g., online groomers) disguise their voices to appear younger or older. This paper investigates how speakers spontaneously manipulate two age-related vocal characteristics (fundamental frequency, f0, and speech rate) in an attempt to sound younger versus older than their true age, and whether the manipulations correspond to actual age-related changes in f0 and speech rate (Study 1). Further aims are to determine how successful vocal age disguise is by asking listeners to estimate the age of the generated speech samples (Study 2) and to examine whether listeners use f0 and speech rate as cues to perceived age. In Study 1, participants from three age groups (20-25, 40-45, and 60-65 years) read a short text under three voice conditions. There were 12 speakers in each age group (six women and six men). They used their natural voice in one condition, attempted to sound 20 years younger in another, and attempted to sound 20 years older in a third. In Study 2, 60 participants (listeners) listened to speech samples from the three voice conditions in Study 1 and estimated the speakers' age; each listener was exposed to all three voice conditions. The results of Study 1 indicated that the speakers increased f0 and speech rate when attempting to sound younger and decreased both when attempting to sound older. Study 2 showed that the voice manipulations had an effect in the sought-after direction, although the achieved mean effect was only 3 years, far less than the intended effect of 20 years. Moreover, listeners used speech rate, but not f0, as a cue to speaker age. It was concluded that vocal age disguise can be achieved by naïve speakers, even though the perceived effect was smaller than intended.

  3. Phonological processes in the speech of school-age children with hearing loss: Comparisons with children with normal hearing.

    PubMed

    Asad, Areej Nimer; Purdy, Suzanne C; Ballard, Elaine; Fairgray, Liz; Bowen, Caroline

    2018-04-27

    In this descriptive study, phonological processes were examined in the speech of children aged 5;0-7;6 (years;months) with mild to profound hearing loss who use hearing aids (HAs) or cochlear implants (CIs), in comparison to their peers. A second aim was to compare the phonological processes of HA and CI users. Children with hearing loss (CWHL, N = 25) were compared to children with normal hearing (CWNH, N = 30) with similar age, gender, linguistic, and socioeconomic backgrounds. Speech samples obtained from a list of 88 words, derived from three standardized speech tests, were analyzed using the CASALA (Computer Aided Speech and Language Analysis) program to evaluate participants' phonological systems, based on lax (a process appeared at least twice in the speech of at least two children) and strict (a process appeared at least five times in the speech of at least two children) counting criteria. Developmental phonological processes were eliminated in the speech of younger and older CWNH, while eleven developmental phonological processes persisted in the speech of both age groups of CWHL. CWHL showed a trend in age of elimination similar to that of CWNH, but at a slower rate. Children with HAs and CIs produced similar phonological processes. Final consonant deletion, weak syllable deletion, backing, and glottal replacement were present in the speech of HA users, affecting their overall speech intelligibility. Developmental and non-developmental phonological processes persist in the speech of children with mild to profound hearing loss compared to their peers with typical hearing. The findings indicate that it is important for clinicians to consider phonological assessment in preschool CWHL and to use evidence-based speech therapy to reduce non-developmental and non-age-appropriate developmental processes, thereby enhancing speech intelligibility. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. Robust relationship between reading span and speech recognition in noise

    PubMed Central

    Souza, Pamela; Arehart, Kathryn

    2015-01-01

    Objective Working memory refers to a cognitive system that manages information processing and temporary storage. Recent work has demonstrated that individual differences in working memory capacity, measured using a reading span task, are related to the ability to recognize speech in noise. In this project, we investigated whether the specific implementation of the reading span task influenced the strength of the relationship between working memory capacity and speech recognition. Design The relationship between speech recognition and working memory capacity was examined for two working memory tests that varied in approach, using a within-subject design. Data consisted of audiometric results along with the two working memory tests, one speech-in-noise test, and a reading comprehension test. Study sample The test group included 94 older adults with varying hearing loss and 30 younger adults with normal hearing. Results Listeners with poorer working memory capacity had more difficulty understanding speech in noise after accounting for age and degree of hearing loss. That relationship did not differ significantly between the two implementations of reading span. Conclusions Our findings suggest that different implementations of a verbal reading span task do not affect the strength of the relationship between working memory capacity and speech recognition. PMID:25975360

  5. Children's Attitudes Toward Peers With Unintelligible Speech Associated With Cleft Lip and/or Palate.

    PubMed

    Lee, Alice; Gibbon, Fiona E; Spivey, Kimberley

    2017-05-01

    The objective of this study was to investigate whether reduced speech intelligibility in children with cleft palate affects social and personal attribute judgments made by typically developing children of different ages. The study (1) measured the correlation between intelligibility scores of speech samples from children with cleft palate and the social and personal attribute judgments typically developing children made based on these samples and (2) compared the attitude judgments made by children of different ages. Participants were 90 typically developing children, 30 in each of three age groups (7 to 8 years, 9 to 10 years, and 11 to 12 years). Speech intelligibility scores and typically developing children's attitudes were measured using eight social and personal attributes on a three-point rating scale. There was a significant correlation between the speech intelligibility scores and attitude judgments for a number of traits: "sick-healthy" as rated by the children aged 7 to 8 years, "no friends-friends" by the children aged 9 to 10 years, and "ugly-good looking" and "no friends-friends" by the children aged 11 to 12 years. Children aged 7 to 8 years gave significantly lower ratings for "mean-kind" but higher ratings for "shy-outgoing" when compared with the other two groups. Typically developing children tended to make negative social and personal attribute judgments about children with cleft palate based solely on the intelligibility of their speech. Society, educators, and health professionals should work together to ensure that children with cleft palate are not stigmatized by their peers.

  6. The enhancement of beneficial effects following audio feedback by cognitive preparation in the treatment of social anxiety: a single-session experiment.

    PubMed

    Nilsson, Jan-Erik; Lundh, Lars-Gunnar; Faghihi, Shahriar; Roth-Andersson, Gun

    2011-12-01

    According to cognitive models, negatively biased processing of the publicly observable self is an important aspect of social phobia; if this is true, effective methods for producing corrective feedback concerning the public self should be sought. Video feedback has proven effective, and since one's voice represents another aspect of the self, audio feedback should produce equivalent results. This is the first study to assess the enhancement of audio feedback by cognitive preparation in a single-session randomized controlled experiment. Forty socially anxious participants were asked to give a speech, then to listen to and evaluate a taped recording of their performance. Half of the sample was given cognitive preparation prior to the audio feedback; the remainder received audio feedback only. Cognitive preparation involved asking participants to (1) predict in detail what they would hear on the audiotape, (2) form an image of themselves giving the speech, and (3) listen to the audio recording as though they were listening to a stranger. To assess generalization effects, all participants were asked to give a second speech. Audio feedback with cognitive preparation produced less negative ratings after the first speech, and the effects generalized to the evaluation of the second speech. More positive speech evaluations were associated with corresponding reductions in state anxiety. Social anxiety, as indexed by the Implicit Association Test, was reduced in participants given cognitive preparation. Limitations: small sample size; analogue study. Audio feedback with cognitive preparation may be utilized as a treatment intervention for social phobia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Inverted U-Shaped Dose-Response Curve of the Anxiolytic Effect of Cannabidiol during Public Speaking in Real Life.

    PubMed

    Zuardi, Antonio W; Rodrigues, Natália P; Silva, Angélica L; Bernardo, Sandra A; Hallak, Jaime E C; Guimarães, Francisco S; Crippa, José A S

    2017-01-01

    The purpose of this study was to investigate whether the anxiolytic effect of cannabidiol (CBD) in humans follows the same pattern of an inverted U-shaped dose-effect curve observed in many animal studies. Sixty healthy subjects of both sexes aged between 18 and 35 years were randomly assigned to five groups that received placebo, clonazepam (1 mg), and CBD (100, 300, and 900 mg). The subjects underwent a test of public speaking in a real situation (TPSRS) in which each subject had to speak in front of a group formed by the remaining participants. Each subject completed the anxiety and sedation factors of the Visual Analog Mood Scale and had their blood pressure and heart rate recorded. These measures were obtained in five experimental sessions with 12 volunteers each. Each session had four steps at the following times (in minutes, with drug/placebo administration as time 0): -5 (baseline), 80 (pre-test), 153 (speech), and 216 (post-speech). Repeated-measures analyses of variance showed that the TPSRS increased the subjective measures of anxiety, heart rate, and blood pressure. Student-Newman-Keuls test comparisons among the groups in each phase showed significant attenuation in anxiety scores relative to the placebo group in the group treated with clonazepam during the speech phase, and in the clonazepam and CBD 300 mg groups in the post-speech phase. Clonazepam was more sedative than CBD 300 and 900 mg and induced a smaller increase in systolic and diastolic blood pressure than CBD 300 mg. The results confirmed that the acute administration of CBD induced anxiolytic effects with a dose-dependent inverted U-shaped curve in healthy subjects, since the subjective anxiety measures were reduced with CBD 300 mg, but not with CBD 100 and 900 mg, in the post-speech phase.

  8. Inverted U-Shaped Dose-Response Curve of the Anxiolytic Effect of Cannabidiol during Public Speaking in Real Life

    PubMed Central

    Zuardi, Antonio W.; Rodrigues, Natália P.; Silva, Angélica L.; Bernardo, Sandra A.; Hallak, Jaime E. C.; Guimarães, Francisco S.; Crippa, José A. S.

    2017-01-01

    The purpose of this study was to investigate whether the anxiolytic effect of cannabidiol (CBD) in humans follows the same pattern of an inverted U-shaped dose-effect curve observed in many animal studies. Sixty healthy subjects of both sexes aged between 18 and 35 years were randomly assigned to five groups that received placebo, clonazepam (1 mg), and CBD (100, 300, and 900 mg). The subjects underwent a test of public speaking in a real situation (TPSRS) in which each subject had to speak in front of a group formed by the remaining participants. Each subject completed the anxiety and sedation factors of the Visual Analog Mood Scale and had their blood pressure and heart rate recorded. These measures were obtained in five experimental sessions with 12 volunteers each. Each session had four steps at the following times (in minutes, with drug/placebo administration as time 0): -5 (baseline), 80 (pre-test), 153 (speech), and 216 (post-speech). Repeated-measures analyses of variance showed that the TPSRS increased the subjective measures of anxiety, heart rate, and blood pressure. Student-Newman-Keuls test comparisons among the groups in each phase showed significant attenuation in anxiety scores relative to the placebo group in the group treated with clonazepam during the speech phase, and in the clonazepam and CBD 300 mg groups in the post-speech phase. Clonazepam was more sedative than CBD 300 and 900 mg and induced a smaller increase in systolic and diastolic blood pressure than CBD 300 mg. The results confirmed that the acute administration of CBD induced anxiolytic effects with a dose-dependent inverted U-shaped curve in healthy subjects, since the subjective anxiety measures were reduced with CBD 300 mg, but not with CBD 100 and 900 mg, in the post-speech phase. PMID:28553229

  9. A Multivariate Analytic Approach to the Differential Diagnosis of Apraxia of Speech

    ERIC Educational Resources Information Center

    Basilakos, Alexandra; Yourganov, Grigori; den Ouden, Dirk-Bart; Fogerty, Daniel; Rorden, Chris; Feenaughty, Lynda; Fridriksson, Julius

    2017-01-01

    Purpose: Apraxia of speech (AOS) is a consequence of stroke that frequently co-occurs with aphasia. Its study is limited by difficulties with its perceptual evaluation and dissociation from co-occurring impairments. This study examined the classification accuracy of several acoustic measures for the differential diagnosis of AOS in a sample of…

  10. Vulnerability to Bullying in Children with a History of Specific Speech and Language Difficulties

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie E.; Mackie, Clare

    2008-01-01

    This study examined the susceptibility to problems with peer relationships and being bullied in a UK sample of 12-year-old children with a history of specific speech and language difficulties. Data were derived from the children's self-reports and the reports of parents and teachers using measures of victimization, emotional and behavioral…

  11. Yaounde French Speech Corpus

    DTIC Science & Technology

    2017-03-01

    the Center for Technology Enhanced Language Learning (CTELL), a research cell in the Department of Foreign Languages, United States Military Academy...models for automatic speech recognition (ASR), and to, thereby, investigate the utility of ASR in pedagogical technology. The corpus is a sample of...lexical resources, language technology

  12. Expressive Language during Conversational Speech in Boys with Fragile X Syndrome

    ERIC Educational Resources Information Center

    Roberts, Joanne E.; Hennon, Elizabeth A.; Price, Johanna R.; Dear, Elizabeth; Anderson, Kathleen; Vandergrift, Nathan A.

    2007-01-01

    We compared the expressive syntax and vocabulary skills of 35 boys with fragile X syndrome and 27 younger typically developing boys who were at similar nonverbal mental levels. During a conversational speech sample, the boys with fragile X syndrome used shorter, less complex utterances and produced fewer different words than did the typically…

  13. Examining Acoustic and Kinematic Measures of Articulatory Working Space: Effects of Speech Intensity

    ERIC Educational Resources Information Center

    Whitfield, Jason A.; Dromey, Christopher; Palmer, Panika

    2018-01-01

    Purpose: The purpose of this study was to examine the effect of speech intensity on acoustic and kinematic vowel space measures and conduct a preliminary examination of the relationship between kinematic and acoustic vowel space metrics calculated from continuously sampled lingual marker and formant traces. Method: Young adult speakers produced 3…

  14. Use of Spectral/Cepstral Analyses for Differentiating Normal from Hypofunctional Voices in Sustained Vowel and Continuous Speech Contexts

    ERIC Educational Resources Information Center

    Watts, Christopher R.; Awan, Shaheen N.

    2011-01-01

    Purpose: In this study, the authors evaluated the diagnostic value of spectral/cepstral measures to differentiate dysphonic from nondysphonic voices using sustained vowels and continuous speech samples. Methodology: Thirty-two age- and gender-matched individuals (16 participants with dysphonia and 16 controls) were recorded reading a standard…

  15. Bivariate Genetic Analyses of Stuttering and Nonfluency in a Large Sample of 5-Year-Old Twins

    ERIC Educational Resources Information Center

    van Beijsterveldt, Catharina Eugenie Maria; Felsenfeld, Susan; Boomsma, Dorret Irene

    2010-01-01

    Purpose: Behavioral genetic studies of speech fluency have focused on participants who present with clinical stuttering. Knowledge about genetic influences on the development and regulation of normal speech fluency is limited. The primary aims of this study were to identify the heritability of stuttering and high nonfluency and to assess the…

  16. Lexical Profiles of Comprehensible Second Language Speech: The Role of Appropriateness, Fluency, Variation, Sophistication, Abstractness, and Sense Relations

    ERIC Educational Resources Information Center

    Saito, Kazuya; Webb, Stuart; Trofimovich, Pavel; Isaacs, Talia

    2016-01-01

    This study examined contributions of lexical factors to native-speaking raters' assessments of comprehensibility (ease of understanding) of second language (L2) speech. Extemporaneous oral narratives elicited from 40 French speakers of L2 English were transcribed and evaluated for comprehensibility by 10 raters. Subsequently, the samples were…

  17. Effects of Visual Information on Intelligibility of Open and Closed Class Words in Predictable Sentences Produced by Speakers with Dysarthria

    ERIC Educational Resources Information Center

    Hustad, Katherine C.; Dardis, Caitlin M.; Mccourt, Kelly A.

    2007-01-01

    This study examined the independent and interactive effects of visual information and linguistic class of words on intelligibility of dysarthric speech. Seven speakers with dysarthria participated in the study, along with 224 listeners who transcribed speech samples in audiovisual (AV) or audio-only (AO) listening conditions. Orthographic…

  18. [Design of standard voice sample text for subjective auditory perceptual evaluation of voice disorders].

    PubMed

    Li, Jin-rang; Sun, Yan-yan; Xu, Wen

    2010-09-01

    To design a speech voice sample text with all phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The principles for design of a speech voice sample text are: The short text should include the 21 initials and 39 finals, so that it may cover all the phonemes in Mandarin. Also, the short text should have some meaning. A short text was produced. It had 155 Chinese words, and included 21 initials and 38 finals (the final, ê, was not included because it is rarely used in Mandarin). Also, the text covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity between sample and population (r = 0.742, P < 0.001 and r = 0.844, P < 0.001, respectively). The constituent ratios of the tones presented in this short text were not statistically similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all phonemes in Mandarin was produced. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.
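    The similarity index used in the record above can be illustrated with a small numeric sketch: Pearson's r between the constituent-ratio distribution of a sample text and that of Mandarin at large. The ratios below are invented for illustration only; the study's actual phoneme counts are not reproduced here, and the real comparison spans all 21 initials.

```python
import numpy as np

# Hypothetical constituent ratios (%) of five initials in a sample text
# versus in Mandarin overall (illustrative values, not the study's data).
text_ratios = np.array([8.2, 6.1, 4.9, 3.8, 2.5])
mandarin_ratios = np.array([7.9, 6.4, 4.5, 4.1, 2.2])

# Pearson correlation as a similarity index between the two distributions:
# values near 1 indicate the sample text mirrors the language-wide ratios.
r = np.corrcoef(text_ratios, mandarin_ratios)[0, 1]
print(round(r, 3))
```

    A high r here would be read, as in the study, as evidence that the short text is phonemically representative of the language.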

  19. Connected speech as a marker of disease progression in autopsy-proven Alzheimer’s disease

    PubMed Central

    Ahmed, Samrah; Haigh, Anne-Marie F.; de Jager, Celeste A.

    2013-01-01

    Although an insidious history of episodic memory difficulty is a typical presenting symptom of Alzheimer’s disease, detailed neuropsychological profiling frequently demonstrates deficits in other cognitive domains, including language. Previous studies from our group have shown that language changes may be reflected in connected speech production in the earliest stages of typical Alzheimer’s disease. The aim of the present study was to identify features of connected speech that could be used to examine longitudinal profiles of impairment in Alzheimer’s disease. Samples of connected speech were obtained from 15 former participants in a longitudinal cohort study of ageing and dementia, in whom Alzheimer’s disease was diagnosed during life and confirmed at post-mortem. All patients met clinical and neuropsychological criteria for mild cognitive impairment between 6 and 18 months before converting to a status of probable Alzheimer’s disease. In a subset of these patients neuropsychological data were available, both at the point of conversion to Alzheimer’s disease, and after disease severity had progressed from the mild to moderate stage. Connected speech samples from these patients were examined at later disease stages. Spoken language samples were obtained using the Cookie Theft picture description task. Samples were analysed using measures of syntactic complexity, lexical content, speech production, fluency and semantic content. Individual case analysis revealed that subtle changes in language were evident during the prodromal stages of Alzheimer’s disease, with two-thirds of patients with mild cognitive impairment showing significant but heterogeneous changes in connected speech. However, impairments at the mild cognitive impairment stage did not necessarily entail deficits at mild or moderate stages of disease, suggesting non-language influences on some aspects of performance. 
Subsequent examination of these measures revealed significant linear trends over the three stages of disease in syntactic complexity, semantic and lexical content. The findings suggest, first, that there is a progressive disruption in language integrity, detectable from the prodromal stage in a subset of patients with Alzheimer’s disease, and secondly that measures of semantic and lexical content and syntactic complexity best capture the global progression of linguistic impairment through the successive clinical stages of disease. The identification of disease-specific language impairment in prodromal Alzheimer’s disease could enhance clinicians’ ability to distinguish probable Alzheimer’s disease from changes attributable to ageing, while longitudinal assessment could provide a simple approach to disease monitoring in therapeutic trials. PMID:24142144

  20. Effect of Performance Time of the Semi-Occluded Vocal Tract Exercises in Dysphonic Children.

    PubMed

    Ramos, Lorena de Almeida; Gama, Ana Cristina Côrtes

    2017-05-01

    This study aimed to verify the effects of execution time on auditory-perceptual and acoustic responses in children with dysphonia completing straw phonation exercises. A randomized, prospective, comparative intra-subject study design was used. Twenty-seven children, ranging from 5 to 10 years of age, diagnosed with vocal cord nodules or cysts, were enrolled in the study. All subjects included in the Experimental Group were also included in the Control Group, which involved complete voice rest. Sustained vowels (/a/, /e/, /ε/) and counting from 1 to 10 were recorded before the exercises (m0) and then again after the first (m1), third (m3), fifth (m5), and seventh (m7) minutes of straw phonation exercises. The recordings were randomized and presented to five speech therapists, who evaluated vocal quality based on the Grade Roughness Breathiness Asthenia/Strain Instability scale. For acoustic analysis, fundamental frequency, jitter, shimmer, glottal to noise excitation ratio, and noise parameters were analyzed. Reduced roughness, breathiness, and noise measurements as well as increased glottal to noise excitation ratio were observed in the Experimental Group after 3 minutes of exercise. Reduced grade of dysphonia and breathiness were noted after 5 minutes. The ideal duration of straw phonation in children with dysphonia is from 3 to 5 minutes. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  1. Audiovisual integration in children listening to spectrally degraded speech.

    PubMed

    Maidment, David W; Kang, Hi Jee; Stewart, Hannah J; Amitay, Sygal

    2015-02-01

    The study explored whether visual information improves speech identification in typically developing children with normal hearing when the auditory signal is spectrally degraded. Children (n=69) and adults (n=15) were presented with noise-vocoded sentences from the Children's Co-ordinate Response Measure (Rosen, 2011) in auditory-only or audiovisual conditions. The number of bands was adaptively varied to modulate the degradation of the auditory signal, with the number of bands required for approximately 79% correct identification calculated as the threshold. The youngest children (4- to 5-year-olds) did not benefit from accompanying visual information, in comparison to 6- to 11-year-old children and adults. Audiovisual gain also increased with age in the child sample. The current data suggest that children younger than 6 years of age do not fully utilize visual speech cues to enhance speech perception when the auditory signal is degraded. This evidence not only has implications for understanding the development of speech perception skills in children with normal hearing but may also inform the development of new treatment and intervention strategies that aim to remediate speech perception difficulties in pediatric cochlear implant users.

  2. "Having the heart to be evaluated": The differential effects of fears of positive and negative evaluation on emotional and cardiovascular responses to social threat.

    PubMed

    Weeks, Justin W; Zoccola, Peggy M

    2015-12-01

    Accumulating evidence supports fear of evaluation in general as important in social anxiety, including fear of positive evaluation (FPE) and fear of negative evaluation (FNE). The present study examined state responses to an impromptu speech task with a sample of 81 undergraduates. This study is the first to compare and contrast physiological responses associated with FPE and FNE, and to examine both FPE- and FNE-related changes in state anxiety/affect in response to perceived social evaluation during a speech. FPE uniquely predicted (relative to FNE/depression) increases in mean heart rate during the speech; in contrast, neither FNE nor depression related to changes in heart rate. Both FPE and FNE related uniquely to increases in negative affect and state anxiety during the speech. Furthermore, pre-speech state anxiety mediated the relationship between trait FPE and diminished positive affect during the speech. Implications for the theoretical conceptualization and treatment of social anxiety are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Out-of-synchrony speech entrainment in developmental dyslexia.

    PubMed

    Molinaro, Nicola; Lizarazu, Mikel; Lallier, Marie; Bourguignon, Mathieu; Carreiras, Manuel

    2016-08-01

    Developmental dyslexia is a reading disorder often characterized by reduced awareness of speech units. Whether the neural source of this phonological disorder in dyslexic readers results from the malfunctioning of the primary auditory system or damaged feedback communication between higher-order phonological regions (i.e., left inferior frontal regions) and the auditory cortex is still under dispute. Here we recorded magnetoencephalographic (MEG) signals from 20 dyslexic readers and 20 age-matched controls while they were listening to ∼10-s-long spoken sentences. Compared to controls, dyslexic readers had (1) an impaired neural entrainment to speech in the delta band (0.5-1 Hz); (2) a reduced delta synchronization in both the right auditory cortex and the left inferior frontal gyrus; and (3) an impaired feedforward functional coupling between neural oscillations in the right auditory cortex and the left inferior frontal regions. This shows that during speech listening, individuals with developmental dyslexia present reduced neural synchrony to low-frequency speech oscillations in primary auditory regions that hinders higher-order speech processing steps. The present findings, thus, strengthen proposals assuming that improper low-frequency acoustic entrainment affects speech sampling. This low speech-brain synchronization has the strong potential to cause severe consequences for both phonological and reading skills. Interestingly, the reduced speech-brain synchronization in dyslexic readers compared to normal readers (and its higher-order consequences across the speech processing network) appears preserved through the development from childhood to adulthood. Thus, the evaluation of speech-brain synchronization could possibly serve as a diagnostic tool for early detection of children at risk of dyslexia. Hum Brain Mapp 37:2767-2783, 2016. © 2016 Wiley Periodicals, Inc.

  4. Working with women prisoners who seriously harm themselves: ratings of staff expressed emotion (EE).

    PubMed

    Moore, Estelle; Andargachew, Sara; Taylor, Pamela J

    2011-02-01

    Prison staff are repeatedly exposed to prisoners' suicidal behaviours; this may impair their capacity to care. Expressed emotion (EE), as a descriptor of the 'emotional climate' between people, has been associated with challenging behaviour in closed environments, but not previously applied to working alliances in a prison. To investigate the feasibility of rating EE between staff and suicidal women in prison; to test the hypothesis that most such staff-inmate alliances would be rated high EE. All regular staff on two small UK prison units with high suicidal behaviour rates were invited to participate. An audiotaped five-minute speech sample (FMSS) about work with one nominated suicidal prisoner was embedded in a longer research interview, then rated by two trained raters, independent of the interview process and the prison. Seven prison officers and 8 clinically qualified staff completed interviews; 3 refused, but 17 others were not interviewed, reasons including not having worked long enough with any one such prisoner. Participants and non-participants had similar relevant backgrounds. Contrary to our hypothesis, EE ratings were generally 'low'. As predicted, critical comments were directed at high frequency oppositional behaviour. EE assessments with prison staff are feasible, but our sample was small and turnover of prisoners high, so the study needs replication. Attributions about problem behaviour to illness, and/or traumatic life experience, tend to confirm generally supportive working relationships in this sample. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Look Who’s Talking NOW! Parentese Speech, Social Context, and Language Development Across Time

    PubMed Central

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2017-01-01

    In previous studies, we found that the social interactions infants experience in their everyday lives at 11- and 14-months of age affect language ability at 24 months of age. These studies investigated relationships between the speech style (i.e., parentese speech vs. standard speech) and social context [i.e., one-on-one (1:1) vs. group] of language input in infancy and later speech development (i.e., at 24 months of age), controlling for socioeconomic status (SES). Results showed that the amount of exposure to parentese speech-1:1 in infancy was related to productive vocabulary at 24 months. The general goal of the present study was to investigate changes in (1) the pattern of social interactions between caregivers and their children from infancy to childhood and (2) relationships among speech style, social context, and language learning across time. Our study sample consisted of 30 participants from the previously published infant studies, evaluated at 33 months of age. Social interactions were assessed at home using digital first-person perspective recordings of the auditory environment. We found that caregivers use less parentese speech-1:1, and more standard speech-1:1, as their children get older. Furthermore, we found that the effects of parentese speech-1:1 in infancy on later language development at 24 months persist at 33 months of age. Finally, we found that exposure to standard speech-1:1 in childhood was the only social interaction that related to concurrent word production/use. Mediation analyses showed that standard speech-1:1 in childhood fully mediated the effects of parentese speech-1:1 in infancy on language development in childhood, controlling for SES. This study demonstrates that engaging in one-on-one interactions in infancy and later in life has important implications for language development. PMID:28676774

  6. Speech intelligibility after glossectomy and speech rehabilitation.

    PubMed

    Furia, C L; Kowalski, L P; Latorre, M R; Angelis, E C; Martins, N M; Barros, A P; Ribeiro, K C

    2001-07-01

    Oral tumor resections cause articulation deficiencies, depending on the site, extent of resection, type of reconstruction, and tongue stump mobility. To evaluate the speech intelligibility of patients undergoing total, subtotal, or partial glossectomy, before and after speech therapy. Twenty-seven patients (24 men and 3 women), aged 34 to 77 years (mean age, 56.5 years), underwent glossectomy. Tumor stages were T1 in 3 patients, T2 in 4, T3 in 8, T4 in 11, and TX in 1; node stages, N0 in 15 patients, N1 in 5, N2a-c in 6, and N3 in 1. No patient had metastases (M0). Patients were divided into 3 groups by extent of tongue resection, ie, total (group 1; n = 6), subtotal (group 2; n = 9), and partial (group 3; n = 12). Different phonological tasks, including 7 sustained oral vowels, a vowel in a syllable, and the vowel-consonant-vowel (VCV) sequence, were recorded and analyzed by 3 experienced judges. The intelligibility of spontaneous speech (sequence story) was scored from 1 to 4 in consensus. All patients underwent a therapeutic program to activate articulatory adaptations, compensations, and maximization of the remaining structures for 3 to 6 months. The tasks were recorded after speech therapy. To compare mean changes, analyses of variance and Wilcoxon tests were used. Patients of groups 1 and 2 significantly improved their speech intelligibility (P<.05). Group 1 improved vowels, VCV, and spontaneous speech; group 2, syllable, VCV, and spontaneous speech. Group 3 demonstrated better intelligibility in the pretherapy phase, but the improvement after therapy was not significant. Speech therapy was effective in improving speech intelligibility of patients undergoing glossectomy, even after major resection. Different pretherapy ability between groups was seen, with improvement of speech intelligibility in groups 1 and 2. The improvement of speech intelligibility in group 3 was not statistically significant, possibly because of the small and heterogeneous sample.

  7. Long-term neurodevelopmental outcomes in school-aged children after neonatal arterial switch operation.

    PubMed

    Hövels-Gürich, Hedwig H; Seghaye, Marie-Christine; Schnitker, Ralph; Wiesner, Magdalene; Huber, Walter; Minkenberg, Ralf; Kotlarek, Franz; Messmer, Bruno J; Von Bernuth, Götz

    2002-09-01

    Neurodevelopmental status of children between 8 and 14 years of age after neonatal arterial switch operation for transposition of the great arteries has not previously been systematically evaluated. Within a longitudinal study, 60 unselected children operated on as neonates with combined deep hypothermic circulatory arrest and low-flow cardiopulmonary bypass were reevaluated at the age of 7.9 to 14.3 years (mean +/- SD 10.5 +/- 1.6 years). Clinical neurologic status and standardized tests to assess gross motor function, intelligence, acquired abilities, language, and speech were carried out, and the results were related to preoperative, perioperative, and postoperative status, to management, and to neurodevelopmental status at a mean age of 5.4 years. Neurologic and speech impairments were evidently more frequent (27% and 40%, respectively) than in the general population. Intelligence and socioeconomic status were not different (P =.29 and P =.11), whereas motor function, acquired abilities, and language were reduced (P < or =.04 for each). Overall rate of developmental impairment in one or more domains was 55%, compared with 26% at age 5.4 years. Multivariable analysis showed that severe preoperative acidosis and hypoxia predicted reduced motor function (mean deficit 52.7 points, P <.001), whereas longer bypass duration predicted both neurologic (odds ratio per 10 minutes of bypass duration 1.8, P =.04) and speech (odds ratio per 10 minutes of bypass duration 1.9, P =.02) dysfunction, and perioperative and postoperative cardiocirculatory insufficiency predicted neurologic (odds ratio 6.5, P =.04) and motor (mean deficit 6.8 points, P =.03) dysfunction. The neonatal arterial switch operation with combined circulatory arrest and low-flow bypass is associated, increasingly with age, with reduced neurodevelopmental outcome but not with cognitive dysfunction. 
In our experience, the risk of long-term neurodevelopmental impairment after neonatal corrective cardiac surgery is related to deleterious effects of the global perioperative management and to special adverse effects of prolonged bypass duration. Severe preoperative acidosis and hypoxia and postoperative hemodynamic instability must be considered as important additional risk factors.

  8. Validation of the Focus on the Outcomes of Communication under Six outcome measure

    PubMed Central

    Thomas-Stonell, Nancy; Oddson, Bruce; Robertson, Bernadette; Rosenbaum, Peter

    2013-01-01

    Aim The aim of this study was to establish the construct validity of the Focus on the Outcomes of Communication Under Six (FOCUS©), a tool designed to measure changes in communication skills in preschool children. Method Participating families' children (n=97; 68 males, 29 females; mean age 2y 8mo; SD 1.04y, range 10mo–4y 11mo) were recruited through eight Canadian organizations. The children were on a waiting list for speech and language intervention. Parents completed the Ages and Stages Questionnaire – Social/Emotional (ASQ-SE) and the FOCUS three times: at assessment and at the start and end of treatment. A second sample (n=28; 16 males, 12 females) was recruited from another organization to correlate the FOCUS scores with speech, intelligibility, and language measures. Second sample participants ranged in age from 3 years 1 month to 4 years 9 months (mean 3y 11mo; SD 0.41y). At the start and end of treatment, children were videotaped to obtain speech and language samples. Parents and speech–language pathologists (SLPs) independently completed the FOCUS tool. SLPs who were blind to the pre/post order of the videotapes analysed the samples. Results The FOCUS measured significantly more change (p<0.01) during treatment than during the waiting list period. It demonstrated both convergent and discriminant validity against the ASQ-SE. The FOCUS change corresponded to change measured by a combination of clinical speech and language measures (κ=0.31, p<0.05). Conclusion The FOCUS shows strong construct validity as a change-detecting instrument. PMID:23461266

  9. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    The algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimensionality of the multi-parameter optimization problem. At the next step, training samples are introduced and optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter-optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.
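    The dimension-reduction step that the abstract above describes can be sketched as follows. This is a minimal illustration under stated assumptions: a synthetic training set stands in for the paper's (unspecified) speech features, and a plain SVD-based PCA is used; the paper's actual optimization criterion for the coefficients is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 50 feature vectors of dimension 20
# (stand-ins for acoustic patterns; not the paper's actual features).
X = rng.normal(size=(50, 20))
mean = X.mean(axis=0)
X_centered = X - mean

# Principal components via SVD of the centered data matrix.
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
k = 5                   # keep only the first k components
components = Vt[:k]     # shape (k, 20), rows are orthonormal directions

# Decompose an initial pattern into k coefficients, then reconstruct.
# Optimizing these k coefficients is a far smaller search problem than
# optimizing all 20 raw feature dimensions directly.
pattern = X[0] - mean
coeffs = components @ pattern      # k-dimensional representation
approx = components.T @ coeffs     # reconstruction in feature space
```

    Any optimizer can then adjust the k entries of `coeffs` (for example, to maximize a recognizer's score on training samples) and map the result back through `components.T`, which is the essence of the dimensionality reduction described above.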

  10. Emotion Recognition from Chinese Speech for Smart Affective Services Using a Combination of SVM and DBN

    PubMed Central

    Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai

    2017-01-01

    Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High-accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel-frequency cepstral coefficients (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features for identifying the emotional status of speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train the DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
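    Two of the frame-level features listed above, short-term zero-crossing rate and short-term energy, are simple enough to sketch directly; the frame length, hop size, and test signal below are arbitrary choices for illustration, not the paper's settings:

    ```python
    import numpy as np

    def frame_signal(x, frame_len, hop):
        """Slice a 1-D signal into overlapping frames (rows)."""
        n_frames = 1 + (len(x) - frame_len) // hop
        return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

    def zero_crossing_rate(frames):
        """Per frame: fraction of consecutive-sample pairs whose sign differs."""
        signs = np.sign(frames)
        return np.mean(signs[:, 1:] != signs[:, :-1], axis=1)

    def short_term_energy(frames):
        """Per frame: sum of squared sample values."""
        return np.sum(frames.astype(float) ** 2, axis=1)

    sr = 16000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 440 * t)            # 1 s of a 440 Hz tone
    frames = frame_signal(x, frame_len=400, hop=160)
    zcr = zero_crossing_rate(frames)           # ~880 crossings/s for a 440 Hz tone
    energy = short_term_energy(frames)
    ```

    For voiced speech the energy is high and the zero-crossing rate low; unvoiced fricatives show the opposite pattern, which is why the two features are often paired.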

  11. From Birdsong to Human Speech Recognition: Bayesian Inference on a Hierarchy of Nonlinear Dynamical Systems

    PubMed Central

    Yildiz, Izzet B.; von Kriegstein, Katharina; Kiebel, Stefan J.

    2013-01-01

    Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that there are scarce experimental findings at a neuronal, microscopic level. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, i.e., the songbird, which faces the same challenge as humans: to learn and decode complex auditory input, in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level and translated a birdsong model into a novel human sound learning and recognition model with an emphasis on speech. We show that the resulting Bayesian model with a hierarchy of nonlinear dynamical systems can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition can be performed even when words are spoken by different speakers and with different accents—an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and derive predictions for future experiments. PMID:24068902

  13. Feeding Tube Placement in Patients with Advanced Dementia: The Beliefs and Practice Patterns of Speech-Language Pathologists

    ERIC Educational Resources Information Center

    Sharp, Helen M.; Shega, Joseph W.

    2009-01-01

    Purpose: To describe the beliefs and practices of speech-language pathologists (SLPs) about the use of percutaneous endoscopic gastrostomy (PEG) among patients with advanced dementia and dysphagia. Method: A survey was mailed to a geographically stratified random sample of 1,050 medical SLPs. Results: The response rate was 57%, and 326 surveys met…

  14. Restricted Consonant Inventories of 2-Year-Old Finnish Children with a History of Recurrent Acute Otitis Media

    ERIC Educational Resources Information Center

    Haapala, Sini; Niemitalo-Haapola, Elina; Raappana, Antti; Kujala, Tiia; Kujala, Teija; Jansson-Verkasalo, Eira

    2015-01-01

    Many children experience recurrent acute otitis media (RAOM) in early childhood. In a previous study, 2-year-old children with RAOM were shown to have immature neural patterns for speech sound discrimination. The present study further investigated the consonant inventories of these same children using natural speech samples. The results showed…

  15. Prevalence of Speech Disorders in Elementary School Students in Jordan

    ERIC Educational Resources Information Center

    Al-Jazi, Aya Bassam; Al-Khamra, Rana

    2015-01-01

    Goal: The aim of this study was to find the prevalence of speech (articulation, voice, and fluency) disorders among elementary school students from first grade to fourth grade. This research was based on the screening implemented as part of the Madrasati Project, which is designed to serve the school system in Jordan. Method: A sample of 1,231…

  16. Kindergarten Risk Factors, Cognitive Factors, and Teacher Judgments as Predictors of Early Reading in Dutch

    ERIC Educational Resources Information Center

    Gijsel, Martine A. R.; Bosman, Anna M. T.; Verhoeven, Ludo

    2006-01-01

    This study focused on the predictive value of risk factors, cognitive factors, and teachers' judgments in a sample of 462 kindergartners for their early reading skills and reading failure at the beginning of Grade 1. With respect to risk factors, enrollment in speech-language therapy, history of dyslexia or speech-language problems in the family,…

  17. Using Key Part-of-Speech Analysis to Examine Spoken Discourse by Taiwanese EFL Learners

    ERIC Educational Resources Information Center

    Lin, Yen-Liang

    2015-01-01

    This study reports on a corpus analysis of samples of spoken discourse between a group of British and Taiwanese adolescents, with the aim of exploring the statistically significant differences in the use of grammatical categories between the two groups of participants. The key word method extended to a part-of-speech level using the web-based…

  18. Children with Autism Spectrum Disorders Who Do Not Develop Phrase Speech in the Preschool Years

    ERIC Educational Resources Information Center

    Norrelgen, Fritjof; Fernell, Elisabeth; Eriksson, Mats; Hedvall, Asa; Persson, Clara; Sjölin, Maria; Gillberg, Christopher; Kjellmer, Liselotte

    2015-01-01

    There is uncertainty about the proportion of children with autism spectrum disorders who do not develop phrase speech during the preschool years. The main purpose of this study was to examine this ratio in a population-based community sample of children. The cohort consisted of 165 children (141 boys, 24 girls) with autism spectrum disorders aged…

  19. Speech-language pathology telehealth in rural and remote schools: the experience of school executive and therapy assistants.

    PubMed

    Fairweather, Glenn C; Lincoln, Michelle A; Ramsden, Robyn

    2017-01-01

    Difficulties in accessing allied health services, especially in rural and remote areas, appear to be driving the use of telehealth services to children in schools. The objectives of this study were to investigate the experiences and views of school executive staff and therapy assistants regarding the feasibility and acceptability of a speech-language pathology telehealth program for children attending schools in rural and remote New South Wales, Australia. The program, called Come N See, provided therapy interventions remotely via low-bandwidth videoconferencing, with email follow-up. Over a 12-week period, children were offered therapy blocks of six fortnightly sessions, each lasting a maximum of 30 minutes. School executives (n=5) and therapy assistants (n=6) described factors that promoted or threatened the program's feasibility and acceptability, during semistructured interviews. Thematic content analysis with constant comparison was applied to the transcribed interviews to identify relationships in the data. Emergent themes related to (a) unmet speech pathology needs, (b) building relationships, (c) telehealth's advantages, (d) telehealth's disadvantages, (e) anxiety replaced by joy and confidence in growing skills, and (f) supports. School executive staff and therapy assistants verified that the delivery of the school-based telehealth service was feasible and acceptable. However, the participants saw significant opportunities to enhance this acceptability by building stronger working relationships and stakeholder supports into the program. These findings are important for the future development of allied health telehealth programs that are sustainable as well as effective and fit the needs of all crucial stakeholders. The results have significant implications for speech pathology clinical practice relating to technology, program planning and teamwork within telehealth programs.

  20. Fast phonetic learning occurs already in 2-to-3-month old infants: an ERP study

    PubMed Central

    Wanrooij, Karin; Boersma, Paul; van Zuijen, Titia L.

    2014-01-01

    An important mechanism for learning speech sounds in the first year of life is “distributional learning,” i.e., learning by simply listening to the frequency distributions of the speech sounds in the environment. In the lab, fast distributional learning has been reported for infants in the second half of the first year; the present study examined whether it can also be demonstrated at a much younger age, long before the onset of language-specific speech perception (which roughly emerges between 6 and 12 months). To investigate this, Dutch infants aged 2 to 3 months were presented with either a unimodal or a bimodal vowel distribution based on the English /æ/~/ε/ contrast, for only 12 minutes. Subsequently, mismatch responses (MMRs) were measured in an oddball paradigm, where one half of the infants in each group heard a representative [æ] as the standard and a representative [ε] as the deviant, and the other half heard the reverse. The results (from the combined MMRs during wakefulness and active sleep) disclosed a larger MMR, implying better discrimination of [æ] and [ε], for bimodally than for unimodally trained infants. This extends an effect of distributional training found in previous behavioral research to a much younger age, when speech perception is still universal rather than language-specific, and to a new method (event-related potentials). Moreover, the analysis revealed a robust interaction between the distribution (unimodal vs. bimodal) and the identity of the standard stimulus ([æ] vs. [ε]), which provides evidence for an interplay between a perceptual asymmetry and distributional learning. The outcomes show that distributional learning can affect vowel perception already in the first months of life. PMID:24701203

  1. Investigation of an HMM/ANN hybrid structure in pattern recognition application using cepstral analysis of dysarthric (distorted) speech signals.

    PubMed

    Polur, Prasad D; Miller, Gerald E

    2006-10-01

    Computer speech recognition for individuals with dysarthria, such as patients with cerebral palsy, requires a robust technique that can handle conditions of very high variability and limited training data. In this study, the application of a 10-state ergodic hidden Markov model (HMM)/artificial neural network (ANN) hybrid structure to a dysarthric (isolated-word) speech recognition system, intended to act as an assistive tool, was investigated. A small vocabulary spoken by three subjects with cerebral palsy was chosen. The effect of this structure on the recognition rate of the system was investigated by comparing it against an ergodic hidden Markov model as a control, in order to determine whether the modified technique enhanced recognition of dysarthric speech. The speech was sampled at 11 kHz. Mel-frequency cepstral coefficients were extracted using 15-ms frames and served as training input to the hybrid model. The results demonstrated that the hybrid model structure was quite robust in handling the large variability and non-conformity of dysarthric speech. The level of variability in input dysarthric speech patterns sometimes limits the reliability of the system; however, its application as a rehabilitation/control tool to assist motor-impaired individuals with dysarthria holds sufficient promise.
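    The framing stage implied by the abstract (11 kHz sampling with 15-ms frames, i.e., 165 samples per frame) might look as follows; the Hamming window and 50% overlap are conventional assumptions, since the abstract does not state them:

    ```python
    import numpy as np

    SAMPLE_RATE = 11000                        # Hz, as stated in the abstract
    FRAME_MS = 15                              # ms, as stated in the abstract
    FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000 # 165 samples per frame

    def windowed_frames(x, frame_len=FRAME_LEN):
        """Slice a signal into overlapping, Hamming-windowed analysis frames,
        the usual precursor to MFCC extraction."""
        hop = frame_len // 2                   # 50% overlap (assumption)
        window = np.hamming(frame_len)
        n = 1 + (len(x) - frame_len) // hop
        return np.stack([window * x[i * hop:i * hop + frame_len] for i in range(n)])

    x = np.random.default_rng(1).normal(size=SAMPLE_RATE)  # 1 s of test noise
    frames = windowed_frames(x)                # each row feeds one MFCC vector
    ```

    Each windowed frame would then pass through an FFT, mel filterbank, log, and DCT to produce the cepstral coefficients used to train the HMM/ANN hybrid.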

  2. The contrast between alveolar and velar stops with typical speech data: acoustic and articulatory analyses.

    PubMed

    Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina

    2017-06-08

    This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops in typical speech data, comparing the acoustic and articulatory parameters of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target word (/'kapə/, /'tapə/, /'galo/ and /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data were submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective in indicating the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided signals to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.

  3. Speech versus Song: Multiple Pitch-Sensitive Areas Revealed by a Naturally Occurring Musical Illusion

    PubMed Central

    Dick, Fred; Deutsch, Diana; Sereno, Marty

    2013-01-01

    It is normally obvious to listeners whether a human vocalization is intended to be heard as speech or song. However, the 2 signals are remarkably similar acoustically. A naturally occurring boundary case between speech and song has been discovered where a spoken phrase sounds as if it were sung when isolated and repeated. In the present study, an extensive search of audiobooks uncovered additional similar examples, which were contrasted with samples from the same corpus that do not sound like song, despite containing clear prosodic pitch contours. Using functional magnetic resonance imaging, we show that hearing these 2 closely matched stimuli is not associated with differences in response of early auditory areas. Rather, we find that a network of 8 regions, including the anterior superior temporal gyrus (STG) just anterior to Heschl's gyrus and the right midposterior STG, respond more strongly to speech perceived as song than to mere speech. This network overlaps a number of areas previously associated with pitch extraction and song production, confirming that phrases originally intended to be heard as speech can, under certain circumstances, be heard as song. Our results suggest that song processing compared with speech processing makes increased demands on pitch processing and auditory–motor integration. PMID:22314043

  4. Automatic bio-sample bacteria detection system

    NASA Technical Reports Server (NTRS)

    Chappelle, E. W.; Colburn, M.; Kelbaugh, B. N.; Picciolo, G. L.

    1971-01-01

    Electromechanical device analyzes urine specimens in 15 minutes and processes one sample per minute. Instrument utilizes bioluminescent reaction between luciferase-luciferin mixture and adenosine triphosphate (ATP) to determine number of bacteria present in the sample. Device has potential application to analysis of other body fluids.

  5. Analysis of human scream and its impact on text-independent speaker verification.

    PubMed

    Hansen, John H L; Nandwana, Mahesh Kumar; Shokouhi, Navid

    2017-04-01

    A scream is defined as a sustained, high-energy vocalization that lacks phonological structure; this lack of phonological structure is what distinguishes screams from other forms of loud vocalization, such as "yells." This study investigates the acoustic aspects of screams and addresses those that are known to prevent standard speaker identification systems from recognizing the identity of screaming speakers. It is well established that speaker variability due to changes in vocal effort and the Lombard effect contributes to degraded performance in automatic speech systems (i.e., speech recognition, speaker identification, diarization, etc.). However, previous research in the general area of speaker variability has concentrated on human speech production, whereas less is known about non-speech vocalizations. The UT-NonSpeech corpus is developed here to investigate speaker verification from scream samples. This study considers a detailed analysis in terms of fundamental frequency, spectral peak shift, frame energy distribution, and spectral tilt. It is shown that traditional speaker recognition based on the Gaussian mixture model–universal background model framework is unreliable when evaluated with screams.
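    One of the measures named above, spectral tilt, is commonly estimated as the slope of a straight line fitted to the log-magnitude spectrum; the sketch below uses that common definition, which may differ from the study's exact computation, and the test signal is invented for illustration:

    ```python
    import numpy as np

    def spectral_tilt(frame, sample_rate):
        """Estimate spectral tilt as the least-squares slope of the
        log-magnitude spectrum, in dB per kHz (more negative = steeper)."""
        spectrum = np.abs(np.fft.rfft(frame))
        freqs_khz = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate) / 1000.0
        log_mag_db = 20 * np.log10(spectrum + 1e-12)   # small floor avoids log(0)
        slope, _ = np.polyfit(freqs_khz, log_mag_db, deg=1)
        return slope

    sr = 8000
    t = np.arange(1024) / sr
    # A voiced-speech-like signal (harmonics with decaying amplitudes)
    # has a clearly negative tilt; screams tend to flatten this slope.
    voiced_like = sum((0.5 ** k) * np.sin(2 * np.pi * 200 * k * t) for k in range(1, 9))
    tilt = spectral_tilt(voiced_like, sr)
    ```

    Comparing such slopes between modal speech and screams is one way the energy redistribution toward higher frequencies can be quantified.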

  6. A prepared speech in front of a pre-recorded audience: subjective, physiological, and neuroendocrine responses to the Leiden Public Speaking Task.

    PubMed

    Westenberg, P Michiel; Bokhorst, Caroline L; Miers, Anne C; Sumter, Sindy R; Kallen, Victor L; van Pelt, Johannes; Blöte, Anke W

    2009-10-01

    This study describes a new public speaking protocol for youth. The main question asked whether a speech prepared at home and given in front of a pre-recorded audience creates a condition of social-evaluative threat. Findings showed that, on average, this task elicits a moderate stress response in a community sample of 83 12- to 15-year-old adolescents. During the speech, participants reported feeling more nervous and having higher heart rate and sweatiness of the hands than at baseline or recovery. Likewise, physiological (heart rate and skin conductance) and neuroendocrine (cortisol) activity were higher during the speech than at baseline or recovery. Additionally, an anticipation effect was observed: baseline levels were higher than recovery levels for most variables. Taking the anticipation and speech response together, a substantial cortisol response was observed for 55% of participants. The findings indicate that the Leiden Public Speaking Task might be particularly suited to investigate individual differences in sensitivity to social-evaluative situations.

  7. Effects of speaking task on intelligibility in Parkinson’s disease

    PubMed Central

    TJADEN, KRIS; WILDING, GREG

    2017-01-01

    Intelligibility tests for dysarthria typically provide an estimate of overall severity for speech materials elicited through imitation or read from a printed script. The extent to which these types of tasks and procedures reflect intelligibility for extemporaneous speech is not well understood. The purpose of this study was to compare intelligibility estimates obtained for a reading passage and an extemporaneous monologue produced by 12 speakers with Parkinson’s disease (PD). The relationship between structural characteristics of utterances and scaled intelligibility was explored within speakers. Speakers were audio-recorded while reading a paragraph and producing a monologue. Speech samples were separated into individual utterances for presentation to 70 listeners who judged intelligibility using orthographic transcription and direct magnitude estimation (DME). Results suggest that scaled estimates of intelligibility for reading show potential for indexing intelligibility of an extemporaneous monologue. Within-speaker variation in scaled intelligibility also was related to the number of words per speech run for extemporaneous speech. PMID:20887216

  8. Classification of speech dysfluencies using LPC based parameterization techniques.

    PubMed

    Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali

    2012-06-01

    The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC), for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for the analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for classifying speech dysfluencies. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, the value of the coefficient in a first-order pre-emphasizer, and different prediction orders p were discussed. Classification accuracy for speech dysfluencies was found to improve when statistical normalization was applied before feature extraction. The experimental investigation elucidated that LPC, LPCC and WLPCC features can all be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
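    The LPC and LPCC features compared above can be sketched with the standard autocorrelation (Levinson-Durbin) method and the usual LPC-to-cepstrum recursion; this is a generic textbook implementation, not the paper's code, and the AR test signal below is invented for illustration:

    ```python
    import numpy as np

    def lpc(x, order):
        """Linear prediction coefficients via the autocorrelation method
        (Levinson-Durbin), for the model x[n] ≈ sum_k a[k] * x[n-k]."""
        r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
        a = np.zeros(order)
        err = r[0]
        for i in range(order):
            k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err   # reflection coefficient
            a_new = a.copy()
            a_new[i] = k
            a_new[:i] = a[:i] - k * a[i - 1::-1]
            a = a_new
            err *= (1 - k * k)
        return a

    def lpcc(a, n_ceps):
        """Cepstral coefficients from LPC via the standard recursion."""
        p = len(a)
        c = np.zeros(n_ceps)
        for n in range(1, n_ceps + 1):
            acc = a[n - 1] if n <= p else 0.0
            for k in range(1, n):
                if n - k <= p:
                    acc += (k / n) * c[k - 1] * a[n - k - 1]
            c[n - 1] = acc
        return c

    # Synthetic first-order autoregressive "speech frame" for demonstration.
    e = np.random.default_rng(2).normal(size=5000)
    x = np.empty(5000)
    x[0] = e[0]
    for n in range(1, 5000):
        x[n] = 0.6 * x[n - 1] + e[n]

    a = lpc(x, order=2)       # a[0] should come out close to 0.6
    c = lpcc(a, n_ceps=12)
    ```

    WLPCC, the third feature set compared in the paper, weights (lifters) these cepstral coefficients before classification.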

  9. Vocal Age Disguise: The Role of Fundamental Frequency and Speech Rate and Its Perceived Effects

    PubMed Central

    Skoog Waller, Sara; Eriksson, Mårten

    2016-01-01

    The relationship between vocal characteristics and perceived age is of interest in various contexts, as is the possibility to affect age perception through vocal manipulation. A few examples of such situations are when age is staged by actors, when ear witnesses make age assessments based on vocal cues only, or when offenders (e.g., online groomers) disguise their voice to appear younger or older. This paper investigates how speakers spontaneously manipulate two age-related vocal characteristics (f0 and speech rate) in an attempt to sound younger versus older than their true age, and whether the manipulations correspond to actual age-related changes in f0 and speech rate (Study 1). Further aims of the paper are to determine how successful vocal age disguise is by asking listeners to estimate the age of generated speech samples (Study 2) and to examine whether or not listeners use f0 and speech rate as cues to perceived age. In Study 1, participants from three age groups (20–25, 40–45, and 60–65 years) agreed to read a short text under three voice conditions. There were 12 speakers in each age group (six women and six men). They used their natural voice in one condition, attempted to sound 20 years younger in another and 20 years older in a third condition. In Study 2, 60 participants (listeners) listened to speech samples from the three voice conditions in Study 1 and estimated the speakers’ age. Each listener was exposed to all three voice conditions. The results from Study 1 indicated that the speakers increased fundamental frequency (f0) and speech rate when attempting to sound younger and decreased f0 and speech rate when attempting to sound older. Study 2 showed that the voice manipulations had an effect in the sought-after direction, although the achieved mean effect was only 3 years, which is far less than the intended effect of 20 years. Moreover, listeners used speech rate, but not f0, as a cue to speaker age.
It was concluded that age disguise by voice can be achieved by naïve speakers even though the perceived effect was smaller than intended. PMID:27917144

  10. Comparison of speech performance in labial and lingual orthodontic patients: A prospective study

    PubMed Central

    Rai, Ambesh Kumar; Rozario, Joe E.; Ganeshkar, Sanjay V.

    2014-01-01

    Background: The intensity and duration of speech difficulty inherently associated with lingual therapy is a significant issue of concern in orthodontics. This study was designed to evaluate and compare the duration of changes in speech between labial and lingual orthodontics. Materials and Methods: A prospective longitudinal clinical study was designed to assess the speech of 24 patients undergoing labial or lingual orthodontic treatment. An objective spectrographic evaluation of the /s/ sound was done using the software PRAAT (version 5.0.47), a semiobjective auditory evaluation of articulation was done by four speech pathologists, and a subjective assessment of speech was done by four laypersons. The tests were performed before (T1), within 24 h (T2), after 1 week (T3) and after 1 month (T4) of the start of therapy. The Mann-Whitney U-test for independent samples was used to assess the significance of differences between the labial and lingual appliances. A speech alteration with P < 0.05 was considered to be significant. Results: The objective method showed a significant difference between the two groups for the /s/ sound in the middle position (P < 0.001) at T3. The semiobjective assessment showed the worst speech performance in the lingual group at T3 for vowels and blends (P < 0.01) and at T3 and T4 for alveolar and palatal consonants (P < 0.01). The subjective assessment also showed a significant difference between the two groups at T3 (P < 0.01) and T4 (P < 0.05). Conclusion: Both appliance systems caused comparable speech difficulty immediately after bonding (T2). Although speech recovered within a week in the labial group (T3), the lingual group continued to experience discomfort even after a month (T4). PMID:25540661

  11. [Occurrence of child abuse: knowledge and possibility of action of speech-language pathologists].

    PubMed

    Noguchi, Milica Satake; de Assis, Simone Gonçalves; Malaquias, Juaci Vitória

    2006-01-01

    This work presents the results of an epidemiological survey about the professional experience of speech-language pathologists and audiologists in Rio de Janeiro (Brazil) with children and adolescents who are victims of domestic violence. The aim was to understand the occurrence of abuse and neglect of children and adolescents treated by speech-language pathologists, characterizing the victims according to most affected age group, gender, form of violence, aggressor, most frequent speech-language complaint, how the abuse was identified, and follow-up. A total of 500 self-administered mail surveys were sent to a random sample of professionals living in Rio de Janeiro. The survey forms were identified only by numbers to assure anonymity. 224 completed surveys were mailed back; 54 respondents indicated exposure to at least one incident of abuse. The majority of victims were children, the main abuser was the mother, and physical violence was the most frequent form of abuse. The main speech disorder was late language development. In most cases, the victim himself told the therapist about the abuse, through verbal expression or other means of expression such as drawings, story telling, dramatizing or playing. As the majority of the victims abandoned speech-language therapy, it was not possible to follow up the cases. Due to the importance of this issue and the limited Brazilian literature on speech-language and hearing sciences and child abuse, it is paramount to invest in the training of speech-language pathologists. It is the duty of speech-language pathologists to expose this complex problem and to give voice to children who are victims of violence, understanding that behind a speech-language complaint there might be a cry for help.

  12. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal-processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise in the spatial and temporal domains. As a result, automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency: it offers a fast rate of data/text entry, a small overall size, and light weight, and it frees the hands and eyes of a suited crewmember. The system components and steps include beamforming/multichannel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise; when it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone-array speech-processing technologies, performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and by using speech samples collected from inside spacesuits. In addition, arithmetic-complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when faced with constraints on computational resources.
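    The "feature transformation and normalization" step in the pipeline above is often realized as cepstral mean and variance normalization (CMVN); the sketch below shows that standard technique as an assumption, since the abstract does not name the exact transform, and the feature matrix is synthetic:

    ```python
    import numpy as np

    def cmvn(features, eps=1e-8):
        """Cepstral mean and variance normalization: normalize each feature
        dimension to zero mean and unit variance across the frames of one
        utterance (rows = frames, columns = feature dimensions). This reduces
        channel and level mismatch between training and deployment conditions."""
        mean = features.mean(axis=0)
        std = features.std(axis=0)
        return (features - mean) / (std + eps)

    # Synthetic stand-in for an utterance's MFCC matrix: 200 frames x 13 coefficients.
    feats = np.random.default_rng(3).normal(loc=5.0, scale=2.0, size=(200, 13))
    norm = cmvn(feats)
    ```

    A fixed channel offset, such as a suit microphone's frequency coloring, shifts every frame's cepstra by a constant, so subtracting the per-utterance mean removes it before HMM training and decoding.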

  13. Objective measurement of motor speech characteristics in the healthy pediatric population.

    PubMed

    Wong, A W; Allegro, J; Tirado, Y; Chadha, N; Campisi, P

    2011-12-01

    To obtain objective measurements of motor speech characteristics in normal children, using a computer-based motor speech software program. Cross-sectional, observational design in a university-based ambulatory pediatric otolaryngology clinic. Participants included 112 subjects (54 females and 58 males) aged 4-18 years. Participants with previously diagnosed hearing loss, voice and motor disorders, and children unable to repeat a passage in English were excluded. Voice samples were recorded and analysed using the Motor Speech Profile (MSP) software (KayPENTAX, Lincoln Park, NJ). The MSP produced measures of diadochokinetics, second formant transition, intonation, and syllabic rates. Demographic data, including sex, age, and cigarette smoke exposure, were obtained. Normative data for several motor speech characteristics were derived for children ranging from 4 to 18 years of age. A number of age-dependent changes were identified, including an increase in average diadochokinetic rate (p<0.001) and standard syllabic duration (p<0.001) with age. There were no identified differences in motor speech characteristics between males and females across the measured age range. Variation in fundamental frequency (F0) during speech did not change significantly with age for either males or females. To our knowledge, this is the first pediatric normative database for the MSP program. The MSP is suitable for testing children and can be used to study developmental changes in motor speech. The analysis demonstrated that males and females behave similarly and show the same relationship with age for the motor speech characteristics studied. This normative database will provide essential comparative data for future studies exploring alterations in motor speech that may occur with hearing, voice, and motor disorders, and to assess the results of targeted therapies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Laboratory sample turnaround times: do they cause delays in the ED?

    PubMed

    Gill, Dipender; Galvin, Sean; Ponsford, Mark; Bruce, David; Reicher, John; Preston, Laura; Bernard, Stephani; Lafferty, Jessica; Robertson, Andrew; Rose-Morris, Anna; Stoneham, Simon; Rieu, Romelie; Pooley, Sophie; Weetch, Alison; McCann, Lloyd

    2012-02-01

    Blood tests are requested for approximately 50% of patients attending the emergency department (ED). The time taken to obtain the results is perceived as a common reason for delay. The objective of this study was therefore to investigate the turnaround time (TAT) for blood results and whether this affects patient length of stay (LOS) and to identify potential areas for improvement. A time-in-motion study was performed at the ED of the John Radcliffe Hospital (JRH), Oxford, UK. The duration of each of the stages leading up to receipt of 101 biochemistry and haematology results was recorded, along with the corresponding patient's LOS. The findings reveal that the mean time for haematology results to become available was 1 hour 6 minutes (95% CI: 29 minutes to 2 hours 13 minutes), while biochemistry samples took 1 hour 42 minutes (95% CI: 1 hour 1 minute to 4 hours 21 minutes), with some positive correlation noted with the patient LOS, but no significant variation between different days or shifts. With the fastest 10% of samples being reported within 35 minutes (haematology) and 1 hour 5 minutes (biochemistry) of request, our study showed that delays can be attributable to laboratory TAT. Given the limited ability to further improve laboratory processes, the solutions to improving TAT need to come from a collaborative and integrated approach that includes strategies before samples reach the laboratory and downstream review of results. © 2010 Blackwell Publishing Ltd.
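    The summary statistics reported above (a mean TAT and the cut-off within which the fastest 10% of samples were reported) are simple to reproduce; the sketch below uses made-up turnaround times, not the study's data:

    ```python
    import numpy as np

    # Hypothetical turnaround times in minutes -- illustrative only,
    # not data from the JRH study.
    tat_minutes = np.array([35, 42, 58, 66, 71, 80, 95, 102, 133, 180])

    mean_tat = tat_minutes.mean()
    # 10th percentile = cut-off below which the fastest 10% of samples fall
    fastest_10pct = np.percentile(tat_minutes, 10)

    print(f"mean TAT: {mean_tat:.0f} min, fastest 10% within {fastest_10pct:.0f} min")
    ```

    The same two numbers per assay (mean and 10th percentile), computed separately for haematology and biochemistry, would reproduce the comparison the study reports.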

  15. Caregiver Expressed Emotion and Psychiatric Symptoms in African-Americans with Schizophrenia: An Attempt to Understand the Paradoxical Relationship.

    PubMed

    Gurak, Kayla; Weisman de Mamani, Amy

    2017-06-01

    Expressed emotion (EE) is a family environmental construct that assesses how much criticism, hostility, and/or emotional over-involvement a family member expresses about a patient (Hooley, Annual Review of Clinical Psychology, 2007, 3, 329). Having high levels of EE within the family environment has generally been associated with poorer patient outcomes for schizophrenia and a range of other disorders. Paradoxically, for African-American patients, high-EE may be associated with a better symptom course (Rosenfarb, Bellack, & Aziz, Journal of Abnormal Psychology, 2006, 115, 112). However, this finding is in need of additional support and, if confirmed, clarification. In line with previous research, using a sample of 30 patients with schizophrenia and their primary caregivers, we hypothesized that having a caregiver classified as low-EE would be associated with greater patient symptom severity. We also aimed to better understand why this pattern may exist by examining the content of interviews taken from the Five-Minute Speech Sample. Results supported study hypotheses. In line with Rosenfarb et al. (2006), having a low-EE caregiver was associated with greater symptom severity in African-American patients. A content analysis uncovered some interesting patterns that may help elucidate this finding. Results of this study suggest that attempts to lower high-EE in African Americans may, in fact, be counterproductive. © 2015 Family Process Institute.

  16. Depressive Symptoms, Criticism, and Counter-Criticism in Marital Interactions.

    PubMed

    Trombello, Joseph M; Post, Kristina M; Smith, David A

    2018-02-23

    Although people with depressive symptoms face criticism, hostility, and rejection in their close relationships, we do not know how they respond. Following interpersonal theories of depression, it might be expected that depressive symptoms would be associated with a tendency to receive and also to express criticism toward one's spouse, and that at least some of this criticism would be a contingent response to criticism received (i.e., "counter-criticism"). However, other research has determined that depressive symptoms/behaviors suppress partner criticism, suggesting that depressed people might respond to partner criticism similarly, by subsequently expressing less criticism. In a sample of 112 married couples, partial correlations, regressions, and Actor-Partner Interdependence Modeling indicated that lower criticism and counter-criticism expression during a laboratory marital interaction task was associated with higher depressive symptoms, especially when such individuals were clinically depressed. Furthermore, during a separate and private Five-Minute Speech Sample, lower criticism by partners was associated with higher depressive symptoms, especially when those who chose the interaction topic were also clinically depressed. All analyses controlled for relationship adjustment. These results suggest that spouses with higher depressive symptoms and clinical depression diagnoses may be suppressing otherwise ordinary criticism expression toward their nondepressed partners; furthermore, nondepressed partners of depressed people are especially likely to display less criticism toward their spouse in a private task. © 2018 Family Process Institute.

  17. Maternal Mind-Mindedness Provides a Buffer for Pre-Adolescents at Risk for Disruptive Behavior.

    PubMed

    Hughes, Claire; Aldercotte, Amanda; Foley, Sarah

    2017-02-01

    Maternal mind-mindedness, defined as the propensity to view one's child as an agent with independent thoughts and feelings, mitigates the impact of low maternal education on conduct problems in young children (Meins et al. 2013), but has been little studied beyond the preschool years. Addressing this gap, we applied a multi-measure and multi-informant approach to assess family adversity and disruptive behavior at age 12 for a socially diverse sample of 116 children for whom ratings of disruptive behavior at age 6 were available. Each mother was asked to describe her child and transcripts of these five-minute speech samples were coded for (i) mind-mindedness (defined by the proportion of child attributes that were mental rather than physical or behavioral) and (ii) positivity (defined by the proportion of child attributes that were positive rather than neutral or negative). Our regression results showed that, independent of associations with prior adjustment, family adversity, child gender and low maternal monitoring, mothers' mind-mindedness (but not positivity) predicted unique variance in disruptive behavior at age 12. In addition, a trend interaction term provided partial support for the hypothesis that pre-adolescents exposed to family adversity may benefit in particular from maternal mind-mindedness. We discuss the possible mechanisms underpinning these findings and their implications for clinical interventions to reduce disruptive behavior in adolescence.

  18. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages.

    PubMed

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
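    The "first-order regularities among adjacent units" that the authors contrast their methods with can be illustrated by the normalized Pairwise Variability Index (nPVI), a standard adjacent-interval timing measure in speech-rhythm research. This is an illustrative sketch, not the authors' code:

    ```python
    def npvi(durations):
        """Normalized Pairwise Variability Index over a sequence of
        interval durations (e.g., syllable durations in seconds).
        Averages the absolute difference of each adjacent pair,
        normalized by the pair mean; higher = more alternation."""
        pairs = list(zip(durations, durations[1:]))
        terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
        return 100 * sum(terms) / len(terms)

    # perfectly regular timing gives an nPVI of 0
    print(npvi([0.2, 0.2, 0.2, 0.2]))  # → 0.0
    # strict long/short alternation gives a high nPVI (close to 100 here)
    print(npvi([0.1, 0.3, 0.1, 0.3]))
    ```

    Because such measures only see adjacent pairs, they miss exactly the longer-range, higher-order structure the paper models.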

  19. Alignment of classification paradigms for communication abilities in children with cerebral palsy

    PubMed Central

    Hustad, Katherine C.; Oakes, Ashley; McFadd, Emily; Allison, Kristen M.

    2015-01-01

    Aim We examined three communication ability classification paradigms for children with cerebral palsy (CP): the Communication Function Classification System (CFCS), the Viking Speech Scale (VSS), and the Speech Language Profile Groups (SLPG). Questions addressed inter-judge reliability, whether the VSS and the CFCS captured impairments in speech and language, and whether there were differences in speech intelligibility among levels within each classification paradigm. Method 80 children (42 males) with a range of types and severity levels of CP participated (mean age, 60 months; SD 4.8 months). Two speech-language pathologists classified each child via parent-child interaction samples and previous experience with the children for the CFCS and VSS, and using quantitative speech and language assessment data for the SLPG. Intelligibility scores were obtained using standard clinical intelligibility measurement. Results Kappa values were .67 (95% CI [.55, .79]) for the CFCS, .82 (95% CI [.72, .92]) for the VSS, and .95 (95% CI [.72, .92]) for the SLPG. Descriptively, reliability within levels of each paradigm varied, with the lowest agreement occurring within the CFCS at levels II (42%), III (40%), and IV (61%). Neither the CFCS nor the VSS was sensitive to language impairments captured by the SLPG. Significant differences in speech intelligibility were found among levels for all classification paradigms. Interpretation Multiple tools are necessary to understand speech, language, and communication profiles in children with CP. Characterization of abilities at all levels of the ICF will advance our understanding of the ways that speech, language, and communication abilities present in children with CP. PMID:26521844
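    The kappa values reported above quantify chance-corrected agreement between the two judges. A minimal sketch of Cohen's kappa for two raters, using hypothetical level labels rather than the study's data:

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters' categorical classifications:
        (observed agreement - chance agreement) / (1 - chance agreement)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # chance agreement from each rater's marginal category frequencies
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
        return (observed - expected) / (1 - expected)

    # two raters assigning hypothetical levels I-V to ten children
    a = ["I", "II", "II", "III", "IV", "V", "I", "III", "II", "IV"]
    b = ["I", "II", "III", "III", "IV", "V", "I", "II", "II", "IV"]
    print(round(cohens_kappa(a, b), 2))  # → 0.74
    ```

    Confidence intervals like those reported in the study are typically obtained from the kappa standard error or by bootstrapping over children.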

  20. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages

    PubMed Central

    Jadoul, Yannick; Ravignani, Andrea; Thompson, Bill; Filippi, Piera; de Boer, Bart

    2016-01-01

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint. PMID:27994544

  1. Real-time spectrum estimation–based dual-channel speech-enhancement algorithm for cochlear implant

    PubMed Central

    2012-01-01

    Background Improvement of the cochlear implant (CI) front-end signal acquisition is needed to increase speech recognition in noisy environments. To suppress the directional noise, we introduce a speech-enhancement algorithm based on microphone array beamforming and spectral estimation. The experimental results indicate that this method is robust to directional mobile noise and strongly enhances the desired speech, thereby improving the performance of CI devices in a noisy environment. Methods The spectrum estimation and the array beamforming methods were combined to suppress the ambient noise. The directivity coefficient was estimated in the noise-only intervals and was updated to track the mobile noise. Results The proposed algorithm was realized in the CI speech strategy. For actual parameters, we use a Maxflat filter to obtain fractional sampling points and a cepstrum method to differentiate the desired speech frames from the noise frames. Broadband adjustment coefficients were added to compensate for the energy loss in the low frequency band. Discussion The approximation of the directivity coefficient is tested and the errors are discussed. We also analyze the algorithm constraint for noise estimation and distortion in CI processing. The performance of the proposed algorithm is analyzed and compared with other prevalent methods. Conclusions The hardware platform was constructed for the experiments. The speech-enhancement results showed that our algorithm can suppress non-stationary noise with a high SNR. Excellent performance of the proposed algorithm was obtained in the speech-enhancement experiments and mobile testing, and signal distortion results indicate that the algorithm is robust, with high SNR improvement and low speech distortion. PMID:23006896
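    The array-beamforming component can be illustrated with the simplest beamformer, delay-and-sum. The toy two-microphone sketch below is a deliberate simplification of the paper's method: it omits the spectral estimation, Maxflat fractional-delay filtering, and noise-interval tracking described in the abstract:

    ```python
    import numpy as np

    def delay_and_sum(mic1, mic2, delay_samples):
        """Minimal two-microphone delay-and-sum beamformer: advance mic2
        by the known inter-microphone delay of the target direction, then
        average. Speech from the target direction adds coherently; noise
        from other directions partially cancels."""
        n = len(mic1)
        freqs = np.fft.rfftfreq(n)  # cycles per sample
        # apply the (possibly fractional) advance as a linear phase shift
        shifted = np.fft.irfft(
            np.fft.rfft(mic2) * np.exp(2j * np.pi * freqs * delay_samples), n)
        return 0.5 * (mic1 + shifted)

    # toy check: a tone that arrives 3 samples later at mic2
    t = np.arange(256)
    target = np.sin(2 * np.pi * 0.05 * t)
    mic1, mic2 = target, np.roll(target, 3)
    out = delay_and_sum(mic1, mic2, 3)  # ≈ target, reconstructed
    ```

    Real systems must estimate the steering delay (the paper's directivity coefficient) from the data, which is where the noise-only intervals come in.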

  2. Associations of acoustically measured tongue/jaw movements and portion of time speaking with negative symptom severity in patients with schizophrenia in Italy and the United States.

    PubMed

    Bernardini, Francesco; Lunden, Anya; Covington, Michael; Broussard, Beth; Halpern, Brooke; Alolayan, Yazeed; Crisafio, Anthony; Pauselli, Luca; Balducci, Pierfrancesco M; Capulong, Leslie; Attademo, Luigi; Lucarini, Emanuela; Salierno, Gianfranco; Natalicchi, Luca; Quartesan, Roberto; Compton, Michael T

    2016-05-30

    This is the first cross-language study of the effect of schizophrenia on speech as measured by analyzing phonetic parameters with sound spectrography. We hypothesized that reduced variability in pitch and formants would be correlated with negative symptom severity in two samples of patients with schizophrenia, one from Italy, and one from the United States. Audio recordings of spontaneous speech were available from 40 patients. From each speech sample, a file of F0 (pitch) and formant values (F1 and F2, resonance bands indicating the moment-by-moment shape of the oral cavity), and the portion of the recording in which there was speaking ("fraction voiced," FV), was created. Correlations between variability in the phonetic indices and negative symptom severity were tested and further examined using regression analyses. Meaningful negative correlations between Scale for the Assessment of Negative Symptoms (SANS) total score and standard deviation (SD) of F2, as well as variability in pitch (SD F0) were observed in the Italian sample. We also found meaningful associations of SANS affective flattening and SANS alogia with SD F0, and of SANS avolition/apathy and SD F2 in the Italian sample. In both samples, FV was meaningfully correlated with SANS total score, avolition/apathy, and anhedonia/asociality. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. Brief Report: A Mobile Application to Treat Prosodic Deficits in Autism Spectrum Disorder and Other Communication Impairments: A Pilot Study.

    PubMed

    Simmons, Elizabeth Schoen; Paul, Rhea; Shic, Frederick

    2016-01-01

    This study examined the acceptability of a mobile application, SpeechPrompts, designed to treat prosodic disorders in children with ASD and other communication impairments. Ten speech-language pathologists (SLPs) in public schools and 40 of their students, aged 5-19 years with prosody deficits, participated. Students received treatment with the software over eight weeks. Pre- and post-treatment speech samples and student engagement data were collected. Feedback on the utility of the software was also obtained. SLPs implemented the software with their students in an authentic education setting. Student engagement ratings indicated students' attention to the software was maintained during treatment. Although more testing is warranted, post-treatment prosody ratings suggest that SpeechPrompts has potential to be a useful tool in the treatment of prosodic disorders.

  4. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using Blind Source Separation is discussed. The separating algorithm is based on mutual information minimization, with the natural gradient algorithm used for the minimization. This requires estimating the score function from samples of the observation signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared against a separating algorithm based on the Minimum Mean Square Error estimator, indicate better performance and shorter processing time.
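    A minimal natural-gradient ICA sketch of the kind of separation described above. Note that the fixed tanh score function here is a textbook stand-in for super-Gaussian sources, not the paper's Gaussian-mixture kernel-density score estimate:

    ```python
    import numpy as np

    def natural_gradient_ica(x, lr=0.01, iters=2000, seed=0):
        """Minimal natural-gradient ICA for an n x T mixture matrix x.
        Iterates W += lr * (I - E[phi(y) y^T]) W with phi = tanh,
        a common fixed score function for super-Gaussian sources."""
        rng = np.random.default_rng(seed)
        n, T = x.shape
        W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        for _ in range(iters):
            y = W @ x
            score = np.tanh(y)  # stand-in score function phi(y)
            W += lr * (np.eye(n) - (score @ y.T) / T) @ W
        return W

    # toy demo: unmix two super-Gaussian (Laplacian) sources
    rng = np.random.default_rng(1)
    s = rng.laplace(size=(2, 5000))
    A = np.array([[1.0, 0.6], [0.4, 1.0]])
    W = natural_gradient_ica(A @ s)
    # each row of W @ A should be dominated by a single source
    # (up to the usual ICA permutation and scaling ambiguity)
    ```

    The paper's contribution sits precisely in the `score` line: replacing the fixed tanh with a score function estimated from the data via Gaussian-mixture kernel density estimation.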

  5. Voice Conversion Using Pitch Shifting Algorithm by Time Stretching with PSOLA and Re-Sampling

    NASA Astrophysics Data System (ADS)

    Mousa, Allam

    2010-01-01

    Voice changing has many applications in the industrial and commercial fields. This paper emphasizes voice conversion using a pitch-shifting method that detects the pitch of the signal (fundamental frequency) using Simplified Inverse Filter Tracking (SIFT) and changes it to the target pitch period using time stretching with the Pitch Synchronous Overlap-Add (PSOLA) algorithm, then resamples the signal so that it plays at the original rate. The same study was performed to see the effect of voice conversion on Arabic speech signals. Treatment of certain Arabic voiced vowels and conversion between male and female speech have shown some expansion or compression in the resulting speech. A comparison in terms of pitch shifting is presented. Analysis was performed for a single frame and for a full segmentation of speech.
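    The resampling step can be sketched as follows. This naive version shifts all frequencies but also changes the duration, which is exactly why the paper pairs it with PSOLA time stretching; it is illustrative only, not the paper's implementation:

    ```python
    import numpy as np

    def resample_pitch_shift(signal, shift_factor):
        """Naive pitch shift by resampling: reading the signal at a
        different rate scales every frequency by shift_factor, but also
        shortens (or lengthens) it by the same factor -- the duration
        change that PSOLA time stretching compensates for."""
        n_out = int(len(signal) / shift_factor)
        old_idx = np.arange(n_out) * shift_factor
        # linear interpolation at the new (fractional) sample positions
        return np.interp(old_idx, np.arange(len(signal)), signal)

    # a 10 Hz tone sampled at 1 kHz, shifted up by a factor of 1.5 -> ~15 Hz
    fs = 1000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 10 * t)
    shifted = resample_pitch_shift(tone, 1.5)  # 1.5x higher, 2/3 the length
    ```

    A full PSOLA implementation would instead duplicate or drop whole pitch periods at pitch-synchronous marks before resampling, preserving the spectral envelope far better than this sketch.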

  6. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans.

    PubMed

    Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth

    2006-10-01

    This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.

  7. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans (L)

    PubMed Central

    Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth

    2007-01-01

    This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given. PMID:17069275

  8. Longitudinal Patterns of Behaviour Problems in Children with Specific Speech and Language Difficulties: Child and Contextual Factors

    ERIC Educational Resources Information Center

    Lindsay, Geoff; Dockrell, Julie E.; Strand, Steve

    2007-01-01

    Background: The purpose of this study was to examine the stability of behavioural, emotional and social difficulties (BESD) in children with specific speech and language difficulties (SSLD), and the relationship between BESD and the language ability. Methods: A sample of children with SSLD were assessed for BESD at ages 8, 10 and 12 years by both…

  9. Auditory, Visual, and Auditory-Visual Speech Perception by Individuals with Cochlear Implants versus Individuals with Hearing Aids

    ERIC Educational Resources Information Center

    Most, Tova; Rothem, Hilla; Luntz, Michal

    2009-01-01

    The researchers evaluated the contribution of cochlear implants (CIs) to speech perception by a sample of prelingually deaf individuals implanted after age 8 years. This group was compared with a group with profound hearing impairment (HA-P), and with a group with severe hearing impairment (HA-S), both of which used hearing aids. Words and…

  10. Setting up a cohort study in speech and language therapy: lessons from The UK Cleft Collective Speech and Language (CC-SL) study.

    PubMed

    Wren, Yvonne; Humphries, Kerry; Stock, Nicola Marie; Rumsey, Nichola; Lewis, Sarah; Davies, Amy; Bennett, Rhiannon; Sandy, Jonathan

    2018-05-01

    Efforts to increase the evidence base in speech and language therapy are often limited by methodological factors that have restricted the strength of the evidence to the lower levels of the evidence hierarchy. Where higher graded studies, such as randomized controlled trials, have been carried out, it has sometimes been difficult to obtain sufficient power to detect a potential effect of intervention owing to small sample sizes or heterogeneity in the participants. With certain clinical groups such as cleft lip and palate, systematic reviews of intervention studies have shown that there is no robust evidence to support the efficacy of any one intervention protocol over another. To describe the setting up of an observational clinical cohort study and to present this as an alternative design for answering research questions relating to prevalence, risk factors and outcomes from intervention. The Cleft Collective Speech and Language (CC-SL) study is a national cohort study of children born with cleft palate. Working in partnership with regional clinical cleft centres, a sample size of over 600 children and 600 parents is being recruited and followed up from birth to age 5 years. Variables being collected include demographic, psychological, surgical, hearing, and speech and language data. The process of setting up the study has led to the creation of a unique, large-scale data set which is available for researchers to access now and in future. As well as exploring predictive factors, the data can be used to explore the impact of interventions in relation to individual differences. Findings from these investigations can be used to provide information on sample criteria and definitions of intervention and dosage which can be used in future trials. The observational cohort study is a useful alternative design to explore questions around prevalence, risk factors and intervention for clinical groups where robust research data are not yet available. Findings from such a study can be used to guide service-delivery decisions and to determine power for future clinical trials. © 2017 Royal College of Speech and Language Therapists.

  11. Validity and Reliability of Visual Analog Scaling for Assessment of Hypernasality and Audible Nasal Emission in Children With Repaired Cleft Palate.

    PubMed

    Baylis, Adriane; Chapman, Kathy; Whitehill, Tara L; The Americleft Speech Group

    2015-11-01

    To investigate the validity and reliability of multiple listener judgments of hypernasality and audible nasal emission, in children with repaired cleft palate, using visual analog scaling (VAS) and equal-appearing interval (EAI) scaling. Prospective comparative study of multiple listener ratings of hypernasality and audible nasal emission. Multisite institutional. Five trained and experienced speech-language pathologist listeners from the Americleft Speech Project. Average VAS and EAI ratings of hypernasality and audible nasal emission/turbulence for 12 video-recorded speech samples from the Americleft Speech Project. Intrarater and interrater reliability was computed, as well as linear and polynomial models of best fit. Intrarater and interrater reliability was acceptable for both rating methods; however, reliability was higher for VAS as compared to EAI ratings. When VAS ratings were plotted against EAI ratings, results revealed a stronger curvilinear relationship. The results of this study provide additional evidence that alternate rating methods such as VAS may offer improved validity and reliability over EAI ratings of speech. VAS should be considered a viable method for rating hypernasality and nasal emission in speech in children with repaired cleft palate.

  12. The Mechanism of Speech Processing in Congenital Amusia: Evidence from Mandarin Speakers

    PubMed Central

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results. PMID:22347374

  13. Stuttering treatment for a school-age child with Down syndrome: a descriptive case report.

    PubMed

    Harasym, Jessica; Langevin, Marilyn

    2012-12-01

    Little is known about optimal treatment approaches and stuttering treatment outcomes for children with Down syndrome. The purpose of this study was to investigate outcomes for a child with Down syndrome who received a combination of fluency shaping therapy and parent delivered contingencies for normally fluent speech, prolonged speech, and stuttered speech. In-clinic speech measures obtained at post-treatment and at 4 months follow-up reflected improvements in fluency of 89.0% and 98.6%, respectively. The participant's beyond-clinic follow-up sample reflected an improvement of 95.5%. Following treatment, the participant demonstrated improved self-confidence, self-esteem, and improved participation and functioning at school. Findings suggest that fluency shaping with parental contingencies may be a viable treatment approach to reduce stuttering in children with Down syndrome. Future research using an experimental research design is warranted. Readers will be able to describe (a) prevalence estimates of stuttering in individuals with Down syndrome, (b) the main components of a fluency shaping program for a child with Down syndrome who stutters and has co-occurring speech and language delays, and (c) speech and parent-, teacher-, and self-report treatment outcomes. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. The mechanism of speech processing in congenital amusia: evidence from Mandarin speakers.

    PubMed

    Liu, Fang; Jiang, Cunmei; Thompson, William Forde; Xu, Yi; Yang, Yufang; Stewart, Lauren

    2012-01-01

    Congenital amusia is a neuro-developmental disorder of pitch perception that causes severe problems with music processing but only subtle difficulties in speech processing. This study investigated speech processing in a group of Mandarin speakers with congenital amusia. Thirteen Mandarin amusics and thirteen matched controls participated in a set of tone and intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on word discrimination in natural speech and their gliding tone analogs. They also performed worse than controls on discriminating gliding tone sequences derived from statements and questions, and showed elevated thresholds for pitch change detection and pitch direction discrimination. However, they performed as well as controls on word identification, and on statement-question identification and discrimination in natural speech. Overall, tasks that involved multiple acoustic cues to communicative meaning were not impacted by amusia. Only when the tasks relied mainly on pitch sensitivity did amusics show impaired performance compared to controls. These findings help explain why amusia only affects speech processing in subtle ways. Further studies on a larger sample of Mandarin amusics and on amusics of other language backgrounds are needed to consolidate these results.

  15. Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers.

    PubMed

    Magnotti, John F; Basu Mallick, Debshila; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S

    2015-09-01

    Humans combine visual information from mouth movements with auditory information from the voice to recognize speech. A common method for assessing multisensory speech perception is the McGurk effect: When presented with particular pairings of incongruent auditory and visual speech syllables (e.g., the auditory speech sounds for "ba" dubbed onto the visual mouth movements for "ga"), individuals perceive a third syllable, distinct from the auditory and visual components. Chinese and American cultures differ in the prevalence of direct facial gaze and in the auditory structure of their languages, raising the possibility of cultural- and language-related group differences in the McGurk effect. There is no consensus in the literature about the existence of these group differences, with some studies reporting less McGurk effect in native Mandarin Chinese speakers than in English speakers and others reporting no difference. However, these studies sampled small numbers of participants tested with a small number of stimuli. Therefore, we collected data on the McGurk effect from large samples of Mandarin-speaking individuals from China and English-speaking individuals from the USA (total n = 307) viewing nine different stimuli. Averaged across participants and stimuli, we found similar frequencies of the McGurk effect between Chinese and American participants (48% vs. 44%). In both groups, we observed a large range of frequencies both across participants (0% to 100%) and stimuli (15% to 83%), with the main effect of culture and language accounting for only 0.3% of the variance in the data. High individual variability in perception of the McGurk effect necessitates the use of large sample sizes to accurately estimate group differences.

  16. Cryopreservation of human spermatozoa. I. Effects of holding procedure and seeding on motility, fertilizability, and acrosome reaction.

    PubMed

    Critser, J K; Huse-Benda, A R; Aaker, D V; Arneson, B W; Ball, G D

    1987-04-01

    Three experiments were conducted to evaluate effects of holding semen at +5.0 degrees C for 30 minutes or -5.0 degrees C for 10 minutes and ice crystal induction (seeding) on frozen-thawed human spermatozoa. In experiment 1, spermatozoa were frozen, and postthaw motility was evaluated immediately (0 hour) and 24 hours later. At both 0 and 24 hours, nonfrozen control samples had higher motility than all other treatment groups. At 0 hour postthaw, motility was higher in samples held at -5.0 degrees C for 10 minutes with no significant effect of seeding. At 24 hours, samples held at -5.0 degrees C for 10 minutes and seeded, but not samples held at -5.0 degrees C and not seeded, had higher motility than samples held at +5.0 degrees C. In experiment 2, semen samples were frozen, and fertilizability was evaluated in a zona-free hamster egg penetration assay. Seeded samples had a higher frequency of sperm penetration than either nonfrozen or nonseeded samples. In experiment 3, nonfrozen controls and frozen treatment groups were evaluated for the frequency of survival and acrosomal integrity. Seeded samples had higher frequencies of survival and loss of acrosomal integrity than nonseeded samples. All frozen-thawed samples had a lower frequency of survival and a higher frequency of loss of acrosomal integrity than nonfrozen controls. Although altered patterns of fertilizability and acrosomal integrity are induced, collectively these data suggest that incorporating a holding temperature of -5.0 degrees C for 10 minutes and seeding may result in a superior protocol for freezing human spermatozoa.

  17. [Multidimensionality of inner speech and its relationship with abnormal perceptions].

    PubMed

    Tamayo-Agudelo, William; Vélez-Urrego, Juan David; Gaviria-Castaño, Gilberto; Perona-Garcelán, Salvador

    Inner speech is a common human experience. Recently, there have been studies linking this experience with cognitive functions, such as problem solving, reading, writing, and autobiographical memory, and with disorders such as anxiety and depression. In addition, inner speech is recognised as the main source of auditory hallucinations. The main purpose of this study is to establish the factor structure of the Varieties of Inner Speech Questionnaire (VISQ) in a sample of the Colombian population. Furthermore, it aims at establishing a link between the VISQ and abnormal perceptions. This was a cross-sectional study in which 232 college students were assessed using the VISQ and the Cardiff Anomalous Perceptions Scale (CAPS). Through an exploratory factor analysis, a structure of three factors was found: Other Voices in Inner Speech, Condensed Inner Speech, and Dialogical/Evaluative Inner Speech, all of them with acceptable levels of reliability. Gender differences were found in the second and third factors, with higher averages for women. Positive correlations were found among the three VISQ factors and the two CAPS factors: Multimodal Perceptual Alterations and Experiences Associated with the Temporal Lobe. The results are consistent with previous findings linking the factors of inner speech with the propensity to auditory hallucination, a phenomenon widely associated with temporal lobe abnormalities. The hallucinations associated with other perceptual systems, however, remain weakly explained. Copyright © 2016 Asociación Colombiana de Psiquiatría. Published by Elsevier España. All rights reserved.

  18. Objective voice and speech analysis of persons with chronic hoarseness by prosodic analysis of speech samples.

    PubMed

    Haderlein, Tino; Döllinger, Michael; Matoušek, Václav; Nöth, Elmar

    2016-10-01

    Automatic voice assessment is often performed using sustained vowels. In contrast, speech analysis of read-out texts can be applied to voice and speech assessment. Automatic speech recognition and prosodic analysis were used to find regression formulae between automatic and perceptual assessment of four voice and four speech criteria. The regression was trained with 21 men and 62 women (average age 49.2 years) and tested with another set of 24 men and 49 women (48.3 years), all suffering from chronic hoarseness. They read the text 'Der Nordwind und die Sonne' ('The North Wind and the Sun'). Five voice and speech therapists evaluated the data on 5-point Likert scales. Ten prosodic and recognition accuracy measures (features) were identified which describe all the examined criteria. Inter-rater correlation within the expert group was between r = 0.63 for the criterion 'match of breath and sense units' and r = 0.87 for the overall voice quality. Human-machine correlation was between r = 0.40 for the match of breath and sense units and r = 0.82 for intelligibility. The perceptual ratings of different criteria were highly correlated with each other. Likewise, the feature sets modeling the criteria were very similar. The automatic method is suitable for assessing chronic hoarseness in general and for subgroups of functional and organic dysphonia. In its current version, it is almost as reliable as a randomly picked rater from a group of voice and speech therapists.

  19. Speech outcome in unilateral complete cleft lip and palate patients: a descriptive study.

    PubMed

    Rullo, R; Di Maggio, D; Addabbo, F; Rullo, F; Festa, V M; Perillo, L

    2014-09-01

    In this study, resonance and articulation disorders were examined in a group of patients surgically treated for cleft lip and palate, considering family social background and the children's ability to self-monitor their speech output while speaking. Fifty children (32 males and 18 females), mean age 6.5 ± 1.6 years, affected by non-syndromic complete unilateral cleft of the lip and palate underwent the same surgical protocol. The speech level was evaluated using Accordi's speech assessment protocol, which focuses on intelligibility, nasality, nasal air escape, pharyngeal friction, and glottal stop. Pearson product-moment correlation analysis was used to detect significant associations between the analysed parameters. A total of 16% (8 children) of the sample had a severe to moderate degree of nasality and nasal air escape, with presence of pharyngeal friction and glottal stop, which obviously compromise speech intelligibility. Ten children (20%) showed a barely acceptable phonological outcome: nasality and nasal air escape were mild to moderate, but intelligibility remained poor. Thirty-two children (64%) had normal speech. Statistical analysis revealed a significant correlation between the severity of nasal resonance and nasal air escape (p ≤ 0.05). No statistically significant correlation was found between final intelligibility and patient social background, nor between final intelligibility and the age of the patients. The differences in speech outcome could be explained by a specific, subjective, and inborn ability, different for each child, in self-monitoring their speech output.

  20. In Vitro UV-Visible Spectroscopy Study of Yellow Laser Irradiation on Human Blood

    NASA Astrophysics Data System (ADS)

    Fuad, Siti Sakinah Mohd; Suardi, N.; Mustafa, I. S.

    2018-04-01

    This experimental study was performed to investigate the effect of low-level yellow laser light of 589 nm wavelength at various irradiation times. Human blood samples with random diseases were irradiated with yellow laser light at a power density of 450 mW/cm² for 10 to 60 minutes at 10-minute intervals. The morphology of the red blood cells was also observed for the different irradiation times. The results show a significant difference in the absorption of light with varying laser irradiation time (p < 0.01). The maximum absorption was recorded at the 340 nm peak after 40 minutes of irradiation. Blood smears of the samples reveal observable changes in the morphology of the red blood cells at 40 and 60 minutes of irradiation.

  1. Swahili speech development: preliminary normative data from typically developing pre-school children in Tanzania.

    PubMed

    Gangji, Nazneen; Pascoe, Michelle; Smouse, Mantoa

    2015-01-01

    Swahili is widely spoken in East Africa, but to date there are no culturally and linguistically appropriate materials available for speech-language therapists working in the region. The challenges are further exacerbated by the limited research available on the typical acquisition of Swahili phonology. To describe the speech development of 24 typically developing first-language Swahili-speaking children between the ages of 3;0 and 5;11 years in Dar es Salaam, Tanzania. A cross-sectional design was used with six groups of four children in 6-month age bands. Single-word speech samples were obtained from each child using a set of culturally appropriate pictures designed to elicit all consonants and vowels of Swahili. Each child's speech was audio-recorded and phonetically transcribed using International Phonetic Alphabet (IPA) conventions. Children's speech development is described in terms of (1) phonetic inventory, (2) syllable structure inventory, (3) phonological processes and (4) percentage consonants correct (PCC) and percentage vowels correct (PVC). Results suggest a gradual progression in the acquisition of speech sounds and syllables between the ages of 3;0 and 5;11 years. Vowel acquisition was complete and most consonants were acquired by age 3;0. Fricatives /z, s, h/ were acquired later, at 4 years, and /θ/ and /r/ were the last consonants acquired, at age 5;11. Older children were able to produce speech sounds more accurately and had fewer phonological processes in their speech than younger children. Common phonological processes included lateralization and sound preference substitutions. The study contributes a preliminary set of normative data on the speech development of Swahili-speaking children. Findings are discussed in relation to theories of phonological development, and may be used as a basis for further normative studies with larger numbers of children and ultimately the development of a contextually relevant assessment of the phonology of Swahili-speaking children. © 2014 Royal College of Speech and Language Therapists.

  2. Selling the story: narratives and charisma in adults with TBI.

    PubMed

    Jones, Corinne A; Turkstra, Lyn S

    2011-01-01

    To examine storytelling performance behaviours in adults with traumatic brain injury (TBI) and relate these behaviours to perceived charisma and desirability as a conversation partner. Seven adult males with traumatic brain injury (TBI) told their accident narratives to a male confederate. Ten male undergraduate students rated 1-minute video clips from the beginning of each narrative using the Charismatic Leadership Communication Scale (CLCS). Raters also indicated whether or not they would like to engage in conversation with each participant. Of the performative behaviours analysed, gestures alone significantly influenced CLCS ratings and reported likelihood of engaging in future conversation with the participant. Post-hoc analysis revealed that speech rate was significantly correlated with all of the preceding measures. There was a significant correlation between self- and other-ratings of charisma. The findings suggest that aspects of non-verbal performance, namely gesture use and speech rate, influence how charismatic an individual is perceived to be and how likely someone is to engage in conversation with that person. Variability in these performance behaviours may contribute to the variation in social outcomes seen in the TBI population.

  3. The case of the combative CFO.

    PubMed

    Nichols, N A; Bagley, C E; Beard, E; Dworkin, D; Millstein, I M

    1992-01-01

    Minute Publishing Chairman and CEO Neil Harcum has a right to be proud of his new national newspaper, America Today. It has won three Pulitzer Prizes and attracted one million readers in just three years of publication. But, as CFO Peter Rawson points out, it's also losing $100 million a year and has broken Minute's 20-year string of earnings gains. In the process, the company has been split between two warring factions: one is backing Harcum and favors continuing the paper. The other agrees with Rawson that the project must be stopped. The board of directors has been assembled to decide the newspaper's fate. In his speech to the board, Rawson says it's time to cut Minute's losses and put an end to America Today. And Wall Street agrees. Several brokerage houses have taken Minute off their buy lists, and rating agencies are about to downgrade the company's debt. "America Today is not a good investment," Rawson argues. "Certainly, it isn't in keeping with our commitment to deliver maximum value to our shareholders." But Harcum thinks Rawson is way out of line. "We cannot allow our bean-counters to set policy," he claims. Harcum sees the newspaper as a product of the future that has created its own market. It's only a matter of time before America Today attracts enough advertising to put it in the black. He has a successful track record, and he doesn't want the board to lose faith in him now.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Speech-on-speech masking with variable access to the linguistic content of the masker speech for native and non-native speakers of English

    PubMed Central

    Calandruccio, Lauren; Bradlow, Ann R.; Dhar, Sumitrajit

    2013-01-01

    Background Masking release for an English sentence-recognition task in the presence of foreign-accented English speech compared to native-accented English speech was reported in Calandruccio, Dhar and Bradlow (2010). The masking release appeared to increase as the masker intelligibility decreased. However, it could not be ruled out that spectral differences between the speech maskers were influencing the significant differences observed. Purpose The purpose of the current experiment was to minimize spectral differences between speech maskers to determine how various amounts of linguistic information within competing speech affect masking release. Research Design A mixed-model design with within-subject (four two-talker speech maskers) and between-subject (listener group) factors was used. Speech maskers included native-accented English speech and high-intelligibility, moderate-intelligibility, and low-intelligibility Mandarin-accented English. Normalizing the long-term average speech spectra of the maskers to each other minimized spectral differences between the masker conditions. Study Sample Three listener groups were tested: monolingual English speakers with normal hearing, non-native speakers of English with normal hearing, and monolingual speakers of English with hearing loss. The non-native speakers of English were from various native-language backgrounds, not including Mandarin (or any other Chinese dialect). Listeners with hearing loss had symmetrical, mild sloping to moderate sensorineural hearing loss. Data Collection and Analysis Listeners were asked to repeat back sentences that were presented in the presence of four different two-talker speech maskers. Responses were scored based on the keywords within the sentences (100 keywords/masker condition). A mixed-model regression analysis was used to analyze the difference in performance scores between the masker conditions and the listener groups.
Results Monolingual speakers of English with normal hearing benefited when the competing speech signal was foreign-accented rather than native-accented, allowing for improved speech recognition. Various levels of intelligibility across the foreign-accented speech maskers did not influence results. Neither the non-native English listeners with normal hearing nor the monolingual English speakers with hearing loss benefited from masking release when the masker was changed from native-accented to foreign-accented English. Conclusions Slight modifications between the target and the masker speech allowed monolingual speakers of English with normal hearing to improve their recognition of native-accented English even when the competing speech was highly intelligible. Further research is needed to determine which modifications within the competing speech signal caused the Mandarin-accented English to be less effective with respect to masking. Determining the influences within the competing speech that make it less effective as a masker, or determining why monolingual normal-hearing listeners can take advantage of these differences, could help improve speech recognition for those with hearing loss in the future. PMID:25126683

  5. A speech and psychological profile of treatment-seeking adolescents who stutter.

    PubMed

    Iverach, Lisa; Lowe, Robyn; Jones, Mark; O'Brian, Susan; Menzies, Ross G; Packman, Ann; Onslow, Mark

    2017-03-01

    The purpose of this study was to evaluate the relationship between stuttering severity, psychological functioning, and overall impact of stuttering in a large sample of adolescents who stutter. Participants were 102 adolescents (11-17 years) seeking speech treatment for stuttering, including 86 boys and 16 girls, classified into younger (11-14 years, n=57) and older (15-17 years, n=45) adolescents. Linear regression models were used to evaluate the relationship between speech and psychological variables and overall impact of stuttering. The impact of stuttering during adolescence is influenced by a complex interplay of speech and psychological variables. Anxiety and depression scores fell within normal limits. However, higher self-reported stuttering severity predicted higher anxiety and internalizing problems. Boys reported externalizing problems (aggression, rule-breaking) in the clinical range, and girls reported total problems in the borderline-clinical range. Overall, higher scores on measures of anxiety, stuttering severity, and speech dissatisfaction predicted a more negative overall impact of stuttering. To our knowledge, this is the largest cohort study of adolescents who stutter. Higher stuttering severity, speech dissatisfaction, and anxiety predicted a more negative overall impact of stuttering, indicating the importance of carefully managing the speech and psychological needs of adolescents who stutter. Further research is needed to understand the relationship between stuttering and externalizing problems for adolescent boys who stutter. Copyright © 2016. Published by Elsevier Inc.

  6. A greater reduction in high-frequency heart rate variability to a psychological stressor is associated with subclinical coronary and aortic calcification in postmenopausal women.

    PubMed

    Gianaros, Peter J; Salomon, Kristen; Zhou, Fan; Owens, Jane F; Edmundowicz, Daniel; Kuller, Lewis H; Matthews, Karen A

    2005-01-01

    Reduced cardiac parasympathetic activity, as indicated by a reduced level of clinic or ambulatory high-frequency heart rate variability (HF-HRV), is associated with an increased risk for atherosclerosis and coronary artery disease. We tested whether the reduction in HF-HRV to a psychological stressor relative to a baseline level is also associated with subclinical coronary or aortic atherosclerosis, as assessed by calcification in these vascular regions. Spectral estimates of 0.15 to 0.40 Hz HF-HRV were obtained from 94 postmenopausal women (61-69 years) who engaged in a 3-minute speech-preparation stressor after a 6-minute resting baseline. A median of 282 days later, electron beam tomography (EBT) was used to measure the extent of coronary and aortic calcification. In univariate analyses, a greater reduction in HF-HRV from baseline to speech preparation was associated with having more extensive calcification in the coronary arteries (rho = -0.29, p = .03) and in the aorta (rho = -0.22, p = .06). In multivariate analyses that controlled for age, education level, smoking status, hormone therapy use, fasting glucose, high-density lipoproteins, baseline HF-HRV, and the stressor-induced change in respiration rate, a greater stressor-induced reduction in HF-HRV was associated with more calcification in the coronary arteries (B = -1.21, p < .05), and it was marginally associated with more calcification in the aorta (B = -0.92, p = .09). In postmenopausal women, a greater reduction in cardiac parasympathetic activity to a psychological stressor from baseline may be an independent correlate of subclinical atherosclerosis, particularly in the coronary arteries.
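
    The HF-HRV band power at the heart of this study can be sketched in a few lines. This is a minimal illustration, not the authors' spectral pipeline: it interpolates an RR-interval series onto a uniform grid, removes the mean, and integrates a one-sided periodogram over the 0.15 to 0.40 Hz band. The toy RR series and all parameter choices below are assumptions for demonstration only.

```python
import numpy as np

def hf_power(rr_ms, fs=4.0):
    """Estimate high-frequency (0.15-0.40 Hz) HRV power from RR intervals (ms).

    The irregularly spaced RR series is interpolated onto a uniform grid,
    mean-removed, and a periodogram is integrated over the HF band.
    """
    rr_ms = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # uniform resampling grid
    x = np.interp(grid, t, rr_ms)
    x -= x.mean()                                 # remove DC offset
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)  # one-sided periodogram
    band = (freqs >= 0.15) & (freqs <= 0.40)
    return psd[band].sum() * (freqs[1] - freqs[0])  # band power in ms^2

# Toy check: an RR series modulated at a respiratory-band rate (~0.25 Hz)
# carries HF power, while a perfectly constant RR series carries none.
beats = np.arange(400)
rr_hf = 800 + 50 * np.sin(2 * np.pi * 0.25 * beats * 0.8)  # ~0.8 s per beat
rr_flat = np.full(400, 800.0)
print(hf_power(rr_hf) > hf_power(rr_flat))  # True
```

A reduction of this band power from a resting baseline to a speech-preparation task is the kind of stressor-induced change the study correlates with calcification.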

  7. A behavior analytic analogue of learning to use synonyms, syntax, and parts of speech.

    PubMed

    Chase, Philip N; Ellenwood, David W; Madden, Gregory

    2008-01-01

    Matching-to-sample and sequence training procedures were used to develop responding to stimulus classes that were considered analogous to 3 aspects of verbal behavior: identifying synonyms and parts of speech, and using syntax. Matching-to-sample procedures were used to train 12 paired associates from among 24 stimuli. These pairs were analogous to synonyms. Then, sequence characteristics were trained to 6 of the stimuli. The result was the formation of 3 classes of 4 stimuli, with the classes controlling a sequence response analogous to a simple ordering syntax: first, second, and third. Matching-to-sample procedures were then used to add 4 stimuli to each class. These stimuli, without explicit sequence training, also began to control the same sequence responding as the other members of their class. Thus, three 8-member functionally equivalent sequence classes were formed. These classes were considered to be analogous to parts of speech. Further testing revealed three 8-member equivalence classes and 512 different sequences of first, second, and third. The study indicated that behavior analytic procedures may be used to produce some generative aspects of verbal behavior related to simple syntax and semantics.
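
    The 512 sequences reported above follow directly from the class structure: choosing one stimulus from each of the three 8-member classes for the first, second, and third positions gives 8 × 8 × 8 = 512 orderings. A quick sketch (the class labels are hypothetical, not the study's stimuli):

```python
from itertools import product

# Hypothetical labels for the three 8-member functionally equivalent classes
first = [f"A{i}" for i in range(8)]
second = [f"B{i}" for i in range(8)]
third = [f"C{i}" for i in range(8)]

# One stimulus from each ordinal class per sequence: 8 * 8 * 8 orderings
sequences = list(product(first, second, third))
print(len(sequences))  # 512
```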

  8. Neuromorphic crossbar circuit with nanoscale filamentary-switching binary memristors for speech recognition.

    PubMed

    Truong, Son Ngoc; Ham, Seok-Jin; Min, Kyeong-Sik

    2014-01-01

    In this paper, a neuromorphic crossbar circuit with binary memristors is proposed for speech recognition. Binary memristors, which are based on a filamentary-switching mechanism, are more readily available and easier to fabricate than analog memristors, which require rare materials and a more complicated fabrication process. Thus, we develop a neuromorphic crossbar circuit using filamentary-switching binary memristors rather than interface-switching analog memristors. The proposed binary memristor crossbar can recognize five vowels over 4-bit 64 input channels. The proposed crossbar is tested with 2,500 speech samples and verified to recognize 89.2% of the tested samples. From the statistical simulation, the recognition rate of the binary memristor crossbar is estimated to degrade only from 89.2% to 80% as the percentage variation in memristance increases from 0% to 15%. In contrast, the analog memristor crossbar loses its recognition rate dramatically, from 96% to 9%, for the same percentage variation in memristance.

  9. Factors involved in the identification of stuttering severity in a foreign language.

    PubMed

    Cosyns, Marjan; Einarsdóttir, Jóhanna T; Van Borsel, John

    2015-01-01

    Speech-language pathologists are increasingly confronted with clients who speak a language different from their own mother tongue. The assessment of persons who speak a foreign language poses particular challenges. The present study investigated the possible role and interplay of factors involved in the identification of stuttering severity in a foreign language. Nineteen speech-language pathologists from five different countries (i.e. Iceland, Sweden, Norway, Finland, and Belgium) rated the stuttering severity of speech samples featuring persons who stutter speaking Icelandic, Swedish, Norwegian, or Dutch. Additionally, they were asked to score how easy they found it to rate the samples. Accuracy in rating stuttering severity in another language appeared to be foremost determined by the client's stuttering severity, while the experienced ease of rating was essentially related to the closeness of the clinician's language to that of the client and the clinician's familiarity with the client's language. Stuttering measurement training programmes in different languages are needed.

  10. The Words Children Hear: Picture Books and the Statistics for Language Learning.

    PubMed

    Montag, Jessica L; Jones, Michael N; Smith, Linda B

    2015-09-01

    Young children learn language from the speech they hear. Previous work suggests that greater statistical diversity of words and of linguistic contexts is associated with better language outcomes. One potential source of lexical diversity is the text of picture books that caregivers read aloud to children. Many parents begin reading to their children shortly after birth, so this is potentially an important source of linguistic input for many children. We constructed a corpus of 100 children's picture books and compared word type and token counts in that sample and a matched sample of child-directed speech. Overall, the picture books contained more unique word types than the child-directed speech. Further, individual picture books generally contained more unique word types than length-matched, child-directed conversations. The text of picture books may be an important source of vocabulary for young children, and these findings suggest a mechanism that underlies the language benefits associated with reading to children. © The Author(s) 2015.
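
    The type/token comparison underlying these findings can be illustrated with a toy sketch. The snippets below are invented, not drawn from the study's corpus; they only show how token counts (all word occurrences) and type counts (unique words) are computed:

```python
from collections import Counter

def type_token_counts(text):
    """Return (token count, type count) for a whitespace-tokenized text."""
    tokens = text.lower().split()
    return len(tokens), len(Counter(tokens))

# Invented examples: book-like text tends to pack more unique word types
# into a similar number of tokens than repetitive child-directed talk.
book = "the very hungry caterpillar ate one red apple and one green pear"
chat = "look at the doggy the doggy is big look look"
print(type_token_counts(book))  # (12, 11)
print(type_token_counts(chat))  # (10, 6)
```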

  11. Longitudinal Study of Language and Speech of Twins at 4 and 6 Years: Twinning Effects Decrease, Zygosity Effects Disappear, and Heritability Increases

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Zubrick, Stephen R.; Taylor, Catherine L.; Hoffman, Lesa; Gayán, Javier

    2018-01-01

    Purpose: This study investigates the heritability of language, speech, and nonverbal cognitive development of twins at 4 and 6 years of age. Possible confounding effects of twinning and zygosity, evident at 2 years, were investigated among other possible predictors of outcomes. Method: The population-based twin sample included 627 twin pairs and 1…

  12. Is the hand to speech what speech is to the hand?

    PubMed

    Mildner, V

    2000-01-01

    Interference between the manual and the verbal performance on two types of concurrent verbal-manual tasks was studied on a sample of 48 female right-handers. The more complex verbal task (storytelling) affected both hands significantly, the less complex (essentially phonemic) task affected only the right hand, with insignificant negative influence on the left-hand performance. No significant reciprocal effects of the motor task on verbalization were found.

  13. Sibling relationships in adults who have siblings with or without intellectual disabilities.

    PubMed

    Doody, Mairéad A; Hastings, Richard P; O'Neill, Sarah; Grey, Ian M

    2010-01-01

    There is relatively little research on the relationships between adults with intellectual disability and their siblings, despite the potential importance of these relationships for either individual's psychological well-being and future care roles that might be adopted by adult siblings. In the present study, sibling relationships of adults with adult siblings with (N=63) and without (N=123) intellectual disability were explored. Contact, warmth, conflict, and rivalry were measured using questionnaires available as an on-line survey. Expressed emotion was measured using the Five Minute Speech Sample over the telephone to establish an independently coded measure of criticism from the participant towards their sibling. Overall, there were few group differences in contact and sibling relationship. There was less telephone contact in the intellectual disability group, and less reported warmth in the relationship with siblings with intellectual disability although this was mainly associated with severe/profound intellectual disability. Exploratory analyses were conducted of the correlates of sibling relationships in both the intellectual disability and control groups. These analyses revealed a small number of different associations especially for conflict, which was lower when either the participant or sibling was younger in the control group but associated with relative age in the intellectual disability group.

  14. Expressed emotion in mothers of boys with gender identity disorder.

    PubMed

    Owen-Anderson, Allison F H; Bradley, Susan J; Zucker, Kenneth J

    2010-01-01

    The authors examined the construct of expressed emotion in mothers of 20 boys with gender identity disorder (GID), 20 clinical control boys with externalizing disorders (ECC), 20 community control boys (NCB), and 20 community control girls (NCG). The mean age of the children was 6.86 years (SD = 1.46, range = 4-8 years). The authors predicted that the mothers of boys with GID would demonstrate (a) higher percentages of expressed emotion, criticism, and emotional overinvolvement compared with normal controls; and (b) higher percentages of only emotional overinvolvement compared with mothers of boys with externalizing difficulties. They used the Five-Minute Speech Sample (Magana-Amato, A., 1986) to assess maternal expressed emotion. A significantly greater percentage of mothers in both clinical groups were classified as high expressed emotion than mothers in the NCB group. When the authors compared the GID group with all other groups combined, they found that the mothers of boys with GID were classified as having higher levels of a combination of both high or borderline emotional overinvolvement and low criticism than were mothers in the other 3 groups. The authors discuss expressed emotion as a maternal characteristic in the genesis and perpetuation of GID in boys.

  15. Bonding and expressed emotion: two interlinked concepts?

    PubMed

    Duclos, Jeanne; Maria, Anne-Solène; Dorard, Géraldine; Curt, Florence; Apfel, Alexandre; Vibert, Sarah; Rein, Zoé; Perdereau, Fabienne; Godart, Nathalie

    2013-01-01

    Bonding and expressed emotion (EE) are two concepts modeling family relationships. Two studies, with contradictory results, have explored whether these concepts and their corresponding instruments [the Parental Bonding Instrument (PBI) and the Camberwell Family Interview] do indeed measure the same aspects of family relationships. Our first objective was to compare the adolescents' perceptions of family relationships using the PBI, and the parental viewpoint using the Five-Minute Speech Sample (FMSS-EE). Secondly, we compared the PBI scores and EE levels of the parents. Sixty adolescent girls with anorexia nervosa completed the PBI. The FMSS and a modified version of the PBI were administered to parents separately. No significant link was identified between adolescent PBI scores and parental EE levels. However, a link between maternal 'modified' PBI scores and maternal EE was observed: when mothers registered a high Final EE, they were more likely to deny their daughter's psychological autonomy compared to mothers with lower EE. Our empirical results do not support the hypothesis of an overlap between the two concepts. Indeed, bonding and EE measure the same object, i.e. the quality of family relationships, but the time scales differ, as do the perspectives (patient vs. parental viewpoint). Copyright © 2012 S. Karger AG, Basel.

  16. THE NEW YORK CITY URBAN DISPERSION PROGRAM MARCH 2005 FIELD STUDY: TRACER METHODS AND RESULTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WATSON, T.B.; HEISER, J.; KALB, P.

    The Urban Dispersion Program March 2005 Field Study tracer releases, sampling, and analytical methods are described in detail. There were two days where tracer releases and sampling were conducted. A total of 16.0 g of six tracers were released during the first test day or Intensive Observation Period (IOP) 1 and 15.7 g during IOP 2. Three types of sampling instruments were used in this study. Sequential air samplers, or SAS, collected six-minute samples, while Brookhaven atmospheric tracer samplers (BATS) and personal air samplers (PAS) collected thirty-minute samples. There were a total of 1300 samples resulting from the two IOPs.more » Confidence limits in the sampling and analysis method were 20% as determined from 100 duplicate samples. The sample recovery rate was 84%. The integrally averaged 6-minute samples were compared to the 30-minute samples. The agreement was found to be good in most cases. The validity of using a background tracer to calculate sample volumes was examined and also found to have a confidence level of 20%. Methods for improving sampling and analysis are discussed. The data described in this report are available as Excel files. An additional Excel file of quality assured tracer data for use in model validation efforts is also available. The file consists of extensively quality assured BATS tracer data with background concentrations subtracted.« less
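    The 6-minute versus 30-minute comparison described above amounts to integral averaging: five consecutive 6-minute concentrations are combined into one value comparable to a 30-minute sample. A minimal sketch of that bookkeeping (the function name and the equal-sample-volume assumption are ours, not from the report):

    ```python
    def average_to_30min(six_min_concs):
        """Integrally average consecutive 6-minute tracer concentrations
        into 30-minute values (assumes equal sample volumes, so the
        integral average reduces to the arithmetic mean of 5 values)."""
        # Drop any trailing partial block of fewer than five samples.
        usable = len(six_min_concs) - len(six_min_concs) % 5
        return [
            sum(six_min_concs[i:i + 5]) / 5.0
            for i in range(0, usable, 5)
        ]
    ```

    Agreement between such averages and the actual 30-minute BATS/PAS samples is what the report characterizes as "good in most cases".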

  17. A Challenging Issue in the Etiology of Speech Problems: The Effect of Maternal Exposure to Electromagnetic Fields on Speech Problems in the Offspring

    PubMed Central

    Zarei, S.; Mortazavi, S. M. J.; Mehdizadeh, A. R.; Jalalipour, M.; Borzou, S.; Taeb, S.; Haghani, M.; Mortazavi, S. A. R.; Shojaei-fard, M. B.; Nematollahi, S.; Alighanbari, N.; Jarideh, S.

    2015-01-01

    Background Nowadays, mothers are continuously exposed to different sources of electromagnetic fields before and even during pregnancy. It has recently been shown that exposure to mobile phone radiation during pregnancy may lead to adverse effects on brain development in offspring and cause hyperactivity. Researchers have shown that behavioral problems resembling ADHD in laboratory animals are caused by intrauterine exposure to mobile phones. Objective The purpose of this study was to investigate whether maternal exposure to different sources of electromagnetic fields affects the rate and severity of speech problems in offspring. Methods In this study, the mothers of 35 healthy 3-5 year old children (control group) and of 77 children diagnosed with speech problems who had been referred to a speech treatment center in Shiraz, Iran were interviewed. These mothers were asked whether they had exposure to different sources of electromagnetic fields such as mobile phones, mobile base stations, Wi-Fi, cordless phones, laptops and power lines. Results We found a significant association between both call time (P=0.002) and history of mobile phone use (months used) and speech problems in the offspring (P=0.003). However, other exposures had no effect on the occurrence of speech problems. To the best of our knowledge, this is the first study to investigate a possible association between maternal exposure to electromagnetic fields and speech problems in the offspring. Although a major limitation of our study is the relatively small sample size, this study indicates that maternal exposure to common sources of electromagnetic fields such as mobile phones can affect the occurrence of speech problems in the offspring. PMID:26396971

  18. A Challenging Issue in the Etiology of Speech Problems: The Effect of Maternal Exposure to Electromagnetic Fields on Speech Problems in the Offspring.

    PubMed

    Zarei, S; Mortazavi, S M J; Mehdizadeh, A R; Jalalipour, M; Borzou, S; Taeb, S; Haghani, M; Mortazavi, S A R; Shojaei-Fard, M B; Nematollahi, S; Alighanbari, N; Jarideh, S

    2015-09-01

    Nowadays, mothers are continuously exposed to different sources of electromagnetic fields before and even during pregnancy. It has recently been shown that exposure to mobile phone radiation during pregnancy may lead to adverse effects on brain development in offspring and cause hyperactivity. Researchers have shown that behavioral problems resembling ADHD in laboratory animals are caused by intrauterine exposure to mobile phones. The purpose of this study was to investigate whether maternal exposure to different sources of electromagnetic fields affects the rate and severity of speech problems in offspring. In this study, the mothers of 35 healthy 3-5 year old children (control group) and of 77 children diagnosed with speech problems who had been referred to a speech treatment center in Shiraz, Iran were interviewed. These mothers were asked whether they had exposure to different sources of electromagnetic fields such as mobile phones, mobile base stations, Wi-Fi, cordless phones, laptops and power lines. We found a significant association between both call time (P=0.002) and history of mobile phone use (months used) and speech problems in the offspring (P=0.003). However, other exposures had no effect on the occurrence of speech problems. To the best of our knowledge, this is the first study to investigate a possible association between maternal exposure to electromagnetic fields and speech problems in the offspring. Although a major limitation of our study is the relatively small sample size, this study indicates that maternal exposure to common sources of electromagnetic fields such as mobile phones can affect the occurrence of speech problems in the offspring.

  19. Alignment of classification paradigms for communication abilities in children with cerebral palsy.

    PubMed

    Hustad, Katherine C; Oakes, Ashley; McFadd, Emily; Allison, Kristen M

    2016-06-01

    We examined three communication ability classification paradigms for children with cerebral palsy (CP): the Communication Function Classification System (CFCS), the Viking Speech Scale (VSS), and the Speech Language Profile Groups (SLPG). Questions addressed interjudge reliability, whether the VSS and the CFCS captured impairments in speech and language, and whether there were differences in speech intelligibility among levels within each classification paradigm. Eighty children (42 males, 38 females) with a range of types and severity levels of CP participated (mean age 60mo, range 50-72mo [SD 5mo]). Two speech-language pathologists classified each child via parent-child interaction samples and previous experience with the children for the CFCS and VSS, and using quantitative speech and language assessment data for the SLPG. Intelligibility scores were obtained using standard clinical intelligibility measurement. Kappa values were 0.67 (95% confidence interval [CI] 0.55-0.79) for the CFCS, 0.82 (95% CI 0.72-0.92) for the VSS, and 0.95 (95% CI 0.72-0.92) for the SLPG. Descriptively, reliability within levels of each paradigm varied, with the lowest agreement occurring within the CFCS at levels II (42%), III (40%), and IV (61%). Neither the CFCS nor the VSS was sensitive to language impairments captured by the SLPG. Significant differences in speech intelligibility were found among levels for all classification paradigms. Multiple tools are necessary to understand speech, language, and communication profiles in children with CP. Characterization of abilities at all levels of the International Classification of Functioning, Disability and Health will advance our understanding of the ways that speech, language, and communication abilities present in children with CP. © 2015 Mac Keith Press.
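    The interjudge reliability values reported above are kappa statistics, which discount raw percent agreement by the agreement expected from the raters' marginal category frequencies alone. A small illustrative implementation of Cohen's kappa for two raters (the data below are hypothetical, not the study's):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: chance-corrected agreement between two raters'
        categorical classifications (e.g. CFCS/VSS/SLPG levels)."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        # Observed proportion of exact agreements.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal frequencies.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
        return (observed - expected) / (1 - expected)
    ```

    Kappa is 1.0 under perfect agreement and 0 when agreement is no better than chance, which is why the paradigms above can be ranked by their kappa values rather than raw agreement.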

  20. The association between Mycoplasma pneumoniae infection and speech and language impairment: A nationwide population-based study in Taiwan.

    PubMed

    Tsai, Ching-Shu; Chen, Vincent Chin-Hung; Yang, Yao-Hsu; Hung, Tai-Hsin; Lu, Mong-Liang; Huang, Kuo-You; Gossop, Michael

    2017-01-01

    Manifestations of Mycoplasma pneumoniae infection can range from self-limiting upper respiratory symptoms to various neurological complications, including speech and language impairment. But an association between Mycoplasma pneumoniae infection and speech and language impairment has not been sufficiently explored. In this study, we aim to investigate the association between Mycoplasma pneumoniae infection and subsequent speech and language impairment in a nationwide population-based sample using Taiwan's National Health Insurance Research Database. We identified 5,406 children with Mycoplasma pneumoniae infection (International Classification of Disease, Revision 9, Clinical Modification code 4830) and compared to 21,624 age-, sex-, urban- and income-matched controls on subsequent speech and language impairment. The mean follow-up interval for all subjects was 6.44 years (standard deviation = 2.42 years); the mean latency period between the initial Mycoplasma pneumoniae infection and presence of speech and language impairment was 1.96 years (standard deviation = 1.64 years). The results showed that Mycoplasma pneumoniae infection was significantly associated with greater incidence of speech and language impairment [hazard ratio (HR) = 1.49, 95% CI: 1.23-1.80]. In addition, significantly increased hazard ratio of subsequent speech and language impairment in the groups younger than 6 years old and no significant difference in the groups over the age of 6 years were found (HR = 1.43, 95% CI:1.09-1.88 for age 0-3 years group; HR = 1.67, 95% CI: 1.25-2.23 for age 4-5 years group; HR = 1.14, 95% CI: 0.54-2.39 for age 6-7 years group; and HR = 0.83, 95% CI:0.23-2.92 for age 8-18 years group). In conclusion, Mycoplasma pneumoniae infection is temporally associated with incident speech and language impairment.

  1. Walking while talking: Young adults flexibly allocate resources between speech and gait.

    PubMed

    Raffegeau, Tiphanie E; Haddad, Jeffrey M; Huber, Jessica E; Rietdyk, Shirley

    2018-05-26

    Walking while talking is an ideal multitask behavior for assessing how young healthy adults manage concurrent tasks, as it is well-practiced, cognitively demanding, and carries real consequences for impaired performance in either task. Since the association between cognitive tasks and gait appears stronger when the gait task is more challenging, gait challenge was systematically manipulated in this study to understand how young adults accomplish the multitask behavior of walking while talking. Sixteen young adults (21 ± 1.6 years, 9 males) performed three gait tasks with and without speech: unobstructed gait (easy), obstacle crossing (moderate), and obstacle crossing while carrying a tray (difficult). Participants also provided a speech sample while seated as a baseline indicator of speech. The speech task was to speak extemporaneously about a topic (e.g. first car). Gait speed and the duration of silent pauses during speaking were determined; silent pauses reflect cognitive processes involved in speech production and language planning. When speaking and walking without obstacles, gait speed decreased (relative to walking without speaking) but silent pause duration did not change (relative to seated speech). These changes are consistent with the idea that, in the easy gait task, participants placed greater value on speech pauses than on gait speed, likely due to the negative social consequences of impaired speech. In the moderate and difficult gait tasks both parameters changed: gait speed decreased and silent pauses increased. Walking while talking is thus a cognitively demanding task for healthy young adults, despite being a well-practiced habitual activity. These findings are consistent with the integrated model of task prioritization of Yogev-Seligmann et al. [1]. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Effect of signal to noise ratio on the speech perception ability of older adults

    PubMed Central

    Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh

    2016-01-01

    Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to normally follow speech and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise test (SPIN) was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds at three SNRs in the presence of ipsilateral white noise. These participants were selected by convenience sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA, and the Pearson correlation coefficient were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal hearing at low–mid thresholds. These results support the critical role of SNRs for speech perception ability in the elderly. Furthermore, our results revealed that normal hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712

  3. Differentiation of Speech Delay and Global Developmental Delay in Children Using DTI Tractography-Based Connectome.

    PubMed

    Jeong, J-W; Sundaram, S; Behen, M E; Chugani, H T

    2016-06-01

    Pure speech delay is a common developmental disorder which, according to some estimates, affects 5%-8% of the population. Speech delay may not only be an isolated condition but also can be part of a broader condition such as global developmental delay. The present study investigated whether diffusion tensor imaging tractography-based connectome can differentiate global developmental delay from speech delay in young children. Twelve children with pure speech delay (39.1 ± 20.9 months of age, 9 boys), 14 children with global developmental delay (39.3 ± 18.2 months of age, 12 boys), and 10 children with typical development (38.5 ± 20.5 months of age, 7 boys) underwent 3T DTI. For each subject, whole-brain connectome analysis was performed by using 116 cortical ROIs. The following network metrics were measured at individual regions: strength (number of the shortest paths), efficiency (measures of global and local integration), cluster coefficient (a measure of local aggregation), and betweenness (a measure of centrality). Compared with typical development, global and local efficiency were significantly reduced in both global developmental delay and speech delay (P < .0001). The nodal strength of the cognitive network is reduced in global developmental delay, whereas the nodal strength of the language network is reduced in speech delay. This finding resulted in a high accuracy of >83% ± 4% to discriminate global developmental delay from speech delay. The network abnormalities identified in the present study may underlie the neurocognitive and behavioral consequences commonly identified in children with global developmental delay and speech delay. Further validation studies in larger samples are required. © 2016 by American Journal of Neuroradiology.

  4. Effects of neurological damage on production of formulaic language

    PubMed Central

    Sidtis, D.; Canterucci, G.; Katsnelson, D.

    2014-01-01

    Early studies reported preserved formulaic language in left hemisphere damaged subjects and reduced incidence of formulaic expressions in the conversational speech of stroke patients with right hemispheric damage. Clinical observations suggest a possible role also of subcortical nuclei. This study examined formulaic language in the spontaneous speech of stroke patients with left, right, or subcortical damage. Four subjects were interviewed and their speech samples compared to those of normal speakers. Raters classified formulaic expressions as speech formulae, fillers, sentence stems, and proper nouns. Results demonstrated that brain damage affected novel and formulaic language competence differently, with a significantly smaller proportion of formulaic expressions in subjects with right or subcortical damage compared to left hemisphere damaged or healthy speakers. These findings converge with previous studies that support the proposal of a right hemisphere/subcortical circuit in the management of formulaic expressions, based on a dual-process model of language incorporating novel and formulaic language use. PMID:19382014

  5. Atypical coordination of cortical oscillations in response to speech in autism

    PubMed Central

    Jochaut, Delphine; Lehongre, Katia; Saitovitch, Ana; Devauchelle, Anne-Dominique; Olasagasti, Itsaso; Chabane, Nadia; Zilbovicius, Monica; Giraud, Anne-Lise

    2015-01-01

    Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations. PMID:25870556

  6. Callous-unemotional behavior and early-childhood onset of behavior problems: the role of parental harshness and warmth

    PubMed Central

    Waller, Rebecca; Gardner, Frances; Shaw, Daniel S.; Dishion, Thomas J.; Wilson, Melvin N.; Hyde, Luke W.

    2014-01-01

    Objective Youth with callous-unemotional (CU) behavior are at risk of developing more severe forms of aggressive and antisocial behavior. Previous cross-sectional studies suggest that associations between parenting and conduct problems are less strong when children or adolescents have high levels of CU behavior, implying lower malleability of behavior compared to low-CU children. The current study extends previous findings by examining the moderating role of CU behavior on associations between parenting and behavior problems in a very young sample, both concurrently and longitudinally, and using a variety of measurement methods. Methods Data were collected from a multi-ethnic, high-risk sample at ages 2–4 (N = 364; 49% female). Parent-reported CU behavior was assessed at age 3 using a previously validated measure (Hyde et al., 2013). Parental harshness was coded from observations of parent-child interactions and parental warmth was coded from five-minute speech samples. Results In this large and young sample, CU behavior moderated cross-sectional correlations between parent-reported and observed warmth and child behavior problems. However, in cross-sectional and longitudinal models testing parental harshness, and longitudinal models testing warmth, there was no moderation by CU behavior. Conclusions The findings are in line with recent literature suggesting parental warmth may be important to child behavior problems at high levels of CU behavior. In general, however, the results of this study contrast with much of the extant literature and suggest that in young children, affective aspects of parenting appear to be related to emerging behavior problems, regardless of the presence of early CU behavior. PMID:24661288

  7. Speech production in experienced cochlear implant users undergoing short-term auditory deprivation

    NASA Astrophysics Data System (ADS)

    Greenman, Geoffrey; Tjaden, Kris; Kozak, Alexa T.

    2005-09-01

    This study examined the effect of short-term auditory deprivation on the speech production of five postlingually deafened women, all of whom were experienced cochlear implant users. Each cochlear implant user, as well as age- and gender-matched control speakers, produced CVC target words embedded in a reading passage. Speech samples for the deafened adults were collected on two separate occasions. First, the speakers were recorded after wearing their speech processor consistently for at least two to three hours prior to recording (implant "ON"). The second recording occurred when the speakers had their speech processors turned off for approximately ten to twelve hours prior to recording (implant "OFF"). Acoustic measures, including fundamental frequency (F0), the first (F1) and second (F2) formants of the vowels, vowel space area, vowel duration, spectral moments of the consonants, as well as utterance duration and sound pressure level (SPL) across the entire utterance were analyzed in both speaking conditions. For each implant speaker, acoustic measures will be compared across implant "ON" and implant "OFF" speaking conditions, and will also be compared to data obtained from normal hearing speakers.

  8. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Design Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. Study Sample The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Results Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing aid benefit from those measured in the standard environment. Conclusions The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests. PMID:25853616

  9. Aging-related gains and losses associated with word production in connected speech.

    PubMed

    Dennis, Paul A; Hess, Thomas M

    2016-11-01

    Older adults have been observed to use more nonnormative, or atypical, words than younger adults in connected speech. We examined whether aging-related losses in word-finding abilities or gains in language expertise underlie these age differences. Sixty younger and 60 older adults described two neutral photographs. These descriptions were processed into word types, and textual analysis was used to identify interrupted speech (e.g., pauses), reflecting word-finding difficulty. Word types were assessed for normativeness, with nonnormative word types defined as those used by six (5%) or fewer participants to describe a particular picture. Accuracy and precision ratings were provided by another sample of 48 high-vocabulary younger and older adults. Older adults produced more interrupted speech and, as predicted, more nonnormative words than younger adults. Older adults were more likely than younger adults to use nonnormative language via interrupted speech, suggesting a compensatory process. However, older adults' nonnormative words were more precise and trended toward higher accuracy, reflecting expertise. In tasks offering response flexibility, like connected speech, older adults may be able to offset instances of aging-related deficits by maximizing their expertise in other instances.
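    The abstract's normativeness criterion is directly computable: a word type counts as nonnormative if it was used by 5% of the sample or fewer (six of the 120 participants) for a given picture. A sketch under that definition (function and variable names are hypothetical):

    ```python
    from collections import Counter

    def nonnormative_types(descriptions, cutoff_fraction=0.05):
        """Return word types used by at most `cutoff_fraction` of
        participants; `descriptions` holds one set of word types per
        participant for a single picture."""
        n = len(descriptions)
        cutoff = max(1, round(cutoff_fraction * n))  # 6 when n = 120
        # One count per participant, since each description is a set.
        counts = Counter(w for types in descriptions for w in types)
        return {w for w, c in counts.items() if c <= cutoff}
    ```

    Because each participant contributes a set of word types rather than raw tokens, a word repeated by one speaker is still counted once, matching the "used by N participants" framing above.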

  10. Speech waveform perturbation analysis: a perceptual-acoustical comparison of seven measures.

    PubMed

    Askenfelt, A G; Hammarberg, B

    1986-03-01

    The performance of seven acoustic measures of cycle-to-cycle variations (perturbations) in the speech waveform was compared. All measures were calculated automatically and applied to running speech. Three of the measures refer to the frequency of occurrence and severity of waveform perturbations in specially selected parts of the speech, identified by means of the rate of change in the fundamental frequency. Three other measures refer to statistical properties of the distribution of the relative frequency differences between adjacent pitch periods. One perturbation measure refers to the percentage of consecutive pitch period differences with alternating signs. The acoustic measures were tested on tape-recorded speech samples from 41 voice patients, before and after successful therapy. Scattergrams of acoustic waveform perturbation data versus an average of perceived deviant voice qualities, as rated by voice clinicians, are presented. The perturbation measures were compared with regard to the acoustic-perceptual correlation and their ability to discriminate between normal and pathological voice status. The standard deviation of the distribution of the relative frequency differences was suggested as the most useful acoustic measure of waveform perturbations for clinical applications.
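    Two of the measures described can be sketched directly from the abstract: the standard deviation of the relative frequency differences between adjacent pitch periods (the measure the authors found most useful clinically) and the percentage of consecutive period differences with alternating signs. A minimal illustration on hypothetical pitch-period data (function and variable names are ours):

    ```python
    import statistics

    def perturbation_measures(periods_ms):
        """Compute two cycle-to-cycle perturbation measures from a list
        of consecutive pitch-period durations (milliseconds)."""
        # Relative frequency difference between adjacent pitch periods
        # (frequency is the reciprocal of period duration).
        freqs = [1.0 / t for t in periods_ms]
        rel_diffs = [(f2 - f1) / f1 for f1, f2 in zip(freqs, freqs[1:])]
        sd_rel_diff = statistics.stdev(rel_diffs)

        # Percentage of consecutive period differences with alternating
        # signs (period-to-period zigzag).
        diffs = [t2 - t1 for t1, t2 in zip(periods_ms, periods_ms[1:])]
        alternating = sum(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))
        pct_alternating = 100.0 * alternating / (len(diffs) - 1)
        return sd_rel_diff, pct_alternating
    ```

    A perfectly steady voice yields a relative-difference distribution tightly clustered at zero (low standard deviation), while a strongly alternating period pattern drives the second measure toward 100%.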

  11. A white matter tract mediating awareness of speech.

    PubMed

    Koubeissi, Mohamad Z; Fernandez-Baca Vaca, Guadalupe; Maciunas, Robert; Stephani, Caspar

    2016-01-12

    To investigate the effects of extraoperative electrical stimulation of fiber tracts connecting the language territories, we describe results of extraoperative electrical stimulation through stereotactic electrodes in 3 patients with epilepsy who underwent presurgical evaluation for epilepsy surgery. Contacts of these electrodes sampled, among other structures, the suprainsular white matter of the left hemisphere. Aside from speech disturbance and speech arrest, subcortical electrical stimulation of the white matter tracts directly superior to the insula, representing the anterior part of the arcuate fascicle, reproducibly induced complex verbal auditory phenomena, including (1) hearing one's own voice in the absence of overt speech, and (2) lack of perception of arrest or alteration in the ongoing repetition of words. These results represent direct evidence that the anterior part of the arcuate fascicle is part of a network important in the mediation of speech planning and awareness, likely by linking the language areas of the inferior parietal and posterior inferior frontal cortices. More specifically, our observations suggest that this structure may be relevant to the pathophysiology of thought disorders and auditory verbal hallucinations. © 2015 American Academy of Neurology.

  12. The Effects of Different Noise Types on Heart Rate Variability in Men

    PubMed Central

    Sim, Chang Sun; Sung, Joo Hyun; Cheon, Sang Hyeon; Lee, Jang Myung; Lee, Jae Won

    2015-01-01

    Purpose To determine the impact of noise on heart rate variability (HRV) in men, with a focus on the noise type rather than on noise intensity. Materials and Methods Forty college-going male volunteers were enrolled in this study and were randomly divided into four groups according to the type of noise they were exposed to: background, traffic, speech, or mixed (traffic and speech) noise. All groups except the background group (35 dB) were exposed to 45 dB sound pressure levels. We collected data on age, smoking status, alcohol consumption, and disease status from responses to self-reported questionnaires and medical examinations. We also measured HRV parameters and blood pressure levels before and after exposure to noise. The HRV parameters were evaluated while participants remained seated for 5 minutes, and frequency and time domain analyses were then performed. Results After noise exposure, only the speech noise group showed a reduced low frequency (LF) value, reflecting the activity of both the sympathetic and parasympathetic nervous systems. The low-to-high frequency (LF/HF) ratio, which reflected the activity of the autonomic nervous system (ANS), became more stable, decreasing from 5.21 to 1.37; however, this change was not statistically significant. Conclusion These results indicate that 45 dB(A) of noise, 10 dB(A) higher than background noise, affects the ANS. Additionally, the impact on HRV activity might differ according to the noise quality. Further studies will be required to ascertain the role of noise type. PMID:25510770

  13. The effects of different noise types on heart rate variability in men.

    PubMed

    Sim, Chang Sun; Sung, Joo Hyun; Cheon, Sang Hyeon; Lee, Jang Myung; Lee, Jae Won; Lee, Jiho

    2015-01-01

    To determine the impact of noise on heart rate variability (HRV) in men, with a focus on the noise type rather than on noise intensity. Forty college-going male volunteers were enrolled in this study and were randomly divided into four groups according to the type of noise they were exposed to: background, traffic, speech, or mixed (traffic and speech) noise. All groups except the background group (35 dB) were exposed to 45 dB sound pressure levels. We collected data on age, smoking status, alcohol consumption, and disease status from responses to self-reported questionnaires and medical examinations. We also measured HRV parameters and blood pressure levels before and after exposure to noise. The HRV parameters were evaluated while participants remained seated for 5 minutes, and frequency and time domain analyses were then performed. After noise exposure, only the speech noise group showed a reduced low frequency (LF) value, reflecting the activity of both the sympathetic and parasympathetic nervous systems. The low-to-high frequency (LF/HF) ratio, which reflected the activity of the autonomic nervous system (ANS), became more stable, decreasing from 5.21 to 1.37; however, this change was not statistically significant. These results indicate that 45 dB(A) of noise, 10 dB(A) higher than background noise, affects the ANS. Additionally, the impact on HRV activity might differ according to the noise quality. Further studies will be required to ascertain the role of noise type.

  14. The Meaning of Emotional Overinvolvement in Early Development: Prospective Relations with Child Behavior Problems

    PubMed Central

    Khafi, Tamar Y.; Yates, Tuppett M.; Sher-Censor, Efrat

    2015-01-01

    Emotional Overinvolvement (EOI) in parents’ Five Minute Speech Samples (FMSS; Magaña-Amato, 1993) is thought to measure overconcern and enmeshment with one’s child. Although related to maladaptive outcomes in studies of adult children, FMSS-EOI evidences varied relations with behavior problems in studies with young children. These mixed findings may indicate that certain FMSS-EOI criteria reflect inappropriate and excessive involvement with adult children, but do not indicate maladaptive processes when parenting younger children. Thus, this study evaluated relations of each FMSS-EOI criterion with changes in child behavior problems from preschool to first grade in a community sample of 223 child-mother dyads (47.98% female; mean age at Wave 1 = 49.08 months; 56.50% Hispanic/Latina). Maternal FMSS-EOI ratings were obtained at Wave 1, and independent examiners rated child externalizing and internalizing behavior problems at Wave 1 and two years later. Path analyses indicated that both the Self-Sacrifice/Overprotection (SSOP) and Statements of Attitude (SOAs) FMSS-EOI criteria predicted increased externalizing problems. In contrast, Excessive Detail and Exaggerated Praise were not related to child externalizing behavior problems, and Emotional Display was not evident in this sample. None of the FMSS-EOI criteria evidenced significant relations with internalizing behavior problems. Multigroup comparisons indicated that the effect of SOAs on externalizing behavior problems was significant for boys but not for girls, and there were no significant group differences by race/ethnicity. These findings point to the salience of SSOP and SOAs for understanding the developmental significance of EOI in early development. PMID:26147935

  15. Comparison of upper and lower lip muscle activity between stutterers and fluent speakers.

    PubMed

    de Felício, Cláudia Maria; Freitas, Rosana Luiza Rodrigues Gomes; Vitti, Mathias; Regalo, Simone Cecilio Hallak

    2007-08-01

    There is a widespread clinical view that stuttering is associated with high levels of muscle activity. The aim of this research was to compare stutterers and fluent speakers with respect to the electromyographic activity of the upper and lower lip muscles. Ten individuals who stutter and 10 fluent speakers (control group), paired by gender and age, were studied (mean age: 13.4 years). Groups were defined by the speech sample analysis of the ABFW-Language Test. A K6-I EMG (Myo-tronics Co., Seattle, WA, USA) with double disposable silver electrodes (Duotrodes, Myo-tronics Co., Seattle, WA) was used to analyze lip muscle activity. The clinical conditions investigated were movements during speech, orofacial non-speech tasks, and rest. Electromyographic data were normalized by lip pursing activity. The non-parametric Mann-Whitney test was used for the comparison of speech fluency profiles, and the Student t-test for independent samples for group comparisons of electromyographic data. There was a statistically significant difference between groups in speech fluency profile and in upper lip activity in the following conditions: lip lateralization to the right and to the left, and rest before exercises (P<0.05). There was no significant difference between groups in lower lip activity (P>0.05). The EMG activity of the upper lip muscle in the group with stuttering was significantly lower than in the control group in some of the clinical conditions analyzed. There was no significant difference between groups regarding the lower lip muscle. The subjects who stutter did not present higher levels of lip muscle activity than fluent speakers.

  16. Age-Related Neural Oscillation Patterns During the Processing of Temporally Manipulated Speech.

    PubMed

    Rufener, Katharina S; Oechslin, Mathias S; Wöstmann, Malte; Dellwo, Volker; Meyer, Martin

    2016-05-01

    This EEG study aims to investigate age-related differences in neural oscillation patterns during the processing of temporally modulated speech. Taking a lifespan perspective, we recorded electroencephalogram (EEG) data from three age samples: young adults, middle-aged adults, and older adults. Stimuli consisted of temporally degraded sentences in Swedish, a language unfamiliar to all participants. We found age-related differences in phonetic pattern matching when participants were presented with envelope-degraded sentences, whereas no such age effect was observed in the processing of fine-structure-degraded sentences. Irrespective of age, the EEG data revealed a relationship between envelope information and theta band (4-8 Hz) activity during speech processing. Additionally, an association between fine-structure information and gamma band (30-48 Hz) activity was found. No interaction, however, was found between the acoustic manipulation of stimuli and age. Importantly, our main finding was paralleled by an overall enhanced power in older adults in high frequencies (gamma: 30-48 Hz), irrespective of condition. For the most part, this result is in line with the Asymmetric Sampling in Time framework (Poeppel in Speech Commun 41:245-255, 2003), which assumes an isomorphic correspondence between frequency modulations in neurophysiological patterns and acoustic oscillations in spoken language. We conclude that speech-specific neural networks show strong stability over adulthood, despite initial processes of cortical degeneration indicated by enhanced gamma power. The results of our study therefore confirm the concept that sensory and cognitive processes undergo multidirectional trajectories within the context of healthy aging.

  17. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper evaluates the performance of these coders and compares it both with that of conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated with illustrations. The paper also reports results of MSC coding of speech, in which both adaptive quantization and adaptive prediction were included in the coder design.
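Codebook coding (vector quantization) at one bit per sample means choosing a block length L and a codebook of 2**L codewords, so each L-sample block is transmitted as L bits. The following is a generic sketch of Lloyd (k-means) codebook training and nearest-neighbor encoding, not the coder design of the paper:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_codebook(blocks, n_codewords, iters=20, seed=0):
    """Lloyd (k-means) training of a VQ codebook from training blocks.
    With block length L and n_codewords = 2**L, the coder spends
    exactly one bit per sample."""
    rng = random.Random(seed)
    codebook = rng.sample(blocks, n_codewords)   # random initial codewords
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for b in blocks:                         # nearest-neighbor partition
            i = min(range(len(codebook)), key=lambda i: dist2(b, codebook[i]))
            cells[i].append(b)
        for i, cell in enumerate(cells):         # centroid update
            if cell:                             # empty cells keep old codeword
                codebook[i] = [sum(x[d] for x in cell) / len(cell)
                               for d in range(len(cell[0]))]
    return codebook

def encode(block, codebook):
    """Index of the nearest codeword -- the bits actually transmitted."""
    return min(range(len(codebook)), key=lambda i: dist2(block, codebook[i]))
```

With L = 2 and 4 codewords, each 2-sample block costs 2 bits, i.e. one bit per sample; Lloyd iterations never increase the training distortion.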

  18. Neural source dynamics of brain responses to continuous stimuli: Speech processing from acoustics to comprehension.

    PubMed

    Brodbeck, Christian; Presacco, Alessandro; Simon, Jonathan Z

    2018-05-15

    Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has been typically restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables, as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing. 
Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. [Viewing of horror and violence videos by adolescents. A study of speech samples of video consumers with the Gottschalk-Gleser Speech Content Analysis].

    PubMed

    Hopf, H; Weiss, R H

    1996-01-01

    In 1990, pupils at different schools in Württemberg were surveyed about their television and video consumption. It turned out that a high percentage of mainly male pupils at Hauptschulen (upper division of elementary schools) and special schools regularly and excessively consumed horror and violence films that were indexed (X-rated) or confiscated. Following the questionnaire survey and several personality tests, speech samples of 51 subjects were recorded on tape. Five speech samples had to be excluded from further investigation because they contained fewer than 70 words. The transcribed and anonymized records were examined using the Gottschalk-Gleser content analysis of verbal behavior, and two groups of so-called infrequent viewers (n = 22) and frequent viewers (n = 24) were compared. The frequent viewers significantly more often reported film contents, which presumably means that their imagination is more restricted and less productive than that of the other group. In addition, this group of frequent viewers had significantly higher scores for death anxiety and guilt anxiety. With regard to hostility affects, their scores were also significantly raised for outward-overt hostility, outward-covert hostility, and ambivalent hostility. The group of frequent viewers probably comprised more subjects with relationship disorders, borderline risks, dissocial personality features, and problems coping with their aggressiveness. Thus, on the one hand, they show a heightened affinity for such films; at the same time, conscious and unconscious learning processes take place that stimulate further aggressive fantasies (and possibly actions).

  20. Rapid recovery from aphasia after infarction of Wernicke's area.

    PubMed

    Yagata, Stephanie A; Yen, Melodie; McCarron, Angelica; Bautista, Alexa; Lamair-Orosco, Genevieve; Wilson, Stephen M

    2017-01-01

    Aphasia following infarction of Wernicke's area typically resolves to some extent over time. The nature of this recovery process and its time course have not been characterized in detail, especially in the acute/subacute period. The goal of this study was to document recovery after infarction of Wernicke's area in detail in the first 3 months after stroke. Specifically, we aimed to address two questions about language recovery. First, which impaired language domains improve over time, and which do not? Second, what is the time course of recovery? We used quantitative analysis of connected speech and a brief aphasia battery to document language recovery in two individuals with aphasia following infarction of the posterior superior temporal gyrus. Speech samples were acquired daily between 2 and 16 days post stroke, and also at 1 month and 3 months. Speech samples were transcribed and coded using the CHAT system, in order to quantify multiple language domains. A brief aphasia battery was also administered at a subset of five time points during the 3 months. Both patients showed substantial recovery of language function over this time period. Most, but not all, language domains showed improvements, including fluency, lexical access, phonological retrieval and encoding, and syntactic complexity. The time course of recovery was logarithmic, with the greatest gains taking place early in the course of recovery. There is considerable potential for amelioration of language deficits when damage is relatively circumscribed to the posterior superior temporal gyrus. Quantitative analysis of connected speech samples proved to be an effective, albeit time-consuming, approach to tracking day-by-day recovery in the acute/subacute post-stroke period.
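The logarithmic time course reported here can be modeled as score ~ a + b*ln(days post stroke), i.e. ordinary linear regression after a log transform of the time axis. A small illustrative sketch (the closed-form fit is standard; the variable names are ours, not the authors'):

```python
import math

def fit_log_curve(days, scores):
    """Least-squares fit of scores ~ a + b*ln(days): simple linear
    regression on the log-transformed time axis."""
    xs = [math.log(d) for d in days]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(scores) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, scores))
         / sum((x - mx) ** 2 for x in xs))      # slope: gain per log-unit time
    a = my - b * mx                             # intercept at ln(days) = 0
    return a, b
```

A positive b with this form captures the pattern described: the greatest gains occur early, with diminishing improvement toward 1 and 3 months.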

  1. [Acoustic voice analysis using the Praat program: comparative study with the Dr. Speech program].

    PubMed

    Núñez Batalla, Faustino; González Márquez, Rocío; Peláez González, M Belén; González Laborda, Irene; Fernández Fernández, María; Morato Galán, Marta

    2014-01-01

    The European Laryngological Society (ELS) basic protocol for functional assessment of voice pathology includes 5 different approaches: perception, videostroboscopy, acoustics, aerodynamics and subjective rating by the patient. In this study we focused on acoustic voice analysis. The purpose of the present study was to correlate the results obtained by the commercial software Dr. Speech and the free software Praat in 2 fields: 1. Narrow-band spectrogram (the presence of noise according to Yanagihara, and the presence of subharmonics) (semi-quantitative). 2. Voice acoustic parameters (jitter, shimmer, harmonics-to-noise ratio, fundamental frequency) (quantitative). We studied a total of 99 voice samples from individuals with Reinke's oedema diagnosed using videostroboscopy. One independent observer used Dr. Speech 3.0 and a second one used the Praat program (Phonetic Sciences, University of Amsterdam). The spectrographic analysis consisted of obtaining a narrow-band spectrogram from the previously digitalised voice samples by the 2 independent observers. They then determined the presence of noise in the spectrogram, using the Yanagihara grades, as well as the presence of subharmonics. Finally, the acoustic parameters of jitter, shimmer, harmonics-to-noise ratio and fundamental frequency were obtained from the 2 acoustic analysis programs. The results indicated that the sound spectrogram and the numerical values obtained for shimmer and jitter were similar for both computer programs, even when type 1, 2 and 3 voice samples were analysed. The Praat and Dr. Speech programs provide similar results in the acoustic analysis of pathological voices. Copyright © 2013 Elsevier España, S.L. All rights reserved.
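Two of the parameters compared in this record, jitter (local) and shimmer (local), have simple definitions: the mean absolute difference between consecutive glottal periods (or peak amplitudes), normalized by the mean and expressed as a percentage. The sketch below follows those published definitions; extracting the period and amplitude sequences from audio, which is the hard part, is not shown.

```python
def jitter_local(periods):
    """Jitter (local), %: mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amplitudes):
    """Shimmer (local), %: the same measure applied to peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

Small percentages indicate stable phonation; pathological voices such as those with Reinke's oedema typically show elevated values.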

  2. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  3. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    NASA Astrophysics Data System (ADS)

    Middag, Catherine; Martens, Jean-Pierre; Van Nuffelen, Gwen; De Bodt, Marc

    2009-12-01

    It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. People have therefore put a lot of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in the application of objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods which can be shown to produce ratings that correspond well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that builds on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models that were trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
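The figure of merit quoted above is the root-mean-squared error between perceived and computed intelligibility ratings on a 0-100 scale. For concreteness, it is computed as below; the sample values in the test are invented for illustration, not taken from the paper:

```python
import math

def rmse(perceived, computed):
    """Root-mean-squared error between paired perceptual and automatic
    intelligibility ratings (both on a 0-100 scale)."""
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(perceived, computed))
                     / len(perceived))
```

An RMSE of 8 means the automatic ratings deviate from the perceptual gold standard by about 8 points on average, in the quadratic-mean sense.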

  4. Developing the Alphabetic Principle to Aid Text-Based Augmentative and Alternative Communication Use by Adults With Low Speech Intelligibility and Intellectual Disabilities.

    PubMed

    Schmidt-Naylor, Anna C; Saunders, Kathryn J; Brady, Nancy C

    2017-05-17

    We explored alphabet supplementation as an augmentative and alternative communication strategy for adults with minimal literacy. Study 1's goal was to teach onset-letter selection with spoken words and assess generalization to untaught words, demonstrating the alphabetic principle. Study 2 incorporated alphabet supplementation within a naming task and then assessed effects on speech intelligibility. Three men with intellectual disabilities (ID) and low speech intelligibility participated. Study 1 used a multiple-probe design, across three 20-word sets, to show that our computer-based training improved onset-letter selection. We also probed generalization to untrained words. Study 2 taught onset-letter selection for 30 new words chosen for functionality. Five listeners transcribed speech samples of the 30 words in 2 conditions: speech only and speech with alphabet supplementation. Across studies 1 and 2, participants demonstrated onset-letter selection for at least 90 words. Study 1 showed evidence of the alphabetic principle for some but not all word sets. In study 2, participants readily used alphabet supplementation, enabling listeners to understand twice as many words. This is the first demonstration of alphabet supplementation in individuals with ID and minimal literacy. The large number of words learned holds promise both for improving communication and providing a foundation for improved literacy.

  5. Phonological Awareness and Print Knowledge of Preschool Children with Cochlear Implants

    PubMed Central

    Ambrose, Sophie E.; Fey, Marc E.; Eisenberg, Laurie S.

    2012-01-01

    Purpose To determine whether preschool-age children with cochlear implants have age-appropriate phonological awareness and print knowledge and to examine the relationships of these skills with related speech and language abilities. Method 24 children with cochlear implants (CIs) and 23 peers with normal hearing (NH), ages 36 to 60 months, participated. Children’s print knowledge, phonological awareness, language, speech production, and speech perception abilities were assessed. Results For phonological awareness, the CI group’s mean score fell within 1 standard deviation of the TOPEL’s normative sample mean but was more than 1 standard deviation below our NH group mean. The CI group’s performance did not differ significantly from that of the NH group for print knowledge. For the CI group, phonological awareness and print knowledge were significantly correlated with language, speech production, and speech perception. Together, these predictor variables accounted for 34% of variance in the CI group’s phonological awareness but no significant variance in their print knowledge. Conclusions Children with CIs have the potential to develop age-appropriate early literacy skills by preschool age but are likely to lag behind their NH peers in phonological awareness. Intervention programs serving these children should target these skills with instruction and by facilitating speech and language development. PMID:22223887

  6. Profiles of verbal working memory growth predict speech and language development in children with cochlear implants.

    PubMed

    Kronenberger, William G; Pisoni, David B; Harris, Michael S; Hoen, Helena M; Xu, Huiping; Miyamoto, Richard T

    2013-06-01

    Verbal short-term memory (STM) and working memory (WM) skills predict speech and language outcomes in children with cochlear implants (CIs) even after conventional demographic, device, and medical factors are taken into account. However, prior research has focused on single end point outcomes as opposed to the longitudinal process of development of verbal STM/WM and speech-language skills. In this study, the authors investigated relations between profiles of verbal STM/WM development and speech-language development over time. Profiles of verbal STM/WM development were identified through the use of group-based trajectory analysis of repeated digit span measures over at least a 2-year time period in a sample of 66 children (ages 6-16 years) with CIs. Subjects also completed repeated assessments of speech and language skills during the same time period. Clusters representing different patterns of development of verbal STM (digit span forward scores) were related to the growth rate of vocabulary and language comprehension skills over time. Clusters representing different patterns of development of verbal WM (digit span backward scores) were related to the growth rate of vocabulary and spoken word recognition skills over time. Different patterns of development of verbal STM/WM capacity predict the dynamic process of development of speech and language skills in this clinical population.

  7. Longitudinal changes in speech recognition in older persons.

    PubMed

    Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H

    2008-01-01

    Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.

  8. Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children

    PubMed Central

    Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2012-01-01

    Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. 
In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. PMID:22833726
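Neural entrainment of the kind studied in this record is often quantified as inter-trial phase coherence: the length of the mean unit phasor of the phase at the stimulation rate across trials. The abstract does not give the authors' exact metric, so the following is a generic sketch of that standard measure:

```python
import cmath
import math

def phase_at(signal, fs, freq):
    """Phase of the DFT component of `signal` nearest `freq` Hz."""
    n = len(signal)
    k = round(freq * n / fs)                    # nearest DFT bin
    X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return cmath.phase(X)

def intertrial_coherence(trials, fs, freq):
    """Inter-trial phase coherence (ITC): modulus of the mean unit phasor
    of the per-trial phase at `freq`; 0 = random phase across trials,
    1 = perfect phase locking (entrainment)."""
    phasors = [cmath.exp(1j * phase_at(trial, fs, freq)) for trial in trials]
    return abs(sum(phasors) / len(phasors))
```

For the 2 Hz syllable-repetition paradigm described above, high ITC at 2 Hz (and its harmonics) would indicate entrainment to the rhythmic stream.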

  9. Logopenic and Nonfluent Variants of Primary Progressive Aphasia Are Differentiated by Acoustic Measures of Speech Production

    PubMed Central

    Ballard, Kirrie J.; Savage, Sharon; Leyton, Cristian E.; Vogel, Adam P.; Hornberger, Michael; Hodges, John R.

    2014-01-01

    Differentiation of the logopenic (lvPPA) and nonfluent/agrammatic (nfvPPA) variants of Primary Progressive Aphasia is important yet remains challenging, since it hinges on expert-based evaluation of speech and language production. In this study, acoustic measures of speech were used in conjunction with voxel-based morphometry to determine the success of the measures as an adjunct to diagnosis and to explore the neural basis of apraxia of speech in nfvPPA. Forty-one patients (21 lvPPA, 20 nfvPPA) were recruited from a consecutive sample with suspected frontotemporal dementia. Patients were diagnosed using the current gold standard of expert perceptual judgment, based on the presence/absence of particular speech features during speaking tasks. Seventeen healthy age-matched adults served as controls. MRI scans were available for 11 control and 37 PPA cases; 23 of the PPA cases underwent amyloid ligand PET imaging. Measures, corresponding to perceptual features of apraxia of speech, were periods of silence during reading and relative vowel duration and intensity in polysyllable word repetition. Discriminant function analyses revealed that a measure of relative vowel duration differentiated nfvPPA cases from both control and lvPPA cases (r² = 0.47), with 88% agreement with expert judgment of the presence of apraxia of speech in nfvPPA cases. VBM analysis showed that relative vowel duration covaried with grey matter intensity in areas critical for speech motor planning and programming: the precentral gyrus, supplementary motor area, and inferior frontal gyrus bilaterally, affected only in the nfvPPA group. This bilateral involvement of frontal speech networks in nfvPPA potentially affects access to compensatory mechanisms involving right hemisphere homologues. Measures of silences during reading also discriminated the PPA and control groups, but did not increase predictive accuracy. 
Findings suggest that a measure of relative vowel duration from a polysyllable word repetition task may be sufficient for detecting most cases of apraxia of speech and for distinguishing between nfvPPA and lvPPA. PMID:24587083

  10. A Comparative Analysis of Fluent and Cerebral Palsied Speech.

    NASA Astrophysics Data System (ADS)

    van Doorn, Janis Lee

    Several features of the acoustic waveforms of fluent and cerebral palsied speech were compared, using six fluent and seven cerebral palsied subjects, with a major emphasis being placed on an investigation of the trajectories of the first three formants (vocal tract resonances). To provide an overall picture which included other acoustic features, fundamental frequency, intensity, speech timing (speech rate and syllable duration), and prevocalization (vocalization prior to initial stop consonants found in cerebral palsied speech) were also investigated. Measurements were made using repetitions of a test sentence which was chosen because it required large excursions of the speech articulators (lips, tongue and jaw), so that differences in the formant trajectories for the fluent and cerebral palsied speakers would be emphasized. The acoustic features were all extracted from the digitized speech waveform (10 kHz sampling rate): the fundamental frequency contours were derived manually, the intensity contours were measured using the signal covariance, speech rate and syllable durations were measured manually, as were the prevocalization durations, while the formant trajectories were derived from short time spectra which were calculated for each 10 ms of speech using linear prediction analysis. Differences which were found in the acoustic features can be summarized as follows. 
For cerebral palsied speakers, the fundamental frequency contours generally showed inappropriate exaggerated fluctuations, as did some of the intensity contours; the mean fundamental frequencies were either higher or the same as for the fluent subjects; speech rates were reduced, and syllable durations were longer; prevocalization was consistently present at the beginning of the test sentence; formant trajectories were found to have overall reduced frequency ranges, and to contain anomalous transitional features, but it is noteworthy that for any one cerebral palsied subject, the inappropriate trajectory pattern was generally reproducible. The anomalous transitional features took the form of (a) inappropriate transition patterns, (b) reduced frequency excursions, (c) increased transition durations, and (d) decreased maximum rates of frequency change.
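The formant trajectories in this study come from linear prediction analysis of 10 ms frames. The core numerical step, solving for the LPC coefficients from the frame autocorrelation, is the Levinson-Durbin recursion, sketched below; locating the formants themselves from the roots of the LPC polynomial is a further step not shown here.

```python
def autocorr(frame, max_lag):
    """Autocorrelation r[0..max_lag] of one windowed speech frame."""
    return [sum(frame[t] * frame[t + k] for t in range(len(frame) - k))
            for k in range(max_lag + 1)]

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: LPC coefficients a[1..order] minimizing
    the forward prediction error, given autocorrelation r[0..order].
    Returns (coefficients, residual error energy); the predictor is
    x[t] ~ sum_j a[j] * x[t-j]."""
    a = [0.0] * (order + 1)
    e = r[0]                                   # zeroth-order error energy
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                            # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1 - k * k)                       # error shrinks each order
    return a[1:], e
```

For a 10 kHz sampling rate, an LPC order around 10-12 is the usual choice, giving roughly one resonance (formant) per kilohertz of bandwidth.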

  11. Central Presbycusis: A Review and Evaluation of the Evidence

    PubMed Central

    Humes, Larry E.; Dubno, Judy R.; Gordon-Salant, Sandra; Lister, Jennifer J.; Cacace, Anthony T.; Cruickshanks, Karen J.; Gates, George A.; Wilson, Richard H.; Wingfield, Arthur

    2018-01-01

    Background: The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. Purpose: This report summarizes the review process and presents its findings. Data Collection and Analysis: The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. Results: For the 76 smaller-scale studies of speech understanding in older adults that were reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For the 36 smaller-scale studies of the perception of nonspeech stimuli by older adults that were reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed.
For the 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. Conclusions: Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area. PMID:22967738

  12. Quantization noise in digital speech. M.S. Thesis- Houston Univ.

    NASA Technical Reports Server (NTRS)

    Schmidt, O. L.

    1972-01-01

    The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 db cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 db cutoff of 2000 Hz.
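The compression-before-quantization and expansion-after-conversion scheme described above is companding. A minimal sketch (assuming μ-law companding, which the abstract does not specify, and an 8-level uniform quantizer) shows how the technique raises the signal-to-noise ratio for low-amplitude, consonant-like signals:

```python
import math

def mu_law_compress(x, mu=255.0):
    # Logarithmic compression boosts low-amplitude samples before quantization.
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=255.0):
    # Inverse expansion network restores the original amplitude weighting.
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def quantize(x, levels):
    # Uniform mid-rise quantizer on [-1, 1].
    step = 2.0 / levels
    q = math.floor(x / step) * step + step / 2.0
    return max(-1.0 + step / 2.0, min(1.0 - step / 2.0, q))

def snr_db(signal, quantized):
    # Signal power over quantization-noise power, in decibels.
    noise = [s - q for s, q in zip(signal, quantized)]
    return 10.0 * math.log10(sum(s * s for s in signal) /
                             sum(n * n for n in noise))

# Low-amplitude tone standing in for a weak consonant sound,
# at the abstract's 5000 samples-per-second rate.
fs, f, amp = 5000, 200, 0.05
x = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

plain = [quantize(s, 8) for s in x]
companded = [mu_law_expand(quantize(mu_law_compress(s), 8)) for s in x]

print(snr_db(x, plain), snr_db(x, companded))
```

With only eight levels, the companded path preserves the weak signal far better than straight uniform quantization, which is the mechanism behind the intelligibility result reported above.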

  13. [Examination of gastric emptying rate by means of 13C-octanoic acid breath test. Methods of the test for adults and results of the investigation of healthy volunteers].

    PubMed

    Bures, J; Kopácová, M; Vorísek, V; Bukac, J; Neumann, D; Rejchrt, S; Pozler, O; Douda, T; Zivný, P; Palicka, V

    2005-01-01

    The 13C-octanoic acid breath test (13C-OABT) is a simple, safe and non-invasive technique for measuring gastric emptying. However, the method has not yet been standardized. The aim of the study was to develop, introduce and evaluate our own method of the 13C-OABT for adults. Ten healthy volunteers entered the study (5 men, 5 women, mean age 32 years, 50% Helicobacter pylori positive). Standard test meals (with 100 mg 13C-sodium octanoate) were used three times within 3 weeks. The same solid meal (1,178 kJ) for Tests 1 and 2 contained scrambled egg (+ 3 g oil), white bread (40 g), butter (10 g) and distilled water (200 ml). The semi-solid meal (1,020 kJ) for Test 3 contained milk pudding (200 g) and distilled water (200 ml). Duplicate breath samples were obtained before and every 15 minutes after eating the test meal during 255 minutes. Altogether 1,080 breath samples were analysed twice (isotope ratio mass spectrometry, AP2003 Analytical Precision, UK). To assess the half-life of elimination (t1/2E), we modelled the process of elimination with the incomplete gamma function, which has a convenient form for the empirical plotting of breath test data. Mean t1/2E was 136±10 minutes (Test 1), 134±14 minutes (Test 2) and 123±16 minutes (Test 3). Clinical reproducibility of the 13C-OABT in particular persons was 98.2% (18 breath sample series), 90.8% (15 samples) and 87.1% (9 breath sample series). There was a significant correlation between Test 1 and Test 2 results (r=0.887, p<0.0001). The mean difference of duplicate breath sample analysis was 1.460% (in 540 pairs); the mean baseline one-day analysis difference was 0.0982 (99.9274% accuracy). In healthy volunteers, the normal range of t1/2E is 110-160 minutes for solids and 91-155 minutes for the semi-solid test meal. Using our own computed mean time of intermediate metabolism of 13C-octanoic acid (76.5±7.5 minutes), gastric emptying half-time is 33.5-83.5 minutes for solids and 14.5-78.5 minutes for the semi-solid test meal in healthy volunteers. The 13C-OABT is an accurate, non-invasive method for measuring gastric emptying.
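The half-life of elimination in studies like this is read off a gamma-type curve fitted to the breath-test data. The sketch below uses a Ghoos-type rate curve y(t) = m·t^b·e^(−t/c), a form commonly fitted to 13C breath-test data, with purely illustrative parameters (not this study's fitted values), and finds the time at which half the total tracer has been excreted:

```python
import math

def excretion_rate(t, m, b, c):
    # Gamma-type excretion-rate curve (assumed Ghoos form, illustrative only).
    return m * t**b * math.exp(-t / c)

def half_excretion_time(m, b, c, t_end=1000.0, dt=0.1):
    # Numerically integrate the rate curve and find when half the
    # total recovered tracer has been excreted.
    ts = [i * dt for i in range(1, int(t_end / dt) + 1)]
    rates = [excretion_rate(t, m, b, c) for t in ts]
    total = sum(rates) * dt
    acc = 0.0
    for t, r in zip(ts, rates):
        acc += r * dt
        if acc >= total / 2:
            return t
    return t_end

# Illustrative parameters; real values come from fitting measured breath samples.
print(round(half_excretion_time(0.01, 1.8, 65.0), 1))
```

Note that the resulting half-time depends only on the curve's shape parameters, not on its amplitude, which is why duplicate analyses with slightly different recovery levels still reproduce t1/2E well.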

  14. Speech-language pathologists' practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders.

    PubMed

    Mcleod, Sharynne; Baker, Elise

    2014-01-01

    A survey of 231 Australian speech-language pathologists (SLPs) was undertaken to describe practices regarding assessment, analysis, target selection, intervention, and service delivery for children with speech sound disorders (SSD). The participants typically worked in private practice, education, or community health settings, and 67.6% had a waiting list for services. For each child, most of the SLPs spent 10-40 min in pre-assessment activities, 30-60 min undertaking face-to-face assessments, and 30-60 min completing paperwork after assessments. During an assessment, SLPs typically conducted a parent interview, collected single-word and connected speech samples, and used informal tests. They also determined children's stimulability and estimated intelligibility. With multilingual children, informal assessment procedures and English-only tests were commonly used, and SLPs relied on family members or interpreters to assist. Common analysis techniques included determination of phonological processes, substitutions-omissions-distortions-additions (SODA), and phonetic inventory. Participants placed high priority on selecting target sounds that were stimulable, early developing, and in error across all word positions, and 60.3% felt very confident or confident selecting an appropriate intervention approach. Eight intervention approaches were frequently used: auditory discrimination, minimal pairs, cued articulation, phonological awareness, traditional articulation therapy, auditory bombardment, Nuffield Centre Dyspraxia Programme, and core vocabulary. Children typically received individual therapy with an SLP in a clinic setting. Parents often observed and participated in sessions, and SLPs typically included siblings and grandparents in intervention sessions. Parent training and home programs were more frequently used than group therapy. Two-thirds kept up to date by reading journal articles monthly or every 6 months.
There were many similarities with previously reported practices for children with SSD in the US, UK, and the Netherlands, with some (but not all) practices aligning with current research evidence.

  15. Acoustic characteristics of modern Greek Orthodox Church music.

    PubMed

    Delviniotis, Dimitrios S

    2013-09-01

    Some acoustic characteristics of the two types of vocal music of the Greek Orthodox Church, the Byzantine chant (BC) and ecclesiastical speech (ES), are studied in relation to common Greek speech and Western opera. Vocal samples were obtained and analyzed for sound pressure level (SPL), fundamental frequency (F0), and long-time average spectrum (LTAS) characteristics. Twenty chanters, including two who were also opera singers, sang (BC) and read (ES) the same hymn of Byzantine music (BM); the two opera singers also sang the same opera aria; common speech samples were obtained; and all recordings were analyzed. The distribution of SPL values showed that BC and ES have higher SPL, by 9 and 12 dB respectively, than common speech. The average F0 in ES tends to be lower than in common speech, and the smallest standard deviation (SD) of F0 values characterizes its monotonicity. The tone-scale intervals of BC are close to the currently accepted theory, with an SD of 0.24 semitones. The rate and extent of vibrato, which is rare in BC, equal 4.1 Hz and 0.6 semitones, respectively. The average LTAS slope was greatest in BC (+4.5 dB) among the BM styles, though smaller than in opera (+5.7 dB). In both BC and ES, instead of the singer's formant that appears in an opera voice, a speaker's formant (SPF) was observed around 3300 Hz, with relative levels of +6.3 and +4.6 dB, respectively. The two vocal types of BM, BC and ES, differ both from each other and from common Greek speech and the opera style regarding SPL, the mean and SD of F0, the LTAS slope, and the relative level of the SPF. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  16. A novel speech-processing strategy incorporating tonal information for cochlear implants.

    PubMed

    Lan, N; Nie, K B; Gao, S K; Zeng, F G

    2004-05-01

    Good performance in cochlear implant users depends in large part on the ability of a speech processor to effectively decompose speech signals into multiple channels of narrow-band electrical pulses for stimulation of the auditory nerve. Speech processors that extract only envelopes of the narrow-band signals (e.g., the continuous interleaved sampling (CIS) processor) may not provide sufficient information to encode the tonal cues in languages such as Chinese. To improve the performance of cochlear implant users who speak a tonal language, we proposed and developed a novel speech-processing strategy, which extracted both the envelopes of the narrow-band signals and the fundamental frequency (F0) of the speech signal, and used them to modulate both the amplitude and the frequency of the electrical pulses delivered to stimulation electrodes. We developed an algorithm to extract the fundamental frequency and identified the general patterns of pitch variations of four typical tones in Chinese speech. The effectiveness of the extraction algorithm was verified with an artificial neural network that recognized the tonal patterns from the extracted F0 information. We then compared the novel strategy with the envelope-extraction CIS strategy in human subjects with normal hearing. The novel strategy produced significant improvement in perception of Chinese tones, phrases, and sentences. This novel processor with dynamic modulation of both frequency and amplitude is encouraging for the design of a cochlear implant device for sensorineurally deaf patients who speak tonal languages.
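The per-channel envelope extraction that CIS-style processors perform can be sketched as full-wave rectification followed by a low-pass filter. The code below is a minimal single-channel illustration, not the authors' implementation; the cutoff frequency and the test signal are assumptions:

```python
import math

def envelope(signal, fs, cutoff=200.0):
    """Full-wave rectification followed by a one-pole low-pass filter,
    a minimal stand-in for the envelope detector of one CIS channel."""
    alpha = math.exp(-2.0 * math.pi * cutoff / fs)  # one-pole smoothing factor
    env, y = [], 0.0
    for s in signal:
        y = alpha * y + (1.0 - alpha) * abs(s)      # rectify, then smooth
        env.append(y)
    return env

fs = 16000
# A 1 kHz carrier whose amplitude ramps up, mimicking one band-passed speech channel.
x = [(t / fs) * math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs // 10)]
env = envelope(x, fs)
```

In a CIS processor this envelope sets the amplitude of the pulse train on each electrode; the strategy described above additionally uses an extracted F0 contour to modulate the pulse rate, which is what carries the tonal information.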

  17. [Experimental evaluation of the spraying disinfection efficiency on dental models].

    PubMed

    Zhang, Yi; Fu, Yuan-fei; Xu, Kan

    2013-08-01

    To evaluate the disinfection effect of spraying a new kind of disinfectant on dental plaster models. Germ-free plaster samples, smeared with a bacterial compound including Staphylococcus aureus, Escherichia coli, Saccharomyces albicans, Streptococcus mutans and Actinomyces viscosus, were sprayed with disinfectant (CaviCide) or glutaraldehyde individually. In one group (5 minutes later) and another group (15 minutes later), colonies were counted for statistical analysis after sampling, inoculating, and culturing, in order to evaluate disinfecting efficiency. ANOVA was performed using the SPSS 12.0 software package. All sample bacteria were eradicated within 5 minutes of spraying the disinfectant (CaviCide), and effective bacteria control was retained after 15 minutes. There was a significant difference between the disinfecting efficiency of CaviCide and that of glutaraldehyde. Disinfection of dental models by spraying disinfectant (CaviCide) is quick and effective.

  18. Variability and Intelligibility of Clarified Speech to Different Listener Groups

    NASA Astrophysics Data System (ADS)

    Silber, Ronnie F.

    Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research that has focused on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech) and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows.
Nonsense sentences were less intelligible than story sentences. Function words were equal to, or more intelligible than, content words. Babytalk functioned as a clear speech style in story sentences but not nonsense sentences. One of the two clear speech styles was clearer than normal speech in adult-directed clarification. However, which style was clearer depended on interactions among the variables. The individual patterns seemed to result from interactions among demand characteristics, baseline intelligibility, materials, and differences in articulatory flexibility.

  19. Speech therapy for children with dysarthria acquired before three years of age.

    PubMed

    Pennington, Lindsay; Parker, Naomi K; Kelly, Helen; Miller, Nick

    2016-07-18

    Children with motor impairments often have the motor speech disorder dysarthria, a condition which affects the tone, strength and co-ordination of any or all of the muscles used for speech. Resulting speech difficulties can range from mild, with slightly slurred articulation and breathy voice, to profound, with an inability to produce any recognisable words. Children with dysarthria are often prescribed communication aids to supplement their natural forms of communication. However, there is variation in practice regarding the provision of therapy focusing on voice and speech production. Descriptive studies have suggested that therapy may improve speech, but its effectiveness has not been evaluated. To assess whether any speech and language therapy intervention aimed at improving the speech of children with dysarthria is more effective in increasing children's speech intelligibility or communicative participation than no intervention at all, and to compare the efficacy of individual types of speech and language therapy in improving the speech intelligibility or communicative participation of children with dysarthria. We searched the Cochrane Central Register of Controlled Trials (CENTRAL; 2015, Issue 7), MEDLINE, EMBASE, CINAHL, LLBA, ERIC, PsychInfo, Web of Science, Scopus, UK National Research Register and Dissertation Abstracts up to July 2015, handsearched relevant journals published between 1980 and July 2015, and searched proceedings of relevant conferences between 1996 and 2015. We placed no restrictions on the language or setting of the studies. A previous version of this review considered studies published up to April 2009. In this update we searched for studies published from April 2009 to July 2015. We considered randomised controlled trials and studies using quasi-experimental designs in which children were allocated to groups using non-random methods. One author (LP) conducted searches of all databases, journals and conference reports.
All searches included a reliability check in which a second review author independently checked a random sample comprising 15% of all identified reports. We planned that two review authors would independently assess the quality and extract data from eligible studies. No randomised controlled trials or group studies were identified. This review found no evidence from randomised trials of the effectiveness of speech and language therapy interventions to improve the speech of children with early acquired dysarthria. Rigorous, fully powered randomised controlled trials are needed to investigate if the positive changes in children's speech observed in phase I and phase II studies are generalisable to the population of children with early acquired dysarthria served by speech and language therapy services. Research should examine change in children's speech production and intelligibility. It must also investigate children's participation in social and educational activities, and their quality of life, as well as the cost and acceptability of interventions.

  20. Multi-modal Biomarkers to Discriminate Cognitive State

    DTIC Science & Technology

    2015-11-01

    [Only fragmentary reference-list text was retrieved for this report. The recoverable content concerns vocal features in the speech of a large sample of Parkinson patients, facial-action coding (Ekman, Friesen and Ancoli, 1980), and cognitive-load detection, for which Yin et al. achieved 77% accuracy using standard vocal features.]

  1. Speech characteristics in a Ugandan child with a rare paramedian craniofacial cleft: a case report.

    PubMed

    Van Lierde, K M; Bettens, K; Luyten, A; De Ley, S; Tungotyo, M; Balumukad, D; Galiwango, G; Bauters, W; Vermeersch, H; Hodges, A

    2013-03-01

    The purpose of this study is to describe the speech characteristics of an English-speaking Ugandan boy of 4.5 years who has a rare paramedian craniofacial cleft (unilateral lip, alveolar, palatal, nasal and maxillary cleft, and associated hypertelorism). Closure of the lip together with closure of the hard and soft palate (one-stage palatal closure) was performed at the age of 5 months. Objective as well as subjective speech assessment techniques were used. The speech samples were perceptually judged for articulation, intelligibility and nasality. The Nasometer was used for the objective measurement of nasalance values. The most striking communication problems in this child with the rare craniofacial cleft are an incomplete phonetic inventory, severely impaired speech intelligibility with the presence of very severe hypernasality, mild nasal emission, phonetic disorders (omission of several consonants, decreased intraoral pressure in plosives, insufficient frication of fricatives and the use of a middorsum palatal stop) and phonological disorders (deletion of initial and final consonants and consonant clusters). The increased objective nasalance values are in agreement with the presence of the audible nasality disorders. The results revealed that several phonetic and phonological articulation disorders, together with decreased speech intelligibility and resonance disorders, are present in this child with a rare craniofacial cleft. To what extent secondary surgery for velopharyngeal insufficiency, combined with speech therapy, will improve speech intelligibility, articulation and resonance characteristics is a subject for further research. The results of such analyses may ultimately serve as a starting point for specific surgical and logopedic treatment that addresses the specific needs of children with rare facial clefts. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. The influence of listener experience and academic training on ratings of nasality.

    PubMed

    Lewis, Kerry E; Watterson, Thomas L; Houghton, Sarah M

    2003-01-01

    This study assessed listener agreement levels for nasality ratings, and the strength of relationship between nasality ratings and nasalance scores on one hand, and listener clinical experience and formal academic training in cleft palate speech on the other. The listeners were 12 adults who represented four levels of clinical experience and academic training in cleft palate speech. Three listeners were teachers with no clinical experience and no academic training (TR), three were graduate students in speech-language pathology (GS) with academic training but no clinical experience, three were craniofacial surgeons (MD) with extensive experience listening to cleft palate speech but with no academic training in speech disorders, and three were certified speech-language pathologists (SLP) with both extensive academic training and clinical experience. The speech samples were audio recordings from 20 persons representing a range of nasality from normal to severely hypernasal. Nasalance scores were obtained simultaneously with the audio recordings. Results revealed that agreement levels for nasality ratings were highest for the SLPs, followed by the MDs. Thus, the more experienced groups tended to be more reliable. Mean nasality ratings obtained for each of the rater groups revealed an inverse relationship with experience. That is, the two groups with clinical experience (SLP and MD) tended to rate nasality lower than the two groups without experience (GS and TR). Correlation coefficients between nasalance scores and nasality judgments were low to moderate for all groups and did not follow a pattern. EDUCATIONAL OUTCOMES: As a result of this activity, the reader will be able to (1) describe the influence of listener experience and academic training in cleft palate speech on perceptual ratings of nasality. 
(2) describe the influence of experience and training on the nasality/nasalance relationship; and (3) compare the present findings to previous findings reported in the literature.

  3. Binaural Interference: Quo Vadis?

    PubMed

    Jerger, James; Silman, Shlomo; Silverman, Carol; Emmer, Michele

    2017-04-01

    The reality of the phenomenon of binaural interference with speech recognition has been debated for two decades. Research has taken one of two avenues: group studies or case reports. In group studies, a sample of the elderly population is tested on speech recognition under three conditions: binaural, monaural right and monaural left. The aim is to determine the percent of the sample in which the expected outcome (binaural score better than either monaural score) is reversed (i.e., one of the monaural scores is better than the binaural score). This outcome has been commonly used to define binaural interference. The object of group studies is to answer the "how many" question: what is the prevalence of binaural interference in the sample? In case reports, the binaural interference conclusion suggested by the speech recognition tests is not accepted until it has been corroborated by other independent diagnostic audiological measures. The aim is to determine the basis for the findings, to answer the "why" question. This article is at once a tutorial, an editorial and a case report. We argue that it is time to accept the reality of the phenomenon of binaural interference, to eschew group statistical approaches in search of an answer to the "how many" question, and to focus on individual case reports in search of an answer to the "why" question. American Academy of Audiology.

  4. Effectiveness and safety of autologous fat grafting to the soft palate alone.

    PubMed

    Boneti, Cristiano; Ray, Peter D; Macklem, Elizabeth B; Kohanzadeh, Som; de la Torre, Jorge; Grant, John H

    2015-06-01

    Posterior pharyngeal augmentation is an accepted method of treating velopharyngeal insufficiency (VPI). Techniques using autologous fat harvest, preparation, and grafting are well described. Based on the complications from retropharyngeal injection, we performed augmentation of the nasal surface of the palate to reduce hypernasality with decreased risks. After Institutional Review Board approval, a chart review from 2010 to 2013 identified 46 patients with cleft palate and subjective and nasoendoscopic evidence of VPI treated with autologous fat grafting to the soft palate. Speech evaluation of velopharyngeal function was compared before and after autologous fat grafting. A total of 61 autologous fat grafting procedures were performed in 46 patients. The average age of the study population was 5.59 ± 2.05 years. The majority underwent a single procedure (32/46, or 69.6%); 13 of 46 patients (28.2%) had two fat grafting procedures, and only 1 patient (2.2%) had three. The fat was injected primarily into the soft palate. The recorded volume of fat grafted averaged 2.4 ± 1.1 mL. Average operative time was 39 ± 12.55 minutes. There were no local or donor site complications. Four patients were lost to follow-up. Of 34 patients with adequate speech follow-up, including Pittsburgh Weighted Speech Scale (PWSS) assessment, the average preoperative score of 8.17 ± 3.59 was reduced to 5.17 ± 3.14 postoperatively. Although 26 of 34 patients (76.5%) had an improvement in their PWSS score, only 13 of 34 patients (38.23%) saw an improvement in their PWSS category. Autologous fat grafting to the soft palate is a safe operation with minimal risks. Speech outcomes are subjectively enhanced in the majority of patients, with a full PWSS category improvement seen in about 40% of the cases. Patient selection criteria to optimize results are provided.

  5. Comparison of single-microphone noise reduction schemes: can hearing impaired listeners tell the difference?

    PubMed

    Huber, Rainer; Bisitz, Thomas; Gerkmann, Timo; Kiessling, Jürgen; Meister, Hartmut; Kollmeier, Birger

    2018-06-01

    The perceived qualities of nine different single-microphone noise reduction (SMNR) algorithms were evaluated and compared in subjective listening tests with normal-hearing and hearing-impaired (HI) listeners. Speech samples mixed with traffic noise or party noise were processed by the SMNR algorithms. Subjects rated the amount of speech distortion, intrusiveness of background noise, listening effort and overall quality, using a simplified MUSHRA (ITU-R, 2003) assessment method. Eighteen normal-hearing and 18 moderately HI subjects participated in the study. Significant differences between the rating behaviours of the two subject groups were observed: while normal-hearing subjects clearly differentiated between SMNR algorithms, HI subjects rated all processed signals very similarly. Moreover, HI subjects rated speech distortions of the unprocessed, noisier signals as being more severe than the distortions of the processed signals, in contrast to normal-hearing subjects. It seems harder for HI listeners to distinguish between additive noise and speech distortions, and/or they might have a different understanding of the term "speech distortion" than normal-hearing listeners have. The findings confirm that the evaluation of SMNR schemes for hearing aids should always involve HI listeners.

  6. Normative Data on Audiovisual Speech Integration Using Sentence Recognition and Capacity Measures

    PubMed Central

    Altieri, Nicholas; Hudock, Daniel

    2016-01-01

    Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass accuracy and processing speed will benefit researchers and clinicians. Design: The study consisted of two experiments: first, accuracy scores were obtained using CUNY sentences, and capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. Study Sample: We report data on two measures of integration obtained from a sample of 86 young and middle-age adult listeners. Results: Capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Conclusions: Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy. PMID:26853446

  7. Speech disruptions in relation to language growth in children who stutter: an exploratory study.

    PubMed

    Wagovich, Stacy A; Hall, Nancy E; Clifford, Betsy A

    2009-12-01

    Young children with typical fluency demonstrate a range of disfluencies, or speech disruptions. One type of disruption, revision, appears to increase in frequency as syntactic skills develop. To date, this phenomenon has not been studied in children who stutter (CWS). Rispoli, Hadley, and Holt (2008) suggest a schema for categorizing speech disruptions in terms of revisions and stalls. The purpose of this exploratory study was to use this schema to evaluate whether CWS show a pattern over time in their production of stuttering, revisions, and stalls. Nine CWS, ages 2;1 to 4;11, participated in the study, producing language samples each month for 10 months. MLU and vocd analyses were performed for samples across three time periods. Active declarative sentences within these samples were examined for the presence of disruptions. Results indicated that the proportion of sentences containing revisions increased over time, but the proportions for stalls and stuttering did not. Visual inspection revealed that more stuttering and stalls occurred on longer utterances than on shorter ones. Upon examination of individual children's language, it appears that two-thirds of the children showed a pattern in which revisions increased as MLU increased. Findings are similar to studies of children with typical fluency, suggesting that, although CWS display more (and different) disfluencies than typically fluent peers, their revisions appear to increase over time and correspond to increases in MLU, just as is the case with their peers. The reader will be able to: (1) describe the three types of speech disruptions assessed in this article; (2) compare the present findings of disruptions in children who stutter to findings of previous research with typically fluent children; and (3) discuss future directions in this area of research, given the findings and implications of this study.

  8. Rapid recovery from aphasia after infarction of Wernicke's area

    PubMed Central

    Yagata, Stephanie A.; Yen, Melodie; McCarron, Angelica; Bautista, Alexa; Lamair-Orosco, Genevieve

    2017-01-01

    Background Aphasia following infarction of Wernicke's area typically resolves to some extent over time. The nature of this recovery process and its time course have not been characterized in detail, especially in the acute/subacute period. Aims The goal of this study was to document recovery after infarction of Wernicke's area in detail in the first 3 months after stroke. Specifically, we aimed to address two questions about language recovery. First, which impaired language domains improve over time, and which do not? Second, what is the time course of recovery? Methods & Procedures We used quantitative analysis of connected speech and a brief aphasia battery to document language recovery in two individuals with aphasia following infarction of the posterior superior temporal gyrus. Speech samples were acquired daily between 2 and 16 days post stroke, and also at 1 month and 3 months. Speech samples were transcribed and coded using the CHAT system, in order to quantify multiple language domains. A brief aphasia battery was also administered at a subset of five time points during the 3 months. Outcomes & Results Both patients showed substantial recovery of language function over this time period. Most, but not all, language domains showed improvements, including fluency, lexical access, phonological retrieval and encoding, and syntactic complexity. The time course of recovery was logarithmic, with the greatest gains taking place early in the course of recovery. Conclusions There is considerable potential for amelioration of language deficits when damage is relatively circumscribed to the posterior superior temporal gyrus. Quantitative analysis of connected speech samples proved to be an effective, albeit time-consuming, approach to tracking day-by-day recovery in the acute/subacute post-stroke period. PMID:29051682

  9. Sensory-Cognitive Interaction in the Neural Encoding of Speech in Noise: A Review

    PubMed Central

    Anderson, Samira; Kraus, Nina

    2011-01-01

    Background Speech-in-noise (SIN) perception is one of the most complex tasks faced by listeners on a daily basis. Although listening in noise presents challenges for all listeners, background noise inordinately affects speech perception in older adults and in children with learning disabilities. Hearing thresholds are an important factor in SIN perception, but they are not the only factor. For successful comprehension, the listener must perceive and attend to relevant speech features, such as the pitch, timing, and timbre of the target speaker’s voice. Here, we review recent studies linking SIN and brainstem processing of speech sounds. Purpose To review recent work examining whether the auditory brainstem response to complex sounds (cABR), which reflects the nervous system’s transcription of pitch, timing, and timbre, can be used as an objective neural index of hearing-in-noise abilities. Study Sample We examined speech-evoked brainstem responses in a variety of populations, including children who are typically developing, children with language-based learning impairment, young adults, older adults, and auditory experts (i.e., musicians). Data Collection and Analysis In a number of studies, we recorded brainstem responses in quiet and babble-noise conditions to the speech syllable /da/ in all age groups, as well as in a variable condition in children in which /da/ was presented in the context of seven other speech sounds. We also measured speech-in-noise perception using the Hearing-in-Noise Test (HINT) and the Quick Speech-in-Noise Test (QuickSIN). Results Children and adults with poor SIN perception have deficits in the subcortical spectrotemporal representation of speech, including low-frequency spectral magnitudes and the timing of transient response peaks. Furthermore, auditory expertise, as engendered by musical training, provides both behavioral and neural advantages for processing speech in noise.
Conclusions These results have implications for future assessment and management strategies for young and old populations whose primary complaint is difficulty hearing in background noise. The cABR provides a clinically applicable metric for objective assessment of individuals with SIN deficits, for determination of the biologic nature of disorders affecting SIN perception, for evaluation of appropriate hearing aid algorithms, and for monitoring the efficacy of auditory remediation and training. PMID:21241645

  10. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    PubMed

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have received greater attention from ASR research communities, owing to their natural ability to mimic biological behavior, which aids ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT concepts. In this paper, we report approaches based on machine learning (ML) for extracting relevant samples from a big data space and apply them to ASR using soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency-domain representations, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker, and gender variations in dialectal Assamese speech.
The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time. It is found that the proposed ML based sentence extraction techniques and the composite feature set used with RNN as classifier outperform all other approaches. By using ANN in FF form as feature extractor, the performance of the system is evaluated and a comparison is made. Experimental results show that the application of big data samples has enhanced the learning of the ASR system. Further, the ANN based sample and feature extraction techniques are found to be efficient enough to enable application of ML techniques in big data aspects as part of ASR systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
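    The recognition rates above are obtained from confusion matrices. As a minimal sketch of that metric (the matrix values below are hypothetical, chosen for illustration rather than taken from the study), the overall recognition rate is the proportion of correctly classified samples, i.e., the matrix diagonal divided by the total:

```python
def recognition_rate(confusion):
    # Overall recognition rate: correctly classified samples (the
    # diagonal of the confusion matrix) divided by all samples.
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Hypothetical 3-class confusion matrix (e.g., three dialect classes);
# rows are true classes, columns are predicted classes.
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 5, 44]]
print(round(recognition_rate(cm), 3))  # 0.86
```

    Per-class rates (each diagonal entry divided by its row sum) can be read off the same matrix when a breakdown by dialect, speaker, or gender is needed.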

  11. Irrelevant speech does not interfere with serial recall in early blind listeners.

    PubMed

    Kattner, Florian; Ellermeier, Wolfgang

    2014-01-01

    Phonological working memory is known to be (a) inversely related to the duration of the items to be learned (the word-length effect) and (b) impaired by the presence of irrelevant speech-like sounds (the irrelevant-speech effect). As it remains controversial whether these memory disruptions are subject to attentional control, both effects were studied in sighted participants and in a sample of early blind individuals, who are expected to be superior at selectively attending to auditory stimuli. Results show that, while performance depended on word length in both groups, irrelevant speech interfered with recall only in the sighted group, not in blind participants. This suggests that blind listeners may be able to effectively prevent irrelevant sound from being encoded in the phonological store, presumably due to superior auditory processing. The occurrence of a word-length effect, however, implies that blind and sighted listeners use the same phonological rehearsal mechanism to maintain information in the phonological store.

  12. The effects of complementary and alternative medicine on the speech of patients with depression

    NASA Astrophysics Data System (ADS)

    Fraas, Michael; Solloway, Michele

    2004-05-01

    It is well documented that patients suffering from depression exhibit articulatory timing deficits and speech that is monotonous and lacking in pitch variation. Traditional remediation of depression has left many patients with adverse side effects and ineffective outcomes. Recent studies indicate that many Americans are seeking complementary and alternative forms of medicine to supplement traditional therapy approaches. The current investigation aims to determine the efficacy of complementary and alternative medicine (CAM) in the remediation of speech deficits associated with depression. Subjects with depression and normal controls will participate in an 8-week treatment session using polarity therapy, a form of CAM. Subjects will be recorded producing a series of spontaneous and narrative speech samples. Acoustic analysis of mean fundamental frequency (F0), variation in F0 (standard deviation of F0), average rate of F0 change, and pause and utterance durations will be conducted. Differences pre- and post-CAM therapy between subjects with depression and normal controls will be discussed.

  13. Analyzing crowdsourced ratings of speech-based take-over requests for automated driving.

    PubMed

    Bazilinskyy, P; de Winter, J C F

    2017-10-01

    Take-over requests in automated driving should fit the urgency of the traffic situation. The robustness of various published research findings on the valuations of speech-based warning messages is unclear. This research aimed to establish how people value speech-based take-over requests as a function of speech rate, background noise, spoken phrase, and speaker's gender and emotional tone. By means of crowdsourcing, 2669 participants from 95 countries listened to a random 10 out of 140 take-over requests, and rated each take-over request on urgency, commandingness, pleasantness, and ease of understanding. Our results replicate several published findings, in particular that an increase in speech rate results in a monotonic increase of perceived urgency. The female voice was easier to understand than a male voice when there was a high level of background noise, a finding that contradicts the literature. Moreover, a take-over request spoken with an Indian accent was found to be easier to understand by participants from India than by participants from other countries. Our results replicate effects in the literature regarding speech-based warnings, and shed new light on effects of background noise, gender, and nationality. The results may have implications for the selection of appropriate take-over requests in automated driving. Additionally, our study demonstrates the promise of crowdsourcing for testing human factors and ergonomics theories with large sample sizes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Developing the Alphabetic Principle to Aid Text-Based Augmentative and Alternative Communication Use by Adults With Low Speech Intelligibility and Intellectual Disabilities

    PubMed Central

    Schmidt-Naylor, Anna C.; Brady, Nancy C.

    2017-01-01

    Purpose We explored alphabet supplementation as an augmentative and alternative communication strategy for adults with minimal literacy. Study 1's goal was to teach onset-letter selection with spoken words and assess generalization to untaught words, demonstrating the alphabetic principle. Study 2 incorporated alphabet supplementation within a naming task and then assessed effects on speech intelligibility. Method Three men with intellectual disabilities (ID) and low speech intelligibility participated. Study 1 used a multiple-probe design, across three 20-word sets, to show that our computer-based training improved onset-letter selection. We also probed generalization to untrained words. Study 2 taught onset-letter selection for 30 new words chosen for functionality. Five listeners transcribed speech samples of the 30 words in 2 conditions: speech only and speech with alphabet supplementation. Results Across studies 1 and 2, participants demonstrated onset-letter selection for at least 90 words. Study 1 showed evidence of the alphabetic principle for some but not all word sets. In study 2, participants readily used alphabet supplementation, enabling listeners to understand twice as many words. Conclusions This is the first demonstration of alphabet supplementation in individuals with ID and minimal literacy. The large number of words learned holds promise both for improving communication and providing a foundation for improved literacy. PMID:28474087

  15. Contralateral Bimodal Stimulation: A Way to Enhance Speech Performance in Arabic-Speaking Cochlear Implant Patients.

    PubMed

    Abdeltawwab, Mohamed M; Khater, Ahmed; El-Anwar, Mohammad W

    2016-01-01

    The combination of acoustic and electric stimulation as a way to enhance speech recognition performance in cochlear implant (CI) users has generated considerable interest in recent years. The purpose of this study was to evaluate the bimodal advantage of the FS4 speech processing strategy in combination with hearing aids (HA) as a means to improve low-frequency resolution in CI patients. Nineteen postlingual CI adults were selected to participate in this study. All patients wore implants on one side and HA on the contralateral side, which had residual hearing. Monosyllabic word recognition, speech in noise, and emotion and talker identification were assessed using CI with fine structure processing/FS4 and high-definition continuous interleaved sampling strategies, HA alone, and a combination of CI and HA. The bimodal stimulation showed improvement in speech performance and in emotion identification for the question/statement/order tasks, which was statistically significant compared to CI alone, but there were no significant statistical differences in intragender talker discrimination or in emotion identification for the happy/angry/neutral tasks. The poorest performance was obtained with HA only, and it was statistically significant compared to the other modalities. The bimodal stimulation showed enhanced speech performance in CI patients, and it mitigates the limitations of electric or acoustic stimulation alone. © 2016 S. Karger AG, Basel.

  16. Toward diagnostic and phenotype markers for genetically transmitted speech delay.

    PubMed

    Shriberg, Lawrence D; Lewis, Barbara A; Tomblin, J Bruce; McSweeny, Jane L; Karlsson, Heather B; Scheer, Alison R

    2005-08-01

    Converging evidence supports the hypothesis that the most common subtype of childhood speech sound disorder (SSD) of currently unknown origin is genetically transmitted. We report the first findings toward a set of diagnostic markers to differentiate this proposed etiological subtype (provisionally termed speech delay-genetic) from other proposed subtypes of SSD of unknown origin. Conversational speech samples from 72 preschool children with speech delay of unknown origin from 3 research centers were selected from an audio archive. Participants differed on the number of biological, nuclear family members (0 or 2+) classified as positive for current and/or prior speech-language disorder. Although participants in the 2 groups were found to have similar speech competence, as indexed by their Percentage of Consonants Correct scores, their speech error patterns differed significantly in 3 ways. Compared with children who may have reduced genetic load for speech delay (no affected nuclear family members), children with possibly higher genetic load (2+ affected members) had (a) a significantly higher proportion of relative omission errors on the Late-8 consonants; (b) a significantly lower proportion of relative distortion errors on these consonants, particularly on the sibilant fricatives /s/, /z/, and //; and (c) a significantly lower proportion of backed /s/ distortions, as assessed by both perceptual and acoustic methods. Machine learning routines identified a 3-part classification rule that included differential weightings of these variables. The classification rule had a diagnostic accuracy of 0.83 (95% confidence limits = 0.74-0.92), with positive and negative likelihood ratios of 9.6 (95% confidence limits = 3.1-29.9) and 0.40 (95% confidence limits = 0.24-0.68), respectively. The diagnostic accuracy findings are viewed as promising.
The error pattern for this proposed subtype of SSD is viewed as consistent with the cognitive-linguistic processing deficits that have been reported for genetically transmitted verbal disorders.
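    The likelihood ratios reported for the classification rule follow directly from its sensitivity and specificity. A minimal sketch of that relationship (the sensitivity and specificity values below are hypothetical, chosen for illustration rather than taken from the study):

```python
def likelihood_ratios(sensitivity, specificity):
    # LR+ = sensitivity / (1 - specificity): how much a positive
    # classification raises the odds of the disorder being present.
    # LR- = (1 - sensitivity) / specificity: how much a negative
    # classification lowers those odds.
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical rule with 80% sensitivity and 90% specificity.
lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)
print(round(lr_pos, 2), round(lr_neg, 2))  # 8.0 0.22
```

    As a rule of thumb in diagnostic research, an LR+ around 10 (such as the 9.6 reported above) is considered strong evidence for ruling a condition in.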

  17. Child speech, language and communication need re-examined in a public health context: a new direction for the speech and language therapy profession.

    PubMed

    Law, James; Reilly, Sheena; Snow, Pamela C

    2013-01-01

    Historically speech and language therapy services for children have been framed within a rehabilitative framework with explicit assumptions made about providing therapy to individuals. While this is clearly important in many cases, we argue that this model needs revisiting for a number of reasons. First, our understanding of the nature of disability, and therefore communication disabilities, has changed over the past century. Second, there is an increasing understanding of the impact that the social gradient has on early communication difficulties. Finally, understanding how these factors interact with one other and have an impact across the life course remains poorly understood. To describe the public health paradigm and explore its implications for speech and language therapy with children. We test the application of public health methodologies to speech and language therapy services by looking at four dimensions of service delivery: (1) the uptake of services and whether those children who need services receive them; (2) the development of universal prevention services in relation to social disadvantage; (3) the risk of over-interpreting co-morbidity from clinical samples; and (4) the overlap between communicative competence and mental health. It is concluded that there is a strong case for speech and language therapy services to be reconceptualized to respond to the needs of the whole population and according to socially determined needs, focusing on primary prevention. This is not to disregard individual need, but to highlight the needs of the population as a whole. Although the socio-political context is different between countries, we maintain that this is relevant wherever speech and language therapists have a responsibility for covering whole populations. Finally, we recommend that speech and language therapy services be conceptualized within the framework laid down in The Ottawa Charter for Health Promotion. 
© 2013 Royal College of Speech and Language Therapists.

  18. Effect of surgical hand scrub time on subsequent bacterial growth.

    PubMed

    Wheelock, S M; Lookinland, S

    1997-06-01

    In this experimental study, the researchers evaluated the effect of surgical hand scrub time on subsequent bacterial growth and assessed the effectiveness of the glove juice technique in a clinical setting. In a randomized crossover design, 25 perioperative staff members scrubbed for either two or three minutes in the first trial and for the other duration in the second trial, after which they wore sterile surgical gloves for one hour under clinical conditions. The researchers then sampled the subjects' nondominant hands for bacterial growth, cultured aliquots from the sampling solution, and counted microorganisms. Scrubbing for three minutes produced lower mean log bacterial counts than scrubbing for two minutes. Although the mean bacterial count differed significantly (P = .02) between the two-minute and three-minute surgical hand scrub times, the difference fell below 0.5 log, which is the threshold for practical and clinical significance. This finding suggests that a two-minute surgical hand scrub is clinically as effective as a three-minute surgical hand scrub. The glove juice technique demonstrated sensitivity and reliability in enumerating bacteria on the hands of perioperative staff members in a clinical setting.
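    The 0.5-log threshold above is easier to interpret as a fold-change in bacterial counts. A quick conversion using standard base-10 arithmetic (not specific to this study's data):

```python
def log10_diff_to_fold_change(log10_diff):
    # A difference of d in log10 bacterial counts corresponds to a
    # 10**d-fold difference in colony-forming units.
    return 10 ** log10_diff

# A 0.5-log difference is roughly a 3.16-fold difference in counts;
# smaller differences are treated as clinically negligible here.
print(round(log10_diff_to_fold_change(0.5), 2))  # 3.16
```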

  19. Measures of digit span and verbal rehearsal speed in deaf children after more than 10 years of cochlear implantation.

    PubMed

    Pisoni, David B; Kronenberger, William G; Roman, Adrienne S; Geers, Ann E

    2011-02-01

    Conventional assessments of outcomes in deaf children with cochlear implants (CIs) have focused primarily on endpoint or product measures of speech and language. Little attention has been devoted to understanding the basic underlying core neurocognitive factors involved in the development and processing of speech and language. In this study, we examined the development of factors related to the quality of phonological information in immediate verbal memory, including immediate memory capacity and verbal rehearsal speed, in a sample of deaf children after >10 yrs of CI use and assessed the correlations between these two process measures and a set of speech and language outcomes. Of an initial sample of 180 prelingually deaf children with CIs assessed at ages 8 to 9 yrs after 3 to 7 yrs of CI use, 112 returned for testing again in adolescence after 10 more years of CI experience. In addition to completing a battery of conventional speech and language outcome measures, subjects were administered the Wechsler Intelligence Scale for Children-III Digit Span subtest to measure immediate verbal memory capacity. Sentence durations obtained from the McGarr speech intelligibility test were used as a measure of verbal rehearsal speed. Relative to norms for normal-hearing children, Digit Span scores were well below average for children with CIs at both elementary and high school ages. Improvement was observed over the 8-yr period in the mean longest digit span forward score but not in the mean longest digit span backward score. Longest digit span forward scores at ages 8 to 9 yrs were significantly correlated with all speech and language outcomes in adolescence, but backward digit spans correlated significantly only with measures of higher-order language functioning over that time period. 
    While verbal rehearsal speed increased for almost all subjects between elementary grades and high school, it was still slower than the rehearsal speed obtained from a control group of normal-hearing adolescents. Verbal rehearsal speed at ages 8 to 9 yrs was also found to be strongly correlated with speech and language outcomes and Digit Span scores in adolescence. Despite improvement after 8 additional years of CI use, measures of immediate verbal memory capacity and verbal rehearsal speed, which reflect core fundamental information processing skills associated with representational efficiency and information processing capacity, continue to be delayed in children with CIs relative to normal-hearing peers. Furthermore, immediate verbal memory capacity and verbal rehearsal speed at 8 to 9 yrs of age were both found to predict speech and language outcomes in adolescence, demonstrating the important contribution of these processing measures to speech-language development in children with CIs. Understanding the relations between these core underlying processes and speech-language outcomes in children with CIs may help researchers to develop new approaches to intervention and treatment of deaf children who perform poorly with their CIs. Moreover, this knowledge could be used for early identification of deaf children who may be at high risk for poor speech and language outcomes after cochlear implantation, as well as for the development of novel targeted interventions that focus selectively on these core elementary information processing variables.

  20. Auditory Evoked Potentials with Different Speech Stimuli: a Comparison and Standardization of Values

    PubMed Central

    Didoné, Dayane Domeneghini; Oppitz, Sheila Jacques; Folgearini, Jordana; Biaggio, Eliara Pinto Vieira; Garcia, Michele Vargas

    2016-01-01

    Introduction Long Latency Auditory Evoked Potentials (LLAEP) elicited by speech sounds have been the subject of research, as these stimuli are well suited to checking individuals' detection and discrimination. Objective The objective of this study is to compare and describe the latency and amplitude values of cortical potentials for speech stimuli in adults with normal hearing. Methods The sample included 30 normal-hearing individuals aged between 18 and 32 years without otological disease or auditory processing disorders. All participants underwent LLAEP testing using pairs of speech stimuli (/ba/ x /ga/, /ba/ x /da/, and /ba/ x /di/). The authors recorded the LLAEP using binaural stimulation at an intensity of 75 dB SPL. In total, 300 stimuli were used (approximately 60 rare and 240 frequent) to obtain the LLAEP, and individuals were instructed to count the rare stimuli. The authors analyzed the latencies of the P1, N1, P2, N2, and P300 potentials, as well as the amplitude of the P300. Results The mean age of the group was approximately 23 years. The average cortical potential values varied according to the speech stimuli: N2 latency was greatest for /ba/ x /di/ and P300 latency was greatest for /ba/ x /ga/. The overall average amplitude ranged from 5.35 to 7.35 µV across the different speech stimuli. Conclusion It was possible to obtain latency and amplitude values for the different speech stimuli. Furthermore, the N2 component showed the longest latency with the /ba/ x /di/ stimulus and the P300 with /ba/ x /ga/. PMID:27096012
